How to Criticize Computer Scientists (2001) (purdue.edu)
232 points by agomez314 on March 29, 2021 | hide | past | favorite | 101 comments


I feel it misses a reliable take-down of the systems people. Recipe: (1) ask the person if they have evaluated some framework which would take months to evaluate; (2) suggest that their hard work reinvents the wheel; (3) step back and enjoy a smug victory.

On the odd occasion that they have evaluated the framework you named, you can simply try again a few minutes later with a different framework. If you are challenged, use condescension to imply that the victim did not try hard enough.

This tactic works even for situations where the victim has used a first-principles approach to completely demolish a problem to the satisfaction of all stakeholders. 'Reinventing the wheel' is a type of sin. The mere suggestion of it will stick like shit to a blanket.


You can attack them even if you have no idea what they are doing. Just look at the plots in the report and mumble something about "missing confidence intervals", "unrealistic workload", and "it's not even heavy tailed".


"Missing confidence intervals" is my (IMO) legit pet peeve.

People fit frighteningly complicated models with millions-to-billions of parameters. They throw in hairy regularization schemes justified by fancy math.

When it comes time to evaluate the output, however: "Our model is better because this one number is bigger than their number [once, on this one test set]."
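To make the complaint concrete: a paired bootstrap is one cheap way to put an interval on that "one number." A toy sketch with synthetic data (the 81%/80% accuracy rates, sample sizes, and everything else here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-example correctness indicators for two models on the
# same test set (1 = correct, 0 = wrong). Synthetic data.
n = 1000
model_a = rng.random(n) < 0.81   # ~81% accurate
model_b = rng.random(n) < 0.80   # ~80% accurate

# Paired bootstrap: resample test examples with replacement,
# recompute the metric difference each time, and read off a
# 95% percentile interval.
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    diffs.append(model_a[idx].mean() - model_b[idx].mean())
lo, hi = np.percentile(diffs, [2.5, 97.5])

print(f"observed diff: {model_a.mean() - model_b.mean():+.3f}")
print(f"95% bootstrap CI for the diff: [{lo:+.3f}, {hi:+.3f}]")
# If the interval straddles zero, "our number is bigger" is not,
# on its own, evidence of much.
```

On a 1,000-example test set, a one-point accuracy gap routinely lands inside the resampling noise.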


As an undergrad, my circle had this drinking game called "Big Number." It came about because two of the biggest wastoids were the last people awake in the wee hours of Sunday, and one of them said to the other, "I'm too drunk to deal. Let's just roll dice, and the one with the biggest number drinks."

Of course, over the years, the game developed dozens of other rules.


Cool story. Any details on more of the rules?


Confidence intervals aren't even that informative. Like using boxplots when you could inform the viewer so much more with sina plots [1]. Why not show me the whole posterior probability distribution (perhaps helpfully marking the 95% highest density region [2])? Or if you don't have a distribution, show me the 95%, 97.5% and 99.5% intervals.

[1] https://clauswilke.com/dataviz/boxplots-violins.html

[2] https://www.sciencedirect.com/topics/mathematics/highest-den...
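For what it's worth, a highest-density interval is easy to compute empirically from posterior samples. A minimal sketch (the `hdi` helper and the skewed gamma "posterior" are my own illustration, not from any library):

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples
    (a simple empirical highest-density interval)."""
    s = np.sort(samples)
    k = int(np.ceil(mass * len(s)))          # points inside the interval
    widths = s[k - 1:] - s[:len(s) - k + 1]  # width of each candidate window
    i = int(np.argmin(widths))               # narrowest window wins
    return s[i], s[i + k - 1]

rng = np.random.default_rng(1)
# A skewed stand-in "posterior", where HDI and the equal-tailed
# (central) interval visibly disagree.
post = rng.gamma(shape=2.0, scale=1.0, size=100_000)

lo_hdi, hi_hdi = hdi(post, 0.95)
lo_c, hi_c = np.percentile(post, [2.5, 97.5])
print(f"95% HDI:              [{lo_hdi:.2f}, {hi_hdi:.2f}]")
print(f"95% central interval: [{lo_c:.2f}, {hi_c:.2f}]")
```

For a right-skewed posterior the HDI hugs the mode, so it comes out narrower than the equal-tailed interval; that difference is exactly the information a bare "95% CI" hides.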


Sure, there are better ways to actually do it; I was just riffing off the bit in the article.

It is super weird that a field devoted to doing inference somehow just...doesn't when it comes to evaluating their/our own work.


Nice comment, but you should rewrite it in Rust.


Nah man, Rust has overplayed its hand and has been turned into a meme. We need a new Rust that's exactly like the old Rust before it was cool.


Yeah, come on, write it in Nim or don't write it at all.


Nim is too simple and lacks complex mathematics.


Lisp has joined the channel


I tried Rust a while ago (I do mainly Python) but not for long enough to get my head around it (the borrow checker).

Is it now past the peak hype cycle, or do people still love it?


I learned it a while back during some huge hype and put it down for a while after slow, minor, painful success. I recently came back and it feels much improved, and my troubles with the borrow checker seem to finally be something I have overcome. I think it will continue to be loved.


There was a real improvement to the borrow checker when it moved to non-lexical lifetimes. In practice, the borrow checker became much better at reasoning about mutually exclusive borrows across branches.


That beginner-level Rust code made Coq barf. Rewrite your POC in Haskell before reverting back.


I always liked this quote from Alan Kay - "Reinventing the flat tire"


I have a system full of reinvented wheels to keep running (going to try and replace them with more standard solutions).

The dev has now left, but he would always insist that the existing solutions didn't meet every need. There was always one feature that required him to build some over-engineered version of something. "Good enough" was never an option for him.


This takes on a sinister tone in connection with interminable "which X is the best for this project" discussions. Evaluating all Xes would take infinite time because new Xes (JavaScript offshoots, Python package managers, CMSes, IDEs, build systems, you name it) are coming out faster than you can fairly evaluate them, and every person on the project has used at least one X. So the question basically boils down to

1) try Foo because of all the Xes all of us have tried it was the only one which didn't cause manic depression,

2) continue the discussion until the most ruthless/charismatic/stubborn person "wins", or

3) try new shiny Bar, because all the cool kids are using it.

I wonder if this is part of the reason so many projects go for the new shiny thing: in a group of reasonable people a lot of the time all of them will agree that all existing solutions suck.


"Not reactive enough"


"Have you tried to quantify the QoE?"


> they secretly dream about impressing mathematicians

This made me chuckle, and then I read this:

> systems researchers will light up when telling you that they have constructed a system that is twice as fast, half the size, and more powerful than its predecessor

and thought, "Oh God, that is me!" (I'm not in research.) I almost don't want to read the rest.

On a (possibly) less personal topic, I've found that non-computer science researchers who program will dismiss all help with "it doesn't need to be run by anyone else" (which is kind of scary considering the problems with reproducibility) or, if they are open to help but don't understand what you've suggested/written/submitted, will try hard to drop it quietly. I think it's so they don't have to admit any lack of understanding, which is weird - I can barely understand code I wrote 6 months ago, and I've written plenty of Perl in the past, too. The idea that code should be so easily understood that to ask a question would make one seem inadequate strikes me as a fanciful dream.

Edit: don't want to mislead, I'm not a researcher, it just sounds like me :/ :)


> they secretly dream about impressing mathematicians

I read that, chuckled and felt offended because IT ME.

> systems researchers will light up when telling you that they have constructed a system that is twice as fast, half the size, and more powerful than its predecessor

Then I read this and I _also_ thought IT ME.

I am not sure whether this means I have the skills of both sides or the foibles.


In the past I've worked around non-computer science researchers and saw many places where having a programmer on-board would help greatly.

It wasn't so much about understanding, I guess, but the apparent loss of ownership. Like if bits of the research get automated, then it's no longer their work, but the computer's.


They fall victim to the same thing every programmer in over their head rushing to finish does.

It looks to them like if they can just add one more floor to the house of cards it will be over, why bother explaining the whole project to someone else?


> I've found that non-computer science researchers who program will dismiss all help with "it doesn't need to be run by anyone else",

I used to work with bioinformaticians, and often wasn't impressed by a lot of the code that they churned out. But to be fair a lot of the work was just to produce a one off graph / heatmap to prove or disprove something, so most of the time it wasn't so important.

> I can barely understand code I wrote 6 months ago and I've written plenty of Perl in the past, too. The idea that code should be so easily understood that to ask a question would make one seem inadequate strikes me as a fanciful dream.

This was a big learning experience for me. I wrote and maintained a Django based system for four and a half years. When I couldn't understand my own code a few months later it was time to refactor. Ask yourself why you don't understand it and how you would expect it to be if it was written in an easier to understand way. It will save you time in the long run. Ex-colleagues commented that they found my code / database design fairly logical after I had left that job.


I can also provide some input on how to insult different kinds of astronomers depending on the field they work in.

Cosmology: Use the word "cosmology" interchangeably with "astronomy."

Simulations: "Did you include dust?" If they say yes then ask about magnetic fields. Either way, claim that they tuned their physical hyperparameters to achieve their results.

Observations: "Did you follow up with <X telescope at different wavelength>?" Pick one that's especially competitive to request observations, such as Hubble or SOFIA or ALMA, so that if the answer is no they'll feel extra bad about their rejected Hubble/SOFIA/ALMA observing proposal.


Or call them an astrologist. They love that.


> In fact, this is merely an extension of a ploy used by children on a playground: "Oh yeah? I could have done that if I wanted to."

Or by adult professionals on Hacker News.


I had an insufferable coworker who doubled down on that attitude. If he hadn't thought of something, instead of just saying he could have done it, he'd start rambling about why it was a bad idea, and that he was smart for choosing not to do it. His whole attitude was "I know everything, and if I don't know it, it's because it's not worth knowing". Worst part was the non-technical management ate it up because he was so confidently wrong, so people got dragged into working on his convoluted solutions that just happened to always match up with his skillset.


This describes an uncomfortably high fraction of CS people.


And very clumpy. Some places are nearly full, some nearly empty.


So how do you productively deal with this? There’s got to be a way, I just don’t know what it is.

The worst form of this is when they: 1) make poor choices faster than you can catch up, and

2) have a less senior team (in ability, not title) that can’t see more than a couple commits ahead to keep the damage in check.


> So how do you productively deal with this?

Explain it in your exit interview.


I know you're joking, but it's considered impolite to explain things in exit interviews.


I'm not; in every exit interview I offer polite, constructive feedback.


Not if done professionally, positively, specifically, and helpfully.


I have to deal with a couple of people like this and it's a waking nightmare. I'm trying to figure out if it's even possible to mitigate their damage or successfully negotiate with them, or if I just need to change jobs. This behavior is especially bad when it comes from manager / team lead types.


What a lame comment. I could have written one better if I had the time!


What a great comment! I wrote a similar one on Reddit.


What a lame comment. I could have written one better if I had the time!


A rather bold usage of "professionals" there, and for that matter "adult".


Well played sir!


> adult professionals on Hacker News

No such thing.


> Like mathematicians, theorists in Computer Science take the greatest pride in knowing and using the most sophisticated mathematics to solve problems.

This reminds me of some papers that got their proofs wrong by applying fancy theorems. The fact that their results are nevertheless correct suggests that those theorems were added as an afterthought.


If you browse r/MachineLearning, you will see many people complaining about unnecessarily convoluted mathematics that is literally there to appease reviewers and doesn't actually say anything useful.


In my understanding this comes from the fact that much of ML is more engineering than science but wants to be seen (or is reviewed) as theoretical research. So it fits the situation in the article even better.


I agree with you. I believe that many people are trying to emulate papers like Vapnik's SVMs in an attempt to appear ground-breaking because, well, the competition is enormous.

There isn't a clear distinction between the engineering aspects and the theoretical aspects. As the article said, it looks as if we are trying to get approval from mathematicians, so papers become a convoluted amalgamation of different ideas and, more often than not, actually provide the worst of both worlds.


I think much of ML isn't even engineering. Speeding up the implementation of an algorithm is, but the algorithms themselves are mostly of the "something like this seemed to work for problem P, so for problem Q, I tried this variation" type.

Yes, there may be solid math behind it that says “if your problem is of type T, this algorithm will get within Foo of the optimal solution in O(n log n) time”, but the problem is that nobody can tell whether a given real-world problem is of type T. Yet, people happily run the algorithm and if it works, it works.


Can confirm. I once watched an engineer build a beautiful, functional subscription management system. When I asked him some practical questions he said "Elegant programs allow us to code as though the physical machine doesn't exist"

The only problem was that under real conditions his code behaved as if the stack were infinite; you know, like it is in a theoretical computer.
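A tiny illustration of that gap (the linked-list depth counter below is a made-up stand-in, not his actual code): the recursive version is the "elegant" one, and it's the one that blows up on a physical machine.

```python
import sys

def depth_recursive(node):
    """Elegant on paper: measure list length by recursion."""
    if node is None:
        return 0
    return 1 + depth_recursive(node[1])

def depth_iterative(node):
    """Same result, constant stack: the boring loop."""
    n = 0
    while node is not None:
        n += 1
        node = node[1]
    return n

# Build a linked list far deeper than CPython's default
# recursion limit (usually 1000).
lst = None
for _ in range(10 * sys.getrecursionlimit()):
    lst = ("x", lst)

print(depth_iterative(lst))   # works fine
try:
    depth_recursive(lst)      # the "infinite stack" assumption
except RecursionError:
    print("RecursionError: the physical machine exists after all")
```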


> Have you tested this on the chip Intel got running last week in their lab?

This is my favorite, not because I've heard this particular one, but because I've heard this vein of low-effort comment after nearly every talk I've ever seen. "Did you consider this specific aspect of niche-thing-only-I'm-working-on?" Some people seem to always look for the opportunity to show how much they know about a topic rather than actually discuss in good faith the topic at hand.


> "Did you consider this specific aspect of niche-thing-only-I'm-working-on?"

I've seen 2 kinds of people who usually make this comment. Some do it because niche-thing is the one they currently know best, and they have to make some comment, so they ask about what they know. The others are the more interesting kind: they ask because they want to use niche-thing in some new way, so if you did by chance consider it, they are very interested, and if you didn't, it may be an opportunity for them.


This is great fun, but did anyone else think the suggested insults for experimentalists were very weak? The Xerox Parc one is good but most experimentalists would love a question about bottlenecks! And who cares about testing on Intel's latest chip? I suggest a proper insult would suggest an easily avoided bottleneck or unnecessary point of failure.


I feel like experimentalists have changed quite a bit in 20 years as big distributed systems have become more common. A good question might be “Sure your performance scales better than other systems but how does it compare to a single core, or a single big machine?”


This essay seems to have been published 20 years ago. In the modern version, the experimentalists use terms like "deep learning" and "neural nets" and the theorists use terms like "fuck all this deep learning bullshit"


Worth noting that this seems to be by Douglas Comer, the guy who wrote the book on TCP/IP.

https://en.wikipedia.org/wiki/Douglas_Comer


As an engineer supporting researchers I ran into a peculiar problem. If I suggested an idea they often HAD to ignore it, because they couldn't claim it as their own. Especially the grad students who were trying to get a PhD by coming up with a unique idea of their own, but also the profs because it would be an admission that they "the experts" didn't understand what they were doing. So I had to lead them to come round to a perspective so they could think it was their own idea and hence be comfortable adopting it. A lot of time was wasted and ideas thrown away before I figured this out.

Also, if you are ever involved in academic research and you hear the words "that's just engineering" be very wary. It's a strong indicator that: 1) the idea is not practical, 2) they don't understand what they are doing, and 3) you are going to have to make it work, often as an unacknowledged side project "that should only take a few days". I often spent years figuring out these "that's just engineering" side projects. Even worse, we'd build a "proof of concept" that ignored all the engineering, and then try to get other researchers or companies interested in the system as though it was complete.

Yet another thing I discovered is that for engineers one of the most important words they use in discussions is "no". Listen to two engineers discuss a problem and almost every other sentence will start with "no": "no, that won't work because...". It's an important part of how they figure out how to make things work, by figuring out what does not work.

CS researchers, however, take it as an insult, as in "no, you're an idiot and here's why". Perhaps because lead researchers are treated as infallible by their grad students, they get the idea they can't be wrong, so they are not used to being contradicted. It's a huge problem in getting anything done. It can take months or years and a lot of wasted work to lead them around the circular path back to the original bad decision and try to get them to reconsider it (they are often very proud of it, which makes it even more difficult).

Saying "no" is somewhat like the mindset needed for computer security work: you have to be able to attack systems or ideas from a ferocious point of view, seeking any weakness, without feeling that attacks on ideas are attacks on you. It's one of the most productive parts of discussion, and it is not taught directly, only by example. Many CS researchers are so intent on building their reputations that they will not tolerate it, are highly insulted by it, and will defend their ideas to the point of absurdity. Framing a contradiction as a question helps somewhat, though you still have to be careful to frame the question so it only leads to the contradiction rather than stating it outright.


I really think this misses the mark. Grad students don't treat their prof as infallible - at least not in good labs. In most labs, the grad students are the ones actually doing the work and research. The prof is just the marketing man, and a good prof in CS will understand that they're the clueless marketing man. It's not really feasible for them to keep up with writing code on top of the whole grant-writing (and networking with companies like yours...) game.

Also I'm surprised that you had academics shopping "products" to you. I've definitely shopped ideas to companies before, but always with the explicit understanding that I'm doing "research" - i.e. you're spending a bunch of money on something that may go nowhere, and you're not going to get a product out of it.

That misunderstanding often makes the conversations end right there.

As a counter, I've found that many engineers will shut down ideas before you can even get started working on them because "no that won't work". It can be very frustrating, as the technical arguments they offer are stiff as a board. And not always as technical as they think.

Granted I work with GPU hardware research, so it could just be a lack of products in this area. Maybe viz? ML especially probably? I get the impression ML profs are a load of shit tbh.


> I get the impression ML profs are a load of shit tbh.

The field has exploded over the recent years and the new people are somewhat filtered for the ability to hype up hiring committees.


What's worse is when the architect says: "that's an antipattern".


> Even worse, we'd build a "proof of concept" that ignored all the engineering, and then try to get other researchers or companies interested in the system as though it was complete.

This is far from unique to academic circles...


Career science / academia has these problems in general. But probably also industry as well. (Like who takes credit for what, if it's someone else's idea, it must be axed etc., if it comes from the boss' favorite then it must not be questioned etc)


Regarding feeling offended at hearing "no", I think this has little to do with researchers/academics and more to do with managers and people who have to do it.

Generally, I would argue most researchers are very similar to what you attribute to engineers, e.g. they often answer "no" first (I have heard that commented on by outside observers multiple times). In fact researchers (academic or otherwise) are trained to find flaws in systems quickly, which typically involves saying "no that doesn't work because ...".

The flip side of the coin is that managers (and Professors or group leaders are essentially managers) don't like to be told that something can't be done, especially if it was their idea. This is the same in many industry settings. So they really dislike being told "no".

The irony is that at some point in the transition from researcher in the trenches (PhD, postdoc ...) to Professor/group leader, many start to think that the PhD students/postdocs say no because they don't want to do the work. It's really weird, because pretty much everyone in academia is strongly self-motivated, as they should know from their own experience.


> "the experts" didn't understand what they were doing. So I had to lead them to come round to a perspective so they could think it was their own idea and hence be comfortable adopting it.

This also applies to industry. This is basically managing up 101 for engineers. If the boss thinks it's their idea, it'll get approved and supported. If they think it's someone else's idea, depending on environment, it'll get blocked or sabotaged.


My degree course is split across my university’s Computer Science and Engineering departments, so I often get to hear both of the discussions you described. I would say that CS supervisors talking to their students are more direct in saying that the ideas that they come with won’t work or that their mathematics is incorrect, whereas engineering professors seemed to be a bit more subtle and guiding where their students falter, as though they perhaps had the exact same ideas when they were younger and want to gently discourage the same mistakes. That’s not to say Engineering professors wouldn’t tell you you’re wrong, it just feels that the knowledge gap between the experienced and inexperienced engineer is larger than that for the computer scientist, and so there is more room for small errors to grow into bigger problems if ignored.


> whereas engineering professors seemed to be a bit more subtle and guiding where their students falter, as though they perhaps had the exact same ideas when they were younger and want to gently discourage the same mistakes.

It's because once something fails, in engineering the work doesn't stop. You have to root cause it and understand the failure. That's a valuable exercise in itself.


> Yet another thing I discovered is that for engineers one of the most important words they use in discussions is "no"...

This paragraph is a great insight - knowing how someone else is going to receive your (as you perceive it) constructive criticism cuts down miles to the destination of achieving the common goal. Speaking so your listeners will be able to "hear" you without perceiving an attack is critical in cross-discipline groups.


> if you are ever involved in academic research and you hear the words "that's just engineering" be very wary

An eminent computer scientist (Jeff Mogul I think) once pointed out that computer systems research is basically engineering.

I concur with this assessment and suggest that it's something that systems researchers should be proud of. After all, engineering means you're actually designing and building something that could potentially work.


> If I suggested an idea they often HAD to ignore it, because they couldn't claim it as their own. Especially the grad students who were trying to get a PhD by coming up with a unique idea of their own, but also the profs because it would be an admission that they "the experts" didn't understand what they were doing.

That's what authorship is for. What's the point in supporting academia if you can't get authorship?


I got authorship. The problem is that if I came up with the idea I'd have to be listed as the lead author, and that didn't seem to go over well. If I just contributed to their ideas they were fine with that. It varied per prof, some were happy to work on ideas from anyone because they saw the development of an idea as the important part, some were very strict about not accepting primary concepts that they or their grad students didn't come up with. Some listed a lead author, some put authors in alphabetical order. In one case I discovered the cause, we'd been brainstorming about new ideas for a while and I suggested a combination of several of those ideas that seemed interesting. They all got flustered and pushed the idea aside. I learned quite a while later (after we'd actually built that system) they had filed a patent a few days before for almost the identical idea and decided to leave me out of it. It made building the system awkward because they were continually coming up with alternate explanations for why this wasn't the same as the idea I had presented (and they had already also had and patented). I didn't work with them much longer.


> you have to be able to attack systems or ideas from a ferocious point of view, seeking any weakness, without feeling that attacks on ideas are attacks on you

This is how you should work in science anyways: "we tried to refute a theory, failed at that and are now forced to assume that it is valid to some degree."

Does CS have a problem with that in general?


I really wanted to like this writing, but it seems the author doesn't know what a decision graph is.

It would have been better concluded and summarized with a decision graph.

:)


Decision graphs are too easy to understand. That's why we avoid them for machine learning.


Richard Feynman once made a comment to Danny Hillis (Connection Machine) on Computer Science. It amounted to, "What is it you guys DO!? I thought of all that stuff during the Manhattan Project!"


> I once sat through an hour lecture where someone proved that after a computer executed an assignment statement that put the integer 1 into variable x, the value in x was 1.

As someone who's mostly self-taught in CS (EE degree), I'd love to listen to something like this if anyone knows of a recording somewhere.


This is very true. In all the peer reviews I've sat through, you can see these two types pop up again and again. The deft presenter will play the counter-type argument (and knows when to switch between the two viewpoints):

"There's no mathematical rigor" --> "That's a strength, it's simple, performant, and therefore easy to verify for our use cases"

"This is unnecessarily complex, and anyway you've ignored the constants. Our N is small." --> "Asymptotic performance just let's us sleep at night knowing it'll never be that bad. Here's a plot of the predicted and actual cost over our sized N's, you can see they agree well".

Perhaps the moral of the story is: be an engineer for your use cases, and a theorist for scaling.
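As a toy version of that "predicted vs. actual" defense, you can count a merge sort's comparisons and set them beside the n·log2(n) prediction (the instrumented sort below is just an illustration I wrote for this, not anyone's production code):

```python
import math
import random

def merge_sort(a, counter):
    """Merge sort that tallies element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

rng = random.Random(0)
print(f"{'n':>8} {'actual':>10} {'n*log2(n)':>10}")
for n in (1_000, 10_000, 100_000):
    counter = [0]
    merge_sort([rng.random() for _ in range(n)], counter)
    # "predicted": comparisons grow like n*log2(n), constant ~1
    print(f"{n:>8} {counter[0]:>10} {n * math.log2(n):>10.0f}")
```

The measured counts track the prediction within a modest constant factor across two orders of magnitude of n, which is the whole plot in three lines of output.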


I particularly like this because it applies across STEM disciplines. One could easily apply this in behavioral neuroscience (my field) and get quality results. Although as both a theoretician and experimentalist I’m not sure which would hurt me more: isn’t your reinforcement learning model just a recapitulation of Pavlovian conditioning? Or isn’t your experimental design problematic because it doesn’t make use of [insert favored alternative methods of insulter]. Probably the former. Ego often gets involved with theory.


> Like mathematicians, theorists in Computer Science take the greatest pride in knowing and using the most sophisticated mathematics to solve problems. For example, theorists will light up when telling you that they have discovered how an obscure theorem from geometry can be used in the analysis of a computer algorithm. Theorists focus on mathematical analysis and the asymptotic behavior of computation; they take pride in the beauty of equations and don't worry about constants. Although they usually imply that their results are relevant to real computers, they secretly dream about impressing mathematicians.

This is true about all fields that involve substantial amount of mathematics, e.g., physics and economics. In physics (and to a lesser extent, economics) you see a sort of "rift" between experimentalists and theorists very often, with both groups thinking that they are better than the other. Even among theoretical physicists, you tend to see this sort of petty rivalry: high-energy theorists thinking that they are better because they are after the fundamental laws of the universe, condensed matter (both hard and soft) theorists who feel that they are the better lot since their theories can be compared with experiments, hard-condensed matter theorists often look down upon soft-condensed matter theory as being a "classical" discipline invented to bring in more grant money, etc. While there's some truth to all these beliefs, rather than engage in this petty rivalry, it would actually do a lot more good if people just did honest work in their own fields.


I would actually love the non-joke version of this article about how to constructively criticize, especially when you have to do it via text.



Your meaning is perfectly clear (and I’d also like that article), but for some reason 25% of my brain is stuck reading “text” as “text message” and imagining someone banging out a critiquing response in T9 mode on some old Nokia phone.


Like this:

LOL UR PPR SUX JK ;)


"You wouldn't know an IF from a THEN!"

"Oh look, another dev surprise."

"They improved the code... To run in O(2^n) time."

"I'm sure it worked in dev."

"I can't believe it passed QA!"


This is pretty awful, as in mean. Why would you deliberately want to insult someone who hasn't done anything to you? If it's getting them back sure, but as sport? Just cruel.

  Despite all the equations, it seems to me that your work didn't require any real mathematical sophistication. Did I miss something? (This is an especially good ploy if you observe others struggling to understand the talk because they will not want to admit to that after you imply it was easy.)
  Isn't this just a straightforward extension of an old result by Hartmanis? (Not even Hartmanis remembers all the theorems Hartmanis proved, but everyone else will assume you remember something they have forgotten.)
  Am I missing something here? Can you identify any deep mathematical content in this work? (Once again, audience members who found the talk difficult to understand will be unwilling to admit it.)

  Wasn't all this done years ago at Xerox PARC? (No one remembers what was really done at PARC, but everyone else will assume you remember something they don't.)
  Have you tested this on the chip Intel got running last week in their lab? (No one knows what chip Intel got running last week, but everyone will assume you do.)
  Am I missing something? Isn't it obvious that there's a bottleneck in the system that prevents scaling to arbitrary size? (This is safe because there's a bottleneck in every system that prevents arbitrary scaling.)
Reminds me of low effort "comments" on Show HNs. Now I'm thinking that maybe people were deliberately trying to be insulting. But why...? Especially when people are being vulnerable sharing their work....


Whaat? Your comment is invalid. This is totally different; this is not an insult, it's my reaction. Don't you know the difference?

Reaction to this guy issuing instructions on how to be cruel.

Now this de_nied guy, objecting to someone calling out bullying, uses the schoolyard bullying tactic of repeating your words back to you. What's wrong with you? Unless you're a bully... Or is de_nied the bully who was bullied?


The article is supposed to be a joke.

If the author did not intend it as a joke, you can conclude that the author has psychopathic tendencies.


Right... I just didn't read it that way, but I get that some people could. Thanks for pointing it out. It's weird that I didn't think there was anything jokey about it; I can definitely see how this type of thing could be a joke, I just didn't see that in it at all. I guess it says more about me and where I'm at that I couldn't see it as a joke... I should probably lighten up.

I read it like the guy meant it, because he felt unfairly sidelined and had some departmental spat with computer scientists. I don't care about that spat; I just thought he was being mean. Maybe it is meant as a joke... but they say in every joke there's truth, and a lot of comics are the angriest people inside. Humor is their sublimation.

Edit: I find this one funnier:

https://www.cs.purdue.edu/homes/dec/essay.jargon.html

But it definitely has an edge of bitterness within the CS department, as I suspected initially. The themes I read into the first essay are clearly present in the second. I think I was right that this guy has an axe to grind. Just because he writes some supposedly satirical essays doesn't mean he isn't a toxic person inside, or possibly out...


The goal of the typical developer is to add as much complexity as possible while ensuring that the system works correctly 99.999999% of the time such that failure will not be detected for a few years - Just long enough for the project to be scrapped and rewritten by the next generation of developers.


Or you could go full guilt-by-association:

"Isn't that same approach suggested and funded by Jeffrey Epstein?"


This article had the most volunteer translations I've ever seen (26). I wonder why.


Well the Hungarian one feels like somebody took every single word in the essay and replaced it with the first result from a rather small dictionary.

Google Translate produces a better translation (which is of course still awful), but I have a feeling that the one in question was made by an earlier version of Google Translate as well.


Criticism should not be insulting.

The primary title of this document is a thinly veiled attempt at shrouding the author's true intent. It's clear from their final words on the topic that they mean to insult, not criticize.


The full title:

> How To Criticize Computer Scientists
> or
> Avoiding Ineffective Deprecation And
> Making Insults More Pointed

Am I missing something? Isn't your failure to read the full title a result of your desire to insult the author, rather than to criticize them? Wasn't this done already at Xerox PARC?


Despite the seemingly original nature of your comments, is there a deep satirical result here that I am missing? Did you decide to try a second approach because you had insufficient results from the first?


Agree. Articles like this are good fun (another favourite of mine is How to Write Unmaintainable Code [0]), but misleading titles are still poor form.

For good general advice on productive criticism, I recommend How to Criticize with Kindness, by the philosopher Daniel Dennett. [1]

[0] https://github.com/Droogans/unmaintainable-code

[1] https://www.brainpickings.org/2014/03/28/daniel-dennett-rapo...


The author's primary field is Applied Criticism, so this is to be expected.


If anyone wondered what a quality email forward looked like, this could be one.


Sounds like it is also applicable to regular, non-research computer scientists.


The Complete Guide to Writing Stack Overflow Comments


"Attacking Crossover Work" -- Ouch.

