Software has its own Gresham's Law (uni.edu)
100 points by johndcook on June 15, 2015 | 36 comments


The same dynamic shows up in a number of other systems. For example:

The Peter Principle. People who are performing well in their current role get promoted out of that role, until they reach their level of incompetence. At that point, they get stuck. Eventually the whole organization consists of nothing but incompetent people.

Gerrymandering. A political incumbent who redraws his district to include more supporters will last longer than one who doesn't. Eventually, all districts are gerrymandered, and all incumbents are virtually unassailable.

Vendor lock-in. A company that promotes consumer choice is easy to switch away from; one that promotes lock-in, by definition, is hard to switch away from. Eventually everyone will be buying from vendors they're locked into. (Unless the former company's products are much better - this was the strategy Google pursued until ~2011. Note, though, that they achieved this through some measure of employee lock-in.)

Basically the only requirement is that "good" solutions are more liquid than "bad" ones. Many, many systems exhibit this property.

You could look at most of modern society as a way to generate feedback loops on top of this dynamic to mitigate it. For example, an organization full of incompetent people is likely to go out of business and be replaced by a new one, often one founded by the very people who were driven out of the original. (See: Disney => Pixar, Apple => NeXT, Shockley => Fairchild => Intel, Netscape => Firefox => Chrome.) Similarly, a company full of bad, hard-to-replace code either embarks on a complete rewrite, or they're vulnerable to a startup without that baggage.


'Basically the only requirement is that "good" solutions are more liquid than "bad" ones' - this could be used as a criterion to identify both good and bad stuff


I look at code as biology. It competes in its environment. As it becomes more complex it can fight off all competitors.

Code grows toward irreplaceability. That's why we are surrounded by code that is hard/impossible to replace.

We shouldn't be surprised if code feels like Kudzu after it has been around for 10+ years.


Interesting that nobody has brought up the topic of computer viruses yet. People tend to grasp the evolutionary parallel better with parasitic software than with symbiotic software.


Exactly this happened with a lot of Smalltalk projects. The ones which were structured such that the Smalltalk Refactoring Browser parser could be used to accelerate porting projects -- usually the better architected and factored codebases -- could leave Smalltalk for other programming environments. Even if syntax driven code translation wasn't used, the better factored projects were still easier to port.

(And to head off the usual criticisms of automated code translation, this tends to work well, when the project has well adhered-to coding standards and patterns, so that idiomatic code in language A can be matched and translated to idiomatic code in language B. In other words, if there is a consistent use of project-level idioms, it's easy to do good idiomatic translation at the language level. The other necessary ingredient is a powerful parser+meta-language which can fully express the capabilities of the source and target languages.)


And this is why I hate systemd: its primary design criterion, it seems, is to be as difficult to replace as practically possible -- in stark contrast with sysvinit, OpenRC, etc. Once it's suitably entrenched it can simply be declared "the standard" and then Linux systems without systemd will fall out of compliance and hence out of support by the greater ecosystem.


I disagree with Sustrik's assumption that software drifts to become a collection of non-reusable components. His observation is an interesting theory but I think it breaks down because it could only really be a "law" if it is true that reusable (does he mean replaceable?) components are always switched out for irreplaceable ones.

Most projects I've worked on start out as hairy messes, and if they are cursed with success, new requirements will eventually justify the cost of replacing the irreplaceable. Well-designed components aren't quickly switched out for poorly designed ones because developers don't want to let that happen. It's not safe to assume that poorly written components are better suited to survival... to the contrary, they are the most likely targets for removal in the first place.


It doesn't require that replaceable components are always replaced with non-replaceable ones. The point is that over time, a replaceable component is likely to be swapped out multiple times until, probably by accident, it is replaced with a component that is not easily replaceable, at which point it becomes stuck (e.g. glue code is written specifically to interface with that component and would be difficult to rewrite for a replacement).

In mathematical terms, you can imagine it as a Markov chain describing the component currently slotted in, where non-replaceable components are the absorbing states.
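That absorbing-chain picture is easy to simulate. Here is a minimal sketch (the transition probability is made up purely for illustration): each step, the component in the slot may be swapped; with some small probability the replacement happens to be hard to remove, and the slot never transitions out of that state.

```python
import random

def simulate_slot(steps, p_absorb=0.05, seed=None):
    """Simulate one component slot as a Markov chain.

    Each step, the slot either keeps/swaps an easily replaceable
    component, or (with probability p_absorb) picks up a component
    that is hard to remove -- an absorbing state it never leaves.
    """
    rng = random.Random(seed)
    state = "replaceable"
    for _ in range(steps):
        if state == "stuck":
            break  # absorbing state: no transitions out
        if rng.random() < p_absorb:
            state = "stuck"
    return state

# Even with only a 5% chance per swap of picking a hard-to-remove
# component, nearly every slot ends up absorbed over a long horizon.
results = [simulate_slot(500, seed=s) for s in range(100)]
print(sum(r == "stuck" for r in results), "of 100 slots stuck")
```

The analytical version: the probability a slot is still replaceable after n swap opportunities is (1 − p)^n, which goes to zero as n grows, so "stuck" is where every slot eventually ends up.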


If the easily replaceable component is good, there will be no reason to replace it at all. Components aren't getting replaced willy-nilly.


Yes, but easily-replaceable components are still going to be replaced more often than difficult-to-replace ones.


Zero * n is still zero.


You could see this in action with GCC if I remember correctly; they purposely made it a monolith so that it was harder for proprietary plugins to be added; therefore only GPL'd and LGPL'd components would be worked on.


The Gresham's law analogy is a crock. Gresham's law happens because the government compels merchants to accept the bad money as being equivalent to the good, but it can't effectively compel customers to spend the good money.


Yes, without government coercion, it's simply Thiers' law: good money drives out bad. So if software tends to degrade over time, what's playing the perverse role of preventing people from switching to better modules/software?


There are of course all sorts of ways that a public API can become an entrenched standard (x86, Windows, PHP, JavaScript, C++, HTML, Java). Collectively we call it "lock in".

The thing is, the software wouldn't have been adopted in the first place if it didn't fulfill some need. Evolutionarily speaking, it's better for the cost of removing to be high and the benefit of keeping it to also be high. The most evolutionarily fit would be software that's essential, yet complicated, obscure, and unsexy, so it doesn't attract attention of idealists who will put in sufficient resources into rewriting it.

Even better if it attracts passionate advocates who will fight removal.


OpenSSL immediately comes to mind, but I don't think JavaScript really fits the model - it is still evolving into a better language. In a very real sense, ECMAScript 7 is an entirely different language than ECMAScript 5.


Managers? They don't stop bad code that still solves the problem but they do stop refactoring and other improvements that aren't directly tied to a customer's issue.


...or the customers themselves, who are unwilling to pay for any change for which they cannot gauge the impact upon their user experience. Sometimes the management explicitly says, "We know this is bad practice, but the customer will not pay for good."

This is why cheats like the "speed up loop" appear. If developers can justify any change to the code on the basis of perceived performance improvements, there is an incentive to intentionally degrade initial performance in an easily reversible way. Removing the speed bump is then bundled with the developer-desired refactoring.

The practice is ethically dubious, but then again, so is allowing the ignorant to command an expert in his own area of expertise. When the management has no understanding of technical debt or the software life cycle, some may choose to simply pre-pay some of the pension plan for the product's maintenance phase out of the development budget, rather than waste time explaining things.

The management life cycle plans for some software products are very much like a prospective father who decides that his child will be born in 4.5 months, because he'll assign two mothers to the gestation project instead of just one. It will learn calculus by age 4, will be fully grown by age 8, and will be more beautiful than an airbrushed model. It will then stay young and productive forever, while working 24/7, without complaint, for the benefit of the family. When the kid dies from cancer of everything at 12, the dad blames the doctors, who did exactly as he asked, while repeatedly warning him that deviating so widely from the established parameters for life is certain to end in disaster.

You cannot command a doctor to raise the dead. You cannot command a lawyer to win the case. You cannot command the contractor to build the structure as both "safe from EF5 tornadoes" and "with lots of windows". You cannot order the scientist to find significance in the experimental data. And you can't tell a software team to build the "do what I want button" in a short amount of time, with a tiny amount of money, with great quality.

The basis behind Gresham's Law is that the state fixes one variable for market value and intrinsic value is left floating. People retain the items with the highest ratio of intrinsic value to market value (good money), and spend those with the lowest (bad money). Everyone does it, so the worst money circulates fastest.

In the case of software, the quality is often fixed at "meets customer requirements" and the cost is left to float. People with the lowest cost to quality ratio are retained, and those with the highest are let go. If the cost variable were fixed instead, higher-quality software would dominate.
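The ratio argument above can be made concrete with a toy example (the coin names and values are invented for illustration): face value is fixed by decree, intrinsic value floats, and everyone spends lowest-ratio first.

```python
# Hypothetical coins: face value fixed at 1.0 by legal-tender law,
# intrinsic (metal) value left floating.
coins = [
    {"name": "full-weight", "face": 1.0, "intrinsic": 1.0},
    {"name": "clipped",     "face": 1.0, "intrinsic": 0.8},
    {"name": "debased",     "face": 1.0, "intrinsic": 0.3},
]

def spend_order(coins):
    """Rational holders spend the coin with the lowest
    intrinsic-to-face ratio first and hoard the highest,
    so the worst money is what actually circulates."""
    return sorted(coins, key=lambda c: c["intrinsic"] / c["face"])

print([c["name"] for c in spend_order(coins)])
# The debased coin circulates first; the full-weight coin is hoarded.
```

Fix the other variable instead, as the comment suggests, and the sort flips: with cost pinned, the highest-quality item is the one retained.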


> You cannot order the scientist to find significance in the experimental data.

Heh.

https://en.wikipedia.org/wiki/Multiple_comparisons_problem


The answer to that question is probably full of insights for Product Designers trying to improve the retention metrics of their products :)


This makes sense at the level of programmers as well. A programmer that does their job well is easily replaceable because the code they write is easy to maintain and so they will be replaced by someone who is not as good of a programmer and writes less maintainable code.


Speaking through my long white beard, I would say this is not trivially true. If we follow what the article says and say this is about survival in a particular niche, then yes, Programmers who are good at a specific job will eventually be replaced at that job, while programmers who are terrible at that job but not obviously so will tend to stay in that job.

Thus, if you want to maintain the same piece of the same legacy application indefinitely, be terrible at it. Whereas, if you are good, do the job and move on. Whether that means moving upwards, or to new pastures, or embracing new technologies, being good means constantly renewing yourself.

The caution in all of this is that if you fail to be good, not only will you be irreplaceable, but you will also be immobile. So, either polish your skills on a regular basis, or polish your Swingline stapler.


This made me recall Meilir Page-Jones' book "Practical Project Management" from way back...

"Second, employees who are truly competent and are eager to make a genuine contribution to the department soon resign from a mediocracy, leaving behind them the dross of nonproducers and internecine warriors. I term this effect the Inverse Gresham’s Law: A mediocracy hoards mediocre people and drives good people into general circulation."

http://www.waysys.com/book-excerpts/ppm-ch15.html


Good point. This law is talking about "slots" (a.k.a. jobs in your comment), not the things (or people) that are slotted into them.


It's more complicated than that. People who suck at their job can also be more replaceable, and people who are great at their job can be less so.

To me what's more interesting is sociological studies of business promotion. I took some sociology courses at Cornell, and though I don't remember the exact studies, I understood the gist: businesses can generally be understood to promote like this:

First, promote whoever is (roughly) more senior: your new IT head is more likely to be Alice, the Senior Developer who has been here for 5 years, than Bob, Carol, or Dennis, the Developers who have each been here for 1 or 2.

If those are equal (e.g. Alice declines the promotion or reveals that she's also about to accept a different job), promote whoever is more replaceable. Bob, who is an amazing developer who has been stuck maintaining a big hairball system at the behest of a fussy client who constantly asks him to make little tweaks, is totally out of the running, even if he's the most qualified in other respects: we can't promote him because nobody can easily take over his old job.

Third, promote whoever "feels" better. This is a complex agglomeration of personal marketing, actual leadership instincts, and even factors like preferring someone who is worse at their job because the other person seems to have "found their niche". Carol, a "rockstar developer" who can pull together an awesome prototype in a week, doesn't stand a chance if Dennis, a more hum-drum, average programmer type, has higher-visibility projects, a good rapport with others on the management team, and once fought valiantly, if unsuccessfully, to keep a key client. He's way less talented than Carol, but he "feels" more right for the job.

Another detail from that sociology course was studies of social power. People who were viewed as "powerful" in businesses were usually not the best-connected, but the most strategically-connected. That is, if your social graph can be mostly factored into two or three subgraphs which are well-connected internally but not well connected to each other (e.g. different departments that don't talk much to each other), then within a subgraph, the node which connects to the other subgraphs will feel like the most "powerful". Social power is about connecting subnets. The person with the most social power in IT is not the person who connects best with everyone in IT, but the one who talks regularly with people in Client Services, Marketing, and Accounting: they're "powerful" because someone comes to them saying, "hey, I have an accounting problem with my latest paycheck, do you know who I can talk to about that?" and they can immediately say, "Well, you normally want Erica but she said she's swamped this week, let me see if Flynn will look into it for you, he owes me a favor anyway since I solved a database problem for him."


And that last paragraph explains why you want that to be true. However, if your company reward system only rewards power and position then that is where the problem lies.

Your developer who does a great job maintaining a hairball should be rewarded with life-boosting things: paid conference trips, a workspace of his/her choice, a book allowance, training courses of their choice, etc.

Maslow's hierarchy of needs should guide the reward system, not Machiavelli's Prince.


I think this is not so much a "law" as a failure mode. It is up to the programmers, objectives, time constraints, and economic considerations whether a concerted effort is made to increase code quality or to go for the "quick fix" at the expense of long-term maintainability.


Anything that changes continually with a bias toward preserving existing structure ends up looking like biology in some way.

When people add to existing methods rather than creating new methods, or add to existing classes rather than creating new classes, we end up with that organic feel. Fruit starts green, grows ripe, and ends rotten.

The fact we need to face is that this is what people do naturally. It's not an accident; it's more like a behavioral-economics incentive.


I'm currently tending a 16 year old Java codebase, with very little in the way of maintenance in the years since it was written.

"Software over time tends towards monsterism." is apt in my mind.


Any constraint not imposed by a compiler is a degree of freedom that will drive a system toward complexity. It takes extraordinary effort and discipline to avoid this, especially over the long term. (One way of thinking about architecture is as defining which degrees of freedom a system is allowed, so that when you need to add a field to a form or implement some feature, the architecture is what maps that change to individual concrete artifacts.)


With the amount of code developed in open git repositories, surely it is possible now to quantify these observations? Bring hard facts to the discussion.


That's a great point, but the amount of analysis seems huge. Also, what exactly would you quantify? (And the criteria differ per programming paradigm.)


What about looking at evolutionary biology? Those guys already have some methodological apparatus in place.


systemd anyone?


Thank you. You beat me with that one :)


Gold is never "good money".

It was a good material for making tokens back when making hard-wearing notes was a virtual impossibility.



