Google is failing me right now, but I remember Kent Beck explaining why he used the term "extreme" in XP. It refers to things like extreme skiing. You can helicopter to the top of a mountain and do crazy things, but those crazy things can get you killed. Extreme skiers avoid getting killed by reducing risk. You break down all of your movements to their absolute basics and you perfect them. Then when you are skiing, you stick to those basics, performing them almost perfectly time after time. You never do anything extraordinary. By doing this you de-risk each movement to the point that you can chain them together and do something astonishing. In reality, though, extreme skiing (and extreme programming) is simply the monotonous repetition of basic skills executed to near perfection.
To me, this is what "agile" means. It means boiling it all down to a small set of skills that you can perfect and repeating them over and over again without exception. Because you are never taking any risks, the overall project can appear to be insane, but still be accomplished.
If you are negotiating what you are doing by saying, "maybe we can get away with X", or "maybe this will be good enough", then you will not be able to achieve this kind of agility.
And before someone asks: yes, the code will still not be perfect in the end. Your goal is the perfection and de-risking of your actions, not your artifacts. It is a subtle but important point.
I think part of the "Extreme" was also to "turn the practices up to 11" - e.g., Code Reviews are good? Then we'll have continuous code reviews through Pair Programming. Short Development Cycles are good? Then we'll have cycles on the order of minutes or even seconds (Red/Green/Refactor). Customer Involvement is good? Then we'll have them actually sit with the dev team. Etc.
No risk, no experiments, no innovation and getting caught in a rut of mediocrity and boredom.
I love experimenting and taking chances with code and ideas. Yeah you end up spending more nights and it might take more time or it might blow up in your face but what's the point of being a programmer if you can't push limits.
This is actually a good point. In XP this is one of the reasons you do spikes. Often you will throw away all of your rigour when you do a spike and just see what comes out the other side. Then you throw away the code and reimplement it using your rigour. That way you get the best of both worlds with a small penalty of having to do some rewrites. It takes a considerable amount of discipline, though, to throw away working code when people are screaming at you to deliver ;-)
You can certainly experiment and innovate. Nothing about agile stops you from adding an issue to spend half a day investigating the right tool to solve a problem, or implementing something two different ways to determine the best one. But by explicitly defining that objective and blocking out time to do that you de-risk the development process related to it.
These "extreme" processes force you to define the motions (issues, work tasks) and allow you to practice getting them done using the same process over and over again.
"maybe we can get away with X", or "maybe this will be good enough"
I think it is important to ponder what the best (most efficient, best-matching, ...) solution to the user's problems is. In your definition, agility sounds like the absence of pragmatism.
I've tried about 3 times to respond to your comment because I believe it deserves a response. Alas, I am not yet expert enough to write what needs to be said in a format smaller than a book ;-)
Hopefully a few clues will lead you to a better understanding of what I am trying to convey. There is a difference between the artifacts you are building and the practices you are using to build them. When you are pondering different solutions to the user's problems, you are pondering about the artifacts that you wish to build. In such things, you must be as free as you can be.
On the other hand, the practices you use to build those artifacts must necessarily be more strict. If you use a haphazard approach to choosing your practices, you will not be able to refine them. If you choose complicated practices that require large amounts of coordination with other people, you will likewise not be able to refine your practices. The goal, IMHO, is to simplify your practices and maximize your facility with them.
There isn't just one way to write software. You need to choose practices that will work for you and your team. The choice of those practices is of paramount importance to the success of your team. It must be things that you are good at and that you can coordinate efficiently on. There isn't a one size fits all. Having said that, you must be very strict about what you are doing because (at least in my experience) most practices are not compatible. Even very subtle changes or misunderstandings can cause various practices to misalign and backfire.
So far, what I have said is not specific to "agile" processes. To talk about agile in a public forum is difficult because for many of us old guys the term has been diluted considerably. So keep in mind that this is my opinion (a non-famous, random guy on the internet) and there will be luminaries that disagree (possibly violently ;-) ).
I believe that an "agile" process is one in which the main artifact (the software that you are delivering) maintains at least a constant potential for modification over time. Your goal is to maximize throughput over time. In my experience, you can actually accelerate throughput over time with the right team/practices. I will steal Ron Jeffries' term, "hyper productivity", to refer to this condition.
To put it more clearly: at the beginning of the project, adding a feature is easy. If you are "agile", then a month or a year from now it is just as easy. If you are "hyper productive", then a month from now it is easier to add a feature, and a year from now it is much easier.
My assertion (which I can not back up in such a small space) is that if you are inconsistent in applying your practices, then you will never reach the level I call "agile", much less "hyper productive". Some people might call this kind of inconsistency "pragmatism". I do not know if you would be one of those people. I will say, though, that one of the reasons I have put such an effort into writing this response is that I recognise your name from other postings you have made and think that it is not a wasted effort ;-)
I have experienced what I refer to as "hyper productivity" on a few projects. It is magical. I know of only one way to get there. In the 30 years I've been in this industry I have never seen a flexible approach to practices achieve it. That doesn't mean it is impossible, but I am doubtful.
Thank you for taking the time and effort to respond. I appreciate that.
I agree with the notion that you need to be free in choosing what you want to build. I like the notion of why, what, how in http://abbytheia.com/2016/02/07/why-what-how/
So we should be free in what we build, but the how should be influenced 1) by the what and 2) by your team's experience.
The why is key to know what to build. Without knowing why, pondering about what is pointless.
The constant potential for modification in an efficient manner is indeed key to software development and very difficult to achieve, in my experience. Besides concentrating on small things, you need to watch and maintain the architecture of the system. Alas, there is a point where you need to change the architecture. I think it was Martin Fowler who found that, at a certain point, an architecture which was the right fit at the beginning can no longer be maintained in a reasonable way and has to be stepped up.
I would be interested in what combination of practices works for you (even if they do not apply to my team).
With pragmatism I refer to the artifact, the what: it needs to be a good fit for the problem and the users. And it is important to think about what solution you can "get away with". Not in the way it is built, but in what it achieves, the capability it provides. I have seen too many developers fail to fulfill requirements on time and within budget because they stuck to the wording of the requirement. The requirement was a box for them, a prison.
The problem behind the requirement is the one the developer needs to solve, not the requirement as written, which is often wrong or unclear anyway. I have seen so much leverage here that I am putting most of my effort into it: teaching people to find and eliminate assumptions, to get to the problem, goals and needs of the user and the business, ...
The practices, the how: in my 15 years of professional development I have tried many different practices, and some are good for some situations while others feel like a waste of time and effort. I am still searching for a consistent set that fits me and my team.
How do you consistently apply the practices? Only in projects? Or do you use katas or something like that?
Again thank you for discussing. I believe such honest and clear discussions are needed more than the many arguments about what is better or worse.
Ha ha! That's a very nice cartoon. I really like it :-)
Picking practices is very hard because it is highly dependent upon the people. Personally, I prefer to use a mostly unmodified set of XP practices. As there are many confusing descriptions of these practices I encourage you to look at this one: http://c2.com/cgi/wiki?ExtremeProgrammingCorePractices
As I said, I don't really modify these practices except for the coding standard. That is to say, I have one, but I actually prefer to appoint an arbiter for coding standards rather than maintain a document. It reduces a potential point of politics where people try to game the coding standards to get their way (yes, unfortunately it happens... sigh).
One of the problems with this is that programming practices are very subtle and surprisingly interdependent. Kent Beck has talked about trying to create the minimal set of "generative" practices: practices which, when followed, generate the behaviour you want without needing to state it. I think this is a good idea, but it requires that you truly understand how the practices work together.
A good example is testing. Unit testing has gotten reasonably popular. Some people focussed on the word "unit" and decided that they should mock all of the collaborators in their tests. Then they found that their tests were brittle when they refactored. Quite a few people were happy with this and just decided not to refactor very much because, hey, they thought about the design up front a lot and probably got it right. Right at that point, you have broken your XP practices because XP practices require constant incremental design.
Other people will think, "I'll replace these unit tests with integration tests" because they aren't as brittle. But integration tests require a lot of set up with lots of things in motion. So you end up with slow tests. It's fine because, hey, 0.1 seconds is still pretty damn fast for a test, isn't it? But then because you need to find all the corner cases, you write lots and lots of tests and your test suite takes an hour to run. Right there you have destroyed your XP processes because you can't do TDD any more.
Of course, the answer is that "unit testing" was always just a word and you shouldn't "test" units like you would a black box. It goes on and on and on. It's super subtle and really easy to get wrong. However, because the practices are generative, you can actually test that you are doing the right things in the manner I described above. Breaking one of the 12 XP practices will break at least one of the others. It's a matter of keeping your eyes open to the things you are breaking.
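To make the mocking trap above concrete, here's a small hypothetical sketch (the names `PriceCalculator` and `DiscountPolicy` are invented for illustration) of a "sociable" unit test that uses the real collaborator and asserts on observable behaviour, rather than mocking the collaborator and asserting on interactions:

```python
# Hypothetical example of a "sociable" unit test; all names are
# invented for illustration.

class DiscountPolicy:
    """Real collaborator: bulk orders of 10+ items get 10% off."""
    def discount_for(self, quantity):
        return 0.1 if quantity >= 10 else 0.0

class PriceCalculator:
    def __init__(self, policy):
        self._policy = policy

    def total(self, unit_price, quantity):
        discount = self._policy.discount_for(quantity)
        return unit_price * quantity * (1 - discount)

def test_bulk_orders_get_discount():
    # Uses the real collaborator and asserts on the observable result.
    # A mock-based version would instead assert that discount_for was
    # called, and would break as soon as that internal method is
    # renamed or inlined, even though the price is still correct.
    calc = PriceCalculator(DiscountPolicy())
    assert abs(calc.total(10.0, 10) - 90.0) < 1e-9
    assert abs(calc.total(10.0, 5) - 50.0) < 1e-9

test_bulk_orders_get_discount()
```

Because the test pins down behaviour rather than structure, it survives the constant incremental design that XP demands.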
Which all reads like an advertisement for XP. Rest assured that my biases are simply formed because this is what I have experience with. I'm sure that other successful shapes exist. Indeed I often work with teams that are not equipped to handle all of XP. In those cases I try to find alternative approaches. I meet with varying degrees of success, but have never even come close to "hyper productive" without full on XP.
Your next question is a good one. How do I consistently apply the practices? Personally, I practice a lot (I try to put in at least an hour a day outside of working hours). I do katas and work on side projects. The group I work with is also keen on practice and we have "hack time" and organized katas during work hours. But just like in sports, training is fine but there is no substitute for the real thing.
Usually there is a lot of pressure when you are working. At the end of the day, though, the programmer is programming and has a lot of power to decide how the practices work. The question always comes down to: are you willing to do it?
I've worked as an "agile coach" quite a bit (although I always do a lot of programming). One day one of my younger colleagues came up to me and said, "I could fix this quickly or I could fix this right. Which one should I pick?" I answered, "Are you expecting me to suggest you choose the wrong solution?" He happily went back to his task. Of course, my head is now on the block if we get it wrong. :-)
Are you able/willing to make the same kinds of decisions? It is not possible in many situations. This can be fine, but my experience is that you will not be able to build an amazing team if you cannot.
One last thing, which may sound patronizing. It was about 15 years in when I first started realizing how everything fit together. It is very encouraging for me to hear that you are on that same search. I think you will find what you are looking for very soon.
While I agree with the content, I dislike the straw-man pattern that is so commonly found: "you can use method A stupidly or method B (my favorite) intelligently. Hence method B is the best."
Take for example:
> Developers work in total isolation for weeks or months at a time on feature branches and then try to merge all their work together into a release branch at the very last minute.
You have also option 3: Come up with a specification (not design) for all the components up front in such a way that it minimizes integration risks. Split tasks and define interfaces and/or protocols so that the work can be developed in parallel. Still, try to follow up with the changes in other branches and keep your work up-to-date with major changes that may happen in the main branch.
Try to merge to see if there is any conflict, but don't push half-done work. However, if you have to modify a main component that is used by others in order to implement your thing, and that change can be merged back safely, do it as soon as possible.
Author here. FWIW, that sentence "Developers work in total isolation for weeks or months at a time on feature branches and then try to merge all their work together into a release branch at the very last minute" is actually the exact opposite of a straw man: it's a real occurrence I witnessed first hand at a number of companies (if you want to hear all the gory details about one of them, check out the book: http://www.hello-startup.net/). There certainly are companies that are able to make feature branches work for them the way you described, but based on my personal experience and interviews with several dozen other successful companies, feature branches usually lead to disaster.
Using one anecdotal experience as proof is not the exact opposite of a strawman.
Similarly: "That one hour you “saved” by not writing tests will cost you five hours of tracking down a nasty bug in production, and five hours more when your “hotfix” causes a new bug."
These numbers don't line up with any experience I've ever had. In my opinion the value of testing should stand on its own without having to exaggerate.
> Using one anecdotal experience as proof is not the exact opposite of a strawman.
A strawman argument is an argument no one is actually making, but one that's easy to debate against (to "knock down"). So even "one anecdotal experience" implies there is someone making that argument and it's not a strawman. Moreover, as I said above, it's not a single anecdote, but experience with many, many companies, including ones I worked for directly, those that are the clients of my company (http://atomic-squirrel.net/), and the many companies I interviewed while writing my book. Of course, the plural of anecdote is not "data", but I'm pretty sure that you don't need statistically significant data sets to show your argument isn't a strawman.
> These numbers don't line up with any experience I've ever had. In my opinion the value of testing should stand on its own without having to exaggerate.
If anything, it's not an exaggeration, but an underestimate. I can't count how many hours I've lost to debugging that could've easily been saved by a handful of automated tests. But perhaps you're a better programmer than I am, and I envy that your code works perfectly regardless of whether you write tests or not.
> I can't count how many hours I've lost to debugging that could've easily been saved by a handful of automated tests.
Undoubtedly, but be careful that you're not suffering from confirmation bias: are you properly accounting for all the tests you wrote that never detected an issue, or that didn't save tons of time in a major refactor (because it never happened)?
> But perhaps you're a better programmer than I am, and I envy that your code works perfectly regardless of whether you write tests or not.
I've been having some training on agility and software development at a European Big Corp™ (where I work).
Here at european Big Corp™, management is worried that our size is making us slow, and they also want to be cool like the start-up kids in the valley. As such, we've been using/trying to use Agile, but with very limited success. Big Corp™'s solution to this obviously low success is to buy thousands and thousands of Euros worth of training, with Agile and Scrum certified trainers and partners, which - every single time - repeat ad nauseam the same doctrine and dogmas, much like liturgy in a church. A few examples:
* "So your colleagues are over-estimating every single task - pardon me, user story! - to have time to browse reddit? Estimate with story points, they'll see that their velocity is slow.";
* "So your colleagues don't like resolving bugs? Put a bug chart in the office where everyone can see it, they'll feel guilty and solve them!";
* "So your colleagues don't like to create tests and write half-assed code, totally ignoring definitions of done and the like? Just wait until velocity drops because of technical debt, they'll understand and learn!".
Well, what's my point here? Agile may require "safety" and all the technical goodies described in this article, but - before that - it requires a team of committed people. This is the basis of Agile and Scrum. And that's where my company fails, and that's why Agile - or any other approach - won't work here until people are responsible and committed. The build is now broken; will a team of uncommitted people care? They'll push the responsibility around until someone fixes it.
So, yeah, Agile requires safety; but, before that, it requires commitment. I feel it's an engineering-type trait, to try to solve human issues with tools (bug charts, code coverage, continuous integration/delivery), and many engineering-driven companies seem to play the game that way. But all these tools won't solve the real problems which explain why a team might be failing. And if a team is committed, they'll eventually succeed, even without the shiny tools and cool approaches.
> So, yeah, Agile requires safety; but, before that, it requires commitment.
Completely agreed. There are certainly tools and processes that are more effective than others (as I discussed in the post), but for a creative discipline like programming, no process or tool will be effective unless the creators (the programmers) buy into it. That reminds me of a quote from Peopleware:
> The maddening thing about most of our organizations is that they are only as good as the people who staff them. Wouldn't it be nice if we could get around that natural limit, and have good organizations even though they were staffed by mediocre or incompetent people? Nothing could be easier—all we need is (trumpet fanfare, please) a Methodology.
Tellingly, the "high discipline methodologies"[1] page on c2 was kicked off by listing XP and the Personal Software Process.
(You can create systems for enabling median folk to accomplish things, even if they are disinterested. We call it "bureaucracy", and it sucks, but a lot of the time it kinda-sorta works. A bit.)
Anyway, as usual: you need good people, good process and good tools.
None of these are substitutable for the others, despite what methodologists, tool vendors and various worthies might tell you.
> (You can create systems for enabling median folk to accomplish things, even if they are disinterested. We call it "bureaucracy", and it sucks, but a lot of the time it kinda-sorta works. A bit.)
Yeah, it's true. And the kinda-sorta might be just enough to some companies (which are too big to fail and have enough leverage to push mediocre stuff to the market).
On a side note, I've been feeling, lately (and within the context of all this Agile BS my company tries to indoctrinate me with), that management is really the art of accomplishing stuff without making large assumptions about your resources (in software, without assuming any kind of talent, commitment or responsibility from the team). And, although it seems horrible to me, there's really a lot of knowledge and value in achieving things even when you only have a bunch of uncaring, undedicated and uncommitted monkeys which only care about collecting their paycheck.
"But all these tools won't solve the real problems which explain why a team might be failing. And if a team is committed, they'll eventually succeed, even without the shiny tools and cool approaches."
Commitment is the first requirement, but it doesn't guarantee the team understands or will derive an effective process in time to deliver.
Failed startups are usually an example of committed individuals that ran out of time.
I was reading your point (quite well put, actually) and thinking about how the Big Corp™ world is the exact opposite of the example you mention - failed start-ups. Big Corp™ initiatives succeed, even though - by my standards - I feel they're failing, and even though everyone is totally uncommitted.
Agile enthusiasts really enjoy applying agile methodologies to functional requirements such as user stories.
But non-functional requirements are rarely explicit: security, reliability, redundancy, durability, concurrency, performance, scalability, configuration, deployment, documentation, logging, monitoring, supervision, maintainability, construction for verification...
You don't get a user story saying: "as a user i would like my information to be private" or "as a user i would not like to experience a concurrency bug".
To neglect those non-functional requirements in favor of perceived progress is not in the company's best interest. A solution that doesn't comply with those requirements also has a name: a functional prototype. There's a difference between developing production software and developing prototypes, even if you consider yourself agile.
Now, as a software engineer, you should be able to identify these requirements and include them in estimations. You can also ignore them, and project an image of a highly productive engineer, but some day your code will crash and you won't have 10 days to fix it. That day you will be miserably fired to the sound of a trumpet.
I think most successful agile teams would adopt those non-functional requirements not as tasks to be implemented but as cross-cutting concerns of all work. E.g., on a Scrum team they could be part of the Definition of Done, such as "Is the code maintainable?", which would not allow work to be considered complete until it met that definition.
I'm not 100% clear on what you are saying but it sounds like you think agile teams ignore these non-functional requirements in favour of perceived progress on features. I think that may well be true in practice but it is also true of non-agile teams. It is in fact just true of immature development teams the world over. Being agile or not has nothing to do with it. Scrum and other processes do at least try to have something in place to cater for these requirements, which often sadly get overlooked when pressure is applied to make flat out progress on features.
I concur. But it was mostly in response to the article.
I do believe though that there are many engineers that highly disregard good practices because they get a political benefit from releasing more.
Just like politicians get appraisal when doing ribbon cutting at inaugurations, and never when repairing a bridge, many organizations promote the engineers that deliver more features, not the ones that keep the system working.
By doing this they create a culture of technical debt and failure.
> You don't get a user story saying: "as a user i would like my information to be private" or "as a user i would not like to experience a concurrency bug".
Uh, you do. You totally do.
"As a User, I want the site to appear on my browser quickly".
In acceptance criteria, "Quickly is defined as 98th percentile 400ms full roundtrip to AWS US-east-1".
How do you do this? You write a test. What kind? A performance test. When does it run? Every time you check into CI.
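A minimal sketch of such a CI performance test, assuming the 400ms budget from the acceptance criteria above (the `handle_request` stub and the run count are invented placeholders for whatever call you actually measure):

```python
import time

def handle_request():
    # Placeholder for the real round trip being measured.
    time.sleep(0.001)

def percentile(samples, pct):
    """Return the value at roughly the pct-th percentile of samples."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

def test_p98_latency_under_budget(budget_ms=400, runs=100):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        handle_request()
        samples.append((time.perf_counter() - start) * 1000)
    # Fails the build whenever the 98th percentile exceeds the budget.
    assert percentile(samples, 98) <= budget_ms

test_p98_latency_under_budget()
```

In a real pipeline you'd measure against a deployed environment rather than an in-process stub, but the shape is the same: a numeric acceptance criterion becomes an assertion that runs on every check-in.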
How is this different from standard agile practice?
The quote is about 'privacy' and your user story is about 'speed' - these are different criteria and 'privacy' is much harder to specify in a user story.
I focused on performance because that's one example I've had brought up several times. And it's probably the easiest one.
I've seen privacy stories too. The style I like is to create a malicious user and deny them.
As a Malicious User
I want to steal credit card numbers
So that I can sell them on the black market
Given I have access to the web app
When I supply malformed URLs
I am ignored
A/C (acceptance criteria):
Pentest tool with a decent corpus
Another approach, also used, is exploratory testing. Security and privacy are tricky because you're dealing with humans who can react creatively; so the best test is humans who react creatively.
A functional requirement is like adding a room to your house. A non functional requirement is defining what the construction material is and what the construction standards are.
What you describe is like constructing a house, then once it is constructed, specify it needs to be earthquake resistant.
That should be specified as an upfront requirement not a story.
If you construct it and it was already earthquake resistant, there should not be a problem. But if it wasn't, you will need to rebuild.
> What you describe is like constructing a house, then once it is constructed, specify it needs to be earthquake resistant. That should be specified as an upfront requirement not a story.
There's no law against writing every story with some NFR component, or interspersing NFRs with regular feature stories.
The process of writing, test-driving and accepting stories is a framework, it's guidelines. When I or other engineers see obvious architectural landmarks ahead, we often encourage the product managers to bring those forward.
A story can have user value, and it can also have business value outside of user value. One such business value is attacking architectural risks.
And sometimes you tell business "this unaddressed architectural scope is a big risk", and they say "OK, let's fix it".
And that is still agile. The point was never rejecting high ceremony in order to replace it with low ceremony. It was that professionals should talk to each other. Constantly.
Nobody is arguing with that. But it's also very wasteful to build an earthquake resistant house in a geologically stable area. You have to talk it over.
I have done agile development successfully, and yet, the article rings very hollow for me, because most of its examples have very little to do with the principles the author tries to explain.
For instance, he talks about working strategies and puts Google up as an example. Google is a terrible example for almost every other company out there. They have a gigantic monorepo, which is only manageable because they have custom tooling from hell. For most of us, the organization doesn't have said custom tooling from hell: the OSS version of Bazel just doesn't work quite as well out of the box. If you aren't running a highly modified version control system like they do, check-in performance dies. You might be able to pretend to be Google if you are tiny instead, but anyone in between will just meet suffering. It's a bit like trying to match Apple in industrial design.
Then there's the talk about feature flags. They are great, useful things, but there is also hidden suffering behind feature flagging all the things. There is much extra gardening required, and a completely different set of headaches when relatively large changes affecting the same parts of the code are being hidden behind feature flags. They have weird interactions too! And don't forget how much fun it is to have intermediate data structures changing for features that aren't active: He is just glossing over big, big problems, that happen to be different than the one he discusses.
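One of those glossed-over costs can be made concrete with a tiny hypothetical sketch (the flag names are invented): every independent feature flag doubles the number of configurations that can be live in production, and each pair of flags is a potential weird interaction to test and garden.

```python
from itertools import product

# Hypothetical flags; with n independent flags there are 2**n
# possible live configurations, and n*(n-1)/2 pairs that can interact.
FLAGS = ["new_checkout", "async_billing", "v2_schema"]

def all_configurations(flags):
    """Every on/off combination a tester would need to consider."""
    return [dict(zip(flags, values))
            for values in product([False, True], repeat=len(flags))]

combos = all_configurations(FLAGS)
print(len(combos))  # 2**3 = 8 configurations for just three flags
```

Three flags is manageable; thirty long-lived flags is a billion nominal configurations, which is why stale flags need aggressive pruning.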
Given that the author is consulting for tiny startups, chances are he hasn't stared those problems in the face, but they exist, and they are especially pernicious when your tiny company believes that imitating Google will work and then reaches 60-80 programmers: then all the advice above starts to crack, and it only gets worse when you get into the 200-engineer range.
The difficult part is not being a small company, where any and all practices will work. It's not that hard being huge either: just invest heavily in your own ecosystem. PHP too slow? Rewrite it! Git too slow? Write a new engine for Mercurial! It is when a company is growing fast but isn't really large that your technical practices can cost you your company, and the article's advice is PRECISELY the way to get murdered.
> For instance, he talks about working strategies, and puts Google as an example.
First, I also listed LinkedIn and Facebook. Second, many small companies use the same strategies, but most people probably haven't heard of them, so they don't serve as particularly useful examples.
> Given that the author is consulting for tiny startups, chances are he hasn't stared at those problems in the face
If you're going to make an ad hominem argument, you should at least do a more thorough job of looking up my background :)
I've worked at and with small, medium, and large companies. Every single tool and technique involves trade-offs, and no one approach will fit everyone, which is exactly the point I discuss at the end of the post.
Agility requires prior knowledge, and any action that puts that knowledge at risk of catastrophic loss is questionable. That said, agents taking on risks that are expendable without major loss of knowledge or unreasonable expenditure of resources will outperform competing systems that avoid loss at all costs.
> all the teams would use the metric system, except one
We all know which one. Side question: why don't tech companies help push the USA public towards the metric system and the SI (International System of Units) in general?
> Side question: why don't tech companies help push the USA public towards the metric system and the SI (International System of Units) in general?
Because French units are objectively worse for concrete manipulation (while being admittedly objectively superior for abstract conversion, and under the metric of popularity).
Switching to decimal units was among the dumber ideas of the French Revolution; switching to duodecimal digits would have been far wiser.
Because units don't matter anymore ever since the units command line utility was created. I can use watts/hogshead or horsepower/liter just as easily as kilometers or miles.
In practical terms, for the UI, it's best to use whatever customers are most comfortable with, since you don't want to scare them away over irrelevant issues.
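One common way to get both (this is a hypothetical sketch, not anything from the article): keep every stored value in SI, and convert only at the display layer according to the customer's preference. The `format_distance` helper and the `locale_units` flag here are made-up names for illustration.

```python
# Hypothetical sketch: store everything internally in SI (metres),
# and convert only at the display layer based on user preference.

M_PER_MILE = 1609.344  # international mile, exact by definition

def format_distance(metres, locale_units):
    """Render an internally-SI value in whatever units the customer expects."""
    if locale_units == "imperial":
        return f"{metres / M_PER_MILE:.1f} mi"
    return f"{metres / 1000:.1f} km"

distance_m = 5000.0  # stored once, in SI
print(format_distance(distance_m, "metric"))    # → 5.0 km
print(format_distance(distance_m, "imperial"))  # → 3.1 mi
```

The point of the design is that conversion happens in exactly one place, so the rest of the codebase never has to guess which unit a number is in.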
I'm sure a similar utility to units existed in 1998; still, the Mars Climate Orbiter disaster [1] and the Gimli Glider [2] happened, caused by engineers and technical people, not the general public. I'm sure everyday small misunderstandings and errors arise from the disparity.
> I'm sure in 1998 there was a similar utility such as units; still the Mars Climate Orbiter disaster
The utility did exist then, but a reading of the mishap report[0] for that event makes it clear that NASA was not using it. It looks like someone assigned a constant using the Imperial rather than the metric representation of a value. The function that used this constant did not verify the unit of measure, and the constant was likely just a floating-point number, not a data type that even stored the unit of measure.
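For illustration only (a hypothetical sketch, not NASA's actual code): a value type that carries its unit of measure turns exactly this kind of silent mismatch into a loud failure. The `Measure` class and the impulse values below are invented; the conversion factor (1 lbf·s ≈ 4.448222 N·s) is the standard one.

```python
# Hypothetical sketch: a value type that stores its unit of measure,
# so mixing pound-force-seconds with newton-seconds fails loudly
# instead of silently producing a wrong number.

class Measure:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Refuse to combine values whose units differ.
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Measure(self.value + other.value, self.unit)

    def to(self, unit, factor):
        # Explicit conversion: the caller must name the target unit and factor.
        return Measure(self.value * factor, unit)

impulse_si = Measure(100.0, "N*s")
impulse_imperial = Measure(100.0, "lbf*s")

try:
    total = impulse_si + impulse_imperial  # raises ValueError
except ValueError as e:
    print(e)

# Converting first makes the addition legal (1 lbf*s ≈ 4.448222 N*s).
total = impulse_si + impulse_imperial.to("N*s", 4.448222)
print(round(total.value, 1))  # → 544.8
```

A bare float carries no unit at all, which is the failure mode the mishap report describes; the type above makes the unit part of the value.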
I haven't read the Gimli report yet, but it's probably the same story: human error.
Edit: The Gimli Glider [1] was caused by human error.
[0] ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf see page 16
I agree, and I don't mean to argue against moving to the metric system. I only intended to dispel the notion that software safeguards that would have prevented these incidents were in place and failed.
Educating developers about this potential issue and ensuring safeguards are in place to prevent disaster seems like a more readily achievable goal than inducing a national migration to the metric system.
Sod national, I'm talking about corporate. If my company mandates that all code must be written in Lisp, then that's what I'll do. If they mandate that all monetary values must be stored as USD, then that's what I'll do. And if they require that all code must use the metric system - with display options to convert to imperial? That's. What. I'll. Do.
I don't care what the coding guidelines are - I only care that they exist. [within reason]
As a former draftsman: this is not accurate. You might think that units are convertible and we have plenty of software for that. But it doesn't work that way.
If a drawing says to drill 3mm holes and you have imperial drills, what to do?
If a drawing says tolerance is 1/64 inch, how do you use your metric digital calipers quickly to verify it?
If you have a paper drawing provided to you, which is the reality of life in the field, how do you visualize all the dimensions in your head if they are in a system of units you don't use every day?
If you receive a part with 28mm wrench flats, how can you be sure they aren't meant for a 1 1/8 spanner?
There are real mass produced parts where the metric female threads can easily be enlarged (or stripped!) to accept imperial male threads. But the converse doesn't work, and even if you possess a tap you probably don't have a metric thread gauge in your imperial shop.
It goes on and on. Software won't eat this problem, so we have to.
Spot on, with one small exception. I suspect every machine shop has thread gauges for Imperial and metric. I'm a backyard hack and I have both, just because this problem is so common and annoying.
28mm is compatible "enough" to 1 1/8, as 14mm and 9/16 are a very common "either one works" interchange for sockets/wrenches.
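The arithmetic behind those "either one works" pairings is easy to check (using the exact definition 1 inch = 25.4 mm):

```python
# How far apart the common imperial/metric wrench pairings really are.
MM_PER_INCH = 25.4  # exact by definition

pairs = {
    "1 1/8 in vs 28 mm": (1.125 * MM_PER_INCH, 28.0),   # 28.575 mm vs 28 mm
    "9/16 in vs 14 mm":  (0.5625 * MM_PER_INCH, 14.0),  # 14.2875 mm vs 14 mm
}

for name, (imperial_mm, metric_mm) in pairs.items():
    gap = abs(imperial_mm - metric_mm)
    # A few tenths of a millimetre of slop is why these pairs "work"
    # on a hex head -- and also why they round off fasteners over time.
    print(f"{name}: off by {gap:.4f} mm")
```

So 1 1/8 in is 0.575 mm over 28 mm, and 9/16 in is 0.2875 mm over 14 mm: close enough to turn a bolt, loose enough to chew it up under torque.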
Fun fact: I recently used units(1) to convert a power of two of something, when it threw a syntax error at me because the power operator (like "2^6") was not recognized.
It turned out that I was accidentally not using the units(1) from my Linux VM, but the one on my Mac's native terminal, which according to the manpage dates back to 1991. Apple has literally not gotten around to updating their units for 25 years.
Yep, that is the reality most of the time. It would be easy if you had a fixed list of features for a project. But they change most of the time throughout the development. You get used to it and start planning accordingly.
That's not quite what I got from it. More like, here's a more robust way to go about writing software, that may end up saving you time in the long run.