
It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript. Stylistic refactors that induce performance regressions, extremely long and tortured method names for three-line methods, near-total animus towards comments ... regardless of who is right/wrong about what, those takes seem like sophomoric extremism at its worst, not reasoned pragmatism that can be applied to software development in the large.

When I encounter an Uncle Bob devotee, I'm nearly always disappointed with the sheer rigidity of their approach: everything must be reduced thus, into these kind of pieces, because it is Objectively Better software design, period. Sure, standard default approaches and best practices are important things to keep in mind, but the amount of dogma displayed by folks who like Martin is really shocking and concerning.

I worry that his approach allows a certain kind of programmer to focus on like ... aesthetic, dogmatic uniformity (and the associated unproductivity of making primarily aesthetically-motivated, dogmatic changes rather than enhancements, bugfixes, or things that other coders on a project agree improves maintainability) instead of increasing their skills and familiarity with their craft.

Maintainability/appropriate factoring are subjective qualities that depend a lot on the project, the other programmers on it, and the expectations around how software engineering is done in that environment.

Pretending that's not true--that a uniform "one clean code style to rule them all" is a viable approach--does everyone involved a disservice. Seasoned engineers trying to corral complexity, new engineers in search of direction and rigor, customers waiting for engineering to ship a feature, business stakeholders confused as to why three sprints have gone by with "refactor into smaller methods" being the only deliverable--everyone.



The Uncle Bob thing is something I'm experiencing right now.

I hired a friend who was a huge Uncle Bob mark, and he kept trying to flex his knowledge during interviews with other people in the company. I didn't really think much of it and told the other interviewer that it was just his personal quirk and not to worry much.

I had him work with some junior devs on a project while I took care of something more urgent. After finishing it, I went over to take a look at how it was going on his end. I was horrified at the unnecessary use of indirection; 4 or 5 levels in order to do something simple (like a database call). Worse, he had juniors build entire classes as an interface with a database class that was "wrong".

No practical work was done, and I've spent the past 4 weeks building the real project, while tossing out the unnecessary junk.

I liked Clean Code when I read it, but I always assumed a lot of it was meant for a specific language at a specific time. If you are using it verbatim for a Python project in 2025, why?


I don't see how it is Uncle Bob's fault that your friend misunderstood his book.


Judging from this thread, it seems like a lot of people have similar issues with UB's work.


It might just be that the divide between getting-things-done developers and bloat developers isn't really caused by Uncle Bob but merely correlates with him. I.e., good developers also agree with Uncle Bob, though apparently with a different interpretation of what he said.


Smart people read a book and critically think about it. The others think it was written by a superhuman and turn everything into religious beliefs.


I meant that the book is interpreted differently by different people. (That no one takes it as religion, but that some read it as recommending to "create a mountain of unnecessary abstractions", and others read it as "add necessary abstractions".)


Not so much UB himself, but a developer being told a book or person is authoritative / has the final word on a subject isn't healthy.


> I worry that his approach allows a certain kind of programmer to focus on like ... aesthetic, dogmatic uniformity (and the associated unproductivity of making primarily aesthetically-motivated, dogmatic changes rather than enhancements, bugfixes, or things that other coders on a project agree improves maintainability) instead of increasing their skills and familiarity with their craft.

Funny, I find the opposite. In my experience people that are willing to take a "dogmatic" position on code style are those who are able to actually get on with implementing features and bugfixes. It's the ones who think there's a time and place for everything and you need to re-litigate the same debates on every PR who tie themselves in knots getting nothing done.

Do I agree with absolutely everything Martin writes? In principle, no. But I'd far rather work on a codebase and team that agrees to follow his standards (or any similar set of equally rigid standards, as long as they weren't insane) than one that doesn't.


I'm not familiar with the Clean Code book etc; my introduction is the article. UB seems to be advocating consistently for patterns that are not my cup of tea! For example: Functions sometimes make sense as 2-3 lines. Often 5-20. Less often, but not rarely, much more than that!

I'm also a fan of detailed doc comments on every module and function, and many fields/variants as well. And, anything that needs special note, is unintuitive, denotes units or a source etc.


Function length also depends on language. Every line of one language requires three lines in another if the former has implicit error handling and the latter explicit. But I find the cognitive load of the two to be similar.

I am also okay with 1000 line functions where appropriate. Making me jump around the code instead of reading one line at a time, in a straight line? No thanks!


The issue of function length is irrelevant and incidental. Keep paying attention to what UB is actually saying.


I didn't get that impression from reading it. I also find the TDD approach discussed to be high inertia.


> It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript

It was always like that. And Fowler is the same with his criticism of the anemic domain model. But software engineering is no exception to having a mass of people who believe someone without thinking for themselves.


> It was always like that. And Fowler is the same with his criticism of the anemic domain model.

What leads you to disagree with the fact that anemic domain models are an anti-pattern?

https://martinfowler.com/bliki/AnemicDomainModel.html

I think it's obvious that his critique makes sense if you actually take a moment to try to learn and understand what he says and where he comes from. Take a moment to understand what case he makes: it's not object-oriented programming. That's it.

See, in an anemic domain model, instead of objects you have DTOs that are fed into functions. That violates basic tenets of OO programming. It's either straight-up procedural programming or, if you squint hard enough, functional programming. If you treat OO as a goal, it's clearly an anti-pattern.

His main argument is summarized in the following sentence:

> In essence the problem with anemic domain models is that they incur all of the costs of a domain model, without yielding any of the benefits.

Do you actually argue against it?

Listen, people like Fowler and Uncle Bob advocate for specific styles. This means they have to adopt a rhetorical style which focuses on stressing the virtues of a style and underlining the problems solved by the style and created by not following the style. That's perfectly fine. It's also fine if you don't follow something with religious fervor. If you have a different taste, does it mean anyone who disagrees with you is wrong?

What's not cool is criticizing someone out of ignorance and laziness, and talking down to someone or something just to make your personal taste feel validated.


"It's not object oriented programming" is only a good case to make if you think object-oriented programming is synonymous with good. I don't think that's true. It's sometimes good, often not good.

Why would focusing on OO be a goal? The goal is to write good software that can be easily maintained. Nobody outside of book writers is shipping UML charts.


Why would you not focus on writing OO code in an OO language, for example? Would you start writing OO code in a functional language? No you wouldn't, because it would be pointless. There are programming paradigms for a reason.


> Why would you not focus on writing OO code in an OO language for example?

Often people do this to deliver higher-quality software. Most languages still have some OO features, and people don't use them because they know they lead to bad code. Inheritance (a core OO feature) comes to mind. Most professionals nowadays agree that it should not be used.

OO designs are often over-abstracted, which makes them hard to understand and hard to change. They lack "locality of behavior". Trivial algorithms look complicated because parts of them are strewn across several classes. This is why more modern languages tend to move away from OOP.

My guess is that in the long term, what we will keep from OO is the possibility to associate methods with structs.


> Why would you not focus on writing OO code in an OO language for example?

That's circular logic. I wouldn't focus on writing OO code because I know from experience that the result is usually worse. If I had to use a language that was oriented towards writing OO code, I'd still try to limit the damage.

> There are programming paradigms for a reason

Nah. A lot of them are just accidents of history.


> Why would you not focus on writing OO code in an OO language for example? Would you start writing OO code in a functional language? No you wouldn't, because it would be pointless. There are programming paradigms for a reason

I'm paid for efficiently solving business problems with software, not using a particular paradigm. If an FP solution is more appropriate and the team can support it, then that's what I'll use.


> Why would you not focus on writing OO code in an OO language

Whether or not use of OO is best should be directed by the best solution to the problem, not by the language.


> "It's not object oriented programming" is only a good case to make if you think object-oriented programming is synonymous with good. I don't think that's true. It's sometimes good, often not good.

See, this is the sort of lazy ignorance that adds nothing of value to the discussion and just reads as a spiteful ad hominem.

Domain models are fundamentally an object-oriented programming concept. You model the business domain with classes, meaning you specify in them the behavior that reflects your business domain. Your Order class has a collection of Product items, but you can update an order, cancel an order, repeat an order, etc. This behavior should be member functions. In Domain-Driven design, with its basis in OO, you implement these operations at the class level, because your classes model the business domain and implement business rules.

The argument being made against anemic domain models is that a domain model without behavior fails to meet the most basic requirements of a domain model. Your domain model is just DTOs that you pass around as if they were value types, and they have no behavior at all. Does it make sense to have objects without behavior? No, not in OO, and not elsewhere either. Why? Because a domain model without behavior means you are wasting all development effort building up a structure that does nothing and adds none of the benefits, and thus represents wasted effort. You are better off just doing something entirely different, which is certainly not Domain-Driven design.

In fact, the whole problem with this kind of argument is that you are trying to pin a buzzword onto something that resembles none of it. It's like you want the benefit of playing buzzword bingo without bothering to learn the absolute basics of it, or anything at all. You don't know what you're doing, and somehow you're calling it Domain-Driven design.

> Why would focusing on OO be a goal?

You are adopting an OO concept, whose most basic trait is that it models business domains with objects. Do you understand the absurdity of this sort of argument?


> Domain models are fundamentally an object-oriented programming concept.

They are not.

> You model the business domain with classes, meaning you specify in them the behavior that reflects your business domain.

I have better tools for doing that.

> In Domain-Driven design, with its basis on OO, you implement these operations at the class level, because your classes model the business domain and implement business rules.

You're still not explaining the "why". You're just repeating a bunch of dogma.

> a domain model without behavior means you are wasting all development effort building up a structure that does nothing and adds none of the benefits, and thus represents wasted effort.

I know from experience that this is completely false.

> You don't know what you're doing, and somehow you're calling it Domain-Driven design.

I don't call it domain-driven. You can call it domain-driven if you want, or not if you don't want. I don't care what it's called, I care whether it results in effective, maintainable software with low defect rates.


> I care whether it results in effective, maintainable software with low defect rates.

This is what it is about. All the other things that have been invented need to be in service of this goal.


> I have better tools for doing that.

For example?


I would assume he meant to use the typesystem of his PL.


> Your Order class has a collection of Product items, but you can update an order, cancel a order, repeat an order, etc. This behavior should be member functions.

This is how to fuck up OO and give it a bad name:

  order.update(..) // Now your Order knows about the database.
  order.cancel(..) // Now your Order can email the Customer about a cancellation.
  order.repeat(..) // Now your Order knows about the Scheduler.

What else could Order know about? Maybe give it a JSON renderer .toJson(), a pricing mechanism .getCost(), discounting rules .applyDiscount(), and access to customer bank accounts for .directDebit(); logging and backup too. And if a class has 10+ behaviours you've probably forgotten 5 more.

An Order is a piece of paper that arrived in your mailbox. You can't take a sharpie to it, you can't tell it to march itself into the filing cabinet. It's a piece of paper which you.read() so that you.pack() something into a box and take it to the post office. You have behaviours and the post office has behaviours. The Order and the Box do not. At best they have a few getters() or some mostly-static methods for returning aggregate data - but even then I'd probably steer clear. For instance: if the Order gave me a nice totalPrice() method, it simplifies things for later right? Well no, because in TaxCalculator (not order.calculateTax()) I will want to drill down into the details, not the aggregate. Likewise for DiscountApplier.

> Does it make sense to have objects without behavior? No, not in OO, and not elsewhere either.

It does, just like in the Domain (real-world Orders). Incidentally, I believe objects-without-behaviours is one of the core Clojure tenets.

Since this is HN's monthly UB-bashing thread, I should point out that I learnt most of this stuff from him. (It's more from SOLID though; I don't think I have much to say about cleanliness.)

The above examples violate SRP and DI.

"Single reason to change": If order.cancel(..) knows about email, then this is code I have to change if the cancellation rules change or if the email system changes. What if we don't notify over email anymore? Order has to become aware of SMS or some other tech which will cause more reasons for change.

"Dependency inversion": People know what Orders are, regardless of technical competence. They can exist without computers or any particular implementation. They are therefore (relative to other concerns here) high-level and abstract. Orders are processed using a database, Kafka and/or a bunch of other technologies (or implementation details). DI states that abstract things should not depend on concrete things.
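To make the SRP/DI point concrete, here is a minimal Python sketch (all names are invented for illustration, not taken from any real codebase): the Order knows nothing about email, SMS, or any other delivery technology; the cancellation workflow depends only on an abstract notifier, so swapping notification tech never touches Order.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    # Plain data: the Order has no idea how customers get notified.
    order_id: str
    cancelled: bool = False


class CancellationNotifier(Protocol):
    # The abstraction that high-level cancellation logic depends on.
    def notify(self, order: Order) -> None: ...


class EmailNotifier:
    # One concrete detail; an SmsNotifier could replace it without
    # changing Order or OrderCanceller.
    def notify(self, order: Order) -> None:
        print(f"emailing customer about cancelled order {order.order_id}")


class OrderCanceller:
    """Owns the cancellation workflow; depends only on the abstraction."""

    def __init__(self, notifier: CancellationNotifier) -> None:
        self._notifier = notifier

    def cancel(self, order: Order) -> None:
        order.cancelled = True
        self._notifier.notify(order)
```

If the business stops notifying over email, only the concrete notifier changes; Order and OrderCanceller have no reason to change.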


We have a disagreement about the core of OOP. In English, a simple sentence like "The cat eats the rat" can be broken down as follows:

- Cat is the subject noun

- Eats is the verb

- Rat is the object noun

In object-oriented programming, the subject is most often the programmer, the program, the computer, the user agent, or the user. The object is... the object. The verb is the method.

So, imagine the sentence "the customer canceled the order."

- Customer is the subject noun

- Canceled is the verb

- Order is the object noun

In OOP style you do not express this as customer.cancel(order) even though that reads aloud left-to-right similarly to English. Instead, you orient the expression around the object. The order is the object noun, and is what is being canceled. Thus, order.cancel(). The subject noun is left implicit, because it is redundant. Nearly every subject noun in a given method (or even system) will be the same programmer, program, computer, user agent, or user.
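A tiny sketch of that grammar point, with hypothetical names: the subject noun never appears in the code; the object noun carries the verb as a method.

```python
class Order:
    def __init__(self) -> None:
        self.status = "open"

    def cancel(self) -> None:
        # "The order is cancelled" - the subject ("the system",
        # "the user") is left implicit because it is redundant.
        self.status = "cancelled"


# Oriented around the object noun, not the subject:
order = Order()
order.cancel()  # rather than customer.cancel(order)
```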

For additional perspectives, I recommend reading Part I of "Object-Oriented Analysis and Design with Applications" (3rd edition) by Grady Booch et al., and "Object Thinking" by David West.

---

That said, I think you're right about the single responsibility principle in this example. A class with too many behaviors should usually be decomposed into multiple classes, with the responsibilities distributed appropriately. However, the object should not be left behavior-less. It must still be an anthropomorphized object that encapsulates whatever data it owns with behavior.


> So, imagine the sentence "the customer canceled the order."

> - Customer is the subject noun

And this is wrong. Because the customer did not cancel the order. The customer actually asked for the order to be canceled. And the order was then canceled by "the system". Whatever that system is.

And that is the reason why it is not expressed as customer.cancel(order) but rather system.cancel(order, reason = "customer asked for it").

> Thus, order.cancel(). The subject noun is left implicit, because it is redundant.

Ah, is that so? Then, I would like you to tell me: what happens if there are two systems (e.g. a legacy system and a new system, or even more systems) and the order needs to be sometimes cancelled in both, or just one of those systems? How does that work now in your world?


mrkeen mentioned dependency inversion (DI). I think it makes sense in OOP for an order to have a cancel method, but the selection of this method might be better as something configured with dependency injection. This is because the caller might not be aware of everything involved either.

If the system is new and there's only one way to do it, it's not worth sweating over it. But if a new requirement comes up it makes sense to choose a way to handle that.

For example, an order may be entered by a salesperson or maybe by a customer on the web. The cancellation process (a strategy, perhaps) might be different. Different users might have different permissions to cancel one order or another. The designer of the website probably shouldn't have to code all that in, maybe they just should have a cancel function for the order and let the business logic handle it. Each order object could be configured with the correct strategy.

If you don't want to use OO, that's fine, but you still have to handle these situations. In what module do you put the function the web designer calls? And how do you choose the right process? These patterns will perhaps have other names in other paradigms but the model is effectively the same. The difference is where you stuff the complexity.
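The strategy idea above could be sketched like this (a rough illustration with invented names, not a definitive design): the website only ever calls order.cancel(), and which cancellation process runs is injected into each order when it is built.

```python
from typing import Callable


def web_self_service_cancel(order: "Order") -> str:
    # Hypothetical process for orders entered on the web.
    return f"order {order.order_id}: customer self-service cancellation"


def sales_desk_cancel(order: "Order") -> str:
    # Hypothetical process for orders entered by a salesperson.
    return f"order {order.order_id}: cancellation routed to sales desk"


class Order:
    def __init__(self, order_id: str,
                 cancel_strategy: Callable[["Order"], str]) -> None:
        self.order_id = order_id
        self._cancel_strategy = cancel_strategy

    def cancel(self) -> str:
        # The caller (e.g. the website) need not know which
        # business process applies to this particular order.
        return self._cancel_strategy(self)
```

The web designer just calls cancel(); the business logic that configures each order decides which strategy is wired in.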


> The difference is where you stuff the complexity.

Exactly. If it were so simple, why not just put everything in one big file / class? I guess we both agree that this very quickly leads to an unmaintainable mess.

So my rule of thumb is: can a feature theoretically be removed without touching the Order entity at all? If so, then NONE of the feature's parts can live in the Order entity (or even be referred to by it).

That means: the Order entity must know nothing about customers, sales, how it is stored or cached, how prices and taxes are calculated, how an order is cancelled or repeated, or how orders can be archived and viewed.

Because any of those features can be removed while the others keep working and using the exact same Order entity.
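That rule of thumb might look like this in code (hypothetical names): tax calculation can be deleted wholesale without touching the Order entity, because the entity never refers to it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LineItem:
    name: str
    price_cents: int


@dataclass(frozen=True)
class Order:
    # Plain data; no services, no repository, no tax logic.
    items: tuple[LineItem, ...]


def calculate_tax_cents(order: Order, rate: float) -> int:
    """A removable feature: lives entirely outside the Order entity."""
    return round(sum(i.price_cents for i in order.items) * rate)
```

Deleting calculate_tax_cents removes the tax feature; every other feature keeps working against the exact same Order.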


> The customer actually asked for the order to be canceled.

This is why many object-oriented programmers prefer to talk about message passing instead of method calling. It is indeed about asking for the order to be canceled, and the order can decide whether to fulfill that request.


> the order can decide whether to fulfill that request.

In my world of thinking, orders don't make decisions. If I go to the business team and say "the order decided to" they'll look at me funny. And for good reasons.


Go back and read what I said about subject nouns and object nouns. When converting OO concepts to English for non-programmers, it is indeed confusing to say “the order decided not to”—you say “the order couldn’t be” instead.

I highly recommend reading the two books I recommended for further perspective on the topic. OO is predicated not on the idea that data is a bag of dead bits on which operations are performed, but that data is embodied within and encapsulated by anthropomorphic objects with their own behavior.

It is possible to get to that world of thinking from where you are now. But it is a different world. A different way of thinking.


> When converting OO concepts to English for non-programmers, it is indeed confusing to say “the order decided not to”—you say “the order couldn’t be” instead.

Passive language like "the order couldn't be" might be fine in some real-world situations where I don't care about who caused the action. But in code I do care, because somewhere in the code the action has to be performed. And yeah, you can put that logic into the Order entity, but then we are back to square one where "the order made the decision".

If we are talking about some event that happened, then sure, "the order was canceled" is perfectly fine. So making an "OrderWasCancelled" (or "OrderWasNotCancelled") object and storing it somewhere is intuitive. But we were talking about the action happening, and that is a different thing.

Also, just to make that clear, I'm not talking just theoretically here. I started my career during the OOP hype. I actually read books like Head First Design Patterns and others about OOP. But ultimately, I found it's not productive at all, because it doesn't reflect how most people think - at least from my experience.

Therefore, I tend to write my code in the same way that non-technical people think. And it turns out, OOP is very far from that.


> I highly recommend reading the two books I recommended

From "Object-Oriented Analysis and Design with Applications" (3rd edition) by Grady Booch:

p.52: Separation of Concerns

  We do not make it a responsibility of the Heater abstraction to maintain a fixed temperature. Instead, we choose to give this responsibility to another object (e.g., the Heater Controller), which must collaborate with a temperature sensor and a heater to achieve this higher-level behavior. We call this behavior higher-level because it builds on the primitive semantics of temperature sensors and heaters and adds some new semantics, namely, hysteresis, which prevents the heater from being turned on and off too rapidly when the temperature is near boundary conditions. By deciding on this separation of responsibilities, we make each individual abstraction more cohesive.
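A rough sketch of that separation (assumed interfaces, not Booch's actual code): the Heater keeps only its primitive on/off semantics, while the HeaterController adds the higher-level hysteresis behaviour.

```python
class Heater:
    """Primitive semantics only: turn on, turn off."""

    def __init__(self) -> None:
        self.on = False

    def turn_on(self) -> None:
        self.on = True

    def turn_off(self) -> None:
        self.on = False


class HeaterController:
    """Collaborates with a temperature reading and a heater,
    adding the higher-level hysteresis behaviour."""

    def __init__(self, heater: Heater, target: float, band: float) -> None:
        self.heater = heater
        self.target = target
        self.band = band  # hysteresis band prevents rapid on/off cycling

    def regulate(self, temperature: float) -> None:
        if temperature < self.target - self.band:
            self.heater.turn_on()
        elif temperature > self.target + self.band:
            self.heater.turn_off()
        # Inside the band: leave the heater state as-is.
```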


The comparison to English grammar is unnecessary. I didn't use it in my argument and you said it doesn't work that way either, so when you arrive at

> The order is the object noun, and is what is being canceled. Thus, order.cancel()

You've just restated the position I argued against, without an argument.


Aw, you described it so much nicer than me. I feel bad now.


> Domain models are fundamentally an object-oriented programming concept

They are absolutely not. In fact, they are not even specific to programming, let alone OOP.


I really don't understand this fixation on domain modelling. It looks like a lot of UML mixed with a "*DD" (life pro tip: pretty much any X-Driven Development is something experienced programmers rarely care about. You can borrow good ideas from almost any methodology without becoming obsessed with its primary subject; being obsessed with the One True Way is a great way to waste a lot of brain cells). Also, nobody sane touches UML, or makes big official charts of classes and their relationships. It's a massive waste of time. You might come up with some core concepts and relationships, like a B-REP, but you don't need some jargon-heavy official way to do this.

> The argument being made against anemic domain models is that a domain model without behavior fails to meet the most basic requirements of a domain model. Your domain model is just DTOs that you pass around as if they were value types, and they have no behavior at all. Does it make sense to have objects without behavior? No, not in OO, and not elsewhere either. Why? Because a domain model without behavior means you are wasting all development effort building up a structure that does nothing and adds none of the benefits, and thus represents wasted effort. You are better off just doing something entirely different, which is certainly not Domain-Driven design.

I have barely any idea what you're saying, but I will agree that I'm probably better off without DDD.

> You are adopting an OO concept, whose most basic trait is that it models business domains with objects. Do you understand the absurdity of this sort of argument?

Except I'm not, because I don't care about DDD? My argument is simply: caring how much your code adheres to some third party methodology doesn't matter, what matters is if you're writing good code or not.


> it's not object-oriented programming. That's it.

Yes, exactly. And this "classical" object-oriented programming is an anti-pattern itself.

(That being said, OOP is not well defined. And, for example, I have nothing against putting related data structures and functionality into the same namespace. But that's not what OOP means to him here)


I'll reply here with a very quick example why the anemic domain model is superior in general, no matter if you do OOP or anything else.

You used the example of an "order" yourself, so I'll build upon it.

I would never combine functionality to update an order with the data and structure of an order. The reason is simple: the business constraints don't always live inside the order.

Here's an example of why such an approach inevitably must fail: if the business says that orders can only be made until 10000 items have been ordered in a month, then you cannot model that constraint inside of the order class. You must move it outside, to the entity that knows about all the orders in the system. That would be the OrderRepository, or whatever you want to call it.

Remember, here is what you said in your other post:

> Your Order class has a collection of Product items, but you can update an order, cancel a order, repeat an order, etc. This behavior should be member functions.

So your Order should have a repeat function? But how can the order know if it can be repeated? It might violate the max-monthly-items constraint. The only way for the Order to do it is to hold a reference to the OrderRepository.

And this is a big problem. You have now entangled the concept of an OrderRepository and of an Order. In fact, Orders could totally live without an OrderRepository altogether, for example when you build an OrderSimulation where no orders are actually being executed/persisted. But to do so, now you have this OrderRepository, even if you don't need it.

The rule of thumb is: if the business says "we don't need feature A anymore, remove it", then you should be able to remove that feature from the code without touching any unrelated feature. If you now remove the OrderRepository and cause a bug in the Order class due to your code changes, the business will probably wonder how that could be, because while the OrderRepository cannot exist without Orders, Orders can exist without an OrderRepository.

And if that seems a bit unrealistic, think of users: A user can easily exist without a UserRepository, but not the other way around.

That makes clear that the rich domain model is an unsuitable and generally suboptimal solution for modeling the domain of a business. The anemic domain model, on the other hand, matches it perfectly.

And one more thing: even natural language disagrees with the rich domain model. Does an order repeat itself? No! An order is repeated and that is, it is repeated by something or someone. This alone makes clear that there is an entity beyond the Order that is responsible for such action. And again, the anemic domain model is a great solution for expressing this in code.

But if you disagree, I'd like you to explain what you believe the disadvantages of the anemic domain model are.
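As a concrete sketch of the anemic approach argued for here (all names and the constant are invented): the Order is plain data, and the monthly cap is enforced by the one component that can see all orders.

```python
from dataclasses import dataclass

# Invented business constraint: at most this many items per month.
MAX_ITEMS_PER_MONTH = 10_000


@dataclass(frozen=True)
class Order:
    # Anemic entity: pure data, no references to repositories or services.
    order_id: str
    item_count: int


class OrderService:
    """Knows about all orders, so cross-order constraints live here."""

    def __init__(self) -> None:
        self._orders: list[Order] = []

    def items_this_month(self) -> int:
        return sum(o.item_count for o in self._orders)

    def place(self, order: Order) -> bool:
        # The constraint lives here, not inside Order: an Order never
        # needs to know how many other orders exist.
        if self.items_this_month() + order.item_count > MAX_ITEMS_PER_MONTH:
            return False
        self._orders.append(order)
        return True
```

Dropping the monthly-cap feature means deleting two lines in the service; the Order entity is untouched.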


You made a great example here and I absolutely agree with you.

In fact I find this type of accidental / unneeded coupling is the number one cause of problems, bugs and limitations of re-use, and thus of development velocity, in any software product. Designs where a one-way dependency is turned into a cyclic dependency are really hard to evolve, maintain, test and understand.

In fact I'd go as far as to say that, as a general rule of thumb, if you have a situation where your class A depends on class B that depends on class A, you've made a big doo doo and you should seriously reconsider your design.

(Adjacent to this rule is that classes that exist on the same level of the software hierarchy, and are thus siblings, should also not know about each other.)

In fact when you structure your code so that the dependencies only go one way, you end up with a neat lasagna code base and everything can easily be slotted in. (Combined with this is a secondary rule, which is to eliminate all jumps upwards in the stack, i.e. callbacks.)


> I would never combine functionality to update an order with the data and structure of an order. The reason is simple: the business constraints don't always live inside the order.

> Here's an example of why such an approach inevitably must fail: if the business says that orders can only be made until 10000 items have been ordered in a month, then you cannot model that constraint inside of the order class. You must move it outside, to the entity that knows about all the orders in the system. That would be the OrderRepository, or whatever you want to call it.

It's not that hard.

If the constraint in your example is a domain constraint, and so it's *always* valid, then when you hydrate an Order entity you need to provide, besides the Order data itself, the total number of orders.

```
// orders is the repository, and it hydrates the entity with the total number of orders for this month
order = orders.new()

// Apply the check
order.canBeCreated()
```

Where the `canBeCreated` method is as simple as:

```
if this.ordersInAMonth > TOTAL_NUMBER_OF_ORDER_IN_A_MONTH ...
```

Fixed.

It's the same as using the OrdersRepository to query the number of orders directly before creating one, but here, the logic is just in the class.

Now, your example is pretty stupid, so I know it must not be taken literally but...

PS: I'm anything but a DDD advocate.


I would say that this code is not good (or, to be more diplomatic: not optimal). Firstly, because between `order = orders.new()` and `order.canBeCreated()` it's possible to insert other calls or actions on or with that order, which should actually not be allowed/possible.

And second, because my original criticism still holds: you now have some order-entity-unrelated information inside the order (or inside its `canBeCreated()`). This will force you to touch the order entity when removing a business constraint that is unrelated to the (single) order entity. Because otherwise, where does `ordersInAMonth` come from? Something must be able to talk to the database.

> Now, your example is pretty stupid, so I know it must not be taken literally but...

No no, you absolutely can take it literally. It might not be very realistic, but that doesn't change the fact that we can use it to discuss the pros and cons of different designs.

> It's the same as using the OrdersRepository to query the number of orders directly before creating one, but here, the logic is just in the class.

As with the two issues I mentioned above, the problem is the "just" in your sentence. You seem to regard the code living in a different place as merely that: a different place, with no effect on productivity. But to me, having the code in the "wrong" place becomes a really big problem over time, especially in a big code base.

Also, we can extend this example. Let's say we have two or more entities. Like orders, users and stores and there are constraints that span and impact the state and/or creation of all of them at the same time.

Now let's compare our two approaches. In my case, it's rather easy: there must be some "system" or "service" that lives above all the entities constrained by a business rule. So if there is a business rule that touches entities A, B and C, then there must be some "system" or "service" that knows about all of A, B and C and can control each of them. In other words, there cannot be a "system" or "service" that controls just A anymore. The logic to ensure the constraint then lives in that service.
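A small sketch of that "service above the entities" shape. The cross-entity rule here (a store may only serve users in its own region) is invented purely for illustration, as are all the names:

```python
# Hypothetical service that sits above orders, users and stores and owns
# a business rule spanning all of them; no single entity could enforce it.

class OrderPlacementService:
    def __init__(self, orders, users, stores):
        self.orders = orders    # repositories/collections, stubbed as dicts here
        self.users = users
        self.stores = stores

    def can_place_order(self, user_id: str, store_id: str) -> bool:
        # Example cross-entity constraint (invented for the sketch):
        # a store may only serve users in its own region.
        user = self.users[user_id]
        store = self.stores[store_id]
        return user["region"] == store["region"]

users = {"u1": {"region": "EU"}}
stores = {"s1": {"region": "EU"}, "s2": {"region": "US"}}
service = OrderPlacementService(orders={}, users=users, stores=stores)
print(service.can_place_order("u1", "s1"))  # True: same region
print(service.can_place_order("u1", "s2"))  # False: regions differ
```

The point of the shape is that removing or changing the rule touches only the service, not any of the three entities.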

With your approach, how and where do you put the code for that constraint?

And let's, just for the sake of the argument, assume that you cannot push the constraint into the database, because the database would then effectively be the uber-service I described above. In reality, we might employ the database to (also) enforce constraints. But for the sake of the discussion, let's say we use a database where we cannot.

Looking forward to your response!


Links to any existing articles on this topic would be greatly appreciated.


I highly recommend a video if you don't mind the format: https://youtu.be/zHiWqnTWsn4?t=3134

The slide at 52:14 is on the SOLID principles, the first one is on SRP which gives pretty understandable advice about whether Order should have behaviours.


This is the original article from Fowler: https://martinfowler.com/bliki/AnemicDomainModel.html

By searching for that term, you'll easily find lots of other takes on the matter.


> I'll reply here with a very quick example why the anemic domain model is superior in general, no matter if you do OOP or anything else.

I can search for it of course but results that aren't about OOP purism appear to be rare.


I haven't seen a lot of evidence that Martin really has the coding chops to speak as authoritatively as he does. I think when you become famous for giving advice or being an "expert", it can be difficult to humble yourself enough to learn new things. I know personally I've said a lot of dumb things about coding in the past; luckily none of those things were codified into a "classic" book.

What strikes me about the advice in Clean Code is that the ideas are, at best, generally unproven (i.e., just Martin's opinion), and at worst justify bad habits. Saying "I don't need to comment my code, my code speaks for itself" is alluring, but rarely true (and even the best function names can't tell you WHY a function/module is the way it is). Chopping up functions and moving things around looks and feels like work, except nothing gets done, and it frankly often strikes me as the coding equivalent of fidget spinners (although at least fidget spinners don't screw up your history). Whenever Martin is challenged on these things he just says to use "good judgement", but the code and advice are supposed to demonstrate good judgement, and mostly they do not.

Personally I wish people would just forget about Clean Code. You're better off avoiding it or treating it as an example of things not to do.


I watched some talks he gave 15 years ago and what struck me was that he would use analogies to things like physics that were just objectively incorrect. He was confidently talking about a subject he clearly didn't understand at even an undergraduate level.

Then for the rest of the talk he would speak just as confidently about coding. Why would I believe anything he has to say when his confidence is clearly not correlated with how well he understands the material?


> I haven't seen a lot of evidence that Martin really has the coding chops to speak as authoritatively as he does

From what I can deduce, his major coding work was long in the past, and maybe in C++.


I read Clean Code when I started out my career and I think it was helpful for a time when I worked on a small team and we didn't really have any standards or care about maintainability but were getting to the point where it started mattering.

Sure, dogmatism is never perfect, but when you have nothing, a dogmatic teacher can put you in a good place to start from. I admired that he stuck to his guns and proved that the rules he laid out in clean code worked to make code more readable in lots of situations.

I don't know anything about him as a person. I never read his other books, but I got a lot out of that book. You can get a lot out of something without becoming a devotee to it.

EDIT: I think even UB would agree with me that his dogmatism was meant as an attitude, something strong to push back against a widespread lack of rigor or care about readable code, rather than a literal prescription that must be followed. See his comment here:

> Back in 2008 my concern was breaking the habit of the very large functions that were common in those early days of the web. I have been more balanced in the 2d ed.

And maybe I was lucky, but my coding life lined up pretty neatly with the time I read Clean Code. It was an aha moment for me and many others. For people who had already read about writing readable code, I'm sure this book didn't do much for them.


I'm going to have to admit to never having read Clean Code. It's just never appealed to me. I did read some of UBs articles a fair number of years ago. They did make me think - which I'd say is a positive and along the lines you are putting forwards.

Rigidity and "religious" zeal in software development is just not helpful I'd agree.

I do, however, love consistency in a codebase, a point discussed in "A Philosophy of Software Design". I always boil it down to this: even if I'm doing something wrong or suboptimal, if I do it consistently, then once I realise, or once it matters, I only have one thing to change to get the benefit.

It's the not being able to change regardless, in the face of evidence, that separates consistency and rigidity (I hope)!


I don't know why people take UB seriously. He never provided proof of any work experience - he claims to have worked for just a single company that... never shipped any code into production. Even his code examples on GitHub are just snippets, not even a to-do app (well, I think that his style of "just one thing per function" works as a self-fulfilling prophecy).

Maybe people like him are the reason why we have to do leet code tests (I don't believe he would be capable of solving even an easy problem).


Uncle Bob is one of the core contributors to Fitnesse, which had moderate success in the Java popularity era back in the day.

Also, you do understand that people worked as software engineers before GitHub became popular, and before open sourcing was common, don't you? So if someone is 60+ years old, chances are that most of their work has never been open sourced, and that it targeted use cases, platforms, and services which have no utility in this age anymore.

None of which has anything to do with how good a software engineer someone is.

And finally, do you have any proof that he never shipped any code into production?


> So if someone is 60+ year old, chances are that most of his work has never been open sourced,

John Ousterhout is 70 years old and one of the open source pioneers. We don't know what Uncle Bob shipped or did not ship, but his friendly opponent in this discussion definitely did ship high-profile projects.


I'm 72. As for what I have shipped over the half-century of my career, you can read all about that in part two of my book We, Programmers. Suffice it to say I've shipped a LOT of code.


Do you mean commits to the project like this crap: https://github.com/unclebob/fitnesse/commit/d6034080a04c740c...

This level of pointless obfuscation would not survive a code review at any sane dev team.


It's the kind of commit that you get from someone that wants to look productive but is just renaming variables in their IDE.


The criticism was that UB worked at a company that allegedly didn’t ship code to production, not that he doesn’t have a corpus of open source projects on GitHub.


> So if someone is 60+ year old, chances are that most of his work has never been open source

Somewhat ageist? I'm 72 and have produced a number of FOSS tools.


Truly. I know plenty of people in their 60s and 70s who use Git and are still very sharp programmers.


Using Git is unrelated to whether the software you write is proprietary or open-source.


Another example of not quite pragmatic advice is Screaming Architecture. If you take some time to think about it, it’s actually not a good idea. One of the blog posts I’m working on is a counter argument to it.


I’d love for you to expand on this!


Short version: when designing new software, you don't have its architectural picture at the beginning. So when starting from scratch, the architecture shouldn't be screaming; rather, it has to be non-committal/non-speculative, to allow wiggle room for the future. (How to achieve a non-committal architecture is the biggest topic I'm interested in, and I find one good tactic every few years.) Specifically, the architecture should emphasize entry points and outputs. That's exactly what frameworks like Rails provide. You go by entry points until some sort of custom architecture starts emerging from the middle, which is when it can slowly begin "screaming" over time.


> It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript. Stylistic refactors that induce performance regressions, extremely long and tortured method names for three-line methods, near-total animus towards comments ... regardless of who is right/wrong about what, those takes seem like sophomoric extremism at its worst, not reasoned pragmatism that can be applied to software development in the large.

I think you're talking out of ignorance. Let's take a moment to actually think about the arguments that Uncle Bob makes in his Clean Code book.

He argues in favor of optimizing your code for clarity and readability. The main goal of code is to help a programmer understand it and modify it easily and efficiently. What machines do with it is of lower priority. Why? Because a programmer's time is far more expensive than any infrastructure cost.

How do you make code clear and easy to read? Uncle Bob offers his advice: have method names that tell you what they do, so that programmers can reason about the code without even having to check what a function does. Extract low-level code into higher-level methods, so that each function call describes what it does at a consistent level of detail. Comments are a self-admission that you failed to write readable code, and you can fix that failure by refactoring the code into self-descriptive member functions.

Overall, it's an optimization problem where the single objective is defined as readability. Consequently, it's obvious that performance regressions are acceptable.

Do you actually have any complaint about it? If you read what you wrote, you'll notice you say nothing specific or concrete: you only throw blanket ad hominems that sound very spiteful, but are void of any substance.

What's the point of that?

> Maintainability/appropriate factoring are subjective qualities that depend a lot on the project, the other programmers on it, and the expectations around how software engineering is done in that environment.

The problem with your blend of argument is that guys like you are very keen on whining about and criticizing others for the opinions they express, but when lightly pressed on the subject you show that you actually have nothing to offer in the way of an alternative or a guideline or anything at all. Your argument boils down to "you guys have a style which you follow consistently, but I think I have a style as well, and somehow I believe my taste, which I can't even specify, should prevail". It's fine that you have opinions, but why are you criticizing others for having theirs?


> Comments is a self-admission you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions

This may be true in some cases, but I don't see a non-contrived way for code to describe why it was written the way it was, or why the feature is implemented the way it is. If all comments are bad, then this kind of documentation needs to be written somewhere else, where it will be disconnected from the implementation and most probably forgotten.


> This may be true for some cases, but I don't see a non-contrived way for code to describe why it was written in the way it does or why the feature is implemented the way it is.

I have to call bullshit on your argument. Either you aren't even looking because you have the misfortune of only looking at bad code written by incompetent developers, or you do not even know what it looks like to be able to tell.

The core principles are quite simple and pervasive. Take, for example, replacing comments with self-descriptive names. Isn't this obvious? A member function called foobinator needs a combination of comments and drilling down into its definition before you can get a clue about what it does. Do you need a comment to tell you what a member function called postOrderMessageToEventBroker does?

Another very basic example: predicates. Is it hard to understand what a isMessageAnOrderRequest(message) does? What about a message.type == "command" && type.ToUpperCase() == "request" && message.class == RequestClass.Order ? Which one is cleaner and easier to read? You claim these examples are contrived, but in some domains they are more than idiomatic. Take user-defined type assertions. TypeScript even has specialized language constructs to implement them in the form of user-defined type guards. And yet you claim these examples are contrived?
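The predicate-extraction point can be shown in a few lines. The message shape and field names here are invented for illustration, not taken from any real API:

```python
# Illustration of extracting an inline boolean into a named predicate.
from dataclasses import dataclass

@dataclass
class Message:
    type: str   # e.g. "command" or "event"
    kind: str   # e.g. "order_request"

def is_order_request(message: Message) -> bool:
    # The same boolean logic, hidden behind a name that states its intent.
    return message.type == "command" and message.kind.upper() == "ORDER_REQUEST"

msg = Message(type="command", kind="order_request")

# Inline version: the reader decodes the boolean logic at every call site.
inline = msg.type == "command" and msg.kind.upper() == "ORDER_REQUEST"

# Extracted version: the call site reads like the business rule.
extracted = is_order_request(msg)

print(inline, extracted)  # True True
```

Both branches compute the same thing; the difference is only what the reader has to hold in their head at the call site.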

I'm starting to believe all these vocal critics who criticize Uncle Bob or Eric Evans or any other author are actually talking out of sheer ignorance. They read some comment on some blog and suddenly they think they're an authority on a subject they know nothing about.

So much noise.


> Do you need a comment to tell what a member function called postOrderMessageToEventBroker does?

Which event broker? Will I get a response via some callback? What happens if the sending fails, is there a retry mechanism? If yes, how many retries? What happens when the retry count is exceeded? Will the order's ID be set by this method? Is the method thread-safe? Which errors can be thrown? "Don't call this before the event broker has warmed up / connected". Etc etc


> Do you need a comment to tell what a member function called postOrderMessageToEventBroker does?

Obviously not and the comment you're replying to hasn't asserted otherwise. They say, clearly, that comments should explain the _why_, postOrderMessageToEventBroker explains only the _what_ (which is reflected in the verbiage of your question). Fortunately comments are practically free and we're not limited to doing the reader one-favor-per-statement, we can explain _both_ the why with a comment (when it's not obvious) and the what with clear function names.


> I'm starting to believe all these vocal critics who criticize Uncle Bob or Eric Evans or any other author are actually talking out of sheer ignorance about things they know nothing about.

I've been programming for 60 years in many different fields. I was programming long before he was. It is possible that I have written more code than he has in more languages (36 at last count), so my criticism of his are based on real-world experience.


What I was referring to was the "why", not the "what" or "how". This is not a good function name to my eye, but YMMV: get-station-id-working-around-vendor-limitation-that-forces-us-to-route-the-call-through-an-intermediary-entity.

Instead, a comment can clearly and succinctly tell me why this implementation is seemingly more complex than it needs to be, link to relevant documentation or issues, etc.
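Here is what that looks like in practice: a short name, with the "why" carried by a comment. The vendor limitation, issue number, and all identifiers are invented for the sketch:

```python
# Hypothetical stubs standing in for a real vendor API.
class Intermediary:
    def __init__(self, station_id: str):
        self.station_id = station_id

class Call:
    def intermediary(self) -> Intermediary:
        return Intermediary("ST-42")

def get_station_id(call: Call) -> str:
    # WHY: vendor X's API cannot resolve a station directly (hypothetical
    # issue #1234), so we route the lookup through the intermediary entity
    # it does expose. The name stays short; the comment carries the reason
    # and would link to the ticket.
    return call.intermediary().station_id

print(get_station_id(Call()))  # ST-42
```

The caller sees `get_station_id`; only someone editing the body needs the history, and the comment puts it exactly where they'll look.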


> The main goal of code is to help a programmer understand it and modify it easily and efficiently. What machines do with it is of lower priority.

This mentality sounds like a recipe for building leaky abstractions over the inherent traits of the von Neumann architecture and, more recently, massive CPU parallelism, bringing with it data races, deadlocks, and poor performance. A symptom of this mentality is that modern software isn't nearly as fast as it should be, considering the incredible performance gains in hardware.

> Comments is a self-admission you failed to write readable code

I'm not buying this. It's mostly just not possible to compress the behavior and contract of a function into its name. If it were, the compiler could auto-generate the code from method names. You can use conventions and trigger words to codify behavior (e.g. bubbleSort, makeSHA256), but that only works for well-known concepts. At module boundaries, I'm not interested in the module's inner workings, but in its contract. And any sufficiently complex module has a contract so complex that comments are absolutely required.
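To make the module-boundary point concrete, here is a hypothetical function whose contract plausibly cannot be packed into a name; every detail below (retry count, preconditions) is invented for illustration, with the broker stubbed out:

```python
def post_order_message(order_id: str, attempts: int = 3) -> bool:
    """Post an order message to the event broker (broker stubbed here).

    Contract details a name alone cannot carry:
    - retries up to `attempts` times on failure, then returns False
    - must not be called before the broker connection is established
    - does not mutate the order
    """
    for _ in range(attempts):
        if _try_send(order_id):  # stub standing in for the real broker call
            return True
    return False

def _try_send(order_id: str) -> bool:
    return True  # stub: pretend the broker accepted the message

print(post_order_message("o-1"))  # True
```

A name like postOrderMessageToEventBroker covers only the first line of that docstring; the retry semantics and preconditions have to live somewhere, and a comment next to the code is the place least likely to be forgotten.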


> This mentality sounds like a recipe for building leaky abstractions over the inherent traits of the von Neumann architecture, and, more recently, massive CPU parallelism. Bringing with it data races, deadlocks, and poor performance.

No, not really. Just because you think about how to name functions, and about which portions of your code would be easier to read if they were extracted into a function, doesn't mean you are creating abstractions or creating problems.

The rest of your comments on the von Neumann architecture etc. are pure nonsense. Just because your code is easy to read doesn't mean you're writing poetry that bears no resemblance to how the code is executed. Think about what you're saying: what is the point of making code readable? Is it to look nice at the expense of bugs, or to help the developer understand what the code does? If it's the latter, what point do you think you're making?


I was quoting you, where you said readability takes precedence over technical concerns. That's what I'm challenging.

Every extra function call and object instantiation has a real cost, and abstracting ourselves away from the bare metal means we need to pay the price in terms of performance. Some very nicely readable algorithms are just sub-par in all dimensions except readability. We should optimize for performance and correctness, and readability comes second.


Code like Uncle Bob suggests is not easier to read and understand, it is harder, IMO and that of many others. Since the disagreement starts from this any further discussion is impossible.


[flagged]


I wrote IMO for a reason. The issue with your statement is that you are implying I claim the improvements in Clean Code aren't improvements. They are, for the most part, but they are improvements to those particular examples and do not generalize. The function-size thing in particular is absolutely stupid. There are also different improvements that could have been made.

In the real world optimizing for the text of the program is the wrong thing to optimize for. It doesn't matter if the text reads nicely if the behavior is wrong. Then what you want is code optimized for debugging, and code optimized for debugging wants to avoid jumping around since the more information you see in a single stack frame the better.

Similar issues with just OOP style code in general. Isolated state is nice, distributed isolated state is a nightmare. Porting that challenge over from distributed computing makes debugging the program harder, not easier, since you must now understand the history of communication between the objects to understand how a certain global state was reached in aggregate.

Contrast that with a sequence of steps operating in that larger state directly, it's way easier to follow the logic since it is explicitly written down in a single place.

Comments are also much better than convoluted method names. Even Bob, I think, would agree that Why comments are important, but How comments are extremely useful too. Consider Python libraries that write example code, with its outputs, in the documentation string (which even gets turned into automatic tests). Does the method name matter that much then? Not really.

Consider also APL and its fans. While I'm not a fan the proponents make a good point why they like it: you can see much more of your program in one go and sequences of symbols form words with precise meaning.

Basically, mathematical notation and Kanji rolled into one. How does that fit into Bob's Clean Code approach?


Here is an example: https://qntm.org/clean


> Shit-talking while hand-waving adds nothing to the discussion.

This is not presuming good faith, as the guidelines ask that we do.


> Comments is a self-admission you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions

# This is not the way I wanted to do this, but due to bug #12345 in dependency [URL to github ticket] we're forced to work around that.

# TODO FIXME when above is done.

Oh no, I so failed at making self-descriptive code. I'm sorry, I totally should've named the method DoThisAndThatButAlsoIncludeAnUglyHackBecauseSomeDumbfuckUpstreamShippedWithABug.


> What machines do with it is of lower priority. Why? Because a programmer's time is far more expensive than any infrastructure cost.

This assumes that code runs on corporate infrastructure. What if it runs on an end user device? As a user I certainly care about my phone's battery life. And we aren't even talking about environmental concerns. Finally, there are quite a few applications where speed actually matters.

> Comments is a self-admission you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions.

Self-explaining code is a noble goal, but in practice you will always have at least some code that needs additional comments, except for the most trivial applications. The world is not binary.


I'm not going to spend a long time responding to your comment, since it seems accusatory and rude; if you modify it to be more substantive I'll happily engage more.

The one specific response I have is: it's not that I

> say nothing specific or concrete: [I] only throw blanket ad hominems that sound very spiteful

...rather, it's that I'm criticizing Martin's approach to teaching rather than his approach to programming. I expand on that criticism more in an adjacent comment, here: https://news.ycombinator.com/item?id=43171470


> sheer rigidity

That looks more like a communication style difference than anything else. Uncle Bob's talks and writing are prescriptive -- which is a style literally beaten into me back when I was in grade school, since it's implied just from the fact that it's you doing the speaking that you're only describing your opinions and that any additional hedging language weakens your position further than you actually intend.

If you listen to him in interviews and other contexts where he's explicitly asked about dogmatism as a whole or on this or that concept, he's very open to pragmatism and rarely needs much convincing in the face of even halfway decent examples.

> animus toward comments

Speaking as someone happy to drop mini-novels into the tricky parts of my code, I'll pick on this animus as directionally correct advice (so long as the engineer employing that advice is open to pragmatism).

For a recent $WORK example, I was writing some parsing code and had a `populate` method to generate an object/struct/POCO/POJO/dataclass/whatever-it-is-in-your-language, and as it grew in length I started writing some comments describing the sections, which for simplicity's sake we'll just say were "populate at just this level" and "recurse."

If you take that animus toward comments literally, you'll simply look at those comments and say they have to be removed. I try to be pragmatic, and I took it as an opportunity to check if there was some way to make the code more self-evident. As luck would have it, simply breaking that initial section into a `populate_no_recurse` method created exactly the documentation I was looking for and also wound up being helpful as a meaningful name for an action I actually wanted to perform in a few places.
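That refactor can be sketched in a few lines. The names `populate` and `populate_no_recurse` come from the comment above; the tree shape and `label` field are invented to make it runnable:

```python
# Sketch of replacing section comments with named methods: the two
# commented sections ("populate at just this level" / "recurse") each
# become a method, and the first turns out to be independently useful.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.label = None

    def populate_no_recurse(self):
        # "populate at just this level": the old section comment, now a name.
        self.label = self.name.upper()

    def populate(self):
        self.populate_no_recurse()
        # "recurse": the second commented section.
        for child in self.children:
            child.populate()

root = Node("root", [Node("leaf")])
root.populate()
print(root.label, root.children[0].label)  # ROOT LEAF
```

The extracted method documents the section the way the comment did, and can also be called on its own where only single-level population is wanted.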

That particular pattern (breaking a long method into a sequence of named intermediate parts) has failure modes, especially in the hot path in poorly optimized runtimes (C#, Java, ..., Python, ...), and definitely in future readability if employed indiscriminately, but I have more than enough experience to be confident it was a good choice here. The presence in my mind of some of Uncle Bob's directionally correct advice coloured how I thought about my partial solution and made it better.

> other animus

- Stylistic refactors that induce performance regressions can be worth it. As humans, we're pre-disposed to risk avoidance, so let's look at an opposite action with an opposite effect: How often are you willing to slow down feature velocity AND make the code harder to maintain just to squeeze out some performance (for a concrete example, suppose there's some operation with space/time/bandwidth tradeoffs which imply you should have a nasty recursive cte in your database to compute something like popcount on billion-bit-masks, or even better just rewrite that portion of the storage layer)? My job is 80% making shit faster and 10% teaching other people how to make shit faster, but there are only so many hours in the day. I absolutely still trade performance for code velocity and stability from time to time, and for all of those fledgeling startups with <1M QPS they should probably be making that trade more than I do (assuming it's an actual trade and not just an excuse for deploying garbage to prod).

- The "tortured method names" problem is the one I'm most on the fence about. Certainly you shouldn't torture a long name out of the ether if it doesn't fit well enough to actually give you the benefits of long names (knowing what it does from its name, searchability), but what about long names which do fit? For large enough codebases I think long names are still worth the other costs. It's invaluable to be able to go from some buggy HTML on some specific Android device straight to the one line in a billion creating the bug, especially after a couple hiring/firing sessions and not having anybody left who knows exactly how that subsystem works. I think that cutover point is pretty high though. In the 100k-1M lines range there just aren't enough similar concepts for searchability to benefit much from truly unique names, so the only real benefit is knowing what a thing does just from its name. The cost for long names is in information density, and when it's clear from context (and probably a comment or three) I'm fine writing a numeric routine with single-letter variable names, since to do otherwise would risk masking the real logic and preventing the pattern-recognition part of your brain from being able to help with matters. HOWEVER, names which properly tell you what a thing does are still helpful (the difference between calling `.resetRetainingCapacity()` and `.reset()` -- the latter you still have to check the source to see if it's the method you want, slowing down development if you're not intimately familiar with that data structure). I still handle this piece of advice on a case-by-case basis, and I won't necessarily agree with my past self from yesterday.

> "Uncle Bob devotees" vs "Uncle Bob"

This is maybe the core of your complaint? I _have_ met a lot of people who like his advice and aren't very pragmatic with it. Most IME are early-career and just trying to figure out how to go from "I can code" to "I can code well," and can therefore be coached if you have well-reasoned counter-examples. Most of the rest IME like Uncle Bob's advice but don't code much, and so their opinions are about as valuable as any other uninformed opinion, and I'm not sure I'd waste too much time lamenting that misinformation. For the rest of the rest? I don't have a large enough sample I've interacted with to be very helpful, but unrelenting dogmatism is pretty bad, and people like that certainly exist.


Thanks for the thoughtful response. I generally don't want to get into the specifics of what Martin advocates for. Whether to prefer or eschew comments, give methods a particular kind of names, accept a performance penalty for a refactor--those are all things that are good or bad in context.

I think a lot of engineers hear "there's a time and a place" or "in context" and assume that I'm saying that the approach to coding can or should differ between every contribution to a codebase. Not so! It's very important to have default approaches to things like comments, method length, coupling, naming, etc. The default approach that makes the most sense is, however, bounded by context, not Famous Author's One True Gospel Truth (or, in many cases, Change-Averse Senior Project Architect's One True Gospel Truth). The "context boundary" for a set of conventions/best practices is usually a codebase/team. Sometimes it's a sub-area within a codebase. More rarely, it's a type of code being worked on (e.g. payment processing code merits a different approach from kleenex/one-off scripts). Within those context boundaries, it's absolutely appropriate to question when contributors deviate from an agreed-upon set of best practices--they just might not be Martin's best practices.

Rather, the core of my critique is that Martin's approach lacks perspective. Perspective/pragmatism--not some abstract notion of "skill level in creating well-factored code according to a set of rules"--is the scarce commodity among the intermediate-seeking-senior engineers that Martin's work is primarily marketed toward and valued by.

From there, I see two things wrong with Martin's stance in the Ousterhout transcript:

"Out of touch" was not an arbitrarily chosen ad-hominem. When Ousterhout pressed Martin to improve and work on some code, Martin's output and his defense of it were really low-quality. I can tell they're really low quality because, in spite of differing specific opinions on things like method length/naming/SRP, almost everyone here and to whom I've shown that transcript finds something seriously wrong with Martin's version, while the most stringent critique of Ousterhout's code I've seen mustered is "eh, it's fine, could be better". That, and Martin's statements around the "why" of his refactors, indicate that the applicability of his advice for material code quality improvements in 2025 (as opposed to, say, un-spaghettification of 2005 PHP 5000-line god-object monstrosities) is in doubt. On its own, that inapplicability wouldn't be a massive problem, which brings me to...

Second, Martin is a teacher. When you mention '"Uncle Bob devotees" vs "Uncle Bob"' and I talk about the rigidity I see in evidence among people who like Martin, I'm talking about him as a teacher. This isn't a Torvalds or antirez or Fabrice Bellard-type legendary contributor discussing methodological approaches that worked for them while making important software. Martin is first and foremost (and perhaps solely) a teacher: that's how he markets himself and what people value him for. And that's OK! Teachers do not have to be contributors/builders to be great teachers. However, it does mean that we get to evaluate Martin on the quality of his pedagogical approach rather than weighing the ideas he teaches on their merits alone. Put another way, teachers say half-right things all the time as a means of shielding students from things they're not ready for, and we don't excoriate them for it--not so long as the goal of preparing students to understand the material in general (even if some introductory shortcuts need to be uninstalled later) is upheld.

I think Martin has a really poor showing as a teacher. The people his work resonates with most strongly are the people who take it to the most rigid, unhealthy extremes. His instructional tone is absolute, interspersed with a few "...but only do this pragmatically, of course" interjections that he himself doesn't really seem to believe. In high-performing engineering departments, his material is often treated as something leaders have to guard against being taken too far, rather than something they're happy to have juniors studying. Those things speak to failures as a teacher.

Sure, software engineers are often binary thinkers prone to taking things to extremes--which means that a widely regarded teacher of that crowd is obligated to take those tendencies into account. Martin does not do this well: he proposes practices that are dated and inappropriate in many cases, while modeling a stubborn, absolutist tone in his instruction and in his responses to criticism. Even if I were to give his specific technical proposals the greatest possible benefit of the doubt, this is still bad pedagogy.



