The problem is that there are so many developers now who have never experienced anything that isn't some botched attempt at microservices. The idea that it's possible to encapsulate code and separate concerns in any other way is foreign to them, and an "API" to them is 100% synonymous with a REST/gRPC interface. So there's nothing for them to revert to and they are doomed to repeat this pattern, clearly with the impression that this is what app development is.
Meanwhile a lot of the industry is trying to tell them that their problem is they haven't separated things enough and should be using lambdas for everything.
> So there's nothing for them to revert to and they are doomed to repeat this pattern, clearly with the impression that this is what app development is
The only practical solution I have found for this: Carefully select a Padawan and guide them gently away from the forces of evil until they have gained enough situational awareness to spot these patterns and defend themselves. Not everyone can be saved, and at this point I fear that it is a majority.
If someone goes too far into cloud insanity, there is (in my experience) very little you can do to bring them back down to reality. At least, not on the time frames the business owners seemed interested in when looking at new hires. I have had a much easier time taking in someone totally green and getting them happy/productive on monolithic software than I have with uService/AWS-certified 'experts' (et al.).
It's like I'm talking to children: they think I'm just old and stuck in my ways, and that I just don't understand.
Eventually my prediction comes true but none of them have ever said anything, they just go implement what I suggested and act like they solved the world's greatest mystery.
I guess it's easy to forget my comments from months earlier, it's almost as if I've seen their code before and know how it ends.
One thing that has periodically worked for me is the application of some fun infographic-tier latency figures to really drive home the argument for why distributed anything generally sucks ass. I.e. Would you rather that customer transaction either be:
1) A direct method invocation resolved within the same L1 cache contents.
or
2) One quick network hop just 5 milliseconds away?
Assuming worst case processing semantics (global total order), you would be able to process ~10 million times more customer transactions per unit time with option 1 vs option 2. This is seven orders of magnitude.
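The back-of-envelope arithmetic behind that claim can be sketched in a few lines. The latency figures are the usual infographic-tier assumptions (roughly half a nanosecond for an L1-resident call, and the 5 ms hop from option 2), not measurements of any particular system:

```python
# Serial throughput comparison under global total order:
# each transaction must finish before the next one starts,
# so throughput is simply 1 / latency.
l1_call_s = 0.5e-9     # ~0.5 ns: direct method call, data hot in L1 (assumed)
network_hop_s = 5e-3   # 5 ms: the "one quick network hop" from option 2

ratio = network_hop_s / l1_call_s
print(f"option 1 handles ~{ratio:.0e}x the transactions of option 2")
# -> ~1e+07x, i.e. seven orders of magnitude
```

The key assumption is the global total order: once transactions must be processed one at a time, every microsecond of per-transaction latency divides your ceiling directly, and no amount of horizontal scaling buys it back.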
In my experience, not a whole lot is truly worst case, but most complicated & important business systems (banking/finance/inventory/logistics/crm/etc.) are pretty close if you don't want to be chasing temporal rabbits around all day.
I worked at a place that had a microservice for validations. You made a request with the item (singular) to be validated and the rule to validate it by. The most common validations were things like "is this value greater than 15" or "are these values equal".
Nobody else saw the problem here. Establishing an HTTP connection, serializing a JSON document, deserializing it on the other side, then doing "14 == 15" anyway before going back the way it came. Lighting up a hundred thousand lines of code vs one.
This was done as it iterated over large files, millions of items, millions of requests.
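A hypothetical reconstruction of the pattern makes the absurdity concrete. The service name and rule format below are invented for illustration; the real HTTP call is simulated with a local JSON round trip so the sketch is runnable:

```python
import json

def validate_remote(item, rule):
    """What the ValidationService round trip amounts to: serialize,
    ship over the network (simulated here), deserialize, then run
    the one-line comparison anyway."""
    payload = json.dumps({"item": item, "rule": rule})  # client serializes
    request = json.loads(payload)                       # server deserializes
    if request["rule"] == "greater_than_15":
        return request["item"] > 15                     # the actual "work"
    raise ValueError("unknown rule")

def validate_local(item):
    # The one line the service ultimately executes.
    return item > 15
```

Multiply the remote version's connection setup and (de)serialization overhead by millions of items per file, and the cost of the extra hop dwarfs the comparison it exists to perform.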
Your story both makes me want to laugh and cry because I've seen this pattern so many times.
These days when I do architecture interviews, I always make sure the problem can be solved distributed or monolithic, and when they go distributed, I ask them why and have them explain the trade-offs. If they go monolithic, I ask them why not distributed. Etc. For senior engineers, it's such a good indicator of whether or not they've truly worked in distributed systems and understand the human side of the problem.
The big O stuff is great, but the human and organizational costs are easily as important to assess as the computational and memory costs.
I am sure "ValidationService" made perfect sense to everyone the first day it was dreamed up. IIRC we had one named that too for a while back when we were lost in the dark forest.
This is actually why I think we need more cloud and not less. The problem is poorly designed code, not monolith or microservice.
For example, ValidationService as described above could be replaced with OpenPolicyAgent and would scale much better.
That said, I could see someone pointing at OpenPolicyAgent and asking why they didn’t separate distributed updates from the policy spec as two distinct projects, this way you could re-use parts of OPA to distribute config files or feature flags. You can now too, but it requires the extra hop of compiling a function to answer.
Come to think of it, distributed state updating is roughly the same general problem that Kubernetes Operator pattern solves.
Another way of putting it, we have cloud primitives but not enough of them, maybe - or they aren’t well enough explained - to make the choices or restrictions of different architectures a bit more obvious? I especially look forward to a future when we get “design systems” for the programming we do with business logic in the cloud.
I’m over-emphasizing cloud here. I mean code you don’t directly have to maintain and write regardless of where it runs or who wrote it.
Perhaps I didn’t explain myself fully. OpenPolicyAgent distributes the policy so you can integrate[1] it with systems that need to ask for validation such that lookups are constant time and performed on the same machine but distributed - scaled - using code and data shipped to every consumer/integrator of the validation service, instead of making a separate call to validate. So you can have centralized control of your validation logic and data but have it run as part of a decentralized application or on the same host as part of a pod. That is what I meant by a design that “scales”.
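That "centralized control, decentralized evaluation" shape can be sketched without OPA itself. The rule format and names below are illustrative, not OPA's actual bundle format; the point is that each consumer fetches rules as data, compiles them once, and then every per-item check is an in-process call:

```python
# Rules as data, fetched periodically from a central policy service
# (here just a dict standing in for that fetch).
RULES = {
    "price_cap": {"field": "price", "op": "lt", "value": 100},
}

OPS = {
    "lt": lambda a, b: a < b,
    "eq": lambda a, b: a == b,
}

def compile_rule(rule):
    """Turn a rule description into a plain local function."""
    op = OPS[rule["op"]]
    field, value = rule["field"], rule["value"]
    return lambda item: op(item[field], value)

# Compiled once per policy update; evaluated with no network hop per item.
check_price = compile_rule(RULES["price_cap"])
print(check_price({"price": 42}))   # True
```

Validation logic and data stay centrally managed, but the hot path is a function call on the consumer's own machine.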
That said, depending on the type of validation performed, zero trust systems for example are often built using centralized validation endpoints, so it’s not entirely a bad practice.
Reading your comments I get the sense that you consider system architecture to be reducible to 2 possible choices (monolith and m.s.) and you are confusing distributed systems with micro-service architecture. Would you consider a classic 3-tier (api - domain logic - database) to be a “distributed” system? Are micro-service systems “distributed”? (The answer is no to both, btw).
Micro-services were promoted for a variety of reasons. One of which was facilitating rolling out projects by consulting companies given the economic incentives for such businesses. Related was the VC driven development model that required rolling out features at greater speed than that of thoughtful development. Another was (remains?) the prevalent lack of competence in designing schemas (required for the basic n-tier system culminating in the database tier).
My advice would be to carefully construct an explanation of the scenarios where microservices do apply, and then explain why you are not currently in that situation. The best microservice examples I've ever found were for the addition of features to legacy systems, the ability to write minor additional things in alternate languages, and most critically, cases where the original code base couldn't be changed or was lost, and yet more functionality was required. There are valid reasons to add a microservice; there are probably not good reasons to take a normal code base made by a modern company and completely shift to only microservices.
>My advice would be to carefully construct an explanation of the scenarios where microservices do apply, and then explain why you are not currently in that situation.
I tried that once. Didn't work. End result was just pure pain.