
> if you have a small org with one CRUD app on a handful of VMs

I think most would be surprised how big a chunk of modern apps fits within that space, unless you actively push them out of it with stuff like microservices. No need to stop at a handful of VMs either, though I imagine most companies today could easily be covered by a few chunky VMs.

And yes, if you're a PaaS, doing multi-tenancy, that kind of thing, then sure, that sounds more like the right target audience for a generic platform factory.



I don't know. If you have more than a few hosts, it already seems like you need some configuration management setup so you can reproducibly set up your boxes the same way every time--I certainly wouldn't want to work anywhere with more than a couple of production systems that have humans SSH-ing into them to make changes.
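To be clear, that baseline doesn't have to mean much machinery. A minimal Ansible playbook along these lines (the host group, package, and file names are just made-up examples) is already enough to get identical, reproducible boxes:

    # site.yml -- hypothetical playbook: configure every web host the same way
    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Push the same proxy config to every box
          template:
            src: templates/nginx.conf.j2
            dest: /etc/nginx/nginx.conf
          notify: Reload nginx
      handlers:
        - name: Reload nginx
          service:
            name: nginx
            state: reloaded

Run it with something like ansible-playbook -i inventory site.yml and every host converges to the same state, no interactive SSH sessions required.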

And if that's the territory you're in, you probably need to set up log aggregation, monitoring, certificate management, secrets management, process management, disk backups, network firewall rules, load balancing, DNS, a reverse proxy, and probably a dozen other things, all of which are either readily available in popular Kubernetes distributions or else added by applying a manifest or installing a Helm chart.
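And the "installing a Helm chart" part really is about this much work in practice. For example, monitoring via the community kube-prometheus-stack chart (the release name, namespace, and manifest file below are just placeholders):

    # add the community chart repo and install the monitoring stack
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring prometheus-community/kube-prometheus-stack \
      --namespace monitoring --create-namespace

    # anything shipped as plain manifests is a kubectl apply -f away
    kubectl apply -f some-vendor-manifest.yaml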

I don't doubt that there are a lot of systems running monoliths on a handful of machines, but I doubt many of them have development teams decoupled from their ops team such that the former can deploy frequently (read: daily or semiweekly), and if they do, I'm guessing it's because their ops team built something of comparable complexity to k8s.


Nobody was changing VMs manually over SSH, beyond perhaps some deep debugging. Yes, building a VM image might be needed, or it might not, depending on the approach [1]. Most of the things you listed only become an issue after you've already dug a hole with microservices that you then need to fill, and VMs are still a managed solution. I'm not sure where people have gotten the idea that k8s is somehow easier and doesn't require significant continuous investment in training and expertise from both devs and ops. It's also a big assumption that your use case fits all of these off-the-shelf components as-is, rather than having to adapt them slightly or account for various caveats, which then instantly requires additional k8s expertise. Not to mention the knowledge required to actually debug a deployment with a lot of moving parts and generic abstraction layers.
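For what it's worth, the managed-VM route in [1] is roughly a one-liner per instance; something like this (the instance name, zone, machine type, and image are placeholders) gives you a VM whose only job is to run one container image:

    gcloud compute instances create-with-container my-app-vm \
      --zone=europe-west1-b \
      --machine-type=e2-standard-4 \
      --container-image=gcr.io/my-project/my-app:1.2.3

No cluster to operate, and the k8s-specific expertise it asks of the team is essentially zero.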

One downside I also see compared to the "old-school" approach, albeit maybe an indirect one, is that it's a very leaky abstraction that makes the environment-setup phase stick around seemingly in perpetuity for everyone, rather than being encapsulated away by a subset of people with that expertise. No normal backend/frontend dev needed to know which Linux distro the VMs were running or similar infra details; they could just focus on code--the env was set up months ago and was none of their concern (and I know there's a devops idea that devs should be aware of this, but in practice it usually just results in placeholder stuff until actual full-system load testing can be done anyway). So a dev team working on a particular module of a monolith should be just as decoupled as with microservices. Finally, for stateless app servers, maintenance was needed far more rarely than people seem to believe today.

I realize that a lot of this is still subjective and involves trade-offs, but I really think the myth-building that things were maintenance-ridden and fragile back then is far overblown.

[1] https://cloud.google.com/compute/docs/containers/deploying-c...



