
> Personally I think it's pretty great that I can write Dockerfiles, run them locally, and call it a day because this is also what runs in production.

It goes far, far beyond Dockerfiles. Are you on-call for production? Can you debug a problem in production?

If not, then that's the root of the problem that the dude is talking about. If these are true for you (not specifically you, any reader of this):

- You don't have a good idea of how to ship a change all the way to production after committing it

- You would not know how to debug and fix a failure of your code in production

- Your "Dev" and "DevOps" teams are in separate departments

...then congratulations, you have just reinvented the old-school sysadmin/developer dichotomy from the 90s. You just happen to be using Dockerfiles instead of Makefiles.



I'm glad there's a dichotomy. I (a dev) never signed up to be on call 24/7 for production, and it's ridiculous to just assume that I should be. I have a life outside of my work. If the system needs 24/7 uptime, they had better be paying someone other than me to ensure that's the case, because my time is too valuable to waste my already precious free time on fixing bugs in prod.


> If the system needs 24/7 uptime, ..., because my time is too valuable to waste my already precious free time on fixing bugs in prod.

As an ops-sided person, this is the exact reason why ops folks always said no to releases on a Friday afternoon.

Ops people would rather be in the pub enjoying a bug-free weekend than risk pushing developers' buggy, broken code, in spite of the desperate pleas and guarantees that this batch of code is *definitely* bug free. Last week was just a one-off. Honest. We swear.

EDIT: In the old style of Ops team versus Dev team versus Infra team versus everyone else.


Totally, and I'd never suggest deploying on a Friday afternoon.


You've completely missed the point of what I was saying and/or are unwilling to see what my point was.

That will only work if you never need to release anything over a weekend... and some of us actually need/want to be able to do that -- safely. As a result we work in a different team structure, with different tooling to mitigate risks, and we share the responsibility as best we can (no, not everyone is expected to be on call 24/7 for production).


Who wrote those bugs in the first place?


Good point, devs should either just not write bugs OR be on call 24/7. That's a totally reasonable way of running things.

I understand if people's lives are on the line. Otherwise, recognize that bugs happen, and either be ok with that or come up with a process (QA) that finds these bugs before deployment or that allows you to roll back to a more stable version easily.
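The "roll back to a more stable version easily" part can be sketched in a few lines. This is a toy illustration (not anything from the thread, and not a real tool's API): keep the previous release around, and restore it when a post-deploy health check fails.

```python
# Toy sketch of deploy-with-rollback. `healthy` stands in for a
# real health check (smoke tests, error-rate monitoring, etc.).

def deploy_with_rollback(releases, new_version, healthy):
    """releases: deployed versions, newest last.
    healthy: callable(version) -> bool.
    Returns the version that ends up live."""
    releases.append(new_version)
    if healthy(new_version):
        return new_version          # new release is live
    releases.pop()                  # discard the bad release
    return releases[-1]             # previous version is live again

releases = ["v1", "v2"]
live = deploy_with_rollback(releases, "v3", healthy=lambda v: v != "v3")
# v3 failed its health check, so v2 stays live
```

The point is that rollback only works if the previous release is kept deployable; that's a process decision, not a heroics-at-2am decision.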


I'm not sure where you got this idea of people being 24/7 on call for production. That's not a thing in the real world. We use rotas.

> come up with a process (QA) that finds these bugs before deployment or that allows you to roll back to a more stable version easily.

Aha! This magical process that mitigates deployment risks is also known as CI/CD. Which, it turns out, is usually used quite a lot in DevOps teams as it means each team member can see their code move from development all the way to production and fix problems as they appear. (hint: understanding this process is the "being responsible for your code in production" part).
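The CI/CD idea described above reduces to something very simple: every change runs through the same ordered stages, and a failure at any stage stops the change before it reaches production. A minimal sketch (stage names and functions here are illustrative, not any real CI system's API):

```python
# Minimal CI/CD pipeline sketch: run stages in order, stop at
# the first failure so a broken change never reaches production.

def run_pipeline(stages):
    """stages: list of (name, callable() -> bool).
    Returns the name of the first failing stage, or None if all pass."""
    for name, step in stages:
        if not step():
            return name     # stop: the change goes no further
    return None

result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),  # a failing test blocks the deploy
    ("deploy", lambda: True),
])
# result == "test"
```

Because every team member's change flows through the same stages, whoever broke it can see exactly where it stopped, which is the "being responsible for your code in production" part.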


What's your point?


His point is that Ops time is more expensive than Dev time; they get paid more. The Dev teams should really fix their own bugs if they happen in the wee small hours.


Not just that (since there are a lot of situations where you may not be able to actually fix bugs in the moment, just work around them), but it also aligns incentives and enforces lessons learned. If you know you're gonna be on the other end of that pager, maybe that hack won't see the light of day. For more senior folks, pattern recognition around how things were built and what issues they led to is incredibly valuable. Being able to tell a junior engineer "no, don't do it like that, we did that last time and it led to x, y, and z" could save the company a ton of money and pain.

If you're separated out from ops, what would ever drill that into you? Saying you're too busy for ops is just wild to me. That's some of the best reward-for-effort work someone can do.


All they said is they want to keep work within contracted hours.

> waste my already precious *free time* on fixing bugs in prod

It doesn't mean bugs cannot be raised, planned and addressed through the normal development process.

Also, an abundance of bugs in production could indicate cutting costs on testing.


Not to mention the layer that lives behind this.

Can you troubleshoot network level issues that seem to occur between your system and someone else's?

What about storage? Can you make sure your data is still there if your environment gets royally screwed?

I know these are usually done by separate companies nowadays, but getting an actual computing environment up and running with redundancy in place (down to the cooling and electrical level) is no easy feat.



