Hacker News

We are moving to such a cloud environment, and it makes me sick.

Maybe you need to dockerise Mongo, MySQL and 5 other dependencies - I can get that, but I don't get why the rest of the code should still be running in the cloud. Python, Rails, Node? Why? Developers should be able to run one shell command to install Node.

Dev experience excuses are just excuses for a bad setup. So, fix your setup, please.

Not being able to run the project natively is a big red flag for me, and when I move jobs it will be my first question.



> Why?

Three reasons from my perspective:

1) There are no setup steps. You just open your editor of choice and everything is set up for you. All the build tools, linters, specific versions of software $XYZ, etc.

2) A large VM (16 cores, 96GB of RAM in my case) speeds up builds and tests dramatically.

3) Zero productivity lost if your laptop breaks. Just grab a new one from IT and you're up and running exactly where you left off with zero effort.

> Not being able to run the project natively

What do you mean by this? It's just running on a remote server rather than your laptop.


I don't see #1 happening. There is no way anyone has my personally customized development setup in some VM ready to use. More likely it is going to be one common setup that they want everyone to use.


Why are you speaking as if this is some weird future state that doesn't exist and has lots of unknown downsides? This is the current state in some companies. I work in this flow daily and really appreciate the value it delivers.


I haven't experienced it yet, only read about it in various places. So at this point it only exists for me as a future possibility, and as speculation about how the various places I've worked, or know well for other reasons, would use it.


> my personally customized development setup

Why would your personally customized development setup even need to be on the remote host? You pop up your IDE, connect to the remote host via SSH and you are set.


Also, most “personally customized” environments can be copied around with one config file or a few dotfiles at this point
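A sketch of what "a few dotfiles" can look like in practice - the directory layout and the `link_dotfiles` helper name are made up, not anything from the thread:

```shell
# Symlink every dotfile from a checked-out dotfiles directory into $HOME,
# so a fresh machine (or fresh cloud VM) picks up your config in one step.
set -eu

DOTFILES_DIR="${DOTFILES_DIR:-$HOME/dotfiles}"

link_dotfiles() {
  local src
  for src in "$DOTFILES_DIR"/.[!.]*; do
    [ -e "$src" ] || continue              # skip if the glob matched nothing
    ln -sf "$src" "$HOME/$(basename "$src")"
  done
}

link_dotfiles
```

Symlinking (rather than copying) means edits in the checked-out repo are live immediately, so keeping machines in sync is just `git pull`.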


My IDE is designed to run on the remote system and be accessed via SSH, and so it needs its configs synced over or packaged up in some way. It is a bit more work to set up, but pays dividends in many ways.


For things like GitHub Codespaces, you can bring your own dotfiles to company codespaces. https://docs.github.com/en/codespaces/customizing-your-codes...


Ever seen Google Cloud Shell? Set up your own local install of software and configuration; it persists on disk.


It's not for you, it's for every new hire. You can always build on top of the base image, but it sets a nice lower bound and gets people started fast.


The problem is that people who put together these sorts of systems tend to think theirs is best and everyone should use it. For example, I once interviewed at a place where every developer got an identical Mac with the dev environment already set up, and you couldn't use anything else because they thought it better for everyone to have the same setup to enable easier pairing. I could really see something similar happening with these systems... it seems like the kind of setup that would have appealed to that company I interviewed at.


Setting up a working dev environment can take days to a week+ in my experience. Onboarding can take a while, so getting a VM image or whatever that works out of the box and that you can build on is a really good thing.

It's great too because if your own environment breaks, you can compare to a working one to fix it.

Just because a company might abuse it doesn't mean we should avoid it.


I have no problem with the idea of having a pre-existing image to kick off your dev setup. Just more that, human nature being what it is, it is unlikely to stop there. It will vary from company to company of course and I think my current company would adopt it in a good way but I also can think of one or two previous employers who would politicize and abuse the idea.

I.e., it's no slam dunk. Like any other tool of this sort, it will be used for good and ill.


Yeah, as ops gets more complex I don't want to tell developers they need to install XYZ at a hundred specific versions. Just grab the latest app image.


For work-related stuff: I disagree. We're not even at Uber's scale at my company and I hate having to manage the ever-changing set of dependencies that I have no control over.


The thing is, it doesn’t have to be ever-changing, and creating a reproducible working setup locally can be achieved by appropriate setup scripts.


One of the reasons we developed devpods was because our setup scripts were unmanageable.

Instructions to new employees would say things like "run this thing, scroll up past dozens of pages of stdout noise and manually deal w/ the errors buried therein by looking up relevant FAQs in some doc somewhere"

The scripts would touch every technology imaginable, from brew to npm to arc (Phabricator's CLI) to proprietary tools, and no single person understands how the setup scripts work in their entirety.

One exercise we'd run new employees through was to get them to brainstorm about how some system ought to work. The lesson was that just about any idea they could come up with would have already been tried (and failed).

I'm told that devpods aren't even the first time we tried cloud dev envs. Presumably lots of lessons were learned from previous attempts at improving dev envs.


> run this thing, scroll up past dozens of pages of stdout noise and manually deal w/ the errors buried therein by looking up relevant FAQs in some doc somewhere

Okay, but if you can define/script your environment enough to run in a pod, couldn't you just run that locally? You already have to solve the manual steps either way...


Re: if a computer can run scripts in a pod, can’t you just run them locally?

It’s a construct of the order of operations.

One core issue with local is the variety of OSes and local build tools that would fundamentally mess with the centralized scripts. Getting company-wide setup scripts to work on top of existing laptop config was a continuous challenge. Hence, having a consistent baseline (OS flavor, system-level packages) on top of which the company-wide “setup script” is added, followed by “developer customizations”, seems to work great.

Central teams can manage the first couple of steps and individual user-specific configuration can be managed much better in a decentralized manner.


It's the same convo as docker: can't one just use setup scripts for reproducible envs? In theory yes, but in practice, theory and practice aren't the same thing.

It's easier to apply best practices to a greenfield project written in a modern language (hence devpods) than trying to comb through a decade+ of tech debt written in bash.


Long before devpods (circa 2014) we had boxer, which was a command line tool that abstracted away the nitty gritty details of setting up a dev environment by orchestrating calls to AWS, vagrant, puppet, rsync, etc for you. However this was also a time when there were only a handful of services you would need to run, and because remote editing tools at the time weren’t as nice as they are now, it only worked really well if your preferred editor was vim/emacs so you didn’t have to deal with the (occasionally buggy) remote file syncing.

The devpod flow is a lot smoother. I had my laptop replaced recently and was up and running again in an amount of time that felt like cheating.


Setup scripts that mutate an operating system are fragile. Break something? Unless you understand how the scripts work (which will become stupidly complex at a scale like Uber’s) you will have to reinstall your machine. Or you will have to staff a ton of support to help users when they do break their configuration.

Spinning up a VM with an image containing all the development tools is a much smoother experience most of the time. The only reason why I don’t use it where I work is because I use vim and network adds too much latency for me.


That's a false dichotomy. Just because you don't have a VM doesn't mean that the alternative is a build environment and setup that mutates an operating system.


Using a VM is fine. One can use it locally.


In practice setup scripts are brittle. New person joins the team and their script fails because it turns out everyone else’s dev environment was only working due to something left behind by some older script. Hotfix requires checking out an old branch but now I need to run an old script but my setup is from the future - the old script doesn’t know what needs to be un-done to get the system to a consistent state. And what about data? My old data has gone, the data I have is from the future. Never mind simple stuff like the script author assuming node is in /usr/bin but that’s not true for me because I use nvm.
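That last failure mode (assuming node lives in /usr/bin) is cheap to guard against inside the scripts themselves. A minimal sketch - `require_tool` is a hypothetical helper name, not anything from the article:

```shell
# Resolve a tool from PATH instead of hardcoding a path like /usr/bin/node,
# and fail early with a readable message instead of a buried stack of errors.
require_tool() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "setup error: '$1' not found on PATH (nvm users: run 'nvm use' first)" >&2
    return 1
  fi
  command -v "$1"   # print the resolved path so logs show which copy was used
}

# Usage in a setup script:
#   NODE="$(require_tool node)" || exit 1
#   "$NODE" --version
```

It doesn't solve the deeper problem (old branches vs. new environments), but it turns "mystery failure on page 12 of stdout" into a one-line diagnosis.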


We’ve had good luck using nix for this. Same dependencies for the local dev environment as for the built containers. Same config. All deterministic. And not just major deps like programming languages, but sed, bash, grep, and all the other shell tools, so no more worrying about people running scripts on mac’s ancient version of bash.


I'm sorry, but this is the task of proper configuration management. Yes, don't depend on local stuff that isn't configuration-managed. Don't have a workflow where you check out an old branch in a newer environment. Of course you need a way to establish the older environment in that case. I'm assuming that Devpod does a similar thing on the server side. My point is, the ability to reproduce a working setup doesn't imply a requirement of having to work remotely.


> I'm assuming that Devpod does a similar thing on the server side

That's why they use devpods. You move the entire configuration to the cloud. There is absolutely no reason why it should run in your local environment.


But now you're saying "we shouldn't change dependencies". PyPy turns out to work nicer than CPython? Now you have a rollout project for your team. Swapping out some libraries? Another rollout project.

Now, it's great if you can avoid complex setups in general cuz complex is harder no matter what! But if you're starting from a complex setup, having easy ways to roll out changes is an important step in actually doing the simplification work to get to where you want to be!


Easy in theory, but always breaks down for a non-trivial project with >5 developers on it.


> it doesn’t have to be ever-changing

It IS ever-changing. One library among the zillions of local dependencies that you need to build something changes, and you have to go through dependency hell.

If the entire software world valued backwards compatibility and vigilantly guarded it, that wouldn't be a problem. But in the package hell that we are living in today, every other day a package update brings some incompatibility or breaking change for this or that other thing.


I'm 100% the opposite side of this argument. Running your entire stack locally is a silly trend that cost us a decade of productivity. In the early to late 2000's the remote-dev approach was very common. It wasn't "push a button and you have a dev instance!" easy but it yielded similar results.


I'm slowly getting to the point of "I want my computers to be a thin client around some config files". Treat your computer like cattle, rather than a pet, etc etc.


Same. I'm considering using a flatpak for this as it seems to have an interesting feature set to build a pre-setup development environment on top of. Just want my shell setup, tooling configs and a few other things bundled up in an easy to use fashion.

Currently I just have a git repo with my setup mostly in it (sans executables) with a way to get it going on a new machine. It works but is rather hackish and requires a bit of work to keep in sync.


[Obligatory Nix stanning here]


oblig. advice to wipe / on every boot: https://grahamc.com/blog/erase-your-darlings


Great idea! Also, restore from backup on every boot. The trouble is, I never reboot.


Unless your dev environment literally took one decade to set-up, then it didn't take "a decade of productivity" ;)

For people who knew how to run VMs or even chroots, this was not a big issue


It's one thing to "develop" in vim over a slow ssh session or VNC. The latency makes that very annoying.

But we're at the point where we can still run a lot of analysis on the local machine and offload the slow analysis and building onto a more powerful computer, so that remote development is faster even with latency. And we're also getting to the point where internet connectivity is really fast, so latency is low.

Like with most systems, the important part of remote development is that it's done well. And it seems like most employees are comfortable with Uber's setup.

Although, I do have to say that most companies really shouldn't use remote development unless they have some excuse, like being Uber-sized, so that the benefits outweigh the costs. I've done remote work at uni, and their servers and integrations aren't nearly as good, so it's a chore; it's faster to use Mutagen and just develop locally, then build/deploy remotely.


For me it’s about the hardware. I do molecular simulations and have a machine with 96 cores, 512GB of RAM and 4 GPUs in it - I wouldn’t want this noisy machine in my house, so using VSCode with a remote server allows for everything I need to do: interactive debugging just like it’s local, automatic SSH so the terminal feels like it’s local, and only a slight (few-hundred-millisecond) delay on saving.

It really feels like it’s local, I enjoy it
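Much of that "feels local" terminal comes down to SSH keepalives and connection multiplexing. A sketch of the relevant ~/.ssh/config entry - host name, user and key path are made up:

```shell
# Append a hypothetical remote-dev host entry to ~/.ssh/config.
# ControlMaster reuses one TCP connection, so new terminals and saves
# skip the handshake entirely; ServerAliveInterval keeps it from dropping.
mkdir -p ~/.ssh/sockets

cat >> ~/.ssh/config <<'EOF'
Host simbox
    HostName simbox.example.com
    User dev
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 30
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m
EOF
```

After that, `ssh simbox` (and VSCode's Remote-SSH, which reads the same file) opens near-instantly once the first connection is up.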


Also, It just passes the problem down the line. If your dependencies are too messy to manage locally as a dev then they're gonna be an ops nightmare too. Just commit to maintaining a flake.nix and use whatever computer you want.


It's a lot easier to manage dev envs if everyone is literally using cloned EC2 instances that can be remotely fixed/managed by devexp teams. These types of projects are a big step forward IMO


That's definitely one way to do it. I'm a founder at a remote dev infra startup (usenimbus.com) and this was the model we started with because it was just so simple. But we quickly learned there's no one-size-fits-all solution, so we expanded into supporting containers, Terraform, etc.


> why the rest the code should still be running in the cloud. Python, Rails, Node? Why? Developers should be able to run 1 shell commands to install node.

Because making local environments exactly identical to the actual production environment is nigh on impossible. There will still be minor differences. And a crapton of work will go into however the local development environment is maintained.

If engineers maintain it themselves, each of them will literally waste time maintaining the local dependencies needed for the local environment - frequently hitting blockers as the package-management hell we live in these days breaks one thing or the other. If you have 100 engineers, as an example, your organization will lose 100 man-hours each month to such local development environment issues.

If you go the route of having infra or dev experience teams maintain them, then that team will spend that effort keeping the scripts and whatever else is used to keep the local environments on the engineers' computers up to date and working.

Instead, that infra team can just stand up dev versions of their infra/cluster/whatever, give the engineers access to that environment through a VPN etc., and voila - you instantly remove a lot of those lost man-hours.

Moreover, you will never encounter totally unexpected bugs or performance problems arising from unforeseen incompatibilities between the engineers' local environments and the actual prod environment.

> So, fix your setup

Life is not long enough for hundreds of engineers to spend their time fixing totally unnecessary package-management conflicts created by the utterly insufferable package and dependency hell that we are living in today. If you like suffering through that dependency hell, good for you. Most of us prefer to ship code and make things happen.


I think you’re overlooking the beginning of this article, where they specifically call out that the use case is moving towards their monorepo(s). Do you really think all of Uber’s source could be built and rebuilt continuously on one developer laptop while keeping pace with organization-wide releases? It’s one thing to build a submodule or a few modules locally, but they’re specifically talking about the efficiencies gained through caching ALL build artifacts and source code, and leveraging data locality to place an otherwise inordinate amount of compute power in the hands of developers to do as they please.


How do you run a job that requires 64GB of RAM and dies when you run out of memory? Or is very CPU intense and takes 3 days on your laptop? If you test it in CI, how long is the dev cycle waiting for your CI to run? What if every dev ends up with slightly different local env that creates different build deps, that turn into test differences? How do you connect the cloud stuff to the non-cloud stuff without running into yet more complexity trying to connect them together?


Same here. We have a similar setup as described in this article. But they also issued me a 16-core workstation with 64 GB of RAM! Like just let me use my hardware goddammit.


In a large org those local resources are allocated for running Tanium.



