Go at Digital Ocean (speakerdeck.com)
189 points by dcu on Nov 5, 2017 | hide | past | favorite | 103 comments


Very interesting presentation. From my perspective it was interesting that they used GitHub Enterprise with Drone for CI, and Concourse and GoCD for CD. At GitLab we're planning a 'CI Only' mode http://bit.ly/2Aj70zv because companies like Datadog are using GitHub Enterprise with GitLab for CI. Based on this presentation I've asked the team to rename it to 'CI/CD Only' to stress that people can use one application for both CI and CD.

BTW I'm not sure if this comment is adding to the conversation or if it is too self-promotional. If it gets downvoted I'll delete it.


Author here. Thanks for the advice. I love Gitlab and what they are doing.

Because the slides don't reflect this very well, I thought I'd chime in. Drone and GoCD are both currently being decommissioned as we fully migrate to Concourse as our single CI/CD platform. Because our current business runs like a clock, you can imagine it's not a one-click process.


This looks great, I've worked with teams that have their code on GitHub and don't want to move but would benefit from using GitLab for CI/CD.


Any way to get notified when CI/CD Only mode comes out? My org would definitely be interested in using GitLab for CI while keeping our repos on GitHub.


Thanks for your interest. We don't have a set date, but the issues linked from the presentation give an indication; you can follow them.


Conversely, I'd be interested to know how and why they're using Concourse for CD but not CI. My experience with Concourse taught me that its model deals much better with CI, which is isolated from the rest of the world, than with CD, which isn't. For CD it just became a very complicated way to run shell scripts.


Personally I find Concourse to be better at CD than CI. We've been using it for a couple of years now to deploy Cloud Foundry which has a complex dependency graph of deployment steps. Concourse pipelines are great at modelling this, and resources like Terraform[1], Docker and S3 save you writing what would otherwise be a hell of a lot of Bash.

Disclosure: I run a company that sells hosted Concourse.

1: https://github.com/ljfranklin/terraform-resource


The Go vendoring stuff and GOPATH sound like a complete nightmare to me. Also, the fact that imports are full git URLs sounds totally crazy. Is `dep` fixing all that? Last time I checked out Go, a few years ago, I didn't continue because I found the story of actually installing dependencies and keeping track of versions very confusing. You couldn't even `go get` a specific git tag or anything.

I'm happy that there seems to be some movement. But now we have godep, govendor, glide, and dep, and every project uses one or the other. Are they all compatible? Or is it a minefield?


Go vendoring undeservedly gets such a bad rap. In practice it's the simplest and most straightforward dependency system I've used (since 1.4/1.5 added vendoring).

Step 1: check out the code you want to use. That's actually the only step, and you can do it by hand or with one of a multitude of tools. The tools all differ slightly, and their metadata and manifests aren't compatible with one another, but the code they put in your repository is, which enables anyone to clone the repository and instantly build against the exact versions of all your dependencies.

Imports also aren't required to be full git URLs; they can be, as a convenience that tells you where to get a package. You can't `go get` a specific tag, but that's fine, because `go get` isn't a package or dependency manager. It's just a convenience for grabbing code onto your system, not for freezing a version of a dependency into your project.

Frankly, the amount of people that put their nose in the air at go is just fine with me, just makes it more of a 'secret productivity tool' I guess.

Edit: not to say it wouldn't have been nice if `dep` had been the one blessed path at version 1. Still, I have extremely little to complain about with the language and tooling since 1.4/1.5.


The fact that the full URL is used to import packages makes it impossible to fork a repository.

Once you've forked it, you have to change all the imports across the repo, and then it's very hard to either make pull requests or pull new commits from the original repo.

This is one thing I really dislike about go dependencies.

If you have a solution for this, let me know, I would be very interested to know more about it.

But overall, I love go. I use it all the time anyway.


You don't have to change the package name just because it's forked.

I threw together an example of doing this with `dep`. I forked Fatih's color package, and am using it as `github.com/fatih/color` no problem: https://github.com/sofuture/colortest (the fork itself is here: https://github.com/sofuture/color)


As I said in my other comment (https://news.ycombinator.com/item?id=15632049)

It works for you because it's a small project that doesn't import any of its own packages internally.

But if you look at "echo", it imports some of its own "echo" packages internally.


Okay, updated my example with `echo`. Check it out:

Using the original package name: https://github.com/sofuture/colortest/blob/master/main.go#L7

I can refer to my fork: https://github.com/sofuture/colortest/blob/master/Gopkg.toml...
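For anyone who can't click through: the trick is dep's `source` field in Gopkg.toml, which keeps the import path pointing at the original name while fetching code from the fork. A minimal sketch, using the repo names from my example above:

```toml
# Code keeps importing github.com/fatih/color;
# dep fetches it from the fork instead.
[[constraint]]
  name = "github.com/fatih/color"
  source = "https://github.com/sofuture/color"
```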


Oh I see.

Thank you for the example :)


That doesn't matter is what I'm saying :) The package name of my fork isn't changed, and doesn't need to be, regardless of what else uses it. The only thing I had to do was specify 'use my fork's repo' in the Gopkg.toml file.

Edit: I'll fork echo in my example and show you.


You don't have to use the full URL to import a package. The name of a git repository doesn't have to match the remote name, though, so I don't see why it would be a problem to fork a repository.


Let's say I would like to fork "echo".

Echo imports some of its own packages inside the project.

E.g.: https://github.com/labstack/echo/blob/a098bcd3b0c445dde3d380...

So if I fork "echo", I have to replace the URLs in their imports so that they point at my fork's packages.


I agree vendoring is actually good with just a simple folder, though IMO (with the benefit of hindsight) GOPATH was a mistake, and a per-project vendor folder for dependencies would make much more sense for collaborative projects outside large companies.

`go get` does have some support for tags: there is code in `go get` to look for tags like go1 and go2, which are as yet unused (I wish they'd used the mechanism for dependency version tags like v5.2 instead; as it stands it may never be used):

https://golang.org/src/cmd/go/internal/get/get.go#L524

Really, some simple additions to `go get` would have gone a long way: recognise semantic version tags like v1.2 on `go get` and put them in vendor with some command like `go vendor my/dep -v 1.2`.

Not really sure we need diamond dependencies, automatic updates up to version x, manifest+lock files and all the other intricacies which in theory a package manager should solve; humans can resolve them as they come up, and in real-life use they're not a huge deal (as the incredibly simple `go get` we currently have shows).


GOPATH isn't a real problem; it's just a search directory like PATH or PYTHONPATH or CLASSPATH or NODE_PATH, and it has a default value that makes it painless. Vendoring can be truly painful, but `dep` aims to fix that. Imports are not git URLs, or indeed any kind of URL; they're just directory paths under your GOPATH, which the `go get` program can use if the path resembles the URL of a git (or hg/bzr/svn) repo. Don't think of `go get` as a package manager; think of it as a convenience utility for playing around with a package before locking down the version via vendoring. Go's package management has a lot to improve on, but many of the common complaints are misinformed.

> I'm happy that there seems to be some movement. But now we have godep, govendor, glide, dep. and every project uses one or the other. Are they all compatible? Or is it a minefield?

They're not all compatible, but it's not really a problem because (last I checked) the prevailing philosophy was "libraries should not vendor--only binaries" which is to say "libraries shouldn't specify versions for their dependencies; the downstream binary projects should figure out what versions of each library should be used to build the binary". This means dependency tool compatibility isn't a problem because the dependency tooling punts on the dependency-resolution problem altogether, which seems like an even bigger problem.

Remember that this is only painful if you're trying for deterministic builds, and in the absolute worst case, you forego Go's build tooling and use something like Nix. This is absolutely not a reason to change language.


The existence of the GOPATH environment variable is not a problem. The fact that the toolchain is violently opinionated about where you locate your working copies, and goes out of its way to fight you if you try to fake it with symlinks, is.

The GOPATH issue will be solved when I can clone a project to anywhere in $HOME and build it without issue.


This is how I felt when I started using go, but I was unable to rationalize why beyond 'it's different'.

What's the real difference between `cd ~/dev` and `cd $MYGO` (~/dev/go/src/github.com/myusername)?


The difference is the Go tooling is dictating your whole code layout that may not match your personal preference. I work in a multitude of languages and I organize my code in ~/code/work for work/contracting code bases, ~/code/personal for my code bases and ~/code/third-party for source code I want to look at and maybe contribute casually to. I have never figured out a good way to fit GOPATH into this without completely rearranging how I organize my code, which I have zero desire to do for one language, especially one that's not my main.


You can specify multiple GOPATH entries, which will be searched in order for packages. Your GOPATH could be set to something like:

    export GOPATH="$HOME/libs/go:$HOME/code/work/go:$HOME/code/personal:$HOME/code/third-party"

which would put any go-get libraries into ~/libs/go and allow using packages from the other folders. Keep in mind that Go still treats each of these entries like any other GOPATH: each needs the src/pkg/bin folder layout for use with the compiler.


That's good to know, I had never come across that! Thanks.


Well imagine you use language foo, bar and go. Go insists all code lives in ~/go/src and foo insists it all lives in ~/foo/source. Neither likes symlinks, which one wins?

Also people like to keep other artefacts alongside source code like scripts, resources, templates, tools etc and often have an existing elaborate project structure in order to accommodate that. Go forces them to throw that out and use something under gopath. It's annoying and an unnecessary stumbling block for people starting out in Go.


Go doesn't insist that your code lives in ~/go/src. It can live anywhere beneath a directory of your choice. Note that there's no requirement that GOPATH contains only Go source code. You can put your scripts and resources, as well as other source code, right next to the Go sources.

So you just put your code into ~/src/project/a.go and ~/src/project/b.foo (assuming a GOPATH=$HOME).


Note src vs source in the original comment. Go forces you to change everything else to conform to gopath. I use go a lot and am used to it now but IMO that is a Bad Thing.


Put Go stuff in $GOPATH/src and Foo stuff in ~/foo/source. Problem solved. Go doesn't dictate where non-Go code lives...

This is the least of all problems.


So now your go stuff and foo stuff lives in different dirs based on which programming language it happens to use, not which project it is for. To you perhaps this is a non-problem, to others it appears to be and it is a recurring cause of grief for newcomers to go.


    export GOPATH="$HOME/go:$HOME/foo"
GOPATH will use the first entry ($HOME/go here) as the default for `go get` and importing, and fall back to any further folders specified.


The difference is that I don't want all my Go programs in a different place. I have my place where I put projects. I have ~/projects, ~/oldprojects, ~/defunctprojects. I have ~/contribprojects for things that aren't mine. I decide where my files go.


> The go vendoring stuff and GOPATH sounds like a complete nightmare to me. Also the fact that imports are full git URLS sounds totally crazy.

It is.

> Is `dep` fixing all that stuff?

Not from what I can tell; it lets you pin to git tags and branches, but the only versioning you'll get beyond that is if the maintainer of your dependency is gracious enough to use tags. It certainly isn't changing the way import paths work. Vendoring is still part of the workflow for projects I've seen using dep. I've heard there's work being done to get rid of the GOPATH nonsense, but from what I can tell people have been saying that for a while, so I'm not optimistic that it'll land any time soon.


'dep' does have plans to address this, but yes, it is not going to happen soon.

'dep' seems great for Noddy projects where all the code you are working on is in a single package, and fails (like most of the vendoring tools) when you need to work on multiple packages. It has unfortunately been focused on vendoring at the package level rather than at the project level, and has many assumptions buried deep in the code. The recommendation for larger projects is the 'vg' tool, which wraps 'dep' with some GOPATH tricks to help manage your dependencies at the project level.


What's wrong with vendoring? You put the package you want inside the vendor/ directory. I think it's much better than a package manager that tries to download some version satisfying a version specification in a file that declares dependencies.


Managing vendoring yourself is impractical once you end up with more than a handful of dependencies, or once one of your dependencies pulls in more than a handful of transitive dependencies. A larger project will have hundreds of direct and transitive dependencies, and if you just pull them into your vendor directory and forget about them, you have a problem waiting to happen.


I find the opposite to be true. If you casually add dependencies without understanding the dependency tree, you have a lot of problems waiting to happen.

I prefer to be conservative about adding dependencies, and to always manually include them into the project.


You can be conservative about adding dependencies and still end up with hundreds. Database driver, database utilities, gRPC framework, monitoring & metrics, CLI & configuration tools, /x/ packages such as crypto or net, logging, linters, libraries to talk to other moving parts like Rabbit or Vault, and then all the transitive dependencies like yaml, toml, websockets, and the various 'standard' helpers the authors of your dependencies use... all for a microservice and administrative tool... and one part of the larger project.


That's the definition of not being conservative about adding dependencies.


Oh, that could certainly be a conservative list. It depends on the requirements of the software. Drop any item on that list and you are either losing capabilities or have more software to implement yourself. And it is certainly tempting to reimplement things like Cobra, Viper, sqlx, or a Prometheus endpoint (and everyone knows we should all implement our own crypto), but overall it ends up costing you time, and probably usability, since your time is limited.


That's definitely not conservative.

Maybe you don't know what a conservative attitude about libraries is because you've never seen one.

Your original comment:

> You can be conservative about adding dependencies and still end up with hundreds

No. Just no. If you have hundreds of dependencies, you are by definition not conservative about adding dependencies.

> Database driver

About the only legit thing on the list.

> database utilities

Not needed.

> gRPC framework

Not needed.

> monitoring & metrics

Not needed.

> CLI & configuration tools

NOT NEEDED!

.. etc.

> Drop any item on that list and you are either losing capabilities or have more software to implement yourself

What are you losing by not using gRPC? You can do a lot with just the builtin rpc package, or encoding/gob with the http package.
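To make that concrete, here's a sketch of what "just use the builtin rpc package" looks like. The Arith service and its Multiply method are made up for illustration; the point is that net/rpc (gob-encoded by default) covers simple internal RPC with zero dependencies:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Args and Arith are hypothetical names; net/rpc only requires that
// methods look like: func (t *T) Method(args *Args, reply *R) error.
type Args struct{ A, B int }

type Arith int

func (t *Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

// roundTrip starts a server on a random port, dials it, and makes one call.
func roundTrip() (int, error) {
	srv := rpc.NewServer()
	if err := srv.Register(new(Arith)); err != nil {
		return 0, err
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // random free port
	if err != nil {
		return 0, err
	}
	go srv.Accept(ln) // serve connections in the background

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		return 0, err
	}
	defer client.Close()

	var product int
	err = client.Call("Arith.Multiply", &Args{A: 7, B: 6}, &product)
	return product, err
}

func main() {
	p, err := roundTrip()
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // 42
}
```

Swapping gob for JSON is one line (jsonrpc.NewClient), and you still haven't added a dependency.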

Being conservative about adding dependencies means resisting the urge to add them and figuring out a way to do without them.

Every dependency you add is a potential point of failure and headaches.

Just like every layer of abstraction you create.

The cost for dependencies is huge, so every dependency has to justify itself by giving such a huge benefits that it's worth the cost.

> Cobra

Do you need a special library to handle command line arguments? How complex is your CLI interface? How many commands and subcommands do you have? 10? 20? It's trivial to handle that many manually using the builtin libraries.
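For example, a subcommand dispatcher with nothing but the standard flag package. The app, its subcommands, and the flags are all hypothetical; this is the shape, not a prescription:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// run dispatches on a subcommand name and parses its flags with the
// standard library only. Returns the action's summary for testability.
func run(args []string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("usage: app <serve|version> [flags]")
	}
	switch args[0] {
	case "serve":
		fs := flag.NewFlagSet("serve", flag.ContinueOnError)
		port := fs.Int("port", 8080, "port to listen on")
		debug := fs.Bool("debug", false, "enable debug logging")
		if err := fs.Parse(args[1:]); err != nil {
			return "", err
		}
		return fmt.Sprintf("serving on :%d (debug=%v)", *port, *debug), nil
	case "version":
		return "app v0.1.0", nil
	default:
		return "", fmt.Errorf("unknown subcommand %q", args[0])
	}
}

func main() {
	out, err := run(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println(out)
}
```

Each new subcommand is one more case with its own FlagSet; at 10 or 20 commands this stays perfectly readable.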

> Viper

How hard is it to just read a JSON file using builtin libraries? How much do you lose by not using Viper or whatever?

How many times will you have to read config files? One time when the app launches? Does not justify adding a dependency.

> it is certainly tempting to reimplement things like Cobra, Viper, sqlx, prometheus end point, and everyone knows we should all implement our own crypto

You don't even need to re-implement anything. Just do the simplest thing that works using the standard library.

You know the standard library comes with a crypto package, right?

The mindset you are presenting is the exact opposite of conservative. You are just grabbing every library that you think can save you 20 lines of code.


Yes, you can certainly be conservative by ignoring the requirements of the project and throwing away features. It will be a complete failure, because it doesn't do what it is supposed to, but it won't have any dependencies.

I will stop instrumenting my projects because it's not needed, no matter what operations says about our SLAs; my gRPC endpoint will stop talking gRPC because it's not needed, and it will be more reliable because clients can no longer talk to it by the expected protocol; I'll ignore the usability requirements of the CLI and configuration management, specification documents be damned; and I'll implement my own helpers for the database code, because NIH syndrome is fantastic, and I'm sure my employer will be happy for me to reinvent the wheel. Except I probably won't have an employer any more, because I seem to have stopped delivering the software they tasked me with.


No, it will not be a complete failure when you are careful about what you include.

It will be a complete failure when things get so hairy with so many layers and failure points that you can't tell what's going on anymore.


If a dependency moves from Git to some new super-duper source control system, you will need to update all import statements after replacing it in the vendor directory.

With proper package names via a package manager, usually the name stays the same.


I believe dep and glide allow you to pin to specific versions. I actually really like that imports are URLs; it makes it really obvious where packages come from, and it's a better system IMO than a centralized package manager like PyPI/RubyGems where names need to be unique.


> it's a better system IMO than a centralized package manager like PyPI/RubyGems where names need to be unique

I'll have to disagree there; while having namespaced packages gives some advantages, having a centralized repository gives you things like proper versioning, which is worth all the terrible package names in the world to me.


A central repository doesn't automatically give you versioning, and URLs as package IDs doesn't take it away. These are orthogonal issues. It just happens that Go has chosen an approach for now that results in a lot of pain.


I don't think they're completely orthogonal; I think it's much easier to enforce versioning with a central repository than URLs as package IDs. It's not a coincidence that the most common centralized package managers (e.g. Ruby, Python, NodeJS, Rust, Linux distro package managers, and brew) all enforce versioning and things like Go and Vim packaging don't.


Go and vim don't have "distributed package managers"; they just punt on package management altogether. The next step up from that is a centralized package management system, and if you care that much you'll probably build in some notion of versions (even if they're nondeterministic or otherwise broken, as in the case of pip, npm, and many Linux package managers). Building a distributed package manager is quite a lot more work.


When I joined DO it was transitioning out of its very Perl-y beginnings; one of the engineers was pushing for Rust. I guess Go won, and it's nice to see they have picked up such an awesome competency in building with it. https://github.com/digitalocean?language=go (Also, the "Grumpy MacB" reference: MacB, who wrote the first Go service at DO, is one of the best engineers I've ever met, and also one of the nicest dudes: https://github.com/macb)


While Rust is the safer and more featureful language, I think Go is quite possibly the better pick for large corporations just due to its simplicity. Rust to me is the more interesting language, and it definitely offers more power and more choice, but they probably did the right thing by going with Go IMO.

Also really cool to see all the tooling they've built up; "who's gonna clean up all the stale branches?" is definitely a question that gets asked (and answered) at more companies than it probably should.


I'm a big fan of doing fancy things on CI, so I'm always looking for cool ideas there. This talk seems to mostly be about tooling around dependencies, though, so your comparison to Rust is interesting: Rust's cargo is all-around wonderful IMHO and seems to support everything DO needed and built themselves (incl. multiple versions of dependencies, tooling for mono-repo workspaces, etc.).


Yeah my bet was that they decided on the language first then built the tooling afterwards. Rust person was probably saying the same thing as it was happening...

Rust can be really daunting to look at, and if you compare the Rust book to the Go language tour, the easier one to learn is pretty clear. I get the feeling they won't even regret the choice, because the things Rust protects you from, Go sidesteps by giving you slow-but-safe-and-restrictive channels. Go's data race detector is also probably gonna be good enough for a long time.


Rust wasn't a stable language until much later, when we (DO) were already pretty far along in our Go practice.


Another possible reason: my understanding is that Go works really well for web apps (including the standard library covering http and templates), while Rust isn't as strong there. But I might just have seen less web-focused Rust; are there any good frameworks for web apps in Rust?


There's a bunch of stuff! https://crates.io/categories/web-programming::http-server

However, it's still very much in early days, so there's no particular consensus, other than Tokio being the async I/O component. Much less mature than Go. We'll get there!


In my experience Rust works well as a C/C++ replacement. As a high level language much less so. Go is clearly the better choice for web apps and probably will remain so.


I'd definitely agree that Go is currently clearly the better choice for web apps. I'm not so sure it will stay that way though. Rust has some really powerful abstraction abilities, which can make for super nice apis.

I reckon it will end up like the choice between python/js/php/ruby on the backend, where each have their strength and weaknesses, but none are universally better.


I agree and disagree, for the reason you stated. I think Rust is a good language for web apps, with the caveat that you must understand the syntax and the power afforded first. That makes the language harder to learn, but for the purposes of webapps that makes it better.

If you start from bare micro-framework (set a route, attach a handler), you're eventually going to write some generic function that fetches an entity from a data store. In Go, this becomes a bit of a kludge (interface{} + casting + etc), but in rust it's robustly supported (traits/generic functions).

Of course, if you pick the right library for the database you wouldn't have that issue, but that's just the kind of abstraction/complexity-hiding problem you run into building web apps (from scratch at least) that I think Rust is the better tool for. Like I mentioned in another post, basically all the micro-frameworks have the same shape to me at this point: add route(s), add handler function(s), and start the server.
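To illustrate the kludge mentioned above, here's a sketch of a generic "fetch an entity" in pre-generics Go. The Store and User types are hypothetical; the point is that the store must return interface{} and every call site needs a runtime type assertion:

```go
package main

import "fmt"

// Store is a hypothetical generic entity store. Without generics, Get
// can only return interface{} and let the caller sort out the type.
type Store struct{ data map[int]interface{} }

func (s *Store) Get(id int) (interface{}, bool) {
	v, ok := s.data[id]
	return v, ok
}

type User struct{ Name string }

// fetchUser shows the cast every call site needs. A Rust trait bound
// would make this a compile-time check instead.
func fetchUser(s *Store, id int) (User, error) {
	v, ok := s.Get(id)
	if !ok {
		return User{}, fmt.Errorf("no entity %d", id)
	}
	u, ok := v.(User) // runtime type assertion: the "kludge"
	if !ok {
		return User{}, fmt.Errorf("entity %d is not a User", id)
	}
	return u, nil
}

func main() {
	s := &Store{data: map[int]interface{}{1: User{Name: "ada"}}}
	u, err := fetchUser(s, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name) // ada
}
```

Every entity type needs its own fetchX wrapper (or callers assert inline), and a wrong assertion is only caught at runtime.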


The first one I ever saw was iron (https://github.com/iron/iron) -- weirdly enough, it's not on the list @ https://crates.io/categories/web-programming::http-server.

I wouldn't think that web frameworks were the reason; at this point, almost all the frameworks that aren't Django/Rails-sized are the same: set up a route, add a handler, do whatever you need to in the handler, start the server.

Python's Flask, Ruby's Sinatra, Go's net/http, Rust's Iron, Haskell's Servant, and Clojure's Ring are all the same to me at this point, and generally what I'm comfortable starting with (I pick those micro-frameworks over 'batteries-included' Django/Rails).


It's not on the list because the categories feature is new, and Iron has had maintainership issues, where people have not been able to publish a new version. So it hasn't had a publish including the category since categories were introduced.


Many big companies are shipping Rust. There are ones we know about, like Oracle, and ones we've only heard whispers of, like a poster on Reddit who claims they are shipping Rust at a Fortune 500 company.

That said, Go makes sense for a lot of the things DO does.


I also think Rust is the more interesting choice, but the company I work for went with Go. It does seem to be the more popular choice.


Yeah, Go being stable earlier, plus being so simple is great for its adoption. That said, Rust and Go have some overlap, but also areas where each is clearly the better choice. There's no reason this needs to be a zero-sum thing!


Yeah, they both have their use cases and they can both be popular equally. It's all about looking at the problem in hand and deciding what do you need to solve that problem.


Go is becoming the new Java from what I can tell, so it's good to know enough to find your way around and patch a Go project if needed. Being the new Java, despite being less expressive, explains the uptake in enterprises and startups.


Pity it is the new Java 1.0, instead of being the new Java 9.


If Java had good AOT and fast startup coupled with comparable initial GC heap sizes, it could have had a better chance at fighting off Go. Java will not go away and many teams that adopt Go also migrate to other languages at some point if their projects outgrow the capabilities of Go and the pain gets too strong.

I'm partial to GHC's language extension model over a cornucopia of pre- and post-processing tools like Java had (e.g. AspectJ and all the tools making use of annotations) or Go has (processors driven by comments). It will be easier to improve GHC's GC or complete OCaml's multicore branch than to bring Go up to the current century of proven programming language features. Go has found a niche as a replacement for C and Python in network programming, which is great; it just doesn't scale as well with project and team size. OCaml's multicore project also introduces algebraic effects (a comprehensive alternative to monadic programming) to the mainstream, so I can't wait for it to land in mainline.


Java has had AOT support since version 1.0, available to anyone who understands the value of paying for their tools, just like in any other profession.

For those who would rather get everything for free, OpenJDK has initial AOT support for Linux x64, with other platforms already available on OpenJDK 10 master.


Go has become pretty big in China, and it will only grow from here. I wish Rust would reach the same critical mass.


What's Oracle doing with Rust?



Neat! Oracle, being the epitome of Enterprise, is the last company I would have expected to see using Rust and containers; nice to be proven wrong :)

EDIT: typo


Well, Oracle was the last big company to join the Cloud Native Computing Foundation. They have realized that Kubernetes and containers are becoming the new standard layer of infrastructure. Combine that with the fact that Oracle donated Java EE to the Eclipse Foundation, effectively washing their hands of legacy tech.


Author here. Fun fact MacB is also my current manager. Indeed a great person and mentor.


Am I alone in finding it unsurprising but unfortunate that all those add-ons to the official Go toolchain were created to paper over the limitations of the Google Go implementation (which naturally reflects Google's development process and needs more than anything)?

EDIT: E.g. moving .git/ back and forth, or adding extra vetting/linting tools instead of extending `go vet`.


The most painful parts of the Go ecosystem are directly tied to the fact that Google is making Go for themselves first and foremost, and the community is an afterthought.


True, and the more surprising aspect is that to land a developer position at Google you need to be on top of all CS theory, fresh in memory, just to unlearn everything and use Go for unspectacular business/enterprise projects, unless you're on certain teams like V8 or DeepMind. I think Go is meant to replace Google's Java coders with Go coders: a language that fits exactly into the mold of their coding guides, the rules for their monorepo, and all the business/enterprise code written by the rest of their developers.

Google has also open-sourced Abseil, their C++ standard library (not meant to completely replace the STL, to be clear), which contains all kinds of classes that were copied into many existing Google C++ projects in some form or fashion, sometimes incompatibly (e.g. Google's StringPiece, found in several projects).

Either way I applaud them for doing a lot to open source projects. It's not something we can take for granted.


I suspected this might get downvoted and wanted to make clear I'm not demeaning the 90% of Google developers, but I failed, so I deserved the downvote. Just to make it clear I'm aware of where my comment failed to express what I was trying to communicate. Will try to be less lazy next time. It's hard to explain the purpose Google built Go for without considering where it's not used, and I failed to write a good comment.


Author here. Sure, there are reflections of their vision in Go. However, all these deficiencies are now being reconciled with the vision of the current stakeholders, aka the community. Go has learned and changed a lot in the past years (in a good way). Going forward everyone will benefit from it, including us.


> Coffee and bag geek

I'm a coffee geek, but i'm intrigued at the idea of being a bag geek. How does one geek out over bags? What are the cool things bag geeks know that ordinary people don't?


I'm not a bag geek, but I know firsthand how hard it is to find the right bag, if you care about functionality more than form.

For instance, I still haven't found the right carry on backpack that's sturdy, doesn't cost $200+, allows me to take clothes for a couple days and a laptop. It's either too big, too expensive, too heavy, too fragile, take your pick. So I can understand how someone can become a bag geek if they have to travel professionally on their own (without assistants who carry your baggage). Bonus points if I can detach a small laptop bag to take with me in order not to leave it at the hotel with the rest of the bag. So far I've always had to carry around the backpack and leave clothes in the hotel room. Pack, arrive, unpack, carry laptop, repack, go home.


I don’t understand the objection to price if it fulfills all your other requirements. A quality bag will have a lifetime warranty. Have you looked at Go Ruck?


Past experience makes me wary of risking that much money only to be disappointed two trips later. A lifetime warranty doesn't mean they give your money back if you're unsatisfied, except at one or two companies. Also, upping my budget didn't magically reveal viable options either. So I could have skipped the price tag mention and just concluded that it's hard to find the perfect bag.

I've seen the Go Ruck GR2 and it doesn't fit my requirements.


Gotcha. Perhaps REI? They carry a handful of brands and you can return anything you aren’t happy with.


Thanks. Basically I want the sturdy laptop backpack you got from HP 10 years ago, with a clever way to pack clothes and ideally a waterproof bottom (rubber/plastic) so I don't have to worry when it sits on random floors at airports or train stations.

Having the main compartment subdivided into more than two sections is also important: laptop, papers, clothes, maybe food.

I'm in Germany and looking for a shop nearby with a diverse selection of bags to inspect. I'm kinda tired of ordering bags and having to send them all back. Amazon is threatening a penalty for returning too often, so there's that :).

First world problems, right?


See https://arslan.io for his bag reviews.


Author here. Yep, there are bag geeks out there. As someone else pointed out, check out my blog posts at arslan.io

A bag geek is someone who tries to find the perfect bag for the current occasion. So if you travel for one week, a bag geek will try to take only one bag. Which material to choose? Are you going to take a laptop with you? Shoulder bag or backpack? Are you going to bring gifts back home? Etc. These are all questions that can't be answered with a single bag. For example, check out Tom Bihn bags and their community of bag geeks (they have a forum).


Can someone explain slides 54-58, about how it takes many minutes to lowercase strings?

Slide 55 reads: "each string operation takes 21 seconds" (an amount of time which I conservatively translate into tens of billions of operations or gigabytes of in-memory lookups).

How can performance be that bad - I would think it's trivial? Like, I'm not getting what lowercasing can possibly be doing that is so resource intensive. Any ideas?


Looks like the output of pprof, so the slide likely means that all calls to this function in aggregate (say, when run 10000 times while checking dependencies across a very large codebase) took 21 seconds in total, reduced to 4 seconds by storing the results of comparisons in a map and using that instead. Go strings are immutable, so calling ToLower repeatedly would copy them each time.
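The memoization described above can be sketched in a few lines of Go. This is a hypothetical illustration, not DO's actual tooling code: the function name, the import-path inputs, and the single-goroutine assumption are mine. Note that a plain map is not safe for concurrent use; real code would guard it with a sync.Mutex or use sync.Map.

```go
package main

import (
	"fmt"
	"strings"
)

// lowerCache memoizes strings.ToLower results so that repeatedly
// lowercasing the same string doesn't allocate a fresh copy every
// time (Go strings are immutable, so ToLower always returns a new one).
var lowerCache = map[string]string{}

func cachedToLower(s string) string {
	if v, ok := lowerCache[s]; ok {
		return v // cache hit: no allocation, no re-scan of the string
	}
	v := strings.ToLower(s)
	lowerCache[s] = v
	return v
}

func main() {
	// Hypothetical import paths, compared case-insensitively many times.
	paths := []string{"GitHub.com/Foo/Bar", "github.com/foo/bar", "GitHub.com/Foo/Bar"}
	for _, p := range paths {
		fmt.Println(cachedToLower(p))
	}
	fmt.Println("cache entries:", len(lowerCache)) // 2: duplicates hit the cache
}
```

The speedup on the slide (21s down to 4s) is consistent with this kind of change when the same small set of strings is lowercased over and over.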


I'm curious as to why they went with the monorepo approach. This might sound really stupid, but couldn't you just have a separate repo for each team/service?

My approach is to shove everything in $GOPATH/src.


Author here. The approach of a single repository per project has its downsides, which I've tried to explain at the beginning of the slides.


If anyone over at DO is reading this, we've got some technical and conceptual overlap at Reddit. Would love to compare notes.


Are there any chances to see `gta` tool in open source?


This seems like an enormous amount of tooling complexity to simply have a few dependencies for your program.


Author here. It's not by any means just a few dependencies. There are over 2 million lines of vendored packages; you can imagine how much effort is needed to upgrade them, make sure each team uses the correct version, ensure there are no security exposures, etc.


I really like Digital Ocean and use it for all my small to mid-sized projects. I agree with a lot of the other comments that languages like Rust (and I'll add Elixir) are far more interesting, fun to program in, and feature rich than Go, but I really don't care what DO chooses to use, as long as their offerings continue to be great.


Hmm. Must have made sensitive Go programmers cry. Go is a boring language, and it isn't as good at concurrency as other, better designed languages. I'm sorry, but that is a fact.


As someone with both a C++ and JS background, Go's boring nature is the best thing about it. It gets out of the way and lets you focus on the application logic. Programming languages are tools, not ends unto themselves.


Don't insert words into my commentary that I didn't say. I said Go was boring. You agreed. I said Go wasn't the best-designed language, which it isn't. Look at its package/dependency management, its memory usage for processes, and its inability to guarantee a process will relinquish control so other processes can run. Go enforces code bloat that other languages don't. For example, a program in D will often be half the length of a Go program that does the exact same thing.

You are right. Programming languages are tools, not ends unto themselves (something you said, not I), and just as I can use tools from Snap-on or tools from Harbor Freight, I'm personally inclined to use tools with superior engineering. Tools that are designed and manufactured to be robust and to last, because they make it so much easier for me to build meaningful things.

I also think it is an unfair comparison to take two of the most awful languages in computer history and use them to make a case for Go's superiority. It is like saying a Fiat 500L is an awesome car because it isn't a Yugo.


Maybe, but a better-designed language doesn't mean a better language overall. Also, anything Erlang-based is slower than Go.


And anything Erlang-based is going to be more stable/scalable than Go, because Go takes ten times the memory to spawn a process, uses shared memory to store processes, and doesn't guarantee a process will relinquish control. Package/dependency management in Elixir and Erlang is better than the mess Go started out with, and is apparently still struggling with. Hot deploys are also much nicer in Elixir than they are in... oh wait, Go doesn't do hot deploys. I'd rather lose a couple of milliseconds, especially when language speed is not the bottleneck, and code in something better designed.


Anything Erlang does for deployment is moot now that Kubernetes/Docker and the like give you a platform that does it better than Erlang, and language-agnostically at that. No one cares about hot deploys, tbh.


I disagree, but even if you were right about Kubernetes and Docker being able to handle every use case out there, I still find it fascinating how you skip over all the other relevant issues with your beloved baby.



