
Brain fart or not, the consequences are the same, and this particular issue, where verbose logging is left enabled [0], would 100% have been questioned in a code review, even by the most junior developer on the team. Now, you probably shouldn't have such a security gap be easily enabled by a verbose flag in the first place, but that's a parallel issue.

The author of his own hobby project is of course free to do whatever he wants without review, and nobody can blame him for it. But anyone claiming code review doesn't find bugs has obviously never used it in a functional setting, or has only worked in small teams of 10x developers. I estimate we find at least one bug per day in our code review process, even after tests and lint. On top of that, there are also the benefits of knowledge sharing about what's being changed, and of general improvement suggestions.

[0] https://news.ycombinator.com/item?id=42579607



I didn't say that code reviews don't find bugs.

I said that they don't find major bugs. A code review wouldn't find a bug where the build-time configuration was wrong for the production build, as it was in this case.

Testing finds the major bugs, not code reviews. If you are finding at least one bug per day, then there's something wrong with your development process, not your code reviews.

Oh, and that's over 40 years as a developer and engineering manager in charge of delivering quality software, with teams of ~10-20, for systems with four-nines uptime requirements and 24/7 operations.


> Testing finds the major bugs, not code reviews

This bug was undeniably major, and I highly doubt a standard test would have found it.

What would such a test look like: "test that no file in /tmp was touched"? That can only be written after you know such an issue might exist, which is great for preventing regressions but doesn't help stop new, unknown bugs. What else are you going to test for: that no files in /foo were touched, none in /bar, and so on to infinity? "No files at all were touched" could be reasonable, sure, but the set of "X must not happen" again keeps growing without bound, including other forms of IO like connections and API calls. Most security issues have this property: you can't test that a system is free of injection vulnerabilities without exhausting every possible input combination or using advanced fuzzing techniques.
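To make the "prevent regressions" half concrete, a sketch of what such a guard could look like once the leak is known (helper and command names here are hypothetical, and it only watches the one directory you already thought to check, which is exactly the limitation above):

```python
import os
import subprocess

def snapshot(path):
    """Set of entries currently present in a directory."""
    return set(os.listdir(path))

def run_without_tmp_writes(cmd, watched="/tmp"):
    """Run cmd and fail if it left any new entries in the watched directory.

    Note: this only guards one known location. It is a regression test for
    a leak you already found, not proof the program touches nothing else.
    """
    before = snapshot(watched)
    subprocess.run(cmd, check=True)
    leaked = snapshot(watched) - before
    assert not leaked, f"new files appeared in {watched}: {sorted(leaked)}"
```

Usage would be something like `run_without_tmp_writes(["my-tool", "--verbose"])` in the test suite after the fix, where `my-tool` stands in for whatever binary had the leak.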


No, by making the bug impossible. Sandbox applications at the OS level so that they can't share /tmp. Apple has that for its OS; apps are jailed.
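On Linux, one existing mechanism for this is mount namespaces, which systemd exposes per service; a unit along these lines (service and binary names are purely illustrative) gets its own empty /tmp and /var/tmp that no other process can see:

```ini
# /etc/systemd/system/example.service -- illustrative unit
[Service]
ExecStart=/usr/local/bin/example-daemon
# Give the service a private /tmp and /var/tmp in its own mount
# namespace, so nothing it writes there is visible to other users
# or services.
PrivateTmp=true
```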


If we all lived in a fairy tale, sure, sandboxes would be preferable. In this case, to avoid the bug, every SSH server in the world would need a per-user tmpfs. Ideally that would indeed be neat; short term, it's not realistic. In the iTerm2 case of an SSH client, an admin may also need to inspect the actual /tmp when debugging the server, and then needs to bypass the sandbox. A sandbox will never have the perfect granularity for every scenario. So we can't just throw our hands in the air and say "not my problem"; alternative forms of verification are needed.

Besides, how do you test or review your sandbox and its configuration? Both are still needed.

Incidentally, k8s works a bit like this, with no shared tmpfs across containers by default, so such large-scale production deployments are more protected against this type of issue. On the other hand, for debugging as you would over SSH, it hardly has a concept of users at all, and there are lots of other ways to shoot yourself in the foot with it :)
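For reference, that k8s behaviour in config form: each container gets its own writable filesystem, and if you want a tmpfs-backed /tmp it has to be declared explicitly as a pod-scoped volume rather than anything shared host-wide (the names below are illustrative):

```yaml
# Illustrative pod: /tmp is an in-memory emptyDir, private to this pod.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory   # tmpfs-backed, torn down with the pod
```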



