I can tell you that the commercial software that you use is unlikely to be living up to the unachievable standards that are being demanded in this thread.

In this case, the person is a solo developer, so who exactly should be reviewing the PRs?

I trust this developer because they have a long history of delivering quality software, with quick turnaround on bugs in general, and even faster on security related bugs like this one.

His "responsibility" is to maintain the trust that he will develop to the best of his ability and will react quickly to issues.

The so-called "rigor and processes" in current SW engineering are nonsense and busy work. Not once in my 40 years of SW development has a code review revealed a major bug that wasn't some sort of developer brain fart.

Maybe the actual security issue here is that a) /tmp is world read/writeable on many Unix/Linux VMs/machines, and b) you should lock down your SSH connections so that they don't have access to it.
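To make the /tmp point concrete, here is a minimal Python sketch (file name is illustrative) showing that a file naively created in the shared temp directory ends up world-readable under the typical 022 umask:

```python
import os
import stat
import tempfile

# With the common 022 umask, a plain open() creates files as 0644:
# readable by every local user. That is the exposure for logs or
# session transcripts dropped into the shared /tmp directory.
path = os.path.join(tempfile.gettempdir(), "demo-transcript.txt")
old_umask = os.umask(0o022)  # force the typical default for the demo
try:
    with open(path, "w") as f:
        f.write("session transcript\n")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o644 -> group and others can read
finally:
    os.umask(old_umask)
    os.remove(path)
```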

Stop blaming the other person's software and look at your own security "rigor and processes".



What part of my post said the solution is commercial software?

If I understand the security vulnerability correctly here, what happened is a well-meaning and skilled engineer accidentally checked in debugging code and shipped it in multiple releases. This shouldn't happen if people are reviewing your PRs and if you're being cautious about what you ship.

If nobody else is reviewing the iTerm2 code that means the developer isn't getting the support he needs from the community, which is a shame.

The general tone of your comment is confusing, though, since you seem to be suggesting that the solution to iTerm2 shipping a major security vulnerability is to just assume every piece of software is broken, instead of coming up with ways to develop and ship more secure software. Are you really comfortable with every part of the software stack you use - firmware, OS kernel, libc, userspace - being held to this non-standard and being full of holes? Do you want to spend hours every day chasing down the latest vulnerability report?

If your experience with code review is that it never catches anything, either you're the greatest programmer in human history who never makes mistakes, or the people doing your code reviews aren't trying. I participate in open source code reviews on a daily basis against multiple repositories and we catch bugs/oversights in each other's work all the time.


My experience in commercial development is that code reviews don't work because the incentives are misaligned. There's no incentive to do a proper code review, because finding a bug isn't rewarded in any way for either the reviewer or the developer. Most of the "bugs" found are either things that a good linter will pick up (variable naming, etc.) or are minor.

Code reviews of peers' code in an open source project are very different because the incentives are there to promote transparency and visibility, and there is a negative incentive for delivering code that doesn't pass review (loss of general reputation, removal of commit rights, etc.).

The solution to iTerm2 shipping a major (it wasn't) security vulnerability is that when it is discovered, a new release with a fix is quickly released, the effects of the defect are clearly described and the mechanism for rectification is made clear.

iTerm2 did that, clearly and transparently.

The solution for developing and shipping more secure software is to remove options for things like world readable temporary files. The operating system should remove the capability such that you have to specifically enable it, which requires a conscious decision to do so.
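For what it's worth, the safe option already exists at the library level in many languages. For example, Python's `tempfile.mkstemp` creates the file owner-only with an unpredictable name; a minimal sketch:

```python
import os
import stat
import tempfile

# mkstemp opens the file with mode 0600 (owner-only) and an
# unpredictable name, so other local users can neither read it nor
# pre-create it at a known path.
fd, path = tempfile.mkstemp(prefix="session-")
try:
    os.write(fd, b"debug output\n")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o600 -> owner-only
finally:
    os.close(fd)
    os.remove(path)
```

Making the owner-only path the default, and the world-readable path an explicit opt-in, is exactly the "remove the capability" design described above.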

Apple's SIP has removed a large number of opportunities for bugs, more could be done to fully sandbox user processes.

Making it impossible for a certain class of bugs to occur is a much better approach than code review attempting to find the problem after development.


Brain fart or not, the consequences are the same, and this particular issue where verbose is left enabled [0] would 100% have been questioned in a code review, even by the most junior developer on the team. Now you probably shouldn't have such a security gap easily enabled by a verbose flag in the first place, but that's a parallel issue.
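For readers who haven't followed the link, the bug class looks roughly like this - a hypothetical reconstruction in Python, not iTerm2's actual code; the flag and file names are made up:

```python
import os
import tempfile

# Hypothetical reconstruction of the pattern: a debug switch that must
# default to False in release builds. Leaving it True routes session
# data to a predictable file in the shared temp directory.
VERBOSE = True  # <-- the single line a reviewer would question before release

def log_debug(message: str) -> None:
    """Append a line to a shared-directory debug log when VERBOSE is on."""
    if not VERBOSE:
        return
    log_path = os.path.join(tempfile.gettempdir(), "debug-session.txt")
    with open(log_path, "a") as f:
        f.write(message + "\n")

log_debug("user typed: ls -la")  # silently recorded because VERBOSE is True
```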

The author of his own hobby project is of course free to do whatever he wants without review and nobody can blame him for it. But anyone claiming code review doesn't find bugs has obviously never used it in a functional setting, or has only worked in small teams with only 10x developers. I estimate we find at least 1 bug per day in our code review process, even after tests and lint. On top of that there are also benefits from knowledge sharing of what's being changed and general improvement suggestions.

[0] https://news.ycombinator.com/item?id=42579607


I didn't say that code reviews don't find bugs.

I said that they don't find major bugs. A code review wouldn't find a bug where the build-time configuration was incorrect for the production build, as it was in this case.
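That said, this particular class - a debug option left on in the production configuration - is cheap to guard with an automated check rather than a review. A sketch in Python, with illustrative option names (not iTerm2's actual config):

```python
# A cheap automated guard for the "debug flag left on in a release
# build" failure mode: treat build configuration as data and assert on
# it in CI. Option names here are made up for illustration.
RELEASE_CONFIG = {
    "optimize": True,
    "verbose_logging": False,   # must stay off in production builds
    "write_debug_files": False,
}

def check_release_config(config: dict) -> list:
    """Return the names of debug-style options that are still enabled."""
    forbidden = ("verbose_logging", "write_debug_files")
    return [key for key in forbidden if config.get(key)]

print(check_release_config(RELEASE_CONFIG))  # [] means the build is clean
```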

Testing finds the major bugs, not code reviews. If you are finding at least 1 bug per day, then there's something wrong with your development process, not your code reviews.

Oh and that's over 40 years as a developer and engineering manager in charge of delivering quality software, with teams of ~10-20, for systems with four-nines uptime requirements and 24/7 operations.


> Testing finds the major bugs, not code reviews

This bug was undeniably major, and I highly doubt a standard test would have found it.

What would such a test look like - "test that no file in /tmp was touched"? That can only be written after you know such an issue might exist, which is great for preventing regressions, but doesn't help stop new unknown bugs. What else are you going to test for - no files in /foo were touched, no files in /bar, and so on to infinity? "No files at all were touched" could be reasonable, but again grows into an infinite set of "X must not happen", including other forms of I/O like connections and API calls. Most security issues have this property: you can't test that a system is free of injection vulnerabilities without exhausting every possible input combination or using advanced fuzzing techniques.
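For the regression case specifically, the "no file in /tmp was touched" check is easy to write once you know where to look - a sketch, with the caveat above that it only covers the known location (and can misfire if other processes write to /tmp concurrently):

```python
import os
import tempfile

def temp_files_created(action):
    """Run `action` and report any new entries in the shared tempdir."""
    tmp = tempfile.gettempdir()
    before = set(os.listdir(tmp))
    action()
    return set(os.listdir(tmp)) - before

# Demonstration: an action that leaks a file into /tmp is caught.
leak_path = os.path.join(tempfile.gettempdir(), "leaked-debug.txt")
created = temp_files_created(lambda: open(leak_path, "w").close())
print("leaked-debug.txt" in created)  # True
os.remove(leak_path)
```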


No, by making the bug impossible. Sandbox applications at the OS level so that they can't share /tmp. Apple has that for its OS, apps are jailed.
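Linux has a per-service version of the same idea: systemd's `PrivateTmp=` gives a unit its own private /tmp and /var/tmp. A sketch of a unit file (the service name is made up):

```ini
[Unit]
Description=Example daemon with a private /tmp

[Service]
ExecStart=/usr/local/bin/example-daemon
# The service sees its own empty /tmp and /var/tmp; files it writes
# there are invisible to other users and other services.
PrivateTmp=yes
```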


If we all lived in a fairy tale, sure, sandboxes are preferable. In this case, to avoid the bug, every ssh server in the world would need a per-user tmpfs. Ideally that would indeed be neat; short term it's not realistic. For the iTerm2 case of an ssh client, an admin may also need to inspect the actual /tmp when debugging the server, and then needs to bypass the sandbox. A sandbox will never have the perfect granularity for every scenario. So we can't just throw our hands in the air and say "not my problem"; alternative forms of verification are needed.

Besides, how do you test or review your sandbox and its configuration? Both are still needed.

Incidentally, k8s works a bit like this, with no shared tmpfs across containers by default, so such large-scale production deployments are more protected against this type of issue. On the other hand, for debugging as you would over ssh, it hardly has a concept of users at all, and has lots of other ways to shoot yourself in the foot :)
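To illustrate the k8s point: mounting an `emptyDir` volume over /tmp gives a container its own scratch space that nothing else shares. A hypothetical pod sketch (all names illustrative):

```yaml
# Each container mounts its own emptyDir-backed /tmp, so a debug file
# written by one container is not visible outside it.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-tmp-demo
spec:
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: app-tmp
      mountPath: /tmp
  volumes:
  - name: app-tmp
    emptyDir: {}
```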



