*Note:* This is mostly relevant for solo developers, where accountability is much lower than in a team environment.
Code review is the only thing that has kept this house of cards from falling over, so undermining its importance makes me question the hype around LLM tooling even more. I’ve been using these tools since their inception as well, but with AI tooling we need to hold on to the best practices we’ve built over the last 50 years, instead of trying to reinvent the wheel in the name of “rethinking.”
Code generation is cheap, and reviews are getting more and more expensive. If you don’t know what you generated, and your team doesn’t know either because they just rubber-stamped the code since you used AI review, then no one has a proper mental model of what the code actually does.
So when things come crashing down in production, debugging and investigation will be a nightmare. We’re already seeing scenarios where on-call has no idea what’s going on with a system, so they page the SME—and apparently the SME is AI, and the person who did the work also has no idea what’s going on.
Until omniscient AI can do the debugging as well, we need to focus on how we keep practicing the things we’ve organically developed over such a long time, instead of discarding them.