Hacker News | nickm12's comments

This is a really nice take and matches my experience as well. It also calls out one of the biggest incongruities of this current age: the sudden desire from management to have good specs and documentation for the benefit of the coding assistants.

For the first time in my career, I've seen an engineering org add "improve tech documentation" as a high-level goal. It makes me sad that it was never worthwhile to do for the engineers, but now we need it for the coding assistants, who can't tell that our docs are really out of date. On the flip side, the coding assistants will actually read the docs, unlike many engineers.


This is such a strange take. The definition of "magic" in this post is apparently "other people's code", and it even admits that no practical program can avoid depending on other people's code. I think what the author is really saying is that they like to minimize dependencies and abstractions, particularly in web client development, and the post then throws in a connection to coding assistants.

I don't see it: neither the notion that other people's code is to be avoided for its own sake, nor the idea that depending on LLM-generated code is somehow analogous to depending on React.


Python crossed the chasm in the early 2000s with scripting, web applications, and teaching. Yes, it's riding an ML rocket, but it didn't become popular because it was used for ML; it was chosen for ML because it was popular.

Because you generally cannot test every possible input/output pair.
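To put a rough number on that claim, a sketch in Python: even a function over just two 32-bit integers has far too many input pairs to test exhaustively (the tests-per-second figure below is an optimistic assumption, not a measurement).

```python
# Exhaustively testing a function of two 32-bit unsigned ints
# would require one test case per distinct input pair.
pairs = 2**32 * 2**32           # 2**64 distinct (a, b) inputs
per_second = 10**9              # assume an optimistic 1e9 tests/sec
years = pairs / per_second / (60 * 60 * 24 * 365)
print(f"{pairs} pairs, roughly {years:.0f} years to run them all")
```

At a billion tests per second this still works out to centuries of runtime, which is why testing samples the input space rather than covering it.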


Yes, because no one conflates an engineer with a compiler. But there are people making the argument that we should treat natural language specs/prompts as the new source and computer language code as a transient artifact.


No, for the reasons given in the sibling comments: you won't want to be locked into a single model for the rest of time and, even if you did, floating point execution order will still cause non-determinism.
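The floating-point point is easy to demonstrate: addition is not associative, so changing the execution order can change the result bit-for-bit (a minimal Python sketch).

```python
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # sum left-to-right
right = a + (b + c)   # sum right-to-left
# The two orders round differently, so the results differ.
print(left == right)  # False
print(left, right)
```

The same effect shows up at scale whenever parallel reductions (e.g. on a GPU) sum values in a nondeterministic order.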


I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs to do what we want, rather than having the agents do that work directly.

Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather have an AI model sort 5000 numbers "in its head" or write a program to do the sort and execute that program?
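The program side of that bet is trivial to get right, which is the point (a sketch in Python; the input is just made-up random data):

```python
import random

# The task from the comment above: sort 5000 numbers.
numbers = [random.randint(0, 10**6) for _ in range(5000)]

# A model sorting "in its head" can drop or transpose elements;
# the program is correct for any input, every time.
result = sorted(numbers)
assert len(result) == len(numbers)
assert all(a <= b for a, b in zip(result, result[1:]))
```

The deterministic program wins the bet by construction, whereas the model's token-by-token output has no such guarantee.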

I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are GenAI adopters.


Thank you. I had to scroll way down to find anyone defending using SI prefixes to mean what they mean everywhere else. A decade ago, I decided to alias "du" to "du --si" and not look back. Entire countries have switched from imperial to metric units. Switching to using base 10 for RAM is really just fine.
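For reference, the alias mentioned above is a single line in a shell startup file (assuming GNU coreutils `du`, which supports `--si`):

```shell
# In ~/.bashrc or ~/.zshrc:
# --si reports sizes in powers of 1000 (kB, MB, GB), matching
# SI prefixes everywhere else; -h uses powers of 1024 instead.
alias du='du --si'
```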


I use the phrase "drive-by review" frequently too. As a senior engineer, I worry about doing drive-bys myself. Sometimes my gut tells me something is not quite right about the project, but I just don't know enough about the problem domain or technology/architecture choices to advise definitively.

In this case, I try to question the project owners on their assumptions and whether they have validated them. Usually this line of questioning reveals whether they have "done their homework".


This article resonates with me a lot, but as a senior engineer I would not share it in a big team setting. Even though it's correct, it's too cynical for big team morale. I think it would be worth sharing with peers or managers when discussing whether and how to intervene on a bad project.

