> The pattern that gets missed in these discussions: every "no-code will replace developers" wave actually creates more developer jobs, not fewer.

That doesn't mean it will happen this time around (i.e. if AI truly becomes what was promised), and actually it's not likely that it will!


I felt like the article had a good argument for why the AI hype will similarly be unsuccessful at erasing developers.

> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.

What is your rebuttal to this argument that leads you to the idea that developers do need to fear for their job security?


No previous tool was able to learn from its own mistakes (RLVR).

It might not be enough by itself, but it shows that something has changed compared with the previous 70-odd years.
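Roughly, the "verifiable" part means something like this (a toy sketch to illustrate the idea, not any particular lab's actual pipeline; the test strings and candidate snippets are made up):

    # Toy sketch of RLVR: the "verifier" is just running tests.
    # Reward is 1.0 if a generated candidate passes them, 0.0 otherwise;
    # those rewards are what the model then gets trained against.
    import os
    import subprocess
    import sys
    import tempfile

    def verify(candidate_source: str, test_source: str) -> float:
        """Run candidate code plus its tests in a subprocess; pass/fail becomes the reward."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_source + "\n\n" + test_source + "\n")
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=10)
            return 1.0 if result.returncode == 0 else 0.0
        finally:
            os.remove(path)

    tests = "assert add(2, 3) == 5"
    good = "def add(a, b):\n    return a + b"
    bad = "def add(a, b):\n    return a - b"
    print(verify(good, tests), verify(bad, tests))  # -> 1.0 0.0

As I understand it, in a real RLVR setup rewards from a verifier like this feed a policy-gradient update of the model, so it is genuinely trained on its own failed attempts rather than just being re-prompted.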


LLMs don't learn from their own mistakes in the same way that real developers and businesses do, at least not in a way that lends itself to RLVR.

Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.


> through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.

That is, the problems are a) how to generate a training signal without formally verifiable results, b) hierarchical planning, c) credit assignment in a hierarchical planning system. Those problems are being worked on.

There are some preliminary research results that suggest that RL induces hierarchical reasoning in LLMs.


My argument would be that while some complexity remains, it might not require a large team of developers.

What previously needed five devs might be doable by just two or three.

In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution go much faster using AI, compared to before, when I had to look everything up myself.

In some cases, agentic AI tools are already able to ask questions about architecture and edge cases, and you only need to select which option you want the agent to implement.

There are shortcuts.

Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.


> evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve

I think you are basing your reasoning on the current generation of models. But if a future generation is able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that if they appear, they will actually displace developers rather than create more jobs for them. The business problem will be specified by business people, and even if they get it wrong it won't matter, because iteration will be quick and cheap.

> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?

The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of these things, what work will be left for developers?


Yes, if we assume that AI can do the job of developers then tautologically it can do the job of developers.

It's not obvious at all. Some people believe that once AI can do the things I've listed, the role of developers will change instead of being replaced (because past advances have always led to more jobs, not fewer).

And your entire argument is based around the possibility of it turning into a magic genie that can do anything.

Turning into a human-level intelligence. If you believe that requires magic, well, that's your right.

A $3 calculator today is capable of doing arithmetic that would require superhuman intelligence to do 100 years ago.

It's extremely hard to define "human-level intelligence", but I think we can all agree that the definition changes with the tools available to humans. Humans seem remarkably well suited to adapting and operating at the edges of what the technology of the time can do.


> that would require superhuman intelligence to do 100 years ago

It required a ton of ordinary-intelligence people doing routine work (see Computer (occupation)). On the other hand, I don't think anyone has seriously considered replacing, say, von Neumann with a large collective of laypeople.


We are actually already at the level of a magic genie or some sci-fi-level device. It can't do everything, obviously, but what it can do is mind-blowing. And the basis of the argument is obviously right: mere possibility is a really low bar to pass, and AGI is clearly possible.

> if AI truly becomes what was promised

I mean they are promising AGI.

Of course, in that case it will not happen this time. However, then software dev getting automated would concern me less than the risk of getting turned into some manner of office supply.

Imo, as long as we do NOT have AGI, being a software-focused professional will stay a viable career path. Someone will have to design software systems at some level of abstraction.
