This wave of AI innovation reveals that much of the activity in coding turns out to be accidental complexity rather than essential. Put another way, many coding tasks are conceptual to humans but procedural to AI. Conceptual tasks require intuitive understanding, rigorous reasoning, and long-term planning; AI is not there yet. Procedural tasks, on the other hand, are low entropy with high priors: once a prompt is given, what follows is almost certain. For instance, in the old days one had to learn many concepts just to write "public static void main(String[] args)" in Java. But for AI, the conditional probability Pr(write "public static void main(String[] args)" | prompt = "write the entry method for a given class") is practically 1. Likewise, if I want to implement linear regression in Python, there is pretty much one right way to do it, and AI knows it. Nothing magical: humans have been doing this for years, the optimal solution has converged for most cases, and so the task becomes procedural to AI.
Fortunately or unfortunately, many procedural tasks are extremely hard for humans to master but easy for AI to generate. Meanwhile, we have structured our society to support such procedural work. As the wave of innovation spreads, many people will rise, but many will also suffer.
You understate the capabilities of the latest-gen LLMs. I can typically describe a user's bug in a few sentences, or tell Claude to fetch the 500 error from the Cloud Run logs, and it will explain the root cause, propose a fix, and throw in a new unit test within two minutes.