Yeah, I feel like I get really good results from AI, and this is very much how I prompt as well. It just takes care of writing the code, making sure to update everything that is touched by that code guided by linters and type-checkers, but it's always executing my architecture and algorithm, and I spend time carefully trying to understand the problem before I even begin.
But this is what I don't get. Writing code is not that hard. If the act of physically typing my code out is a bottleneck to my process, I am doing something wrong. Either I've under-abstracted, or over-abstracted, or flat out have the wrong abstractions. It's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction.
To me this reads like people have learned to put up with poor abstractions for so long that having the LLM take care of it feels like an improvement? It's the classic C++ vs Lisp discussion all over again, but people forgot the old lessons.
It's not that hard, but it's not that easy. If it was easy, everyone would be doing it. I'm a journalist who learned to code because it helped me do some stories that I wouldn't have done otherwise.
But I don't like to type out the code. It's just no fun to me to deal with what seem to me arbitrary syntax choices made by someone decades ago, or to learn new jargon for each language/tool (even though other languages/tools already have jargon for the exact same thing), or to wade through someone's undocumented code to understand how to use an imported function. If I had a choice, I'd rather learn a new human language than a programming one.
I think people like me, who (used to) code out of necessity but don't get much gratification out of it, are one of the primary targets of vibe coding.
I'm pretty damn sure the parent, by saying "writing code", meant the physical act of pushing down buttons to produce text, not the problem-solving process that precedes writing said code.
This. Most people defer the solving of hard problems to when they write the code. This is wrong, and too late to be effective. In one way, using agents to write code forces the thinking to occur closer to the right level - not at the code level - but in another way, if the thinking isn’t done or done correctly, the agent can’t help.
I can spend all the time I want inside my ivory tower, hatching plans and architecture, but the moment I start hammering letters in the IDE my watertight plan suddenly looks like Swiss cheese: constraints and edge cases that weren't accounted for during planning, flows that turn out to be unfeasible without a clunky implementation, etc...
That's why writing code has become my favorite method of planning. The code IS the spec, and English is woefully insufficient when it comes to precision.
This makes agentic workflows even worse, because you'll only discover your architectural flaws much, much later in the process.
I also think this is why AI works okay-ish on tiny new greenfield webapps and absolutely doesn't on large legacy software.
You can't accurately plan every little detail in an existing codebase, because you'll only find out about all the edge cases and side effects when trying to work in it.
So, sure, you can plan what your feature is supposed to do, but your plan of how to do that will change the minute you start working in the codebase.
Yeah, I think this is the fundamental thing I'm trying to get at.
If you think through a problem as you're writing the code for it, you're going to end up up the wrong creek, because you'll have been furiously rowing head-down the entire time, paying attention to whatever local problem you were solving, or whatever piece of syntax or library trivia or compiler-satisfaction game you were playing, instead of the bigger picture.
Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve them, and so on; but the problem with doing that without an agent is that it becomes boring. You've basically laid out a plan ahead of time and now you've just got to execute on it, which means (even though you might fairly often revise the plan as you learn unknown unknowns or iterate on the design) that you've kind of sucked all the fun and discovery out of the code-writing process. And it sort of means that you've essentially implemented the whole thing twice.
Meanwhile, with a coding agent, you can spend all the time you like building up that initial software design document, or specification, and then have it implement that. Basically, you can spend all the time in your hammock thinking things through and looking ahead, but then have that directly translated into pull requests you can accept or iterate on, instead of having to do an intermediate step that repeats the effort of the hammock time.
Crucially, this specification or design document doesn't have to remain static. As you discover problems or limitations or unknown unknowns, you can revise it and keep executing on it, meaning it's living documentation of your overall architecture and goals as they change. This means you can really stay thinking at the high level instead of getting sucked into the low level. Coding agents also make it much easier to send something off to vibe out a prototype, or to explore the codebase of a library or existing project in detail to figure out the feasibility of some idea, so the parts of planning that traditionally took a lot of effort to verify have a much lower activation energy, and you're more likely to actually try things out in the process of building a spec.
I believe programming languages are better than English for planning the architecture, the algorithms, the domain model, etc...
The way I develop mirrors the process of creating said design document. I start with a high-level overview, define what entities the program should represent, define their attributes, etc... only now I'm using a more specific language than English. By creating a class or a TS interface with some code documentation, I can use my IDE's capabilities to discover connections between entities.
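To make that concrete, here's a minimal sketch of what this entity-first planning might look like in TypeScript. All the names (`Customer`, `Invoice`, `overdueInvoices`) are invented for illustration; the point is that the relationships live in checked code rather than prose:

```typescript
/** A customer as the billing domain sees it. */
interface Customer {
  id: string;
  name: string;
}

/** An invoice references its customer by id rather than embedding it. */
interface Invoice {
  id: string;
  customerId: string; // relates to Customer.id; the IDE can find all usages
  amountCents: number;
  dueDate: Date;
  paid: boolean;
}

/** A pure function over the model: which invoices are overdue as of a date? */
function overdueInvoices(invoices: Invoice[], asOf: Date): Invoice[] {
  return invoices.filter(inv => !inv.paid && inv.dueDate < asOf);
}
```

Nothing here is implementation detail yet, but unlike an English design document, renaming `customerId` or changing its type is a compiler-checked operation, and "find all references" replaces grepping a Word file.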
I can then give the code to an LLM to produce a technical document for managers or something. It'll be a throwaway document because such documents are rarely used for actual decision making.
> Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on;
I do this with code, and the IDE is much better than MS Word or whatevah at detecting my logical inconsistencies.
The problem is that you can't really model or describe a lot of what I put in my specifications using code without just fully writing the low-level code. Most languages don't have a type system that lets you describe the logic and desired behavior of various parts of the system, which functions should call which other functions, what your concurrency model is, and so on, without just writing the specific code that does it; in fact, I think the only languages that would allow something like that are dependently typed languages or languages adjacent to formal methods. This is literally what pseudocode and architecture diagrams are for.
Ah, perhaps. I understood it a little more broadly to include everything beyond pseudocode, rather than purely being able to use your fingers. You can solve a problem with pseudocode, and seasoned devs won't have much of an issue converting it to actual code, but it's not a fun process for everyone.
But this is exactly my point: if your "code" is different than your "pseudocode", something is wrong. There's a reason why people call Lisp "executable pseudocode", and it's because it shrinks the gap between the human-level description of what needs to happen and the text that is required to actually get there. (There will always be a gap, because no one understands the requirements perfectly. But at least it won't be exacerbated by irrelevant details.)
To me, the prompt example half a dozen levels up reads like Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]
But now the "program" doesn't even have formal semantics and isn't a permanent artifact. It's like running a compiler and then throwing away the source program and only hand-editing the machine code when you don't like what it does. To me that seems crazy and misses many of the most important lessons from the last half-century.
The problem is that you actually have to implement that high-level DSL to get Lisp to look like that, and most DSLs are not going to be as concise and abstract as a natural-language description of what you want, followed by just making sure the result is right. And implementing the DSL is exactly the part I'd want to use AI for: writing that initial boilerplate from a high-level description of what the DSL should do.
And a Lisp macro DSL is not going to help with automating refactors, automatically iterating to take care of small compiler issues or minor bugs without your involvement so you can focus on the overall goal, remembering or discovering specific library APIs or syntax, etc.
I think of it more like moving from sole developer to a small team lead. Which I have experienced in my career a few times.
I still write my code in all the places I care about, but I don’t get stuck on “looking up how to enable websockets when creating the listener before I even pass anything to hyper.”
I do not care to spend hours or days to know that API detail from personal pain, because it is hyper-specific, in both senses of hyper-specific.
(For posterity, it’s `with_upgrades`… thanks chatgpt circa 12 months ago!)
I get my dopamine from solving problems, not from spending three hours trying to figure out why that damn API is returning the wrong type of field - Claude will find that out in minutes while I do something else - or from writing 40 slightly different unit tests to cover all the edge cases for said feature.
> it's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction
But this is exactly what LLMs help me with! If I decide I want to shift the abstractions I'm using in a codebase in a big way, I'd usually be discouraged by all the error, lint, and warning chasing I'd need to do to update everything else; with agents I can write the new code (or describe it and have it write it) and then have it set off and update everything else to align: a task that is just varied and context specific enough that refactoring tools wouldn't work, but is repetitive and time consuming enough that it makes sense to pass off to a machine.
The thing is that it's not necessarily a bottleneck in terms of absolute speed (I know my editor well and I'm a fast typist, and LLMs are in their dialup era), but it is a bottleneck in terms of motivation, when some refactor or change in algorithm I want to make requires a lot of changes all over a codebase that are boring to make but not quite rote enough to handle with sed or IDE refactoring. For me, it really isn't even mostly about the inconvenience of typing out the initial code. It's about the inconvenience of munging text from one state to another, or handling big refactors that require a lot of little mostly-rote changes in a lot of places. It's also about dealing with APIs or libraries where I don't want to constantly remind myself what functions to use, what to pass as arguments, or what config data I need to construct and pass in, or spend hours trawling through docs to figure out how to do something with a library when I can just feed its source code directly to an LLM and have it figure it out. There's a lot of friction and snags in writing code beyond typing that have nothing to do with having come up with a wrong abstraction, and they very often lead to me missing the forest for the trees when I'm in the weeds.
Also, there is ALWAYS boilerplate scaffolding to do, even with the most macrotastic Lisp; and let's be real: Lisp macros have their own severe downsides in return for eliminating boilerplate, and Lisp itself is not really the best language (in terms of ecosystem, toolchain, runtime, performance) for many or most tasks someone like me might want to do, and languages adapted to the runtime and performance constraints of their domain may be more verbose.
Which means that, yes, we're using languages with more boilerplate and scaffolding than strictly necessary, which is part of why we like LLMs. But that's just the thing: LLMs give you the boilerplate-eliminating benefits of Lisp without having to give up the massive benefits in other areas of whatever other language you wanted to use, and without having to write and debug macro soup and deal with private languages.
There's also how staying out of the code-writing weeds changes how you think about code, for the reasons I laid out above.