"I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me." - AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done.
“Human artisan era of code” is hilarious if you’ve worked in any corporate codebase whatsoever. I’m still not entirely sure what some of the snippets I’ve seen actually are, but I can say with determination and certainty that none of it was art.
The truth about vibe coding is that, fundamentally, it's not much more than a fast-forward button: if you were going to write good code by hand, you know how to guide an LLM to write good code for you. If, given infinite time, you would never have been able to achieve what you're trying to get the LLM to do anyway, then the result is going to be a complete dumpster load.
It’s still garbage in, garbage out, as it’s always been; there’s just a lot more of it now.
There should never have been an "artisan era". We use computers to solve problems. You should always have been getting stuff done instead of bikeshedding over nitty-gritty details, like when people in the office spend weeks optimizing code... just to have the exact same output, in the exact same time, but now "nicer".
> Plenty of people are writing code without being paid for it.
This is rhetorically a non sequitur. As in, if you get paid (X) then you get stuff done (Y). But if you're not paid (~X), then, ?
Not being paid has no bearing on whether one gets stuff done. So the parent wasn't saying anything about people who don't get paid; they can do whatever they want. But yes, at a job, if you're paid, then you'd better get stuff done instead of bikeshedding.
It depends how much money and energy in the form of manhours were spent to write it in an artisan way in the first place. I've been in a lot of PR reviews where it was clear that the amount of back and forth we had was simply not worth it for the code we wrote.
I think you're both right. There's a time and place for beautifully crafted code, but there's also a place for a hot mess that barely passes its own non-existing tests, and for anything in between.
> there's also a place for a hot mess that barely passes its own non-existing tests
For a long time that place has been "the commercial software marketplace". Let's all stop pretending that the code coming out of shops until now has been something you'd find at a guild craft expo. It's always been a ball of spit and duct tape, which is why AI code is often spit and duct tape.
Yeah. Exactly the same as there should never be an “artisan era” for chairs, tables, buildings, etc.
Hell even art! Why should art even be a thing? We are machine driven by neurons, feelings do not exist.
Might be your life, it ain’t mine. I’m an artisan of code, and I’m proud to be one. I might finally use AI one of these days at work because I’ll have to, but I’ll never stop cherishing doing hand-crafted code.
>> Yeah. Exactly the same as there should never be an “artisan era” for chairs, tables, buildings, etc.
That's funny you bring up those examples, because they have all moved on to the mass manufacturing era. You can still get artisan quality stuff but it typically costs a lot more and there's a lot less of it. Which is why mass-manufacturing won. Same is going to happen with software. LLMs are just the beginning.
I live in a city where new houses are being built. They are ugly. Meanwhile, the ones built long ago have charm and feel homely.
I don’t know, I‘m probably just a regular old man yelling at clouds, but I still think we’re going in the wrong direction. For pretty much everything. And for what? Money. Yay!
You're continuing to make good arguments for why mass-production should exist _alongside_ artisanal craftsmanship. Broad availability of housing which is functional, albeit of questionable aesthetic appeal, is a good thing to improve housing availability[0]; and also it is a good thing for (fewer) well-built, charming, individual homes to be available for those who want to spend more and to get more.
[0] I'm extremely aware that there are other contributing factors to housing shortages. Tax Billionaires, etc. My metaphor still works despite not being total.
The difference is that end users don't interact with the code that the artisan created, and don't care what it "feels like". One type of code that I do agree should be artisanal is the interface end of libraries.
Yes, it's like artisanal plumbing or electrical wiring... all hidden behind walls. A plumber might take pride in the quality of his soldered joints, but artisanal? Who wants to pay for that?
> just to have the exact same output, exact same time, but now "nicer".
The majority of code work is maintaining someone else's code. That's the reason it is "nicer".
There is also the matter of performance and reducing redundancy.
Two recent pulls I saw that were AI generated did neither. Both attempted to recreate things from scratch rather than using industry-tested modules. One was using the csv module instead of polars for the intensive work.
So while they worked, they became an unmaintainable mess.
You use computers to solve problems. I use computers to communicate and create art. For me, the code I write is first and foremost a form of self expression. No one paid me to write 99% of the code I've written in my life.
For a long time computers were so expensive they could only be used to do things that generate enough money to justify their purchase. But those days are long gone so computers are for much much more than just solving problems and getting stuff done. Code can be beautiful in its own right.
This exact mindset is what has led to the transition from quality products to commercialized crapware, not just with software, but across all industries.
It sounds like you hate your job? To be sure, I've done plenty of grinding over my career as a software engineer but in fact I coded as a hobby before it turned into a career, I then continued to code on the side, now I am retired and code still.
I love my job FWIW. I work at performance engineering and we work with the most complex systems in the world (GB200/B300/...). Couldn't be happier.
But I just don't care if I have 5 layers of abstraction and SOLID principles and clean code and.... bah. I get it. I have an MSc in it and I've been doing this as a hobby and then professionally for decades now. It just doesn't matter. At the end of the day, we get paid to ship something that solves a problem.
It might be a novel problem. And it might be at the frontier of what we can do today. But it's still a problem that needs solving and the path we take is irrelevant from a user's perspective as long as it solves the problem.
I don't think they hate their job, just seem to be frustrated at slow bureaucratic processes and long code reviews which I've experienced too. After a while it can get aggravating as to why some people want to nitpick minute details of the code which slows down development overall. I am talking about cases where the initially submitted PR is perfectly fine, not grossly incorrect.
Oh wow, if we're talking about code reviews that's a different topic. I've never, FWIW, encountered "artisans" in code reviews. More like "that's not how I would have coded itsans" and "let me show you some new tricksans".
Yeah, to hell with code reviews. The best years of my career were when I was given carte blanche control over an entire framework, etc. When code reviews came along coding at work sucked.
If anything, the code reviews killed the artisanship.
90% of the CRs I've ever gotten have been "artisanal" just because nitpicking superficial nonsense is easier than meaningful critique, and even when the code is perfectly fine it looks more productive from a manager's perspective if you're nitpicking a function name than if you just respond with lgtm.
Yeah that's what I understood them to mean from "like when in the office people have been spending weeks on optimizing code... just to have the exact same output, exact same time, but now "nicer"." There does come such a time either way when the juice isn't worth the squeeze so to speak in terms of optimization of code.
Was about to comment precisely this, that line does not inspire any confidence.
And it reminds me of a comment I saw in a thread 2 days ago, one about how RAPIDLY ITERATIVE the environment is now. There are a lot of weekend projects being made over the knee of a robot nowadays and then instantly shared. Even OpenClaw is, to a great extent, an example of that at its current age. Which comes in contrast to the length of time it used to take to get these small projects off the ground in the past. And also in contrast with how much code gets abandoned before and after "public release".
I'm looking at AI evangelists and I know they're largely correct about AI. I also look at what the heck they built, and either they're selling me something AI related, or they have a bunch of defunct one-shot babies, or mostly tools so limited in scope that they serve only themselves. We used to have a filter for these things. Salesmen always sold promises, so no change there, just the buzzwords. But the cloutchasers? Those were way smaller in number. People building the "thing" so the "thing" exists mostly stopped before we ever heard of the "thing", because, it turns out, caring about the "thing" does not actually translate to the motivation to get it done. Or maintain it.
What we have now is a reverse survivorship bias.
OOP stating they don't care about the state of their code during their public release means I must assume they're a cloutchaser. Either they don't care because they know they can do better, which means they shared something that isn't their best, and their motivation with the comment is to highlight the idea. They just wanted to be first. Clout. Or they don't care whether they can, because they just don't care about code in general and only want the product, be it good or not. They believe in the idea enough to want to ensure it exists, regardless of what's in the pudding. Which means, to me, they also don't care to understand what's in the ingredient list. Which means they aren't the best people to maintain it. And that latter kind, before LLM slop was a concept in our minds, were precisely among the people who would give up halfway through making the "thing".
Code is the means to an end of getting stuff done, not the end in itself as some people seem to think. Yes, being a code artisan is fun, but do not mistake the fun for its ultimate purpose.
If you want to say something just say it no need for trap questions.
Faster delivery of a project being better for engineering is obviously one of the most important things because it gives you back time to invest in other parts of your project. All engineering is trade-offs. Being faster at developing basic code is better, the end. If nothing else you can now spend more time on requirements and on a second iteration with your customer.
Most of the time that's pretty divorced from capital-E engineering, which is why we get to be cavalier about the quality of the result - let me know how you feel about the bridges and tunnels you drive on being built "as fast as possible, to hell with safety"
Don't put words in my mouth; you're the one who doesn't care about safety, not me. And for what it's worth, I'm an electrical engineer first, so if you have some inferiority complex about software you don't have to apply it to me.
Consider applying the strongest version of an argument rather than the weakest. Obviously "faster is better" means faster to a similar standard, not faster due to a shittier standard.
> AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done
The invention of calculators and computers also left the human artisan era of slide rules, calculation charts and accounting. If that's really what you care about, what are you even doing here?
The trade off between utilization and latency is rarely understood in organizations. Little’s law should be mandatory (management) reading. Unused capacity is not waste, but buffers that absorb variability, and thus keeps latency down.
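To make Little's law concrete, here is a minimal sketch with made-up numbers (the rates and counts are hypothetical, purely for illustration). The law says that average items in the system equal arrival rate times average time in the system, so latency falls directly out of the other two:

```python
# Little's law: L = lambda * W
#   L      = average number of items in the system (work in progress)
#   lambda = average arrival rate
#   W      = average time each item spends in the system (latency)

arrival_rate = 10.0      # hypothetical: tickets arriving per day
work_in_progress = 25.0  # hypothetical: tickets in flight at any moment

# Rearranged: W = L / lambda
latency_days = work_in_progress / arrival_rate
print(latency_days)  # 2.5 days per ticket, regardless of team size
```

The takeaway for management is that to cut latency you either reduce work in progress or increase throughput; piling more items into the queue only raises W.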
It reminds me of Kingman's Formula in queueing theory: As server utilization approaches 100%, the wait time approaches infinity.
We intuitively understand this for servers (you never run a CPU at 99% if you want responsiveness), yet for some reason, we decided that a human brain—which is infinitely more complex—should run at 99% capacity and still be expected to handle urgent interruptions without crashing.
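A quick sketch of Kingman's approximation for a G/G/1 queue shows how sharply wait time blows up near full utilization (the coefficient-of-variation values and service time here are illustrative defaults, not from any real system):

```python
def kingman_wait(utilization, ca2=1.0, cs2=1.0, service_time=1.0):
    """Kingman's approximation for mean wait in a G/G/1 queue:
    E[W] ~= rho/(1 - rho) * (ca^2 + cs^2)/2 * tau
    where rho is utilization, ca^2/cs^2 are the squared coefficients
    of variation of inter-arrival and service times, tau is mean service time.
    """
    rho = utilization
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * service_time

for u in (0.5, 0.8, 0.9, 0.99):
    print(f"utilization {u:.0%}: mean wait ~ {kingman_wait(u):.1f} x service time")
```

Going from 50% to 99% utilization multiplies the expected wait by roughly two orders of magnitude (1x to 99x the service time with these defaults), which is exactly why slack capacity is a buffer, not waste.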
Who is responsible for the terrible decision? In the pro vs con analysis, saving 20% size occasionally vs updating ALL pdf libraries/apps/viewers ever built SHOULD be a no-brainer.
The one-time purchase version of Microsoft Office is not available worldwide. Where offered, it is reduced to Word, Excel, PowerPoint, and OneNote, with Outlook as a Business edition extra. Individual apps can sometimes be bought separately, but pricing usually makes this impractical. This is to push buyers toward Microsoft 365 subscriptions, which are the primary product.
Upgrade regret here. What used to be solid performance is now random hangs and unresponsiveness. Most things work, but it's Apple's least polished OS in many years.
After recently applying Codex to a gigantic, old, and hairy project that is as far from greenfield as it can be, I can assure you this assertion is false. It's bonkers seeing 5.2 churn through the complexity and understand dependencies that would take me days or weeks to wrap my head around.
Note: At the point of writing this, the comments are largely skeptical.
Reading this as an avid Codex CLI user, some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve and may be counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem, and is likely an issue that will go away over time.
There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.
But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.
This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.
I have a theory that agents will improve a lot when trained on more recent training data. I've had agents exhibit context anxiety because they still think an average LLM context window is around 32k tokens. Also, building agents with agents, letting them do prompt engineering, etc., is still very unsatisfactory: they keep talking about GPT-3.5 or Gemini 1.5 and try to optimize prompts for those old models, which of course were almost a totally different thing. So I'm thinking, if that's how they think of themselves as well, maybe that artificially limits their agentic behavior too, because they just don't know how much more capable they are than GPT-3.5.
Because the "strengths" of a model are based not on inherent characteristics but on varying user perception. It feels like model A is doing some things better, the same way it feels like your productivity is high.
Strong point. I'm considering tagging patterns better and adding things like a "model/toolchain-specific" flag and a "last validated (month/year)" field. Things change fast, and "context anxiety", for example, is likely less relevant now and should be reframed that way (or retired).