> That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"
I'm sure we are veering into "No true Scotsman" territory, but that's not the kind of learning or tooling I'm suggesting. "Vibe Coding" is a scourge for anything more than a one-off POC, but LLMs themselves are very helpful for pinpointing errors and writing common blocks of code (Copilot auto-complete style), and even things like Aider/Claude Code can be used well, if and only if you review _all_ the code they generate.
As soon as you disconnect yourself from the code it's game over. If you find yourself saying "Well it does what I want, commit/ship it" then you're doing it wrong.
On the other hand, there are some people who refuse to use LLMs for reasons ranging from silly to absurd. Those people will be passed by and will have no one to blame but themselves. LLMs are simply another tool in the toolbox.
I am not a horse-cart driver, I am a transportation expert. If the means of transport changes or advances, then so will I. I will not get bogged down in "I've been driving horses for XX years and that's what I want to do till the day I die"; that's just silly. You have to change with the times.
> As soon as you disconnect yourself from the code it's game over
We agree on this
The only difference is that I view using LLM generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code
Maybe you're right but I have been trying to use them this way and so far I find it makes me completely detached from what I'm building
> The only difference is that I view using LLM generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code
It's a gray area for sure and almost no one online is talking about the same thing when they say "LLM Tools", "LLM", "Vibe Coding", "AI", etc so it makes it even harder to have conversations. It's probably a lot like the joke "Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?".
For myself, I'm fine with GitHub Copilot auto-completions (up to ~10 lines max), and I review every line it writes. Most often I enjoy it for boilerplate-ish things where an abstraction would be premature but I'd otherwise have to type out a lot of repetition. Being able to write 1-2 examples and have it extrapolate the rest is quite nice.
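To make that concrete, here's a contrived sketch (hypothetical `User` fields, not from any real project) of the kind of extrapolation I mean: I type the first field mapping or two by hand, the completion fills in the rest, and I read every line before accepting.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str
    created_at: str

def user_from_json(data: dict) -> User:
    # I typed the first one or two of these mappings myself; the
    # completion extrapolated the remaining fields, which I reviewed.
    return User(
        id=data["id"],
        name=data["name"],
        email=data["email"],
        created_at=data["created_at"],
    )
```

No abstraction needed, no cleverness, just repetitive typing saved.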
I've used Aider/Claude Code [0] as well and had success, but I don't love the workflow of asking it to do something, then waiting for it to spit out a bunch of code I need to review. I expect this will improve, and I have seen some improvement already. For some tasks it has me beat (speed of writing UI), but for most logic-type things I've been unable to prompt it well enough, or give it enough of the right context, to solve the problem. Because of this I mainly use these tools for one-offs, POCs, or just screwing around.
I also find it useful for things like explaining errors or tracking down the root cause of a bug.
I am very much _not_ a fan of "Vibe Coding" or anything that pretends it can be "no code"/"low code". I don't know if I'll ever be comfortable not reviewing the code directly, but we will see. I'm sure there were assembly developers who swore they'd never use C, who then swore they'd never use C++, who swore they'd never use Python, and so on and so forth. It's not clear to me if LLM-generated code is another step up the abstraction ladder or just a tool for the current level; I'm leaning heavily towards it just being a tool. I don't think "prompt engineer" is going to be a thing.
[0] And Continue.dev, Cursor, Windsurf, Codeium, Tabnine, Junie, Jetbrains AI, and more
> For myself, I'm fine with Github Copilot auto-completions (up to ~10 lines max) and I review every line it wrote
This is what I would like to use it for, but I have been struggling quite a bit with it
If I have a rough idea of what a 10-line function might look like and Cursor does the Autocomplete suggestion, it is nice when it is basically what I had in mind and I can just accept the suggestion. This happens very rarely for me though
More often I find the suggestion is just wrong enough that I want to change it, so I don't accept it. But this also shoves the idea I had in my head right out of my brain and now I'm in a worse position, having to reconstruct the idea I already had
This happened to me enough that I wound up entirely turning off these suggestions. It was ruining my ability to achieve any kind of flow
> Because of this I mainly use these tools for one-off, POC, or just screwing around.
Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much
It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience
> Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much
I'm sorry to hear this. I have encouraged the developers I manage to try out the tools, but we're nowhere close to "forcing" anyone to use them. It hasn't come up yet, but I'll be pushing back hard on any code that is clearly LLM-generated, especially if the developer who "wrote" it can't explain what's happening. Understanding and _owning_ the code the LLMs generate is part of the deal; "ChatGPT said..." or "Cursor wrote..." are not valid answers to questions like "Why did you do it this way?". LLM-washing (or whatever you want to call it) will not be tolerated: if you commit it, you are responsible for it.
> It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience
I hate hearing this because there are plenty of people writing blog posts or making YouTube videos about how they are 10000x-ing their workflow. I think most of those people are completely full of it. I do believe it can be done (managing multiple Claude Code or similar instances running in parallel), but it turns you into a code reviewer, and because you've already ceded so much control to the LLM it's easy to fall into the trap of thinking "One more back-and-forth and the LLM will get it" (cut to 10+ back-and-forths later, when you need to pull the ripcord and reset back to the start).
Copilot and short suggestions (no prompting from me, just it suggesting the next few lines inline) are the sweet spot for me. I fear many people are incorrectly extrapolating LLM capability: "Because I prompted my way to a POC, clearly an LLM would have no problem adding a simple feature to my existing code base." Not so, not by a long shot.
Yes. Not only is this the least enjoyable part of the job in general for me, I think it is a task that a lot of devs, even pretty diligent ones, wind up half-assing.
I personally don't mind reviewing coworkers' code because I think it's an opportunity to mentor and learn, but that is not really the case with LLM-generated code. That review becomes purely "Does this do what I want, and does it match the style guide?"
I would much rather LLMs review my code than the other way around. Unfortunately even that workflow is more annoying than anything, because the LLM is often not a good reviewer either
I think I just expect more reliability out of the tools I use.
I completely agree, I’m not anti-code review, but it’s by far the least enjoyable part of my job. It’s never going to give you the same understanding that getting into the code yourself will.
That’s acceptable when there is another human who _does_ understand the code (they wrote it) and someone who can learn and grow via the code review process.
None of that applies to LLM-generated code.
In many cases, if it fails at the task, it’s much easier for me to just do it myself than to go a couple more rounds with the LLM (because it’s almost never as easy as a normal code review; you have to prompt better and be more explicit).
That’s my biggest annoyance with Claude Code/Aider: always feeling like I’m one prompt away from everything slotting into place. When, in reality, each time I get back on the merry-go-round it might fix one thing and then break another. Or its “fix” might be absurd (“I’ll just cast this so it’s the right type” :facepalm:).
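For anyone who hasn’t hit this, here’s a contrived Python sketch (hypothetical code, not from a real session) of the kind of “fix” I mean: shutting the type checker up instead of handling the actual problem.

```python
from typing import Optional, cast

def find_port(config: dict) -> Optional[int]:
    return config.get("port")

# The absurd "fix": just cast it so it's the right type.
# The type checker is satisfied, but at runtime this is still None.
port = cast(int, find_port({}))

# The actual fix: handle the missing value.
port = find_port({}) or 8080  # fall back to a sensible default
```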
> Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much
This I just don't get. If the tool is actually useful and makes you more productive, then developers will be banging down management's door to let them use it, not the other way around. If the company has to resort to forcing their employees to use a tool, what does that say about the tool?
Using AI is the opposite of learning.
I'm not just trying to be snarky and dismissive, either
That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"