
I haven't had nearly the same experience of success with AI.

I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.

My work is pushing these tools hard and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.

I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, basically nowhere in between

I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!

The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!

It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel like I'm falling behind and being singled out.

I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.



You're being gaslit. The point is to make you look unproductive.

The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality. It doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.

They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.

Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.


My "agentic" experience is mostly Aider, working across a Golang webapp codebase. I've mostly used Gemini (whatever model Aider chooses to use at the moment).

Most of my experience has been similar to yours. But yesterday, out of the blue, it spit out a commit that I accepted almost verbatim (just added some line breaks and stuff). I was actually really surprised: not only did it follow the existing codebase conventions and variable naming style, but it also introduced a couple of patterns that I hadn't thought of (and liked).

But it also charged me $2 for the privilege :) (On a related note, Gemini API has become noticeably more expensive compared to, say, a month ago.)

I find that with Aider managing context (what files you add to it) can make all the difference.
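To make that concrete, a session where context is curated deliberately might look something like this (the file names and the task are invented for illustration; `/add`, `/drop`, and `/read-only` are Aider's in-chat commands for controlling which files are in context):

```text
$ aider
> /add handlers/user.go models/user.go    # only the files this change touches
> /read-only docs/conventions.md          # style guide the model can see but not edit
> Add pagination to the user list endpoint, following the existing handler patterns.
> /drop models/user.go                    # shrink context once that file is settled
```

Keeping the file set small seems to matter as much as prompt wording: everything you `/add` gets sent with each request, so stale files both cost tokens and dilute the model's attention.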


That $2 represents how many minutes of your annual labor? 2 minutes? Less than 1 if you account for all the non-coding drag on your total working time?


AI coding tools aren't equally effective across all software domains or languages. They're going to be the "best" (relative to their own ability distribution) in the "fat middle" of software engineering where they have the most training data. Popular tasks in popular languages and popular libraries (web dev in React, for example). You're probably out of luck if your task is writing netcode for a game engine, for instance.


I am a web dev in React, though

My experience is in one of the areas that people are saying it is most helpful

Which really just adds to the gaslighting effect


> letting the AI just generate it while I prompt

But isn't prompting and iterating another way of instructing the computer to do what you want? Perhaps we could view it as a step up in the level of abstraction we work at.

We had similar arguments when high-level languages were introduced. Experienced programmers of that era maintained that they could write better programs in assembly language than in COBOL/FORTRAN/PL-I/Pascal etc. Yet even today we still need core portions of code written in assembler, just not much of it.


But we aren't moving up a layer of abstraction here

We are operating at the same level of abstraction, just with tools that generate high volumes of it for us, at inconsistent quality

Edit: It would have to produce a lot higher quality a lot more consistently for me to seriously consider moving up to the "LLM prompt" abstraction layer permanently. As it is I think I'm just better off writing the code myself


I have a working theory that it's mostly bad programmers who are achieving massive productivity gains. Really good programmers will probably have trouble getting the LLM tools to perform as well as their normal level of output.

This could be cope but I don't think it is.


I have seen good programmers, ones I respect a lot, get good results with AI.

I don't think this is it, personally.


My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes? As an example, I recently was able to prototype several different one-dimensional computational fluid dynamics GLSL shaders. Claude outputted everything with vec3s, and so the flux math matched what you'd see in the theory. For me it's rapid iteration and a decluttered search engine with an interactive inline comment section, though I understand some would disagree with that statement, especially since it's lacking any sort of formal verification. I counter with the old adage that anyone can be a dog on the internet


> My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes?

For me, if I spent the time testing 3 different models I would definitely be slower than writing the code myself


But I'm not writing code. It's research with iteration. Punching out manual CFD is time consuming


I'm not sure if it is cope, but I sort of feel the same

The quality of LLM code is consistently average at best, and usually quite bad imo. People say it is like a junior, but if a junior I hired produced such code consistently and never improved, I would be recommending the company PIP them out.

Having output like a Junior would be fine, if I didn't have to fix it myself. As it stands, I've never been able to get it to produce code of the quality I want so I have to spend more time fixing it than I would just writing it.

I dunno. It sucks man


And the irony is that those of us using AI to amplify our output to produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide among practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to generate tools at runtime.


Can we use this micro moment to try to bridge the gap? I was sold on cocaine but all I've gotten so far is corn starch. Is there like a definitive tutorial on this? I mean, look, I am proud of my work, but if I can drop $200-1000/month for the "blue stuff" I'm not gonna turn my nose up at it.

I've been pretty deeply into LLMs myself since 2023, and I've built several small models from scratch and (SFT) trained many more, so it's not like I'm ignorant of how it works; I'm just not getting the workflow results.


It's going to depend heavily on what you're doing. If you're doing common tasks in popular languages, and not using cutting-edge library features, the tools are pretty good at automating a large amount of the code production. Just make sure the context/instruction file (e.g. CLAUDE.md) and codebase are set up to properly constrain the bot and you can get away with a lot.

If you're not doing tasks that are statistically common in the training data, however, you're not going to have a great experience. That being said, very little in software is "novel" anymore, so you might be surprised.
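As a sketch of the kind of instruction file being described (CLAUDE.md for Claude Code; every project detail below is invented for illustration):

```markdown
# Notes for the coding agent

- Go 1.22 web service. Run `go test ./...` and `go vet ./...` before declaring a task done.
- Match the existing naming and error-handling conventions; do not add new dependencies without asking first.
- Never touch files under `vendor/` or generated `*_gen.go` files.
- Prefer small, reviewable diffs over sweeping refactors.
```

The point is constraint: concrete, checkable rules the agent re-reads on every turn tend to work better than vague quality exhortations.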


Just because it's not strictly novel doesn't mean that the LLM is outputting the right thing

We used to caution people not to copy and paste from StackOverflow without understanding the code snippets, now we have people generating "vibe code" from nothing using AI, never reading it once and pushing it to master?

It feels like an insane fever dream


> amplify our output to produce at exponential speeds

I think I blacked out when my brain tried to process this phrase.

Nothing personal, but I automatically discount all claims like this (something something require extraordinary evidence and all that…).


Maybe I need to watch some videos on YouTube to understand what other people are seeing.

I couldn't even get Zed hooked up to GitHub Copilot. I use ChatGPT for snippets and search, and it's okay, but I don't want to bother checking its work on a large scale.


> And the irony is that those of us using AI to amplify our output

I'm guessing you don't care about quality very much, since you are focusing on your output volume



