> I don’t do this a lot, but sometimes when I’m really stuck on a bug, I’ll attach the entire file or files to Copilot chat, paste the error message, and just ask “can you help?”
The "reasoning" models are MUCH better than this. I've had genuinely fantastic results with this kind of thing against o1 and Gemini Thinking and the new o3-mini - I paste in the whole codebase (usually via my https://github.com/simonw/files-to-prompt tool) and describe the bug or just paste in the error message and the model frequently finds the source, sometimes following the path through several modules to get there.
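For anyone unfamiliar with it, files-to-prompt just flattens a directory of source files into one pasteable blob, with each file labelled by its path. Here's a rough sketch of the idea in Python - the function name and separators are invented, not the tool's actual output format:

```python
from pathlib import Path

def files_to_prompt(root, extensions=(".py",)):
    """Concatenate every matching file under root into a single prompt
    string, each file preceded by its path so the model can follow
    references across modules. A toy sketch of what tools like
    files-to-prompt do, not a reimplementation of that tool."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            parts.append(f"{path}\n---\n{path.read_text()}\n---")
    return "\n\n".join(parts)
```

The resulting string gets pasted into the chat along with the bug description or error message.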
I have picked up Cursor, which lets me throw in relevant files with a drop-down menu. Previously I was copy-pasting files into the ChatGPT browser page, but now I point Cursor at o1 and do it within the IDE.
One of my favourite things is to ask it whether it thinks there are any bugs - this helps a lot with validating any logic I might be exploring. I recently ported some code to a different environment with slightly different interfaces and it wasn't working, so I asked o1 to go over each implementation carefully and explain in detail why it might be producing different output. It thought for two whole minutes and gave me a report of possible causes - the third of which was entirely correct, and had to do with how my environment was coercing pandas data types.
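That class of pandas coercion bug is easy to hit. A hypothetical minimal reproduction (the column name is made up, not from the actual code): a column that stays int64 in one environment gets silently upcast to float64 in another because a missing value sneaks in, so any downstream keys or string comparisons built from it no longer match.

```python
import pandas as pd

# Same logical data, but one environment's loader produces a missing value,
# which forces pandas to coerce the whole integer column to float64.
clean = pd.DataFrame({"id": [1, 2, 3]})
sparse = pd.DataFrame({"id": [1, 2, None]})

print(clean["id"].dtype)   # int64
print(sparse["id"].dtype)  # float64 - the None forced the coercion

# Keys derived from the coerced column now differ: "1" vs "1.0"
print(str(clean["id"].iloc[0]))   # 1
print(str(sparse["id"].iloc[0]))  # 1.0
```

Exactly the kind of subtle, environment-dependent difference that's painful to spot by eye but that a model reading both implementations can flag.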
There have been ten or so wow moments over the past few years where I've been shocked by the capabilities of GenAI, and that one made the list.
The "attach the entire file" part is critical.
I've watched a junior dev paste error messages into ChatGPT, apply ChatGPT's suggestions, and then paste the next error message into ChatGPT again. They ended up applying fixes for three different kinds of bugs that didn't exist in the codebase.
---
Another cause, I think, is that they didn't try to understand any of those suggestions (neither the solutions, nor the problems those solutions were supposed to fix). If they had, they would have figured out that the solutions didn't match what they were actually witnessing.
There's a big difference between using an LLM as a tool and treating it like an oracle.
This is why in-IDE LLMs like Copilot are really good.
I just had a case where I was adding stuff to two projects, both open at the same time.
I added new fields to the back-end project, then swapped to the front-end side, and the LLM autocomplete gave me exactly what I wanted to add there.
And similar super-accurate autocompletes happen every day for me.
I really don't understand people who complain about "AI slop", what kind of projects are they writing?
Here's a slightly older example: https://gist.github.com/simonw/03776d9f80534aa8e5348580dc6a8... - finding a bug in some Django middleware