> Often this takes the form of trying to access an endpoint that does not exist, or read a property off the API response that does not exist. After probing a bit more, my suspicions are usually confirmed — ChatGPT hallucinated that endpoint or property.
In some cases, you might be able to use this to your advantage to improve the product.
When working with third-party APIs, I've often run into situations where my code could be simplified greatly if the API had an extra endpoint or parameter to filter certain data, only to be disappointed when it turns out to have nothing of the sort.
It's possible that ChatGPT is "thinking" the same thing here: most APIs have an X endpoint to make a task easier, so surely yours does too?
Over time I've sent in a few support tickets with ideas for new endpoints/parameters and on one occasion had the developer add them, which was a great feeling and allowed me to write cleaner code and make fewer redundant API calls.
> It's possible that ChatGPT is "thinking" the same thing here: most APIs have an X endpoint to make a task easier, so surely yours does too?
While this is possible, I'd caution with an anecdote from my ongoing side project of "can I use it to make browser games?": 3.5 would create a reasonable Vector2D class, then get confused and try to call .mul() and .sub() instead of the .multiply() and .subtract() that it had just created.
Sometimes it's exceptionally insightful, other times it needs to RTFM.
I feel like we'll eventually all agree that it's a mistake to ask a generalist LLM for code. I've found ChatGPT to be fine at talking about code - like describing the difference between two APIs - but for generating nontrivial chunks of working code I think it's miles behind Copilot and similar.
And I assume that's just because, y'know, ChatGPT can write sonnets and translate Korean to Swahili and whatnot. It's amazingly broad, but it's not the right tool for the comparatively narrow problem of code generation.
It's powered by the same models, but it's not submitting questions to the Q&A prompt like people do when they ask ChatGPT to generate code for them.
(...I guess? I don't think any of it is public - one might naively suppose that by now it's actually using only a subset of ChatGPT's MoEs, or something, but who knows?)
Another consideration: if a popular AI model hallucinates an endpoint for your API for one customer, chances are another customer will run into the same situation.