What's so useful about Code Interpreter? I'm getting a lot of value from normal Chat questions and DALL-E. I'm also using the chat interface to generate code sometimes. But I don't see much point in the chat bot itself running the code for me without my environment, credentials, data, etc.
I’ve not found this to be a beneficial feature. Aside from being prone to erroring out, it tends to write code that runs but is poorly organized, which you then have to revise. In the standard UI, I can include instructions to structure everything as functions or classes, along with whatever other constraints, then review and give feedback until it’s actually correct. The result almost always runs on the first try.
Actually, this post is about a local replacement for Code Interpreter. The point is that you don't have to upload anything; instead you download the model and run it locally on your computer.
The readme in the linked repository explains what they mean:
> An open-source, locally running implementation of OpenAI's Code Interpreter.
> […]
> Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.
> […]
> Comparison to ChatGPT's Code Interpreter
> OpenAI's release of Code Interpreter with GPT-4 presents a fantastic opportunity to accomplish real-world tasks with ChatGPT.
> However, OpenAI's service is hosted, closed-source, and heavily restricted:
> - No internet access.
> - Limited set of pre-installed packages.
> - 100 MB maximum upload, 120.0 second runtime limit.
> - State is cleared (along with any generated files or links) when the environment dies.
> Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.
> This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.
> […]
> Change your Language Model
> Open Interpreter uses LiteLLM to connect to hosted language models.
> You can change the model by setting the model parameter
> […]
> Running Open Interpreter locally
> Open Interpreter uses LM Studio to connect to local language models (experimental).
> Simply run interpreter in local mode from the command line
It still uses OpenAI's model by default (although you can switch to another LLM). The difference is that your data, and the actual run environment for the code you get back from the ChatGPT API, stay local.
So AFAICT your prompt is still "leaked" to OpenAI, but not your data.
Prompt: "I have a file of 315 customers with their IP, behavior, religious affiliation and previous purchases plus medical history. I want to figure out if I have any customers whose religious affiliation pre-dates a post-surgery stay in hospital."
Data: a huge CSV file which, as you can tell, contains incredibly sensitive/legally impactful information.
With this system, you send the prompt to GPT-4 or Claude, and the model never sees your data at all. It just writes a Python program that can do the analysis.
You run the program locally on C:/Users/passwordoops/SuperSensitiveData/Confidential.csv
You get the result.
OpenAI has never seen your data.
It's a win for confidentiality.
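To make the workflow concrete, here is a minimal sketch of the kind of script such a model might hand back for you to run locally. The column names (`customer_id`, `affiliation_date`, `post_surgery_stay_date`) and the sample rows are hypothetical stand-ins invented for illustration; in the real scenario the actual CSV never leaves your machine.

```python
import csv
import tempfile
from datetime import date

# Hypothetical sample standing in for the real Confidential.csv.
# All column names and values here are made up for illustration.
sample = """customer_id,affiliation_date,post_surgery_stay_date
1,2015-03-01,2018-06-10
2,2020-01-15,2019-04-02
3,2010-07-22,2021-11-30
"""

# Write the sample to a temp file so the script reads a CSV the same
# way it would read the local confidential file.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write(sample)
    path = f.name

def parse(d):
    """Parse an ISO-style YYYY-MM-DD date string into a date object."""
    y, m, day = map(int, d.split("-"))
    return date(y, m, day)

# Keep customers whose religious affiliation pre-dates the
# post-surgery hospital stay.
matches = []
with open(path, newline="") as f:
    for row in csv.DictReader(f):
        if parse(row["affiliation_date"]) < parse(row["post_surgery_stay_date"]):
            matches.append(row["customer_id"])

print(matches)  # → ['1', '3']
```

Only this script touches the data; the prompt that produced it is the sole thing sent to the hosted model.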
(Of course, that assumes the LLM didn't give you a program that exfiltrates your data, that the CSV contains no prompt injection to exfiltrate it, etc. This system is a security nightmare.)
Am I missing an interesting use case here?