Hacker News

What's so useful about Code Interpreter? I'm getting a lot of value from normal Chat questions and DALL-E. I'm also using the chat interface to generate code sometimes. But I don't see much point in the chat bot itself running the code for me without my environment, credentials, data, etc.

Am I missing an interesting use case here?



Everybody knows GPT hallucinates - so occasionally it suggests code, you try it, and discover the functions or arguments it’s suggesting don’t exist.

Code interpreter takes you out of that loop - GPT can find and fix its own mistakes.
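The loop being described can be sketched as a tiny generate-run-repair cycle. This is a toy illustration, not Code Interpreter's actual implementation: `fake_llm` stands in for a model call, and it "fixes" its hallucinated method name only after seeing the traceback.

```python
# Toy sketch of the generate-run-repair loop Code Interpreter automates.
# fake_llm is a stand-in for a real model call.
def fake_llm(prompt, error=None):
    if error is None:
        return "result = 'hello'.upperr()"   # hallucinated method name
    return "result = 'hello'.upper()"        # corrected after seeing the error

def run_with_retries(prompt, attempts=3):
    error = None
    for _ in range(attempts):
        code = fake_llm(prompt, error)
        namespace = {}
        try:
            exec(code, namespace)
            return namespace["result"]
        except Exception as exc:    # feed the failure back to the model
            error = str(exc)
    raise RuntimeError("model never produced working code")

print(run_with_retries("uppercase the word hello"))  # prints HELLO
```

The human is removed from exactly the step described above: spotting that `upperr` doesn't exist and pasting the error back in.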


I’ve not found this to be a beneficial feature. Aside from being prone to erroring out, it tends to write code that runs but is poorly organized, which you then have to revise. In the standard UI, I can include instructions to structure everything as functions or classes, along with whatever other constraints, then review and give feedback until it’s actually correct. That code also almost always runs for me on the first try.


Let's say you have a complex question that requires some math or difficult logic to solve.

The idea of Code Interpreter is that ChatGPT, instead of inventing/hallucinating the answer to your complex question, can write code to solve it.

ChatGPT is better at writing code that matches the logic of the problem than it is at coming up with a straight-up answer.

So it comes up with the code, it runs it with Code Interpreter, and then reads the answer out loud to you.


GPT4 knows it sucks at math, but it's awesome at writing simple code, so it answers math problems by writing and executing simple scripts.
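For example, asked "how many Friday the 13ths fall between 2020 and 2030?", a model answering from memory is likely to guess, but the throwaway script it would write instead is trivial and exact (the question itself is invented here for illustration):

```python
# The kind of simple script an LLM might emit to count
# Friday the 13ths from 2020 through 2030, instead of guessing.
from datetime import date

count = sum(
    1
    for year in range(2020, 2031)
    for month in range(1, 13)
    if date(year, month, 13).weekday() == 4  # weekday() == 4 means Friday
)
print(count)
```

Code Interpreter runs a script like this in a sandbox and reads the printed answer back to you.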


It’s handy for uploading a dataset and having it play with it.
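The exploratory code it generates for an uploaded file typically looks something like the sketch below. The CSV content and column names here are invented for illustration; in practice the tool reads the file you uploaded (often with pandas, but stdlib `csv` keeps this self-contained):

```python
# Sketch of the kind of code Code Interpreter generates to "play with"
# an uploaded dataset. The data here is made up for illustration.
import csv
import io
import statistics

uploaded = io.StringIO(
    "region,revenue\n"
    "north,120\n"
    "south,95\n"
    "north,140\n"
    "south,110\n"
)

rows = list(csv.DictReader(uploaded))
by_region = {}
for row in rows:
    by_region.setdefault(row["region"], []).append(float(row["revenue"]))

summary = {region: statistics.mean(values) for region, values in by_region.items()}
print(summary)  # {'north': 130.0, 'south': 102.5}
```

You never write this code yourself; you just ask questions in English and it iterates on scripts like this.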


Cool stuff. You can upload sensitive customer data, employer proprietary code. Sounds good to me.


Need to think twice before using any tool that isn’t running locally. OpenAI isn’t unique here.


Op wrote "uploading a dataset and having it play with it.", not that they would use it locally.


Actually, this post is about a local replacement for Code Interpreter. The point is that you don't have to upload anything; you download the model and run it locally on your computer.


I doubt this is true; OpenAI does not allow you to download its models (with few exceptions).


This lets you use OpenAI’s models or local LLMs.

The API is governed by a different policy than ChatGPT and won’t be used for training data.


The original post clearly states that you can use any number of models, including a local LLM, which it tells you how to run using a simple CLI flag.


OpenAI does not have a monopoly on large language models.


It might not, but the post has "OpenAI's Code Interpreter" in the title


The readme in the linked repository explains what they mean:

> An open-source, locally running implementation of OpenAI's Code Interpreter.

> […]

> Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.

> […]

> Comparison to ChatGPT's Code Interpreter

> OpenAI's release of Code Interpreter with GPT-4 presents a fantastic opportunity to accomplish real-world tasks with ChatGPT.

> However, OpenAI's service is hosted, closed-source, and heavily restricted:

> - No internet access.

> - Limited set of pre-installed packages.

> - 100 MB maximum upload, 120.0 second runtime limit.

> - State is cleared (along with any generated files or links) when the environment dies.

> Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.

> This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.

> […]

> Change your Language Model

> Open Interpreter uses LiteLLM to connect to hosted language models.

> You can change the model by setting the model parameter

> […]

> Running Open Interpreter locally

> Open Interpreter uses LM Studio to connect to local language models (experimental).

> Simply run interpreter in local mode from the command line


It is OpenAI's model (although you can switch to another LLM). It's just that your data, and the environment that actually runs the code you got back from the ChatGPT API, stay local.

So AFAICT your prompt is still "leaked" to OpenAI, but not your data.


Probably splitting hairs and I'm not arguing one way or another, just wondering - why isn't the prompt itself considered leaked data?


Well I'm saying the prompt is leaked.

But that's not the biggest issue in most cases.

Prompt: "I have a file of 315 customers with their IP, behavior, religious affiliation and previous purchases plus medical history. I want to figure out if I have any customers whose religious affiliation pre-dates a post-surgery stay in hospital."

Data: a huge CSV file which, as you can tell, contains incredibly sensitive/legally impactful information.

With this system, you send the prompt to GPT-4 or Claude, and it doesn't see your data at all. It just writes a python program that can do the analysis.

You run the program locally on C:/Users/passwordoops/SuperSensitiveData/Confidential.csv

You get the result.

OpenAI has never seen your data.

It's a win for confidentiality.

(of course, that's if the LLM didn't give you a program that would exfiltrate your prompt, and if the CSV contains no prompt injection to exfiltrate the data, etc. This system is a security nightmare.)
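The split being described can be sketched in a few lines. Here `model_reply` stands in for code returned by a hosted LLM; the file path and its contents never leave the machine (the filename and data are invented for illustration):

```python
# Minimal sketch of the "prompt goes out, data stays local" split.
# model_reply stands in for code returned by a hosted LLM.
import csv

model_reply = '''
def analyze(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return len(rows)
'''

namespace = {"csv": csv}
exec(model_reply, namespace)   # run the model-generated code locally

# a small local file standing in for the sensitive CSV
with open("local_data.csv", "w") as f:
    f.write("name,value\na,1\nb,2\n")

print(namespace["analyze"]("local_data.csv"))  # prints 2
```

As the parent notes, `exec`-ing model output is also exactly where the security nightmare lives: nothing stops the generated code from doing more than analysis.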


ChatGPT is bad at math.



