
Actually, this post is about a local replacement for Code Interpreter. The point is that you don't have to upload anything; instead, you download the model and run it locally on your computer.


I doubt this is true; OpenAI does not allow you to download its models (with few exceptions).


This lets you use OpenAI’s models or local LLMs.

The API is governed by a different policy than ChatGPT and won’t be used for training data.


The original post clearly states that you may use any number of models, even a local LLM, which it tells you how to run using a simple CLI flag.


OpenAI does not have a monopoly on large language models.


It might not, but the post has "OpenAI's Code Interpreter" in the title.


The readme in the linked repository explains what they mean:

> An open-source, locally running implementation of OpenAI's Code Interpreter.

> […]

> Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.

> […]

> Comparison to ChatGPT's Code Interpreter

> OpenAI's release of Code Interpreter with GPT-4 presents a fantastic opportunity to accomplish real-world tasks with ChatGPT.

> However, OpenAI's service is hosted, closed-source, and heavily restricted:

> - No internet access.

> - Limited set of pre-installed packages.

> - 100 MB maximum upload, 120.0 second runtime limit.

> - State is cleared (along with any generated files or links) when the environment dies.

> Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.

> This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.

> […]

> Change your Language Model

> Open Interpreter uses LiteLLM to connect to hosted language models.

> You can change the model by setting the model parameter

> […]

> Running Open Interpreter locally

> Open Interpreter uses LM Studio to connect to local language models (experimental).

> Simply run interpreter in local mode from the command line
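Concretely, the CLI usage the readme describes looks roughly like this (flag and model names are taken from the Open Interpreter readme of the time; check `interpreter --help` against your installed version, as the exact flags may have changed):

```shell
# Use a hosted model via LiteLLM (model name is an example)
interpreter --model gpt-3.5-turbo

# Experimental: connect to a local model served by LM Studio
interpreter --local
```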


It is OpenAI's model (although you can switch to another LLM); it's just that your data, and the environment where the code you got back from the ChatGPT API actually runs, stay local.

So AFAICT your prompt is still "leaked" to OpenAI, but not your data.


Probably splitting hairs and I'm not arguing one way or another, just wondering - why isn't the prompt itself considered leaked data?


Well I'm saying the prompt is leaked.

But that's not the biggest issue in most cases.

Prompt: "I have a file of 315 customers with their IP, behavior, religious affiliation and previous purchases plus medical history. I want to figure out if I have any customers whose religious affiliation pre-dates a post-surgery stay in hospital."

Data: a huge CSV file which, as you can tell, contains incredibly sensitive/legally impactful information.

With this system, you send the prompt to GPT-4 or Claude, and it doesn't see your data at all. It just writes a python program that can do the analysis.

You run the program locally on C:/Users/passwordoops/SuperSensitiveData/Confidential.csv

You get the result.

OpenAI has never seen your data.

It's a win for confidentiality.
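The generated program is plain Python you can inspect before running. A minimal sketch of what such a script might look like (the column names and the date comparison are assumptions for illustration; the real script would target whatever schema your confidential CSV uses):

```python
import csv
import io
from datetime import date

def customers_with_prior_affiliation(rows):
    """Return IDs of customers whose religious affiliation
    pre-dates their post-surgery hospital discharge."""
    matches = []
    for row in rows:
        affiliated = date.fromisoformat(row["affiliation_since"])
        discharged = date.fromisoformat(row["hospital_discharge"])
        if affiliated < discharged:
            matches.append(row["customer_id"])
    return matches

# Demo on inline data; in the real workflow you would pass
# open("C:/Users/passwordoops/SuperSensitiveData/Confidential.csv")
# and the file's contents would never leave your machine.
sample = io.StringIO(
    "customer_id,affiliation_since,hospital_discharge\n"
    "c1,2019-03-01,2021-06-15\n"
    "c2,2022-01-10,2021-06-15\n"
)
print(customers_with_prior_affiliation(csv.DictReader(sample)))  # prints ['c1']
```

The only thing the LLM ever saw was the prompt describing the task; the CSV itself is read and filtered entirely on your machine.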

(of course, that's if the LLM didn't give you a program that would exfiltrate your prompt, and if the CSV contains no prompt injection to exfiltrate the data, etc. This system is a security nightmare.)



