planckscnst's comments | Hacker News

The API key is not a subscription. The title says subscriptions are blocked from using third-party tools. Or am I misunderstanding?


Headline's been edited since my post. It previously said something along the lines of "Anthropic bans API use in OpenCode CLI".


I've been [adding an OpenCode feature that allows the LLM to edit its own context][1] and trying to debug an issue with the Anthropic API, because my calls were missing fields it expects. I hope my multiple erroneous API calls aren't what triggered this decision.

[1]: https://github.com/Vibecodelicious/opencode/tree/surgical_co...


For now, this is mitigated by only including trusted content in the context; for instance, absolutely do not allow it to access general web content.

I suspect that as it becomes more economical to train your own models, people will get better at slipping obscured malicious content into training data, which could leave the LLM carrying an intrinsic trigger that causes it to emit malicious output under certain conditions.

And of course we have to worry about malicious content being added to sources we trust, but that risk already exists: as an industry we typically pull in public repositories without completely reviewing what we're pulling, outsourcing verification to the repository's owners. Just as we currently see malicious code sneaking into common libraries, we'll see malicious content targeted at LLMs.


I've faced this problem directly with automatic code formatters, though that was back around Claude 3.5 and 3.7. The model would consistently write nonconforming code, regardless of context demanding proper formatting. That cost extra turns/invocations with the LLM and caused context issues: the context filled up with multiple variants of the same file, and those variants had a confounding, polluting effect on later responses.

I haven't hit this in a while, and I expect current LLMs would follow those formatting instructions more faithfully than the 3.5-era models did.


LLMs are very good at looking at a change set and finding untested paths. As a standard part of my workflow, I always pass the LLM's work through a "reviewer", which is a fresh LLM session with instructions to review the uncommitted changes. I include instructions for reviewing test coverage.

I've also found that LLMs typically just partially implement a given task/story/spec/whatever. The reviewer stage will also notice a mismatch between the spec and the implementation.

I have an orchestrator bounce the flow back and forth between developing and reviewing until the review comes back clean, and only then do I bother to review its work. It saves so much time and frustration.
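Roughly, the loop looks like this. A minimal sketch: the function names, the "CLEAN" sentinel, and the round cap are all illustrative, not any particular tool's API.

```python
# Sketch of the develop/review loop; call_developer and call_reviewer
# stand in for fresh LLM sessions against your provider of choice.

MAX_ROUNDS = 5

def call_developer(spec: str, feedback: str | None) -> None:
    """Developer session: implement the spec, addressing any reviewer feedback."""
    raise NotImplementedError  # e.g. a chat-completion call plus tool use

def call_reviewer(spec: str) -> str:
    """Fresh reviewer session: check the uncommitted changes against the spec,
    including test coverage. Returns "CLEAN" or a list of findings."""
    raise NotImplementedError

def orchestrate(spec: str) -> None:
    feedback = None
    for _ in range(MAX_ROUNDS):
        call_developer(spec, feedback)
        review = call_reviewer(spec)
        if review.strip() == "CLEAN":
            return  # only now does a human review the work
        feedback = review  # bounce the findings back to the developer
    raise RuntimeError("review never came back clean")
```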


What tooling are you using for the orchestration?


For interactive programs like this, I use tmux: I mention "send-keys" and "capture-pane" in the prompt, and the agent uses them to drive the interactive program. My demo/PoC for this is making the agent play 20 questions with another agent via tmux.
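The plumbing the agent ends up driving is just those two tmux commands. A rough Python version, where the session name and the command that starts the other agent are placeholders:

```python
import subprocess
import time

SESSION = "twentyq"  # placeholder session name
# Start the other agent in a detached session first, e.g.:
#   tmux new-session -d -s twentyq 'opencode'

def send_keys(text: str) -> None:
    # Type a line into the program running in the tmux session.
    subprocess.run(["tmux", "send-keys", "-t", SESSION, text, "Enter"], check=True)

def capture_pane() -> str:
    # Print the current pane contents so the agent can read the reply.
    result = subprocess.run(
        ["tmux", "capture-pane", "-t", SESSION, "-p"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

send_keys("Is it an animal?")
time.sleep(2)  # crude; polling for new output works better
print(capture_pane())
```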


I don't have a link to share just yet, but I'm working on an LLM coding agent that can modify its own context and is given hints on when and why that would be useful.

I expect it will remove the need to think about when to reset to a clean session. I also expect it to be more efficient, since it will clear out the "garbage context" that only serves to "confuse" the LLM, cost more tokens, slow responses, etc.

Once I have a working prototype, I'll test the feature by using it while reimplementing it in other open-source agents, to get a feel for whether it has the effects I'm expecting.
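The core mechanism is small. A hypothetical sketch, where the tool name, the message representation, and the pruning policy are all my own illustration, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Conversation history as plain chat messages ({"role": ..., "content": ...}).
    history: list[dict] = field(default_factory=list)

    def prune_context(self, indices: list[int]) -> str:
        """Tool exposed to the LLM so it can drop messages it judges to be
        garbage context: stale file dumps, failed attempts, dead ends."""
        drop = set(indices)
        kept = [m for i, m in enumerate(self.history) if i not in drop]
        removed = len(self.history) - len(kept)
        self.history = kept
        return f"removed {removed} message(s) from context"
```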


If they were using this compression for storage at the cache layer, it could let them keep more videos closer to where they serve them, but they'd decode back to webm or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.


That doesn't sound worth it: storage is cheap and encoding video is expensive, so caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.


Storage gets less cheap for short-form TikToks, where the average rate of consumption is extremely high and the number of niches is extremely large.


The law of entropy appears to hold for TikToks and Shorts; it would make sense to take advantage of that. That is to say, the content becomes so generic that it merges into one.


If you like this tool, you might also be interested in reptyr, which lets you reparent a process to a different tty.

https://blog.nelhage.com/2011/02/changing-ctty/



Tried 3/4 of the tools, and none helped me reattach neovim.

Ended up using dtach. It needs to be run ahead of time, but it's a very direct, minimal stdin/stdout piping tool that's worked great with everything I've thrown at it. https://github.com/crigler/dtach


Have you tried diss, shpool, or abduco?

Also, vmux appears to be specifically tailored to vim/neovim.

https://github.com/yazgoo/vmux


Whenever I use LLM-generated content, I bring in another LLM and pre-bias it by asking whether it's familiar with common complaints about LLM-generated content. Then I ask it to review the content, identify those patterns, and rewrite it to avoid them. Only after that do I bother giving it a first read myself. That clearly didn't happen here; current models can produce much better content than this if you do.
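A rough version of that flow, using the Anthropic SDK as an example; the model id, prompts, and filename are placeholders, and the same pattern works with any chat API:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(messages: list[dict]) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2048,
        messages=messages,
    )
    return response.content[0].text

draft = open("draft.md").read()  # the LLM-generated content to review

# Step 1: pre-bias a fresh session on what bad LLM prose looks like.
messages = [{"role": "user", "content":
             "What are the most common complaints about LLM-generated writing?"}]
messages.append({"role": "assistant", "content": ask(messages)})

# Step 2: have it find those patterns in the draft and rewrite around them.
messages.append({"role": "user", "content":
                 "Identify those patterns in the following text and rewrite "
                 "it to avoid them:\n\n" + draft})
print(ask(messages))
```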

