Hacker News | hendersoon's comments

The -p flag should be fine, so long as you don't use their oauth in a third-party tool. Gemini also supports A2A for this sort of thing.

But the question is - why is the -p flag fine? It hits the same endpoints with the same OAuth token and same quotas.

The comments here and on the related Anthropic news seem to center on the idea that these bans happen because it burns tokens quickly while their plans are subsidized. What changes with the -p flag? You're just using the CLI instead of HTTP.

Are the metrics from their cli more valuable than the treasure trove of prompt data that passes through to them either way that justifies this PR?


I assume that -p is the same as "codex exec".

The difference is that in this case the agent loop is executed, which has all the caching and behaviour guarantees. What I assume OpenClaw is doing is calling the endpoint directly while retaining its own "agent logic", so it doesn't follow whatever conventions the backend expects.

How important that difference is, I can't say, but aside from the cost factor I assume Google doesn't want to subsidize agents that aren't theirs and are in some way "the competition".


> Are the metrics from their cli more valuable than the treasure trove of prompt data that passes through to them either way that justifies this PR?

Yes. The only reason they subsidise all-you-can-prompt subscriptions is to collect additional data / signals. They can use those signals to further improve their models.


Because the ToS explicitly says the -p flag is fine, but the Agent SDK is not.

Yes, I also use Handy. It supports local transcription via Nvidia Parakeet TDT2, which is extremely fast and accurate. I also use Gemini 2.5 Flash Lite for post-processing via the free AI Studio API (post-processing is optional and can also use a locally-hosted model).

Could well be running on Google TPUs.


Yes, this was a defensive move from Nvidia.

My understanding is Groq failed to deploy their second-gen chips on time, which caused their stock to deflate.

Groq's primary advantage over Cerebras and SambaNova, as I see it, is they don't fabricate on TSMC. That's attractive to Nvidia, who doesn't want to give up any of their datacenter GPU allocation.


So with support for OCI container images, does this mean I can run Docker images as LXCs natively in Proxmox? I guess it's an entirely manual process: no mature orchestration like Portainer or even docker-compose, no easy upgrades, manually setting up bind mounts, etc. It would be a nice first step.


Also hoping that this work continues and tooling is made available. I suppose eventually someone could even make a wrapper around it that implements Docker's remote API.
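The core of such a wrapper would be translating container records into the response shape of Docker's Engine API (e.g. GET /containers/json). A minimal sketch of that translation layer; the input record fields (vmid, status, name) are assumed, and only a few of Docker's many response fields are shown:

```python
def lxc_to_docker_json(containers):
    """Map LXC-style container records to (an abridged version of) the
    shape Docker's GET /containers/json endpoint returns. Field names on
    the Docker side follow the Engine API; the LXC-side record layout
    here is a hypothetical example."""
    return [
        {
            "Id": str(c["vmid"]),
            "Names": ["/" + c["name"]],
            # Docker reports lifecycle as e.g. "running" / "exited"
            "State": "running" if c["status"] == "running" else "exited",
            "Status": c["status"],
        }
        for c in containers
    ]

print(lxc_to_docker_json([{"vmid": 100, "status": "running", "name": "web"}]))
```

A real wrapper would serve this over the Docker socket protocol so that tools like Portainer could talk to it unmodified, but the mapping above is where the impedance mismatch (state names, IDs vs VMIDs) actually lives.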


There is a video showing the process on their YouTube channel:

https://youtu.be/4-u4x9L6k1s?t=21

>no mature orchestration

Seems to borrow the LXC tooling, which has a decent command-line tool at least. You could in theory automate against that.
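For example, Proxmox's `pct` CLI is scriptable: a small sketch that parses `pct list` output and starts any stopped containers. The column layout (VMID, Status, Lock, Name) is assumed from typical Proxmox output; verify it on your host before relying on this:

```python
import subprocess

def parse_pct_list(output: str):
    """Parse `pct list` output into dicts. Assumes a header row followed
    by whitespace-separated columns: VMID, Status, [Lock], Name, where
    the Lock column is usually empty."""
    containers = []
    for line in output.strip().splitlines()[1:]:  # skip header row
        parts = line.split()
        if len(parts) == 3:                       # lock column empty
            vmid, status, name = parts
        else:
            vmid, status, _lock, name = parts[:4]
        containers.append({"vmid": int(vmid), "status": status, "name": name})
    return containers

def stopped_containers(output: str):
    return [c for c in parse_pct_list(output) if c["status"] == "stopped"]

if __name__ == "__main__":
    # Only meaningful on a Proxmox host (assumes `pct` on PATH, run as root).
    out = subprocess.run(["pct", "list"], capture_output=True,
                         text=True, check=True).stdout
    for ct in stopped_containers(out):
        subprocess.run(["pct", "start", str(ct["vmid"])], check=True)
```

Parsing human-readable CLI output is brittle, which is exactly why proper orchestration tooling (or a stable JSON output mode) would be the next step.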

Presumably it'll mature


It's just another declarative adblocker, as that is all Safari (and now Chrome) allows. There's vanishingly little room for differentiation in this space.


That info is outdated. Safari also allows extensions that inject JS into sites, i.e. extensions working like script injectors. The difference from content blockers is that those extensions must first be explicitly granted access to the sites being browsed, for privacy reasons.


Chrome can do that too on desktop, and on iOS Chrome can't run any extensions at all. Safari web extensions have been around since iOS 15, so several years now.


MCP is convenient and the context pollution issue is easily solved by running them in subagents. The real miss here was not doing that from the start.

Well, stdio security issues when not sandboxed are another huge miss, although that's a bit of a derail.


Google approached this the right way. No, not with "ai mode", that sucks. With the Chrome dev tools MCP. You allow AI to control the browser if the user opts-in and sets it up.


Exactly right, the OCR isn't the interesting part. 10x context compression is potentially huge. (With caveats, at only ~97% accuracy, so not appropriate for everything.)


Gemini 2.5 pro is generally non-competitive with GPT-5-medium or Sonnet 4.5.

But never fear, Gemini 3.0 is rumored to be coming out Tuesday.


The tweets from random people I've seen said Oct 9th, which is Thursday. I suppose we will know when we know.


based on what? LLM benchmarks are all bullshit, so this is based on... your gut?

Gemini outputs what I want with a similar regularity as the other bots.

I'm so tired of the religious thinking around these models. show me a measurement.


> LLM benchmarks are all bullshit

> show me a measurement

Your comment encapsulates why we have religious thinking around models.


Please tell me this comment is a joke.

