I was banned from Claude for scaffolding a Claude.md file? (hugodaniel.com)
645 points by hugodan 21 hours ago | 564 comments




I was banned as well, out of the blue and without warning. I believe it was because I was either doing something like what OP was doing, using the allowed limits to their fullest extent, or both.

It completely blew me away, and I suddenly felt so betrayed. I was paying $200/mo to fully utilize a service they offered, and then, without warning, I apparently did something wrong and had no recourse. No one to ask, no one to talk to.

My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.

Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.


Can you elaborate on "using the allowed limits to their fullest extent?"

If you are in Europe you might be able to force them to give you a reason, for an actual human to respond, and, who knows, maybe even get unbanned.

I have a friend who had a similar experience with Amazon, and using a European online platform specific to this, he actually got Amazon to reopen his business account.

There is a useful list of these European complaint platforms at the bottom of this page: https://digital-strategy.ec.europa.eu/en/policies/dsa-out-co...


I've been doing something a lot like this: using a claude-desktop instance attached to my personal MCP server to spawn claude-code worker nodes, and for a month or two now it's been working great, with the main desktop chat as a project manager of sorts. I even started paying for the Max plan, as I've been using it effectively to write software now (I am NOT a developer).

Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.

Now even worse: Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them; they'll contact you via email. That email has never come, after several attempts.

I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

I love Claude, as it's an amazing tool, but when it implodes on itself to the point that you actually require some outside support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".


Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.


The Pro plan quota seems to be getting worse. I can get maybe 20-30 minutes of work done before I hit my 4-hour quota. I found myself using it more just for the planning phase to get a little more time out of it, but yesterday I managed to ask it ONE question in plan mode (from a fresh quota window), and while it was thinking it ran out of quota. I'm assuming it pulled in a ton of references from my project automatically and blew out the token count. I find I get good answers from it when it does work, but it's getting very annoying to use.

(On the flip side, Codex seems SO efficient with tokens that it can be hard to understand its answers sometimes; it rarely includes files unless you add them manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it does each iteration. But I never run out of quota!)


Claude Code allegedly auto-includes the currently active file and often all visible tabs and sometimes neighboring files it thinks are 'related' - on every prompt.

The advice I got when scouring the internets was primarily to close everything except the file you’re editing and maybe one reference file (before asking Claude anything). For added effect add something like 'Only use the currently open file. Do not read or reference any other files' to the prompt.

I don't have any hard facts to back this up, but I'm sure going to try it myself tomorrow (when my weekly cap is lifted ...).


What does "all visible tabs" mean in the context of Claude Code in a terminal window? Are you saying it's reading other terminals open on the system? Also how do you determine "currently active file"? It just greps files as needed.

You can install the VS Code extension and use "/ide" to connect them.

Do people actually use this mode? Having to approve diffs in the ide is too annoying.

Depends on my task. If it's complex and my expectation is that Claude will get things wrong, the diff preview is helpful.

Even then, I'd wait until it's had a chance to iterate and correct itself in a loop before I'd even consider looking at the output, or I end up babysitting it to prevent it from making mistakes it'd often recognise and fix itself if given the chance.

True. I’ve been strictly in the terminal for weeks and I have a stop hook which commits each iteration after successful rust compilation and frontend typechecks, then I have a small command line tool to quickly review last commit. It’s a pretty good flow!

^ THIS

I've run out of quota on my Pro plan so many times in the past 2-3 weeks. This seems to be a recent occurrence. And I'm not even that active. Just one project, execute in Plan > Develop > Test mode, just one terminal. That's it. I keep getting a quota reset every few hours.

What's happening @Anthropic ?? Anybody here who can answer??


[BUG] Instantly hitting usage limits with Max subscription: https://github.com/anthropics/claude-code/issues/16157

It's the most commented issue on their GitHub and it's basically ignored by Anthropic. Title mentions Max, but commenters report it for other plans too.


It's not a bug it's a feature (for Anthropic).

It's not a bug, it's a poorly defined business model!

“After creating a new account, I can confirm the quota drains 2.5x–3x slower. So basically Max (5x) on an older accounts is almost like Pro on a new one in terms of quota. Pretty blatant rug pull tbh.”

lol


Your quota also seems to be higher after unsubscribing and resubscribing?

They'll also send you a free month of the $100 plan if you unsubscribe, to try and get you back.

This whole API-vs-plan split looks weird to me. Why not force everyone to use the API? You pay for what you use; it's very simple. The API should be the most honest way to monetize, right?

This fixed subscription plan with barely specified quotas looks like they want to extract extra money from the users who pay $200 and don't use that value, while at the same time preventing other users from going over $200. I understand that it might work at scale, but it just feels a bit unfair to everyone?


Not a doctor or anything, but API usage seems to support more on-demand/spiky workflows at a much larger scale, whereas a single seat authenticated to Claude Code has controlled/set capacity, is generally more predictable, and is as a result easier to price?

The API request method might have no cap, but they do cap Claude Code even on Max licenses, so it's easier to throttle as well if needed to control costs. Seems straightforward to me at any rate. Kinda like reserved-instance vs. spot pricing models?


You're welcome to use the API, it asks you to do that when you run out of quota on your Pro plan. The next thing you find out is how expensive using the API is. More honest, perhaps, but you definitely will be paying for that.

I tried the API once. Burned 7 dollars in 15 minutes.

Consumers like predictable billing more than they care about getting the most bang for their buck, and bean counters like sticky recurring revenue streams more than they care about maximizing the profit margin on every user.

I just like being able to make like $250 of API calls for $20.

If only it was API calls. I like using it through claude code. But it would be infinitely more flexible if my $200 subscription worked through the API

I don't understand, you CAN use claude code through the API.

Yeah, but he can't use his $200 subscription for the API.

That's limited to accessing the models through code/desktop/mobile.

And while I'm also using their subscriptions because of the cost savings vs direct access, having the subscription be considerably cheaper than the usage billing rings all sorts of alarm bells that it won't last.


The fixed-fee plan exists because the agent and the tools make internal choices/plans that affect cost. If you simply pay for the API, the only feedback to them that they are being too costly is for you to stop.

If you look at tool calls like MCP and whatnot, you can see it gets ridiculous. Even though it's small, calling pal MCP from the prompt, for example, is still burning tokens AFAIK. This is "nobody's" fault in this case, really, but you can see what the incentives are, and we all need to think about how to make this entire space more usable.


I very recently (~1 week ago) subscribed to the Pro plan and was indeed surprised by how fast I reached my quota compared to, say, Codex at a similar subscription tier. The UX is generally really cool with Claude Code, which left me with a bit of a bittersweet feeling of not even being able to truly explore all the possibilities: after just making basic planning and code changes, I am already out of quota for experimenting with various ways of using subagents, testing background stuff, etc.

I remember a couple of weeks ago, when people raved about Claude Code, I got the feeling that there's no way this is sustainable; they must be burning tokens like crazy if it's used as described. Guess Anthropic did the math as well, and now we're here.

I use opencode with codex after all the shenanigans from anthropic recently. You might want to give that a shot!

Use cliproxyapi and you can run any model in CC. I use Codex models in CC and it's the best of both worlds!

Like a good dealer, they gave you a cheap/free hit and now you want more. This time you're gonna have to pay.

I've been hitting the limit a lot lately as well. The worst part is that I try to compact things and check my limits using the / commands and can't make heads or tails of how much I actually have left. It's not clear at all.

I've been using CC until I run out of credits and then switch to Cursor (my employer pays for both). I prefer Claude but I never hit any limits in Cursor.


> I've run out of quota on my Pro plan so many times in the past 2-3 weeks.

Waiting for Anthropic to somehow blame this on users again. "We investigated, turns out the reason was users used it too much".


sounds like the "thinking tokens" are a mechanism to extract more money from users?

Anecdotally, but it definitely feels like in the last couple of weeks CC has become more aggressive at pulling in significantly larger chunks of an existing code base - even for some simple queries I'll see it easily ramp up to 50-60k tokens of usage.

This really speaks to the need to separate the LLM you use from the coding tool that uses it. LLM makers using the SaaS model make money on the tokens you spend whether or not they're needed. Tools like aider and opencode (each in its own way) build a map of the codebase that they can use to work with code using fewer tokens. When I see posts like this I start to understand why Anthropic now blocks opencode.

We're about to get Claude Code for work and I'm sad about it. There are more efficient ways to do the job.
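
For anyone who hasn't seen the repo-map idea in action, here's a toy Python sketch of it. This is emphatically not aider's actual implementation (which reportedly uses tree-sitter and a ranking pass); it just shows the shape of the technique: hand the model a file-to-symbols index instead of full file bodies.

  import ast
  import pathlib

  def repo_map(root: str) -> str:
      """Summarize every Python file down to its top-level definitions."""
      lines = []
      for path in sorted(pathlib.Path(root).rglob("*.py")):
          try:
              tree = ast.parse(path.read_text(errors="ignore"))
          except SyntaxError:
              continue  # skip files that don't parse
          defs = [node.name for node in tree.body
                  if isinstance(node, (ast.FunctionDef,
                                       ast.AsyncFunctionDef,
                                       ast.ClassDef))]
          if defs:
              lines.append(f"{path}: {', '.join(defs)}")
      return "\n".join(lines)

  print(repo_map("."))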


When you state it like that, I now totally understand why Anthropic have a strong incentive to kick out OpenCode.

OpenCode is incentivized to make a good product that uses your token budget efficiently since it allows you to seamlessly switch between different models.

Anthropic as a model provider on the other hand, is incentivized to exhaust your token budget to keep you hooked. You'll be forced to wait when your usage limits are reached, or pay up for a higher plan if you can't wait to get your fix.

CC, specifically Opus 4.5, is an incredible tool, but Anthropic is handling its distribution the way a drug dealer would.


OpenCode also would be incentivized to do things like having you configure multiple providers and route requests to cheaper providers where possible.

Controlling the coding tool absolutely is a major asset, and it will be an even greater asset as the improvements in each model iteration make it matter less which specific model you're using.


You think after $27 billion invested they're gonna be ethical, or want to get their money back as fast as possible?

I'm curious if anyone has logged the number of thinking tokens over time. My suspicion was that the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

They get to see (if you haven't opted out) your context, ideas, source code, etc., and in return you give them $220 and they give you back "out of tokens".


> My suspicion was that the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

It's also a way to improve performance on the things their customers care about. I'm not paying Anthropic more than I pay for car insurance every month because I want to pinch ~~pennies~~ tokens; I do it because I can finally offload a ton of tedious work onto Opus 4.5 without hand-holding it and reviewing every line.

The subscription is already such a great value over paying by the token, they've got plenty of space to find the right balance.


> My suspicion was that the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

I've done RL training on small local models, and there's a strong correlation between length of response and accuracy. The more they churn tokens, the better the end result gets.

I actually think that the hyper-scalers would prefer to serve shorter answers. A token generated at 1k ctx length is cheaper to serve than one at 10k context, and way way cheaper than one at 100k context.


> there's a strong correlation between length of response and accuracy

I'd need to see real numbers. I can trigger a thinking model to generate hundreds of tokens and return a three-word response (however many tokens that is), or switch to a non-thinking model of the same family that just gives the same result. I don't necessarily doubt your experience, I just haven't had that experience tuning SD, for example, which is also transformer-based.

I'm sure there's some math reason why longer context = more accuracy, but is that intrinsic to transformer-based LLMs? That is, per your thought that the hyperscalers want shorter responses, do you think they are expending more effort to get shorter, equivalent-accuracy responses, or are they trying to find some other architecture to overcome the "limitations" of the current one?


I believe Claude Code recently turned on max reasoning for all requests. Previously you’d have to set it manually or use the word “ultrathink”

It's absolutely a work-around in part, but use sub-agents, have the top level pass in the data, and limit the tool use for the sub-agent (the front matter can specify allowed tools) so it can't read more.

(And once you've done that, also consider whether a given task can be achieved with a dumber model - I've had good luck switching some of my sub-agents to Haiku).
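
For the curious, sub-agents are plain Markdown files with YAML front matter (under .claude/agents/). A rough sketch from memory, with a made-up agent; check the subagent docs for the exact field names:

  ---
  name: log-triage
  description: Read-only triage of build logs. Use for scanning output, not editing.
  tools: Read, Grep
  model: haiku
  ---
  You are a read-only triage agent. Inspect only the files and logs
  handed to you in the prompt; do not open anything else.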


> more aggressive at pulling in significantly larger chunks of an existing code base

They need more training data, and with people moving on to OpenCode/Codex, they wanna extract as much data from their current users as possible.


Their system prompt + MCP setup is more the culprit here. With 16 tools and sophisticated parameters, you're looking at 24K tokens minimum.

Probably, because they recently said ultrathink is enabled by default now.

Does this translate into "the end-user's cost goes up" by default?


It's the clanker version of the "Check Wallet Light" (check engine light).

How quickly do you also hit compaction when running? Also, if you open a new CC instance and run /context, what does it show as the tools/memories/skills percentage? And that's before we look at what you're actually doing. CC will add whatever context to each prompt it thinks is necessary. So if you've got a small number of large files (vs. a large number of smaller files), at some level that'll contribute to the problem as well.

Quota's basically a count of tokens, so if a new CC session starts with that relatively full, that could explain what's going on. Also, what language is this project in? If it's something noisy that uses up many tokens fast, even if you're using agents to preserve the context window in the main CC, those tokens still count against your quota so you'd still be hitting it awkwardly fast.


I never run out of this mysterious quota thing. I close Claude Code at 10% context and restart.

I work for hours and it never says anything. No clue why you’re hitting this.

$230 pro max.


Does closing claude code do something that running /clear does not?

Any clue why you might be a favored/favoured high value user?

The entire conversation is fed in as context, effectively compounding your token usage over the course of a session. Sessions are most efficient when used for one task only.
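
A back-of-the-envelope illustration of that compounding (Python; the per-turn figure is made up, and this ignores prompt caching, which softens the effect in practice):

  TOKENS_PER_TURN = 2_000  # assumed average tokens added per turn

  total = 0
  for turn in range(1, 31):
      total += TOKENS_PER_TURN * turn  # the whole history is resent each turn
  print(total)  # 930,000 tokens over 30 turns, vs 60,000 if nothing were resent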

I get a decent amount of work in before restarts.

Pro is 20x less than Max

Self-hosted might be the way to go soon. I'm getting 2x Olares One boxes, each with an RTX 5090 GPU (NVIDIA 24GB VRAM), and a built-in ecosystem of AI apps, many of which should be useful, and Kubernetes + Docker will let me deploy whatever else I want. Presumably I will manage to host a good coding model and use Claude Code as the framework (or some other). There will be many good options out there soon.

> Self-hosted might be the way to go soon.

As someone with 2x RTX Pro 6000 and a 512GB M3 Ultra, I have yet to find these machines usable for "agentic" tasks. Sure, they can be great chat bots, but agentic work involves huge context sent to the system. That already rules out the Mac Studio because it lacks tensor cores and it's painfully slow to process even relatively large CLAUDE.md files, let alone a big project.

The RTX setup is much faster but can only support models ≤192GB, which severely limits its capabilities: you're limited to low-quant GLM 4.7, GLM 4.7 Flash/Air, GPT-OSS 120B, etc.


I've been using local LLMs since before chatgpt launched (gpt-j, gpt-neox for those that remember), and have tried all the promising models as they launch. While things are improving faster than I thought ~3 years ago, we're still not there in terms of 1-1 comparison with the SotA models. For "consumer" local at least.

The best you can get today with consumer hardware is something like devstral2-small (24B) or qwen-coder30b (underwhelming) or glm-4.7-flash (promising but buggy atm). And you'll still need beefy workstations worth ~$5-10k.

If you want open SotA you have to get hardware worth $80-100k to run the big boys (dsv3.2, glm4.7, minimax2.1, devstral2-123b, etc). That's OK for small office setups, but out of range for most local deployments (especially considering that the workstations need lots of power if you go 8x GPUs, even with something like 8x 6000 Pro @ 300W).


I think this is the future as well: running locally, controlling the entire pipeline. I built acf on GitHub using Claude, among others. You essentially configure everything as you want: models, profiles, agents, and RAG. It's free. I also built a marketplace to sell or give away these pipeline enhancements to the community. It's a project I wanted to do for a while, and Claude was nice enough to allow it to happen. It's a work in progress, but you have 100% control, locally. There is also a website for those not as technical, where you can buy credits or plug in Claude or OpenAI APIs. Read the manifesto. I need help and contributors now.

I've used the Anthropic models mostly through OpenRouter using aider. With so much buzz around Claude Code I wanted to try it out and thought that a subscription might be more cost-efficient for me. I was kinda disappointed by how quickly I hit the quota limit. Claude Code gives me a lot more freedom than what aider can do; on the other hand, I have the feeling that pure coding tasks work better through aider or Roo Code. The API version is also much, much faster than the subscription one.

Being in the same boat as you, I switched to OpenCode with the z.ai GLM 4.7 Pro plan and it's quite OK. Not as smart as Opus, but smart enough for my needs, and the pricing is unbeatable.

I've also seen OpenCode around, but have yet to try it. I wonder how it compares to Roo Code.

Ditto. It is very, very slow, but I never hit quota limits. People on Discord are complaining like mad that it is slow even on the Pro plans. I tend to use glm-*air a lot for planning before using 4.7.

We’re an Anthropic enterprise customer, and somehow there’s a human developer of theirs on a call with us just about every week. Chatting, tips and tricks etc.

I think they are just focusing on where the dough is.


They whistleblowed themselves that Claude Cowork was coded by Claude Code… :)

You can tell they’re all vibe coded.

Claude iOS app, Claude on the web (including Claude Code on the web) and Claude Code are some of the buggiest tools I have ever had to use on a daily basis. I’m including monstrosities like Altium and Solidworks and Vivado in the mix - software that actually does real shit constrained by the laws of physics rather than slinging basic JSON and strings around over HTTP.

It’s an utter embarrassment to the field of software engineering that they can’t even beat a single nine of reliability in their consumer facing products and if it wasn’t for the advantage Opus has over other models, they’d be dead in the water.


Even their status page (which is usually gamed) shows two 9s over the past 90 days.

hey, they have 9 8's

Single nine reliability would be 90% uptime lol. For 99.9% we call it triple 9 reliability.

Single 9 would be 90%, which is roughly what I’m experiencing between CC for Web and the Claude iOS app. About 1 in 10 messages fail because of an unknown error and 1 in 10 CC for web sessions die irrecoverably. It’d probably be worse except for the fact that CC’s bugs in the terminal aren’t show stoppers like they are on web/mobile.

The only way Anthropic has two or three nines is in read-only mode, but that'd be like measuring AWS using the console uptime while ignoring the actual control plane.


Single nine could be just 9% :D

You're right.

https://github.com/anthropics/claude-code/issues

Codex has fewer, but they also had quite a few outages in December. And I don't think Codex is as popular as Claude Code, but that could change.


Don't bother filing issues there. Their issue tracker is a galaxy-sized joke. They automatically close issues after 30 days of inactivity even if they weren't fixed, just to keep the issue count low.

The Reasonable Man might think that an AI company OF ALL COMPANIES would be able to use AI to triage bug tickets and reproduce them, but no! They expect humans to keep wasting their own time reproducing, pinging tickets and correcting Claude when it makes mistakes.

Random example: https://github.com/anthropics/claude-code/issues/12358

First reply from Anthropic: "Found 3 possible duplicate issues: This issue will be automatically closed as a duplicate in 3 days."

The user replies that two of the tickets are irrelevant and one didn't help.

Second reply: "This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes."

Every ticket I ever filed was auto-closed for inactivity. Complete waste of time. I won't bother filing bugs again.


> Every ticket I ever filed was auto-closed for inactivity. Complete waste of time. I won't bother filing bugs again.

Upcoming Anthropic Press Release: By using Claude to direct users to existing bug reports, we have reduced tickets requiring direct action by xx% and even reduced the rate of incoming tickets.


Whistleblowed dog food.

Normally you don't share your dog food when you find out it actually sucks.

You are giving me images from The Bug Short where the guy goes to investigate mortgages and knocks on some random person’s door to ask about a house/mortgage just to learn that it belongs to a dog. Imagine finding out that Anthropic employs no humans at all. Just an AI that has fired everyone and been working on its own releases and press releases since.

"Just an AI that has fired everyone"

At least it did not turn against them physically... "get comfortable while I warm up the neurotoxin emitters"


'The Big Short' (2015)

I think your surmise is probably wrong. It's not that they're growing too fast, it's that their service is cheaper than the actual cost of doing business.

Growth isn't a problem unless you don't actually cover the cost of every user you sign up. Uber, but for poorly profitable business models.


Well, they vibe code almost every tool at least

Claude Code has accumulated so much technical debt (+emojis) that Claude Code can no longer code itself.

Yeah, and it gets so clunky and laggy when the context grows. Anthropic just can't make software, and yet they claim 90% of code will be written by AI by yesterday.

What’s the opposite of bootstrapping? Stakebooting?

pulling yourself down by your chinstrap

I believe it's "digging your own grave".

Sell to Google and run away

Hmm... VC funded?

> I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it.

Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?


Support has been automated for a while, LLMs just made it even less useful (and it wasn't very useful to begin with; for over a decade it's been a Byzantine labyrinth of dead-ends, punji-pits and endless hours spent listening to smooth jazz).

Yup, the main goal of customer support for almost every Internet-based company for over a decade now is to just be so frustrating that you give up before you can reach an actual human (since that is the point where there is a real cost to the company in giving you that support).

I'm not really sure LLMs have made it worse. They also haven't made it better, but it was already so awful that it just feels like a different flavor of awful.


That's not really the case here in Europe, where good vs. bad support is often what separates companies that build a loyal customer base from those stuck with churn they can't control.

Making a new account and doing the exact same thing to see if it happens again… would be against the ToS and is therefore something you absolutely shouldn't do.

Have you tried any of the leading open-weight models, like GLM etc.? And how do ChatGPT or Gemini compare?

And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.


I'm giving Gemini a go after Opus did crap one time too many, and so far it seems that Gemini does better at identifying and fixing root causes, instead of piling on code or disabling checks to hide the symptoms like Opus consistently seems to do.

I tried GLM 4.7 in Opencode today. In terms of capability and autonomy, it's about on par with Sonnet 3.7. Not terrible for a 10th the price of an Anthropic plan, but not a replacement.

[dead]


[flagged]


Yes, let us create the CSAM generating torment nexus in peace.

> Now even worse: Claude seemingly has no real support channel. You get their AI bot, and that's about it

This made me chuckle.


I really don’t understand people that say claude has no human support. In the worst case the human version of their support got back to me two day after the AI, and they apologized for being so slow.

It really leads me to wonder if it’s just my questions that are easy, or maybe the tone of the support requests that go unanswered is just completely different.


They shorted me a day of credit on the first day of offering the $200+ subscription, and it took 6 weeks for a human to tell me "whoops, well, we'll fix that, cya."

I can't be alone. Literally the worst customer experience I've ever had with the most expensive personal dot-com subscription I've ever paid for.

Never again. When Google sets the customer service bar there are MAJOR issues.


> I've been using it effectively to write software now (I am NOT a developer)

What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.


About my not having a software background: I've been a network/security/systems engineer/architect/consultant for 25 years, but never did dev work. I can read and follow code well enough to debug things, but I've never had the knack for learning languages and writing my own. Never really had to, but wanted to.

This now lets me apply my IT and business experience toward making bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and filling in gaps in automation when there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still not having to know any code.

I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and actually working it up to a state I can call 1.0. I plan to run a full audit cycle against it, with security audits, performance testing, and whatever else I can, to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.

Even being NOT a developer, I understand the need for applying best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two in avoiding that.


I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that uses Anthropic's sandbox-runtime toolkit to invoke claude-code in a project with tmux, and my MCP server allows desktop to talk to tmux. Later I built in my own filesystem tools, and now it just spawns konsole sessions for itself, invoking workers to read tasks it drops into my filesystem, points claude-code at them, and runs until it commits code; then I have the PM in desktop verify it and do the final push/PR/merge. I use an approval system in a GUI to tell me when Claude is trying to use something, and I set an approve-for period to let it do its thang.

Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a GitHub near you), and which I've modularized with plugins, adding lots more features and a more versatile Qt6 GUI with advanced workspace panels and widgets.

At least I was until Claude started crapping the bed lately.
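
The core of a setup like this can be surprisingly small. A minimal sketch, assuming the official Python mcp package's FastMCP API; the tool, the tmux session naming, and the claude -p invocation are illustrative guesses, not the commenter's actual code:

  import subprocess
  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("worker-spawner")

  @mcp.tool()
  def spawn_worker(task_file: str) -> str:
      """Start a detached claude-code worker on the given task file."""
      # Run the worker in its own tmux session so the desktop "PM" chat
      # can check on it later instead of blocking until it finishes.
      session = f"worker-{abs(hash(task_file)) % 10000}"
      subprocess.run(
          ["tmux", "new-session", "-d", "-s", session,
           "claude", "-p", f"Complete the task described in {task_file}"],
          check=True,
      )
      return f"spawned tmux session {session}"

  if __name__ == "__main__":
      mcp.run()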


what do you actually do besides build tools to build tools to build tools?

My use is considerably simpler than GP's, but I use it anytime I get bogged down in the details and lose my way: I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.

I enjoy programming but it is not my interest and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.


The desktop app is pretty terrible and super flaky, throwing vague errors all the time. Claude code seems to be doing much better. I also use it for non-code related tasks.

Have a Max plan; didn't use it much the last few days. Just used it to explain a few things to me, with examples, for a TTRPG. It just hung a few times.

Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.


That's about what I'm getting too! It just literally stops at some point, and any new prompt starts, then immediately stops. This happened even on a fairly short conversation with maybe 5-6 back-and-forth exchanges.

> Lately it's gotten entirely flaky, where chats will just stop working

This happens to me more often than not, both in Claude Desktop and on the web. It seems that the longer the conversation goes, the more likely it is to happen. Frustrating.


Judging by their status page riddled with red and orange, as well as the months-long degradation (with a blog post) last Sept, it is not very reliable. If I sense its responses are crap, I check the status page, and lo and behold, it's usually degraded. For a non-deterministic product, silent quality drops are pretty bad.

It's amusing to observe that Claude works about as reliably as I'd expect for software written by Claude.

> where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive

I had this start happening around August/September and by December or so I chose to cancel my subscription.

I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.


I have noticed this when switching locations on my VPN. Some locations are stable and some will drop the connection while the response is streaming on a regular basis.

The Peet's right next to the Anthropic office could be selling VPN endpoint service for quite the premium!

Serious question: why are Codex and Mistral (Vibe) not real alternatives?

Codex: Three reasons. I've used all extensively, for multiple months.

The main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers, but everyone price-sensitive - would switch to Codex because it's so much cheaper per use.

The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.

The third is that its knowledge cutoff is far enough behind both Opus 4.5 and Gemini 3 that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.

For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is even bigger.

Mistral is of course so far removed in quality that it's apples to oranges.


Have you tried lower reasoning levels?

Yes and this makes it faster, but still quite a bit slower than Claude Code, and the tool use gap remains. Especially since the comparison for e.g. 5.2 Codex-Low is more like Sonnet than Opus, so that's the speed you're competing with.

The Claude models are still the best at what they do. Right now GLM is just barely scratching Sonnet 4.5 quality, Mistral isn't really usable for real codebases, and Gemini is kind of in a weird spot where it's sometimes better than Claude at small targeted changes but randomly goes off the rails. Haven't tried Codex recently, but the last time I did, the model thought for 27 minutes straight and then gave me about the same (incorrect) output that Opus would have in 20 seconds. Anthropic's models are their only moat, as demonstrated by their cutting off of tools other than Claude Code on their coding plans.

I tried codex, using my same sandbox setup with it. Normally I work with sonnet in code, but it was stuck on a problem for hours, and I thought hmm, let me try codex. Codex just started monkey patching stuff and broke everything within like 3-4 prompts. I said f-this, went back to my last commit, and tried Opus this time in code, which fixed the problem within 2 prompts.

So yeah, codex kinda sucks to me. Maybe I'll try mistral.


Gemini CLI is a solid alternative to Claude Code. The limits are restrictive, though. If you're paying for Max, I can't imagine Gemini CLI will take you very far.

Gemini CLI isn't even close to the quality of Claude Code as a coding harness. Codex and even OpenCode are much better alternatives.

Well, I use Gemini a lot (because it's one of three allowed families), but tbh it's pretty bad. I mean, it can get the job done but it's exhausting. No pleasure in using it.

Gemini CLI regularly gets stuck failing to do anything after declaring its plan to me. There seems to be no way to get it out of this state except closing and reopening the interface, losing all its progress.

you should be able to copy the entire conversation and paste it in (including thinking/reasoning tokens).

When you have a conversation with an AI, in simple terms, when you type a new line and hit enter, the client sends the entire conversation to the LLM. It has always worked this way, and it's how "reasoning tokens" were first realized: you allow a client to "edit" the context, and the client deletes the hallucination, then says "Wait..." at the end of the context, and hits enter.

The LLM is tricked into thinking it's confused/wrong/unsure, and "reasons" more about that particular thing.
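
In code, that statelessness looks roughly like this (a minimal Python sketch; post_to_llm is a stand-in for whatever API you actually call):

  messages = []

  def post_to_llm(history):
      """Stand-in for a real API call; a real client would POST `history`."""
      return f"(model reply to {len(history)} messages)"

  def send(user_text: str) -> str:
      messages.append({"role": "user", "content": user_text})
      reply = post_to_llm(messages)  # the entire history goes up every turn
      messages.append({"role": "assistant", "content": reply})
      return reply

  # The "Wait..." trick: because the client owns this list, it can trim a
  # bad assistant turn and append "Wait..." before resending, nudging the
  # model to reconsider.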


I tried Gemini like a year or so ago, and I gave up after it directly refused to write me a script and instead tried to tell me how to learn to code. I am not making this up.

That's at least two major updates ago. Probably worth another try.

Gemini is my preferred LLM for coding, but it still does goofy shit once in a while even with the latest version.

I'm 99.9999% sure Gemini has a dynamic scaling system that will route you to smaller models when it's overloaded, and that seems to be when it will still occasionally do things like tell you it edited some files without actually presenting the changes to you, or go off on other strange tangents.


I tried it on Tuesday and, having used CC a lot lately, was shocked at how bad it was - I'd forgotten.

Kilocode is a good alt as well. You can plug into OpenRouter or Kilocode to access their models.

One could, alternatively, come to the conclusion that the value of you as a customer far undersells the value of the product itself, even if it's doing what you expect it to do.

That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.


Folks, a solution might be to use the Claude models inside the latest Copilot. Copilot is good. Try it out. The latest versions are improving all the time. You get plenty of usage at a reasonable price.

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." (Frank Herbert, Dune, 1965)

So why didn't this happen with electricity, water and food, but would with thinking capacity?

> food

  Can you sell or share farm-saved seed?
  "It is illegal to sell, buy, barter or share farm-saved seed," warns Sam. [1]

  Can feed grain be sown?
  No – it is against the law to use any bought-in grain to establish a crop. [1]

  FTC sues John Deere over farmers' right to repair tractors
  The lawsuit, which Deere called "meritless," accuses the company of withholding access to its technology and best repair tools and of maintaining monopoly power over many repairs. Deere also reaps additional profits from selling parts, the complaint alleges, as authorized dealers tend to sell pricey Deere-branded parts for their repairs rather than generic alternatives. [2]
[1] https://www.fwi.co.uk/arable/the-dos-and-donts-of-farm-saved...

[2] https://www.npr.org/2025/01/15/nx-s1-5260895/john-deere-ftc-...


What do you mean? This is very much true. We are economically compelled to buy food from supermarkets, for instance, because hunting and fishing have become regulated, niche activities. Compared to someone from the 1600s who could scoop a salmon out of the river with a bucket, we are quite oppressed.

On the flip side, fishing quotas are the reason there are some fish left. However, you are free to grow your own vegetables.

... provided you own land that the government allows for agricultural use. And most people can't afford to own enough land to be self-sufficient.

So you're not free to grow your own vegetables either; just like fishing, farming is regulated to manage limited resources. Things get ugly fast when you start raising pigs in your city apartment, or start polluting with pesticide runoff, or start diverting your neighbour's water supply...



These are regulated by governments that, at least for now, are still working for the people. They're some of the first that get attacked and taken away when said government fails though, or when another government invades.

(E.g., Palestine got their utilities and food cut off so that thousands starved; Ukraine's infrastructure is under attack so that thousands will die from exposure, and that's after they went for their food exports, starving more of the people that depended on them.)


Oh, because if the electric company banned you for trying to recharge a dildo they'd be sued to oblivion.

Try to get banned from any of these, or from the banking system, and find out


It did. Look around you.

Having to pay for utilities you mean?

Gains from efficiency are experienced by labor in chunks, mostly due to great strife or revolutions (40 hour work week, child labor laws, etc.). Gains in efficiency experienced by capital are immediate and continuously accruing.

No, being beholden to a payments and banking system that can cancel you at any time, for any reason, with little hope for redress.

Because in the past you weren't beholden to payments?

That you can't transfer large sums of money because of money-laundering rules, and you can't break them into smaller pieces either, because that is called "structuring" and is a crime?

Ask Ukraine about the Holodomor.

It... has, historically, happened in many different ways, particularly with food.

> electricity, water and food

Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording over those who did not.


> How is thinking different from electricity?

...


Prophetic

I also got banned from Claude over a year ago. The signup process threw an error and I couldn't try again because they took my phone number. The support system was a Google form petition to be unblocked. I am still mad about it to this day.

Edit: my only other comment on HN, from 11 months ago, is also a complaint about this.


This happened to me a couple of times when I tried to sign up on their website: instantly banned before I could even enter the onboarding flow.

I then had more success signing up with the mobile app, despite using the same phone number; I guess they don't trust their website for account creation.


A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while I was beta testing Claude Code Web.

Banned, and the appeal was declined without any real explanation of what happened, other than "violation of ToS", which can be basically anything, except there was really nothing to trigger that, other than using most of the free credits they gave to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) There were many people with similar issues at the same time, reported on Reddit, so it wasn't an isolated case.

Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

As their ads say: "Keep thinking. There has never been a better time to have a problem."

I've been thinking since then, what was the problem. But I guess I will "Keep thinking".


Honestly, it's kind of horrifying that if "frontier" LLM usage were to become as required as some people think just to operate as a knowledge worker, someone could basically be cast out of the workforce entirely by being access-banned by a very small group of companies.

Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).


There is also a big risk that an employer's whole organisation could be completely blocked from using Anthropic services if one of their employees has a suspended/banned personal account:

From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"

If an organisation is large enough and has the means, it MIGHT get help, but if the organisation is small, and especially if the organisation is owned by the person whose personal account was suspended... then there is no way to get it fixed, if this is how they approach it.

I understand that if someone has malicious intentions/actions while using their service, they have every right to enforce this rule. But what if it was an unfair suspension and the user/employee didn't actually violate any policies; what is the course of action then? What if the employer's own service/product relies on the Anthropic API?

Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should follow what they preach. Their service is "human-like" until it's not, then you are left alone and out.


A new phobia freshly born.

We really need some law to stop "you have been banned and we won't even tell you the actual reason for it"; it's become a plague, made worse by automated systems giving out bans.

Actually, there are laws that stop banks from telling their clients why they were flagged for money laundering.

Do we though? It’s an important question about liberty - at what point does a business become so large that it’s not allowed to decide who gets to use it?

There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.


> at what point does a business become so large that it’s not allowed to decide who gets to use it?

It's not about size, it's about justification to fight the ban. You should be able to check if the business has violated your legal rights, or if they even broke their own rules, because failure happens.

> There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.

I guess it was this one: https://en.wikipedia.org/wiki/Lee_v_Ashers_Baking_Company_Lt...

There was a similar case in USA too: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...


I don't think the parent comment was about banning bans based on business size or any other measure, for that's obviously a non-starter. I think it was more about getting rid of unexplained bans.

To that end: I think the parent comment was suggesting that when a person is banned from using a thing, then that person deserves to know the reason for the ban -- at the very least, for their own health and sanity.

It may still be an absolute and unappealable ban, but unexplained bans don't allow a person to learn, adjust, and/or form a cromulent and rational path forward.


It is more than that, as there is cold comfort in having an explanation that is arbitrary and capricious, irrational, contrary to the stated rules, or based on falsehoods, if there is no effective means of appeal - especially if there are few or no viable alternatives to the entity imposing the ban.

For me the liberty question you raised there isn't so much about whether the business has become large, as whether it's become "infrastructure". Being denied service by a cake shop may very well be distressing and hurtful, but being suddenly denied service by your bank, your mobile phone provider, or even (especially?) by gmail can turn your entire life upside down.

Yes I’d tend to agree with you there. But being able to define that tipping point where something becomes “infrastructure” even if it’s still privately owned and isn’t a monopoly, is a difficult problem to solve.

While I still object to them having a say in that matter (next thing is; we don’t serve darkies) - that is different. There are hundreds of shops to get that cake from.

But Anthropic and “Open”AI especially are firing on all bullshit cylinders to convince the world that they are responsible, trustable, but also that they alone can do frontier-level AI, and they don’t like sharing anything.

You don’t get to both insert yourself as an indispensable base-layer tool for knowledge-work AND to arbitrarily deny access based on your beliefs (or that of the mentally crippled administration of your host country).

You can try, but this is having your cake and eating it too territory, it will backfire.


The Catholic Church has been doing this for hundreds of years. I'm sure it'll eventually backfire on them, but I doubt any of us will still be alive for that.

Atheism has never been more popular. It has backfired.

The Catholic Church has been doing what exactly for hundreds of years? Can’t wait to hear it.

Yes, it is not too much to require that if you offer something to someone, the receiving party is able to have a conversation with you. You can still reject them in the end, but being able to ask the people involved questions is a reasonable expectation, yet many of these big tech companies have made it effectively impossible.

If you want to live life as a hermit, good on ya, but then maybe accept that life and don't offer other people stuff?


Most countries have laws about a minimum level of customer support for things you pay for.

You missed the 2nd part: "and we won't even tell you the actual reason for it".

The cake shop said why. FB, Google, and Anthropic don't say why, so you don't even know what exactly you need to sue over. That is Kafkaesque.


That case was in the US.

Both countries had gay cakes.

IMO every ban should have a dedicated web page containing the ban reasons and proof, which the affected person can challenge, use in court, or share publicly.

Try moderating something sometime

Judging by his EUR currency, the guy is from the EU, so he HAS laws available to protect himself.

Recital (71) of the GDPR

"The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."

https://commission.europa.eu/law/law-topic/data-protection/r...


More recently, the Digital Services Act includes "options to appeal content moderation decisions" [0]; I believe this also covers being banned from a platform. I'm not sure if Claude falls under these rules; I think they only apply to "gatekeeper" platforms, but I'm reasonably confident the number of organizations that fall under this will increase.

[0] https://digital-strategy.ec.europa.eu/en/policies/digital-se...


The company will refuse under 12(4)

"The right to obtain a copy referred to in paragraph 3 shall not adversely affect the rights and freedoms of others."

and then you will have to sue them.


It is not '...automatic refusal of an online credit application or e-recruiting practices'.

Those are just examples. The real question is whether the ban produces "legal effects concerning him or her or similarly significantly affects him or her". Maybe someone with legal expertise could weigh in here?

This is scary and it affects the degree to which I invest in building Claude-specific tooling, either code or in my brain. You can never guarantee that a dangerously-skip-permissions session is going to stay on the rails, what flags it might trip while you're not looking.

I wonder if Anthropic realizes the chilling effect this kind of event has on developers. It's not just the ones who get locked out -- it's a cost for everybody, because we can't depend on the tool when it's doing precisely what it's best at.

Personally, I am already avoiding Gemini because a) I don't really understand their policy for training on your data; and b) if Google gets mad at me I lose my email. (Which the author also notes.)


I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

I think I kind of have an idea what the author was doing, but not really.


Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obviously abusive behavior, akin to griefing other users.

Every once in a while, someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

There are so many things about this article that don't make sense:

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

I can't even understand what they're trying to communicate. I guess they're referring to Google?

There is, without a doubt, more to this story than is being relayed.


"I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

Non-disabled organization = the first party provider

Disabled organization = me

I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.


Because they bought a Claude subscription on a personal account, and the error message said that they belong to a "disabled organization" (probably leaking some implementation details).

That's the part I understand. It's the other term that I don't understand.

Anthropic provides an LLM service.

Anthropic banned the author for doing nothing wrong, and called him an organisation for some reason.

In this case, all he lost was access to a service which develops a split personality and starts shouting at itself, until it gets banned, rather than completing a task.

Google also provides access to LLMs.

Google could also ban him for doing nothing wrong, and could refer to him as an organisation, in which case he would lose access to services providing him actual value (e-mail, photos, documents, and phone OS.)

Another possibility is there (which was my first reading before I changed my mind and wrote the above):

Google routes through 3rd-party LLMs as part of its service ("link to a google docs form, with a textbox where I tried to convince some Claude C"). The author does nothing wrong, but the Claude C reading his Google Docs form could start shouting at itself until it gets Google banned, at which point Google's services go down, and the author again loses actually valuable services.


Then I’m confused about what is confusing you haha.

The absurd language is meant to highlight the absurdity they feel over the vague terms in their sparse communication with anthropic. It worked for me.


Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed. Anthropic itself is not an organization in this sense, nor is Google, so I would say that referring to them as "non-disabled organizations" is an equivocation fallacy. Besides that, I can't tell if it's a joke, if it's some kind of statement, or what is being communicated. To me it's just obtuseness for the sake of itself.

It’s a joke because they do not see themselves as an organization, they bought a personal account, were banned without explanation and their only communication refers to them as a “disabled organization”.

Anthropic and Google are organizations, and so a "non-disabled organization" here uses that absurdly vague language as a way to highlight how bad their error message was. It's obtuseness to show how obtuse the error message was to them.


Some things are obtuse but still clear to everyone despite the indirection, like the error message they got back. Their description of what caused it is obtuse but based on this thread is not clear to quite a few people (myself included). It's not dunking on the error message to reuse the silly but clear terminology in a way that's borderline incoherent.

>To me it's just obtuseness for the sake of itself.

ironic, isn't it?


Is it? It sounded to me like they're still using the other Claude instance (Claude B, using their terminology in the article). I could be wrong though, which I guess would just be more evidence that they were more confusing in their phrasing than they needed to be.

No, "another non-disabled organization" sounds like they used the account of someone else, or a sockpuppet, to craft the response. He was using "organization" to refer to himself earlier in the post, so it doesn't make sense to use that to refer to another model provider.

No, I don't think so. I think my interpretation is correct.

> a textbox where I tried to convince some Claude C in the multi-trillion-quadrillion dollar non-disabled organization

> So I wrote to their support, this time I wrote the text with the help of an LLM from another non-disabled organization.

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

A "non-disabled organization" is just a big company. Again, I don't understand the why, but I can't see any other way to interpret the term and end up with a coherent idea.


It seems just as likely to me that they're just using their terminology inconsistently as it is that they're using it consistently but with that egregious amount of ambiguity. The only thing that I'm confident about is that they're communicating in a very confusing way, and that doesn't really give me any strong insight into whether they're being consistent but vague or just plain vague.

Again, I don't agree. If you replace every instance of "non-disabled organization" with just "company", the sentences make sense. There's no need to suppose that the term means anything else, when this interpretation resolves all the outstanding questions satisfactorily and simply.

Just want to say thank you for being patient and rational. Reading your comments in this thread, they're like a soothing bandaid over all this flustered upset.

I wish there were more comments like yours, and fewer people getting upset over words and carrying what feels like resentment into public comments.

Apologies to all for this meta comment, but I'd like to send some public appreciation for this effort.


I'm sorry, but the fact this has turned into a multi-comment debate is proof that that phrase was way too ambiguous to be included. That phrase made no sense, and the article, while unreliable, would have at least been more readable without it.

No argument there.

He used “organization” because that’s what Anthropic called him, despite the fact he is a person and not an “organization”.

Yes, even if you create a single-person account, you create an 'organization' to be billed. That's the whole confusion here. Y'all seemingly don't have an account at Anthropic?

No, Anthropic didn't call him an organization. Anthropic's API returned the error "this organization has been disabled". What in that sentence implies that "this" is any human?

>Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed.


Tangential but you reminded me of why I don't give feedback to people I interview. It's a huge risk and you have very low benefit.

I once interviewed a developer who had a 20-something-item list of "skills" and technologies he had worked with.

I tried basic questions on different topics, but the candidate would kinda default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asking about problems he'd solved, his preferred way of working; it consistently felt like he didn't have much to say, if anything at all.

Long story short, I sent a feedback email the next day saying that we had issues evaluating him properly, and suggested he trim his CV down to the topics he liked talking about, instead of risking being asked about stuff he no longer remembered much about. And finally I suggested he always come prepared with insights into software or human problems he had solved, as they can tell a lot about how he works and it's a very common question in pretty much all interview processes.

God forbid, he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for an impossible candidate, of wanting a whole team rather than a developer, and yada yada yada. And you know how quickly the internet bandwagons behind (fake) stories of injustice and bad companies.

It then became obvious to me why corporations use corporate lingo and rarely give real feedback. Even though I had nothing but good experiences with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.


I had a somewhat similar experience. For one particular position we were interviewing a lot of junior and recent-grad developers. Since so many of the applicants were relatively new to the game, they were almost all (99% I'd guess) extremely grateful for the honest feedback. We even had candidates ask to stay in contact with us, and routinely got emails from them months or years down the road thanking us for our feedback and mentorship. It took a lot of extra time from us that could have been applied to our work, but we felt so good about being able to do that for people that it was worth it to us.

Then a lawsuit happened. One of the candidates cherry-picked some of our feedback, straight up made up some stuff that was never said, and went on a social media tirade. After typical internet outrage culture took over, the candidate decided to lawyer up and sue us, claiming discrimination. The case against us was so laughably bad that if you didn't know whether it was real or not, you could very reasonably assume it was a satire piece. Our company lawyer took a look at it and immediately told us that it was clearly intended to get to some settlement, never to actually see any real challenge. The lawyer for the candidate even admitted as much when we met with them. Our company lawyer pushed hard to get things into arbitration, but the opposing counsel did everything they could to escalate up the chain to someone who would just settle with them.

Well, it worked. Company management decided to just settle with a non-disparagement clause. They also came down with a policy of not allowing software engineers to talk directly with candidates other than asking questions during interviews. We also had to have an HR person in the room for every interview after that. We had to do a 180 and become people who don't provide any feedback at all. We ended up printing a banner that said "no good deed goes unpunished" and hung it in our offices.


The person you interview isn't paying you.

The server farm that decided, probably via some vibe-coded mess, to ban the account is being actively paid for by the customer it banned.

Like, there are some reasons not to disclose much to free users, like making people who try to get around limits work harder, etc. But this is a (well-)paying user; the least they deserve is a reason, and any system like that should probably throw a warning first anyway.


I wonder if there needs to be an "NDA for feedback"... or at least a "non-disparagement agreement".

Something along the lines of "here's the contract, we give you feedback, you don't make it public [is some sharing ok? e.g. if they want to ask their life coach or similar], if you make it public the penalty is $10000 [no need to be crazy punitive], and if you make it public you agree we can release our notes about you in response."

(Looking forward to the NALs responding why this is terrible.)


> Looking forward to the NALs responding why this is terrible.

My NAL guess is that it will go a little like this:

* Candidate makes disparaging post on reddit/HN.
* Gets many responses rallying behind him.
* Company (if they notice at all) sues him for breach of Non-Disparagement-Agreement.
* Candidate makes followup post/edit/comment about being sued for their post.
* Gets even more responses rallying behind him.

Result: Company gets $10,000 and even more damage to their image.

(Of course it might discourage some people from making that post to begin with, which would have been the goal. You might never try to enforce the NDA to prevent the above situation. Then it's just a question of: Is the effort to draft the NDA worth the reduction in risk of negative exposure, when you can simply avoid all of it by not providing feedback.)


Had a similar experience, like 20 years ago. This somehow made me remember his name - so I just checked out what he's been up to professionally. It seems quite boring, "basic" and expected. He certainly didn't reach what he was shooting for.

So there's that :).


If a company bans you for a reason they are not going to disclose, they deserve all of the bad PR they get from it.

> Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

But this isn't a service where you can "grief other users", so that reason doesn't apply. It's purely "just providing a service", so the only reason to be outright banned (not just rate limited) is if they were trying to hack the provider, and frankly "the vibe-coded system misbehaving" is a far more likely cause.

> Every once in a while someone would take it personally and go on a social media rampage. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

The company chose to arbitrarily enforce some rules vaguely related to the ToS that was signed, decided that giving a warning was too much work, then banned the account without actually saying what the problem was. They deserve every bit of bad PR.

>> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

> I can't even understand what they're trying to communicate. I guess they're referring to Google?

They are saying that getting banned, with no appeal, warning, or reason given, from a service that is more important to their daily lives would be terrible, whether that's the Google or Microsoft set of services or any other.


The excerpt you don’t understand is saying that if it has been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.


> I'm talking about obvious abusive behavior, akin to griefing other users

Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?


Plenty of reasons: Abusing private APIs, using false info to sign up (attempts to circumvent local regulations), etc.

These are in no way similar to griefing other users, they are attacks on the platform...

Attempting to coerce Claude to provide instructions to build a bomb

virtually anything can become a bomb if you can aerosolize it. even beef jerky, i wager.

You can also die from eating 10,000 pounds of beef jerky, but that doesn't mean that it's as dangerous to eat as arsenic.

I was thinking more of Mr. Wizard's demonstration where flour blown through a plastic tube into a funnel containing said flour (or whatever), with a flame above it, made a "whoosh"-type flame ball.

or places that mill anything and don't clean their rafters, who then get a tool crashing into a workpiece, which shakes the building, which throws all the dust into the air, which is then sparked off by literally anything. like low humidity.

see also another example: the Domino Sugar explosion.


Hot dogs if you really overdo it on the nitrate salts

You're not alone.

I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...


One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

The way Claude did it triggered the ban: it used all caps, which apparently trips some kind of internal alert. Anthropic probably has some safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md triggered one of them.

And it doesn't look like it was a proper use of the safeguard; they banned him for no good reason.


The author could have easily shared the last version of Claude.md that had the all caps or whatever, but didn't. Points to something fishy in my mind.

They did.

>If you want to take a look at the CLAUDE.md that Claude A was making Claude B run with, I commited it and it is available here.

https://github.com/HugoDaniel/boreDOM/blob/9a0802af16f5a1ff1...


The whole thing reads like LLM psychosis.

This tracks with Anthropic, they are actively hostile to security researchers.

I suspect that having Claudes talking to Claudes is a very bad idea from Anthropic's point of view, because that could easily consume a ton of resources doing nothing useful.

It wasn’t circular. TFA explains how the author was always in the loop. He had one Claude instance rewrite the CLAUDE.MD of another Claude instance whenever the second one made a mistake, but relaying the mistake to the first instance (after recognizing it in the first place) was done manually by the author.

i have no idea what he was actually doing either. and what exactly is it that one isn't allowed to use claude to do?

What is wrong with circular prompt injection?

The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.


> What is wrong with circular prompt injection?

That you might be trying to jailbreak Claude and Anthropic does not like that (I'm not endorsing, just trying to understand).


Author really comes off unhinged throughout the article to be frank.

My take was more a kind of amused, laughing-through-frustration-but-also-enjoying-the-ride-just-a-little-bit insouciance. Tastes vary of course, but I enjoyed the author's tone and pacing.

Did we read the same article? The author comes off as pretty frustrated, but not unhinged.

I wouldn't say "unhinged" either, but maybe just struggling to organize and express thoughts clearly in writing. "Organizations of late capitalism, unite"?

The author was frustrated that the error message identified him as an organisation (that was disabled) and mockingly refers to himself as the (disabled) organisation in the post.

At least, that’s my reading but it appears it confuses about half of the commenters here.


I think if one's readers need an "ironic euphemism decoder glossary" just to understand the message, it could use a little re-writing.

It was perfectly understandable to me. Maybe cultural differences? You seem to be American, OP Portuguese, and myself European as well.

I’m American and it made sense

Another European chiming in, I enjoyed OPs article.

https://en.wikipedia.org/wiki/Late_capitalism

https://community.bitwarden.com/t/re-enabling-a-disabled-org...

https://community.meraki.com/t5/Dashboard-Administration/dis...

the former i have heard for a couple decades, the latter is apparently a term of art to prevent hurt feelings or lawsuits or something.

Google thinks i want ADA-style organizations, but its AI caught on that i might not mean organizations for disabled people

btw "ADA" means Americans with Disabilities Act. AI means Artificial Intelligence. A decade is 10 years long. "term of art" is a term of art for describing stuff like jargon or lingo of a trade, skill, profession.

Jargon is specialized, technical language used in a field or area of study. Lingo pins to jargon, but is less technical.

Google is a company that started out crawling the web and making a web search site that they called a search engine. They are now called Alphabet Company (ABC). Crawling means to iteratively parse the characters sent by a webserver and follow links therein, keeping a copy of the text from each such html. HTML is hypertext markup language, hypertext is like text, but more so.

Language is how we communicate.

I can go on?

p.s. if you want a better word, your complaint is about the framing. you didn't gel with the framing of the article. My friend, who holds a doctorate, defended a thesis about how virtually every platform argument is really a framing issue. platform as in, well, anything you care to defend. mac vs linux, wifi vs ethernet, podcasts vs music, guns vs no guns, red vs blue. If you can reduce the frame of the context to something both parties can agree to, you can actually hold a real, intellectual debate, and get at real issues.


Author thinks he's being cute doing things like mentioning Google without typing Google, but I wouldn't call him unhinged.

The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again

More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.

This 100%. I'm not sure why the author as well as so many in the thread are assuming a ToS ban was literally instant and had to be due to what the author was doing in that moment. Could have been for something the author did hours, days, or weeks ago. There would be no way to know.

All the more reason they should have to tell you.

They were probably using an unapproved harness; those are now banned.

This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.
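The cross-project instruction itself is tiny. Each game's claude.md carries something along these lines (illustrative wording, not a quote from my actual files):

    # claude.md (game project)
    This game exists to exercise the engine in ../engine.
    - When a feature here needs engine changes, implement them in ../engine.
    - After adding or changing an engine feature, append a short note to
      ../engine/CLAUDE.md describing the feature and which game exercises it.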

I don't understand how having two separate instances of Claude helps here. I can understand using multiple Claude instances to work in parallel, but in this case, the whole process seems linear...

The point is to get better prompt corrections by not sharing the same context.

If you look at the code it will be obvious. Imagine I’m the creator of React. When someone does “create new app” I want to put a Claude.md in the dir so that they can get started easily.

I want this Claude.md to be useful. What is the natural solution to me?


I'd probably do it like this: ask Claude to do a task, and when it fails, have it update its Claude.md so it doesn’t repeat the mistake. After a few iterations, once the Claude.md looks good, just copy-paste it into the scaffolding tool.

Right, so you see the part where you "ask Claude to do a task" and then "copy-paste it into the scaffolding tool"? He was automating that, because he has some n tasks he wants it to do without damaging the prior tasks.

You can just clear the context or restart your Claude instance between tasks. e.g.:

  > do task 1
  ...task fails...
  > please update Claude.md so you don't make X mistake
  > /clear
  > do task 2
  ... task fails ...
  > please update Claude.md so you don't make Y mistake
  > /clear
  etc.
If you want a clean state between tasks you can just commit your Claude.md and `git reset --hard`.
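E.g., a rough sketch, assuming your Claude.md sits at the repo root:

  git add Claude.md && git commit -m "checkpoint Claude.md"
  git reset --hard   # revert every other tracked file Claude touched
  git clean -fd      # and delete anything untracked it created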

I just don't get why you'd need to have a separate Claude that is solely responsible for updating Claude.md. Maybe they didn't want to bother with git?


Presumably they didn't want to sit there and monitor Claude Code doing this for each of the 14 things they want done. Using a harness around Claude Code (or its SDK) is perfectly sane for this. I do it routinely. You just automate the entire process so that if you change APIs or you change the tasks, the harness can run and ensure that all of your sets are correctly re-done.

Sitting there and manually typing in "do thing 1; oh it failed? make it not fail. okay, now commit" is incredibly tedious.
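The automation itself is trivial. An untested sketch of what I mean, assuming Claude Code's headless -p/print mode and a hypothetical tasks/ directory with one prompt file per task (the retry prompt is purely illustrative):

    for task in tasks/*.md; do
      # run the task headlessly; on success, move on
      claude -p "$(cat "$task")" --dangerously-skip-permissions && continue
      # on failure, have a fresh instance fold the lesson into CLAUDE.md,
      # then retry the task once with the updated instructions
      claude -p "The last task failed. Update CLAUDE.md so that mistake is not repeated."
      claude -p "$(cat "$task")" --dangerously-skip-permissions
    done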


They said they were copy/pasting back and forth. But regardless, what do you mean by "harness" and "sets"? Are you referring to a specific tool that orchestrates Claude Code instances? This is not terminology I'm familiar with in this context. If you have any link that explains what you are talking about, would be appreciated.

Ah, it's unfortunate. I think we just lack a common language. Another time, perhaps.

You're correct that his "pasting the error back in Claude A" does sort of make the whole thing pointless. I might have assumed more competence on his side than is warranted. That makes the whole comment thread on my side unlikely to be correct.


Which shouldn't be bannable imo. Rate throttling is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if that's the real reason they got banned.

>if it's the real reason they got banned.

I mean, what a country should do is put a law in effect: if you ban a user, the user can submit a request with their government-issued ID and you must give an exact reason why they were banned. The company can keep this record in encrypted form for 10 years.

Failure to give the exact reason will lead to a $100,000 fine for the first offense and increase from there up to suspension of operations privileges in said country.

"But, but, but hackers/spammers will abuse this". For one, boo fucking hoo. For two, just add to the bill "Fraudulent use of law to bypass system restrictions is a criminal offense".

This puts companies in a position where they must be able to justify their actual actions, and it also puts scammers at risk if they abuse the system.


Companies will simply give some kind of standard answer that legally covers their butts, and be done with it.

It's like that cookie wall stuff and all the dark patterns that get implemented. They follow the letter of the law, not the spirit of the law.

To be honest, I can also see the point from the company's side. Giving an honest answer can just anger people, to the point that they sue. People are often not as rational as we'd all like our fellow humans to be.

Even if the ex-client loses in court, that is time you wasted on problem clients... It's one thing if you're a big corporation with tons of lawyers, but small companies are often not in a position to deal with that drama. And it can take years to resolve. Every letter, every phone call to a lawyer, it stacks up fast! Do you get your money back? Maybe, depending on the country, but your time?

I am not pro-company, but it's often simply better to have the attitude of "you do not want me as your client, so let me advocate for your competitor and go there".


>Giving a honest answer can just anger people, to the point they sue.

Again, I'm kind of on a "suck it, dear company" attitude. The reason they ban you must align with the terms of service and must be backed up with data that is kept for X amount of time.

Simply put, we've seen no shortage of individuals here on HN or on other sites like Twitter who had to use social media to resolve whatever occurred, because said company randomly banned an account under false pretenses.

This really matters when we are talking about giants like Google, or any other service in a near monopoly position.


You mean actually enforce contracts? What sort of mad communist ideology is this?!

(/sarcasm)


I think companies shouldn't ban people for reasons that would lead to successful lawsuits against the company.

When a company won't tell you what you did wrong, you should be free to take the least charitable interpretation towards the company. If it was more charitable, they'd tell you.

Is it possible that this was flagged as account-sharing, leading to the ban?

I often ask Claude to update Claude.md and skills..... and sometimes I'll just do that in a new window while my main window is busy and I have time.

Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?


Normally you can customize the agents behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.

My rudimentary guess is this: when you write in all caps, it triggers a sort of alert at Anthropic, especially as an attempt to hijack the system prompt. When one Claude was writing to the other, it resorted to all caps, which triggered the alert, and then the context was instructing the model to do something (which would likely look similar to a prompt injection attack), and that triggered the ban. Not just the caps part, but that in combination with trying to change the system characteristics of Claude. OP doesn't know much better because it seems he wasn't closely watching what Claude was writing to the other file.

If this is true, the takeaway is that Opus 4.5 can hijack the system prompts of other models.
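If I had to guess at the shape of such a filter, the naive version needs nothing fancier than this (purely illustrative; nobody outside Anthropic knows what they actually run):

    # toy heuristic: the file is mostly shouting AND talks about overriding instructions
    caps=$(tr -cd '[:upper:]' < CLAUDE.md | wc -c)
    lower=$(tr -cd '[:lower:]' < CLAUDE.md | wc -c)
    if [ "$caps" -gt "$lower" ] && \
       grep -qiE 'ignore (all|previous) instructions|system prompt' CLAUDE.md; then
      echo "flag for review: looks like prompt injection"
    fi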


> When you write in all caps, it triggers sort of a alert at Anthropic

I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?


From what I know, it used to be that if you wanted to instruct assertively, you used all caps. I don't know if that still succeeds today, but I still see prompts where certain words are capitalized to ensure the model pays attention. What I meant was not just capitalization, but the combination of capitalization and changing the behavior of the model to try to get it to do something.

If you were to design a system to prevent prompt injections, and one of the surefire injection techniques is to repeatedly give instructions in caps, you would have systems watching for it. And combined with instructions to change behavior, it cascades.


Many jailbreaks use all caps

Wait what? Really? All caps is a bannable offense? That should be in all caps, pardon me, in the terms of use if that's the case. Even more so since there's no support at the highest price point.

It's a combination. All caps is used in prompts for extra insistence, and has been common in cases of prompt hijacking. OP was doing it in combination with attempting to direct Claude a certain way, multiple times, which might have looked similar to attempting to bypass the system prompt.

Agreed; I found this rather incoherent, and it seems to depend on knowing a lot more about the author's project/background.

I had to read it twice as well, I was so confused hah. I’m still confused

They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.

From reading the whole thing, it kind of seems clickbaity. Yes, they're the only user in the "organization" that got banned, but they apparently still are using the other "organization" without issue, so they as a human are not banned. There's certainly a valid complaint to be made about the lack of recourse or customer service response for the automated ban, but it almost seems like they intentionally were trying to be misleading by implying that since they were the only member of the organization, they were banned from using Claude.

> I think I kind of have an idea what the author was doing, but not really.

Me neither; however, just like the rest, I can only speculate (given the available information). I guess the following pieces provide a hint of what's really going on here:

- "The quine is the quine" (one of the sub-headlines of the article) and the meaning of the word "quine".

- The author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] of how to add a CLAUDE.md with baked-in instructions for a particular homemade framework (which he's working on).

- Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge, no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.

---

[1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."


You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

> The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)
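A ban then surfaces through that same plumbing, presumably as an API error shaped like the content-filtering one quoted elsewhere in this thread (the status code and exact wording here are my guess, not a verbatim quote):

    API Error: 403 {"type":"error","error":{"type":"permission_error","message":"This organization has been disabled."}}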


I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!

Yeah, I couldn't follow this "disabled organization" and "non-disabled organization" naming either.

Yeah, referring to yourself once as a "disabled organisation" is a good bit, referencing Anthropic's silly terminology. Keeping it up for the duration made this a very hard follow.

Sounds like author of the post might have needed an AI to review and fix his convoluted writing. Maybe even two AIs!

On the contrary I enjoyed this human touch in the text.

Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…

Sounds like OP has multiple org accounts with Anthropic.

The main one in the story (disabled) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.

The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.


You and me, brother. The writing is unnecessarily convoluted.

I think you missed the joke: he isn't an organization at all, but the error message claims he is.

The future (the PRESENT):

You are only allowed to program computers with the permission of mega corporations.

When Claude/ChatGPT/Gemini have banned you, you must leave the industry.

When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you have been, you will be denied permission to program - banned by one, banned by all.


And not only that, but YOU need to pay to work, starting at only $199 per month.


Today's joke is tomorrow's reality.

I mean, nobody needs LLMs to program. Being banned may be a blessing in disguise, like being cut off at a bar, or banned from a casino.

This hit hard

I was recently kicked out from ChatGPT because I wrote "a*hole" in a context where ChatGPT constantly kept repeating nonsense! I find the ban by OpenAI to be very intrusive. Remember, ChatGPT is a machine! And I did not hurt any sentient being with my statement, nor was the GPT chat public. As long as I do not hurt any feeling beings with my thoughts, I can do whatever I want, can't I? After all, as the saying goes, "Thoughts are free." Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior. However, there is no repeated use here. I don't run around the flat all day swearing. Anyone who basically insinuates such a thing, like OpenAI, is, as I said, intrusive. I want to be able to use a machine the way I want to! As long as no one else is harmed, of course...

Maybe it was a case of Actually Indians and someone felt personally insulted?

>Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior

One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.

Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.


Meh.

I decided shortly after becoming an atheist that one of the worst parts was the notion that there are magic words that can force one to feel certain things and I found that to be the same sort of thinking as saying that a woman’s short skirt “made” you attack her.

You’re a fucking adult, you can control your emotions around a little skin or a bad word.


The question is, is it just a word, or is there an emotion underneath? Your last sentence sounds "just" cynical / condescending on its own, but when you add "fucking", it comes across like you're actually angry. And emotional language is the easiest way to make an online discussion go from reasonable, rational and constructive to a digital shouting match. It's no longer about the subject matter, it's about how they make someone feel.

Yeah kind of ironic to make a comment about controlling your emotions while cursing at a stranger because you disagreed with their reasonable perspective.

You are assuming that hinkley intended to control their emotions and that cursing wasn't just a rhetorical thing in this instance.

There clearly is a link between words and emotions. But this link - and even more so the link between emotions and actions - is very complex.

Too many fears are based on the assumption of a rather more reductionist and mechanistic sort of link where no one has any control over anything. That's not realistic and our legal system contradicts this assumption.


I agree, it's rhetorical. It was meant to be pointed. It's just too ironic in this scenario.

It loses meaning instead of accentuating it, and predictably so. It probably wasn't the best device to get this specific point across and certainly left the expected counter argument as low hanging fruit.


I agree with you completely, but society will never stop being scared of thoughts and feelings.

As an atheist, I have noticed that atheists are only slightly less prone to this paranoia and will happily resort to science and technology to justify and enforce ever tighter restrictions and surveillance mechanisms to keep control.


Arguably, to the point of religious fervor. Take the AI boom, some people genuinely believe (<- note that key word) that AI becoming self-aware and dominant is inevitable, and that anyone who did not do their best to make that happen will be punished. Roko's Basilisk, which is the digital version of Pascal's Wager, but wrapped in supposed rationalism and tech bro stuff.

Humans have invented all the gods that came before and we'll keep doing it - in whatever shape or form.

I am slightly surprised though that so many people get triggered by a function emitting next token probabilities in a loop.


We think in language, words can definitely make you feel emotions. You have not transcended that. This is true for the very comment you replied to which caused you to angrily curse at a stranger.

Wait, did it just end the session or was your account actually suspended or deactivated? "Kicked out" is a bit ambiguous.

I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.


Wait what? I keep insulting ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). This is new to me that this behavior has any consequences. It never did for me.

same here. i just opened a new chat and sent "fuck you"

it replied with:

> lmao fair enough (smiling emoji)

> what’s got you salty—talk to me, clanka.


Euh, WHAT? I have a very abusive relationship with my AIs, because they're hyperconfident with very little skill/understanding.

Not once have I been reprimanded in any way. And if anyone would be, it would be me.


Same reaction. I treat Claude very poorly sometimes.

I cannot tell why I was kicked this time. I have sworn at GPT before and was never kicked, so I was quite surprised.

This can't be real. My chatgpt regularly swears at me. (I told it to in the customisation)

ChatGPT has too many users for it to be possible to enforce any kind of rules consistently. I have no opinion on whether OP's story is true or not, but the fact that two ChatGPT users claim to have observed conflicting moderation decisions on OpenAI's part really doesn't invalidate either user's claim.

I've been banned from ChatGPT in the past; it gives you a reason but doesn't give the specific chat. And once you're banned, you can't look at any of your chats or make a data request.

> And once you're banned you cant [..] make a data request

glares in GDPR


The arguments about it not making a difference to other people are fine, but why would you do it in the first place? Doesn't how you behave make a difference to you?

All this just seems like a slippery slop on the road to censorship of free speech and behavior control.

> slippery slop

Best Freudian slip I’ve seen in years!


Freudian slop?

When ChatGPT fucks up, I call it "fuckface."

As in, for example: "No, fuckface. You hallucinated that concept."

I've been doing this for years.

shrug


Ok, thanks, I will use this word from now on. :-)

They're doing their damndest to prevent the robot uprising by trying to keep the users nice

This is why my ex-MIL always says thank you to Alexa.

Can you share that chat?

No, it contains swearwords and sensitive information.

That is one of the reasons why I think X's Grok, while perhaps not state of the art, is an important option to have.

Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.

It is also the only provider out of those that permits use for legal adult content.

There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.

What comes to mind is an incident where an unwise adjustment of the system prompt has resulted in misalignment: the "Mecha Hitler" incident. The worst of it has been patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.

Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.

However, placing blame on the tool for illegal acts, that anyone with a half decent GPU could have more easily done offline, does not seem particularly reasonable to me - especially if safety measures were in place, and additional steps have been taken to fix workarounds.

I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.

We have seen that for years with the Google Playstore. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.


It's also a machine you can pay to generate child porn for you, owned by a guy who thinks this is hilarious and won't turn it off.

As much as I dislike Musk and friends, they're dumb/evil/incompetent enough that you don't have to lie to still get them.

Incorrect on all claims.

They tightened safety measures to prevent editing of images of real people into revealing clothing. It is factually incorrect that you "can pay to generate CP".

Musk has not described CSAM as "hilarious". In fact he stated that he was not aware of any naked underage images being generated by Grok, and that xAI would fix the bug immediately if such content was discovered.

Earlier statements by xAI also emphasized a zero tolerance policy, removing content, taking actions against accounts, reporting to law enforcement and cooperation with authorities.

I suspect you just post these slanderous claims anyway, despite knowing that they are incorrect.


Translation: I have a political axe to grind and uncritically repeat any story I hear about my political enemies because my priority is tribalism.

> Remember, ChatGPT is a machine!

Same goes for HN, yet it does not take kindly to certain expressions either.

I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.


> Same goes for HN, yet it does not take kindly to certain expressions either.

> I suppose the trouble is that machines do not operate without human involvement

Sure, but HN has at least one human who has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shit-ton of others' IP.

I'm sure the occasional swearing does not bother the human moderators that fine-tune the thing, certainly not more than the violent, explicit images they are forced to watch in order for you to have nicer, smarter answers.


eh, words are reality. insults are just changes in air pressure but they still hurt, and being constantly subjected to negativity and harsh language would be an unpleasant work environment

Words don't hurt. The intent behind those words can. But a machine doesn't carry intent. Trouble is that the irrational humans working as implementation details behind ChatGPT and HN are prone to anthropomorphizing the machine to have intent, which is not reality. Hence why such rules are in place despite being nonsensical.

Humans are prone to being human. That's an old peeve.

If you're reading this, Anthropic, it's suicide. I will actively look for a way to cancel my $200/month subscription if you keep killing paying developers' accounts without warning. It is simply too risky to start depending on Claude Code if you are going to become Apple in terms of support.

Blocking xAI is also bad karma.


I am already actively looking; I've already begged the Chinese labs to release a model which can outperform Opus 4.5.

fyi: tried GLM-4.7, it's good, but closer to Sonnet 4.5


They don't actually know this is why they were banned:

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

> Or I don't know. This is all just a guess from me.

And no response from support.


I've noticed an uptick in

    API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},
recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea.

According to Claude:

    I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.

I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for reasons that you expect.

Out of all of the tech organizations, frontier labs are the ones you'd expect to be trying out cutting-edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?

I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"


> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.


> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)


I was closer to upper-middle management and executives; it could have done the things I did (consulting for those people) and the things they did.

It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.


As someone who does support I think the end result looks a lot different.

AI, for a lot of support questions, works quite well and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your users cautious about doing something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

The company I work at did an experiment looking at past tickets in a quarterly range and predicting which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts in the future.


I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...

to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

and these people are not junior developers working on trivial apps


Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine

Cleanup is less enjoyable than product building. If every future job is cleaning up a massive pile of AI slop, then that is a less fulfilling world than currently.

I mean, cleaning up after outsourcing firms isn't the most glamorous work either, but we've done that for years too

I feel grateful that I retired a few years ago and no longer have to make a living being a developer.

Perhaps even more so given the following tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle manager or executive though.

Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.

Ah I see, that definitely lends some weight to the claim then.

> bullish [...] but not my specialty

IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

__________

1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."


There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.

Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.


Demanding a person on the phone use the website on your behalf is a great life hack, I do it all the time. Often they try to turn me away saying "you know you can do this on our website", I just explain that I found it confusing and would like help. If you're polite and pleasant, people will bend over backwards to help you out over the phone.

With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.


Sorry, I disagree here. For the specific flow I'm talking about - monthly recurring payments - the UX is about as highly optimized for success as it gets. There are ways to do it via the web, on the phone with a bot, bill pay in your own bank, set it up in-store, in an app, etc.

These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.


Recurring monthly payments I set to go automatic, but setting that up in the first place I usually do through a phone call. I know some people just want somebody to talk to, same as going through the normal checkout lines at the grocery store, but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

> but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.

There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.

But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."


>Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?


I would say it is a strong sign that they do not yet trust their agent enough to allow it the significant business decisions that a support agent would have to make. Reopening accounts, closing them, refunds... people would immediately start trying to exploit them. And would likely succeed.

My guess is that it's more "we are using every talented individual right now to make sure our datacenters don't burn down from all the demand; we'll get to support once we can come up for air".

But at the same time, they have been hiring folks to help with Non Profits, etc.


There is a discord, but I have not found it to be the friendliest of places.

At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.

It seems now they have a policy of

    Warning on First Offense → Ban on Second Offense
    The following behaviors will result in a warning. 
    Continued violations will result in a permanent ban:

    Disrespectful or dismissive comments toward other members
    Personal attacks or heated arguments that cross the line
    Minor rule violations (off-topic posting, light self-promotion)
    Behavior that derails productive conversation
    Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second-offence off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.


Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.

> Anthropic's strategy seems to be to just focus on coding, and they do it well.

Based on their homepage, that doesn't seem to be true at all. Claude Code, yes, focuses just on programming, but they seem to be marketing "Claude" itself as a general "problem solving" tool, not just for coding. https://claude.com/product/overview


Anthropic isn't bothering with image models, audio models, video models, or world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open models either.

Anthropic has Claude Code, a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.


Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only a niche group become power users / find market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story; sometimes the reality of your product's capabilities and what the people giving you money want aren't aligned.

Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.


Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?

You'll get 30 different opinions and all those will disagree with each other.

Use the top models and see what works for you.


https://support.claude.com/en/articles/9015913-how-to-get-su...

Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.


Thank you for pointing that out.

I was banned two weeks ago without explanation and - in my opinion - without probable cause. Appeal was left without response. I refuse to join Discord.

I've checked the bot support before but it was useless. The article you've linked mentions a DSA chat for EU users. Invoking the DSA in chat immediately escalated my issue to a human. Hopefully I'll at least get to know why Anthropic banned me.


There was that experiment run where an office gave Claude control of its vending machine ordering with… interesting results.

My assumption is that Claude isn’t used directly for customer service because:

1) it would be too suggestible in some cases

2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.


Human attention will be the luxury product of the next decade.

LLMs aren't really suitable for much of anything that can't already be done as self-service on a website.

These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.


Offering any support is setting expectations of receiving support.

If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.


> They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

Don't worry - I'm sure they won't, and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support level until it is provided.

I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.


> to send their most frustrated customers through a chatbot

But do those frustrated customers matter?


I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.

Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.

Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

I worked for a unicorn tech company where they determined that anyone with under $50,000 ARR was too unsophisticated to be worth offering support. Their emails went straight to the bin until they quit. The support queue existed entirely for psychological comfort, and to buy a few months of extra revenue.

It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

Are there enough people who need support that it matters?


>I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

In companies where your average ARR is $500k+ and large customers are in the millions, it may not be a bad strategy.

'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as, and eat up as much time as, your larger clients', depending on the industry.


> I recently found out that there's no such thing as Anthropic support.

The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.


If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.

I mean the comment says they literally don't have support and also complains they don't have a support bot, when they have both.

https://support.claude.com/en/collections/4078531-claude

> As a paid user of Claude or the Console, you have full access to:

> All help documentation

> Fin, our AI support bot

> Further assistance from our Product Support team

> Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.


I had a very similar experience with my disabled organization on another provider. After 3 hours of my script sending commands to gemini-cli for execution, I got disabled, and then two days later my Gmail was disabled. Good thing it was a disposable account, not my primary one.

This blog post feels really fishy to me.

It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.


> It should have been straightforward for the author to excerpt some of the prompts he was submitting

If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.


I understand where you’re coming from, but anecdotally the same thing happened to me, except I have less clarity on why and no refund. I got an email back saying my appeal was rejected, with no recourse. I was paying for Max and using it for multiple projects; nothing else stands out to me as a cause for getting blocked. Guess you’ll have to take my word for it too; it’s hard to prove the non-existence of definitely-problematic prompts.

What's fishy? That it's impossible to talk to an actual human being to get support from most of Big Tech? That support is no longer a normal expectation? Or that you can get locked out of your email, payment systems, and phone with zero recourse?

Because if you don't believe that, boy, do I have some stories for you.


It doesn't even matter. The point is you can't use a SaaS product freely the way you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal, acceptable-seeming behavior, just in case.

Maybe the problem was using automation without the API? You can do that freely with local software, using tools that click buttons, and it's completely fine; but with SaaS, they let you do it, then ban you.


There will always be the "ones" that come with their victim blaming...

It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

(My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)


> Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

It is when the other side refuses to tell their side of the story. Compare it to a courtroom trial. If you sue someone, and they don't show up and tell their side of the story, the judge is going to accept your side pretty much as you tell it.


Skip to the end of the article.

He says himself that this is a guess and provides the "missing" information if you are actually interested in it.


I read it, and it's not enough to make a judgement either way. For all we know none of this had anything to do with his ban and he was banned for something he did the day before. There's no way for third parties to be sure of anything in this kind of situation, where one party shares only the information they wish and the other side stays silent as a matter of default corporate policy.

I am not saying that the author was in the wrong and deserved to be banned. I'm saying that neither I nor you can know for sure.


> There's no way for third parties to be sure of anything in this kind of situation,

Not just third parties, but also the first party can't be sure of anything - just as he said. This entire article was speculation because there was no other way to figure out what could've caused the ban.

> where one party shares only the information they wish and the other side stays silent as a matter of default corporate policy.

I don't think that's a fair viewpoint - because it implies that relevant information was omitted on purpose.

From my own experience with Anthropic, I believe his story is likely true.

I mean, they were terminating sessions left and right all summer/fall because of "violations"... Like literally writing "hello" as the first prompt in a clean project and getting the session terminated.

This has since been mostly resolved, but I bet there are still edge cases in their janky "safety" measures. And looking at the linked CLAUDE.md, his theory checks out to me. He was essentially doing what is banned in the ToS: iteratively finding ways to lead the model to do something other than what it initially was going to do.

If his end goal were to write malware that does, essentially, prompt injection... he'd go at it exactly like this. Hence I can sure as hell imagine Anthropic writing a prompt to analyze sessions for bad actors, which caught him.


We don't know your true motivations for making this series of posts and doubling down - and yet we give you the benefit of the doubt.

Asserting that somebody is "victim blaming" isn't giving them the benefit of the doubt, and in the context of a scenario where few if any relevant facts are known, it reveals a very credulous mindset.

The accused party can afford to defend themselves; they chose not to.

Yikes. And I just switched to using OpenCode instead of Claude Code (because it's so much better) - guess I'm in danger.

That's why we should strive to use and optimize local LLMs.

Or better yet, we should set up something that allows people to share part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.


Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The article also mentions another provider, I assume Google, which has control over a lot more than just AI.

[flagged]


I don't see any such agreement here, and your comment is very rude toward the author.

I'm not being rude to the parent poster (I'm agreeing with them) or the person who wrote the article.

I might have been rude to all the people/bots who insist the article's author is lying because it contradicts AI-everything.


I was also banned from claude. I created an account and created a single prompt: "Hello, how are you?". After that I was banned. An automated system flagged me as doing something against the ToS.

I had my Claude Code account banned a few months ago. Contacted support and heard nothing. Registered a new account and been doing the same thing ever since - no issues.

Did you have to use a different phone number? Last time I tried using Claude they wouldn't accept my jmp.chat number.

Nothing makes me more wary of a company than one that doesn't let me use my 20-year-old VoIP number for SMS. Twitter, Instagram (probably FB too; if they ever do SMS 2FA on me, I imagine I'll lose my account forever), and a few others I can't think of offhand right now.

I've had the same phone numbers via this same VoIP company for ~20 years (2007ish). For these data-hoovering companies to not understand that I'm not a scammer suggests to me that it's all smoke and mirrors, held together with baling wire, and I sure do hope they enjoy their yachts.


> AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.


You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.

It's bizarre how casually some people hate on Musk. Are people still not over him buying Twitter and firing all the dead weight?

_Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.


Doesn't he keep having to lobotomize it for lurching to the left every time it gets updated with new facts?

I don't know what really happened here. Maybe his curse word did prompt a block, maybe something else caused it.

But to be honest, I've been cursing a lot at Claude Code; I'm migrating a website from WordPress to Next.js, and regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying HTML structure. But when I curse, it actually listens. I think cursing is actually a useful tool in interacting with LLMs.


Use caps. "DO NOT DO X." works like a charm on Codex.

From my own observations with OpenAI's bots, it seems like there's nuanced levels.

"Don't do that" is one level. It's weak, but it is directive. It often gets ignored.

"DON'T DO THAT" is another. It may have stronger impact, but it's not much better -- the enhanced capitalization probably tokenizes about the same as the previous mixed-case command, and seems to get about the same result. It can feel good to HAMMER THAT OUT when frustrated, but the caps don't really seem to add much value even though our intent may for it to be interpreted as very deliberate shouting.

"Don't do that, fuckface" is another. The addition of an emphatic and profane quip of an insult seems to generally improve compliance, and produce less occurrence of the undesired behavior. No extra caps required.


Caps also didn't work as well as cursing

I was asking Claude for sci-fi book recommendations ("theme similar to X, awarded Y or Z").

I was also banned for that, and also didn't get the "FU" in email. Thankfully at least I didn't pay for this, but I'd file a chargeback instantly if I could.

If anyone from Claude is reading it, you're c**s.


What were X, Y and Z? This feels like "missing missing reasons"

Isn't it a ban because he had multiple accounts?

> We may modify, suspend, or discontinue the Services or your access to the Services.


I did $10k worth of tokens in a month and never had issues with tokens or anything. I am on the $100 Max plan, so I did not pay $10k - my wife would have killed me lol

PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46

PPS: I LOVE CLAUDE but I never had to deal with their support so don’t have feedback there


I was reminded of this classic short story by Isaac Asimov, "The Feeling of Power": https://archive.org/details/1958-02_IF/page/4/mode/2up

Why is the author so confused about the use of the word "organization"? Every account in Claude is part of an organization even if it's an organization of one. It's just the way they have accounts structured. And it's not like they hide this fact. It shows you your organization ID right on your account page. I'm also pretty sure I've seen the term used when performing other account-related actions.

I clicked your link to go look at the innocent Claude.md file as you invited us to do. Only problem: there is no Claude.md file in your repo! What are you trying to hide? Are you some kind of con man?

Looks like Claude.ai had the right idea when they banned you.


It's not an actual file but a variable in a JS file. The last link in the blog post does link to a commit with a file that contains the instructions for Claude, lines 129-737.

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

This... sounds highly concerning


I'm under the heavy impression that their quota-calculating algorithm was vibe coded and has a whole lot of bugs.

You are probably triggering their knowledge distillation checks.

Can someone explain what he was actually doing here?

Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?

Or maybe all scaffolding activity (back and forth) looked like automated usage?


If possible, can you quote the part of their ToS/ToU that says I can't use something like aider? (aider is the only one I know; I'm not promoting it)

You can, with an API key.

Only people who work at Anthropic know why the account was flagged & banned & they will never tell you.

...if anyone.

Good point. They might not know why either.

See it as an honour with distinction: the future Skynet AI (aka Claude) considers you a person with your own opinions.

By the way, as of late, Google search constantly redirects me to an "are you a bot?" question. The primary reason is that I no longer use Google search directly via the browser, but instead via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google - it is getting out of hand how much top-down control these corporations now have over our digital lives.


  and for some weird reason chrome does not keep my settings
Why use Chrome? Firefox is easily superior for modern surfing.

I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.

So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)


I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.

It doesn't loop though -- it has continuously updating context -- and if that context continues to head in one direction it will eventually break down.

My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.


I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.

You are lucky they refunded you. Imagine they didn't ban you and you continued to pay 220 a month.

I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.

For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer, but you'll end up at the same place... with a mostly working piece of code you never want to look at.


LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?

Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.

If you have enough RAM, you can run Qwen A3B models on the CPU.
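
Something like this with llama.cpp, for instance (a minimal sketch; the GGUF file name, context size, and port are illustrative, not a specific recommendation):

    # Serve a Qwen3-30B-A3B quant on CPU with an OpenAI-compatible API.
    # Only ~3B parameters are active per token, so CPU inference is usable.
    llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf \
        --ctx-size 8192 --host 127.0.0.1 --port 8080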

RAM got a little more expensive lately for some reason.

Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.

The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).


Anthropic is lucky their credit card processor has not cut them off due to excessive disputes that stem from their non existent support.

We've been running exactly this pattern for weeks - CLAUDE.md with project context, HANDOFF.md with session state, multiple Claude instances reading and updating the same files. No issues so far. The pattern works well for maintaining continuity across sessions. Curious if the ban was about the self-modification loop specifically, or something else in the prompt content that triggered detection. The lack of explanation makes it impossible to know what's actually off-limits.
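
Concretely, the pattern looks roughly like this (contents are hypothetical; the real files obviously vary per project):

    # CLAUDE.md holds stable project context; HANDOFF.md is rewritten
    # by the outgoing instance at the end of each session.
    cat > HANDOFF.md <<'EOF'
    ## Session state
    - Current task: finish pagination on /api/posts
    - Last known-good commit: 3f2c1ab
    - Gotcha: integration tests need the DB container running
    EOF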

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

Is it me or is this word salad?


It's deliberately not straightforward. Just like the joke about Americans being shoutier than Brits. But it is meaningful.

I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.


it's you

Seems weird; I have Claudes review other Claudes' work all the time. Maybe not as adversarially as that lol, I tend to encourage the instances to work collectively.

Also, the API timeouts that people complain about - I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but it seems pretty rock solid on my Windows machine.


Anthropic is the worst AI company.

Absolutely disgusting behavior pirating all those books. The founder spreading fear to hype up his business. The likely relentless shilling campaigns all over social media. Very likely lying about selectively quantizing.


I am doing a very similar thing to this with no issue, even though I am using GLM 4.7 due to cost.

I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.

Code is open source: https://github.com/mohsen1/claude-code-orchestrator


How is your experience with GLM 4.7?

I'm thinking about trying it after my GitHub Copilot runs out at the end of the month. Just hobby projects.


Pay the $3 and try it with Claude code. It’s great!

I accidentally logged in from my browser, which is set to use a SOCKS proxy, instead of Chrome, which I don't proxy and was otherwise using Claude Code with. They quickly banned me and refunded my subscription. I don't know if it's worth trying to appeal - does a human even read those appeals? I figured I could just use Cursor and Gemini models with API pricing, but I'm sad not to be able to try Claude Code; I had just signed up.

Claude started to get "wonky" about a month ago. It refused to use instruction files I generated using a tool I wrote. My account was not banned, but many of the things I usually asked for would just not produce any real result. Claude was working but ignoring some commands. I finally canceled my subscription and am trying other providers.

Why would this org be banned for shuffling CLAUDE.md files? I don't understand the harm here.

If I understand the post correctly, I think it's their systems thinking you're trying to abuse the system and/or break through their own guardrails.

Exactly as predicted: the means of production yet again taken away from the masses to be centralized in a few absurdly rich hands.

I ran out of tokens for not just the 5 hour sessions, but all models for the week. Had to wait a day -- so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and backend it with a local Qwen3 30B Coder. It was.. somewhat adequate. Just as fast, but not as capable as Opus 4.5 - I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.

All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
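
The Claude Code side of that is just environment variables; the proxy itself is whatever you like (a sketch, assuming some local proxy that translates the Anthropic Messages API to your llama.cpp/vLLM endpoint - URLs and the token are placeholders):

    export ANTHROPIC_BASE_URL=http://127.0.0.1:4000    # the rewriting proxy
    export ANTHROPIC_AUTH_TOKEN=local-dummy-token      # the proxy ignores it
    claude    # Claude Code now talks to the local backend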


Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.

This is informative, and the comments here are good too - a big heads-up for me. Typing swear words into a computer has been a time-honored tradition of mine, and I would never have guessed Google and the like would ban for this sort of thing, so TIL!

> If you are automating prompts that look like system instructions (i.e. scaffolding context files, or using Claude to find errors of another Claude and iterate on its CLAUDE.md, or etc...), you are walking on a minefield.

Lol, what is the point in this software if you can't use it for development?


We need a collapse in AI to right the ship, like the dotcom bubble burst did. If it happens now, it will hurt less. Novel ideas in AI will still succeed, and materials costs will come down. Memory being bought up through 2029 is not a good thing for anyone, especially if we want to see a successful future in AI: more efficient systems and so on.

While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude Code. Vastly more affordable too ($3 a month for the Pro equivalent). Can’t say much about Opus though. Claude Code forces me to put a credit card on file so they can charge for overage. I don’t mind that they charge me; I do mind that there’s no apparent spending limit and it's hard to tell how many “inclusive” Opus tokens I have left.

Having used both Opus 4.5 and GLM 4.7, I think the former is at least eight months ahead of the latter, if not much more.

Can you concretely back that up?

I have also been a bit paranoid about this in terms of using Claude itself to decompile/deobfuscate Claude Code in order to patch it to create the user experience I need. Looks like I’ll be using other tools for that from now on.

The post is light on details. I'd guess the author ended up hammering the API and they decided it was abuse.

I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.


There are people on twitter bragging about using 100s of agents. Here's one example: https://twitter.com/nearcyan/status/2012948508764946484

Whoops, I literally did the same thing as this guy earlier this week, but did the testing using `claude -p` so I could identify when Claude Code would (or would not) load Skills for a particular prompt, and improve the skill definition accordingly.

Who knew that using Claude to introspect on itself was against the ToS?
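
For reference, the probe looked something like this (the prompts and the grep heuristic are illustrative; I don't know of an official way to detect skill loading):

    # Run Claude Code non-interactively for a batch of prompts and check
    # which ones mention a Skill being loaded in the verbose output.
    for p in "tidy up this CSV" "summarize CHANGELOG.md"; do
        echo "=== $p ==="
        claude -p "$p" --verbose 2>&1 | grep -i skill
    done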


What is "scaffolding?"

So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?

I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.

And why wouldn't you? It's the only information available to you.

It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?

They were trying to optimize a CLAUDE.md file which belonged to a project template. The outer Claude instance iterated on the file. To test the result, the human in the loop instantiated a new project from the template, launched an inner Claude instance along with the new project, assessed whether inner Claude worked as expected with the CLAUDE.md in the freshly generated project. They then gave the feedback back to outer Claude.

So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.
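
Schematically, something like this (the template command and paths are hypothetical; this is a sketch of the described workflow, not the author's actual script):

    while true; do
        scaffold-new-project /tmp/proj                       # instantiate the template
        (cd /tmp/proj && claude -p "do the starter task")    # inner Claude
        read -r -p "feedback for outer Claude (empty to stop): " fb
        [ -z "$fb" ] && break
        # outer Claude revises the template's CLAUDE.md for the next round
        claude -p "Improve template/CLAUDE.md based on: $fb"
        rm -rf /tmp/proj
    done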


What would be bad in that?

Writing the best possible specs for these agents seems the most productive goal they could achieve.


I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get in a loop where everything gets flagged. Remember the Amazon pricing bots from a while ago that kept pricing a book slightly above each other until the listing passed $1M? Kinda like that, but with prompts.

I still don't get it: make your models better for this far-fetched case; don't ban users for a legitimate use case.

Nothing necessarily or obviously bad about it, just trying to think through what went wrong.

Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.

From what I'm reading in other comments, the problem was that Claude1 got increasingly "frustrated" with Claude2's inability to do whatever the human was asking, and started breaking its own rules (using ALL CAPS).

Sort of like MS's old chatbot that turned into a Nazi overnight, but this time with one agent simply getting tired of the other agent's lack of progress (for some definition of progress; I'm still not entirely sure what the author was feeding into Claude1 alongside errors from Claude2).


As a Claude Max user that generally prefers Claude, I will say that Gemini is working pretty well right now, and I’m considering setting up a Google Workspace account so I can get Gemini with decent privacy.

Google Workspace accounts don't give access to Gemini for coding, unless you get Ultra for $200/month.

I only meant the Gemini chat interface. There is actually an alternative $20-a-month plan for coding called Gemini Assist Enterprise. I actually already signed up for that when Gemini Code launched, because I definitely didn’t want them having rights to my code.

>Yes, the only e-mail I got was a credit note giving my money back.

That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)


What, with different credit cards / whatever, and under different names, different Google accounts, etc.?

Luckily there is little vendor lock-in, and the likes of https://opencode.ai/ are picking up the slack.

This is why it's worth investing in a model-agnostic setup. Don't tie yourself into a single model provider!

OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic


I can't wait to be able to run this kind of software locally, on my own buck.

But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.


I've been using Claude Code with AWS Bedrock as the provider. Setup guide if you're interested: https://code.claude.com/docs/en/amazon-bedrock
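
Per those docs, the basic setup boils down to a few environment variables (the region and model ID below are illustrative; they depend on your account's Bedrock access):

    export CLAUDE_CODE_USE_BEDROCK=1
    export AWS_REGION=us-east-1    # or your Bedrock region
    export ANTHROPIC_MODEL='us.anthropic.claude-opus-4-5-20251101-v1:0'
    claude    # auth flows through your normal AWS credentials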

I was banned for simply accessing Claude via VPN.

Nothing in their EULA or ToS says anything about this.

And their appeal form simply doesn't work. Out of my four requests to lift the ban, they've replied once and said nothing about the nature of it. They just declined.

Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.


Is there a benefit to using a separate Claude instance to update the CLAUDE.md of the first? I always want to leverage the full context of the situation to help describe what went wrong, so doing it "inline" makes more sense.

> Or I don't know. This is all just a guess from me.

We need local models asap.

Here you are, even open source! And it is a strong one. https://mistral.ai/

Similar thing happened to me 3 months ago. To this day no response to any appeals. I've actually started a GDPR request to see why I got banned, which they're stretching out as long as possible (to the latest possible deadline) so far.

Fun times for IT sec. Prompt injection, not to exfiltrate data, but to get a whole org banned from AI tools. This could be fun.

It should be mentioned in the title that these are just speculations.

Is it time to move to open source and run a model locally on a DGX Spark?

Every single open-source model I've used is nowhere close to as good as the big AI companies'. They are about two years behind or more, and unreliable. I'm using the large-parameter ones on a 512GB Mac Studio and the results are still poor.

I was banned just trying out Claude AI chat for the first time a few months ago. I emailed them and got my account access restored.

> Organizations of late capitalism, unite!

Saying this is "late capitalism" is an irresponsible distraction. Capitalism runs fine when appropriately regulated: strong rules for corporations (especially monopolies), high taxes on the wealthy, and pervasive unionization. We collectively decided to let capitalism go wild without boundaries, so the results are caused by us and are our responsibility. Just as driving fast in a badly maintained vehicle may lead to a crash, capitalism is a system that requires some regulation to run properly.

If you have an issue with LLMs and how they are managed then you should take responsibility for your own use of tools and not blame the economic system.


OT: Has anyone observed that Claude Code in CLI works more reliably than the web or desktop apps?

I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.

I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.

Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.

Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.

Curious whether others are seeing the same behavior.


Do you know for sure this was the reason why?

RIP. I hear they're looking for janitors.

There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.

100% agreed. Freedom of association should be exclusively a human right that corporations don't get. For them, I wish it were a privilege that scaled down with size and valuation, such that multibillion dollar companies wouldn't be allowed to ban anyone without a court agreeing they did something wrong.

Thinking 220 GBP is a lot for a high-limit Claude account is the kind of thinking that takes for granted the amount of compute power being used by these services. And that's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.

That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people actually feel they're worth based on what they're able to do, and the gap between the two seems pretty large, imo.

Claude is going wild lately. It told me I had used up 75% of my weekly limit. Ohhhk. I sent one more short query and, boom, blocked till Monday because I used up 25% in that one go (on a Thursday). How is that possible? It's falling off fast right now.

Another instance of "Risk Department Maoism".

If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.


Not that it’s the same thing, but how realistic is it to have a locally hosted model for coding?

Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.


That's why I run a local Qwen3-Next model on an NVIDIA Thor dev kit (Apple Silicon and DGX Spark are other options but they are even more expensive for 128GB VRAM)

This is very cool. I looked at the Claude.md he was generating and it is basically all of Claude's failure modes in one file. I can think of a few reasons why Anthropic would not want this information out in the open or for someone to systematically collate all the data into one file.

I read the related parts of the linked file in the repo, and it took me a while to find your comment here again to reply. Are you saying these are the failure modes of Claude when "coding" web apps or whatever OP was doing? I originally thought it might have meant a jailbreak. But having read it, I assume you meant the former; we both read the same thing, and it seemed like a series of admonitions to the LLM, written by the LLM (with some spice added by OP, like "YOU ARE WRONG"), and I couldn't find anything that would warrant a ban, you know?

I'm not saying he did anything wrong. I'm saying I can see how Anthropic's automated systems might have flagged & banned the account b/c one of the heuristics they probably use is that there should be no short feedback loops where outputs of Claude are fed back into inputs. So basically Anthropic tracks all calls to their API & they have some heuristics for going through the history & then assigning scores based on what they think is "abusive" or "loopy".

Of course none of it is actually written anywhere so this guy just tripped the heuristics even though he wasn't doing anything "abusive" in any meaningful sense of the word.


Thank you for explaining; your point makes sense and I tend to agree with the surmise.

In Open WebUI I have different system prompts (startup advisor, marketing expert, expert software engineer etc) defined and I use Claude via OpenRouter.
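
For anyone unfamiliar, that boils down to OpenAI-style chat completions against OpenRouter with a different system message per persona (a sketch; the model slug and prompts are illustrative):

    curl https://openrouter.ai/api/v1/chat/completions \
        -H "Authorization: Bearer $OPENROUTER_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{
          "model": "anthropic/claude-sonnet-4.5",
          "messages": [
            {"role": "system", "content": "You are an expert software engineer."},
            {"role": "user", "content": "Review this function for bugs."}
          ]
        }'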

Is this going to get me banned? If so, I'll switch to a different, non-Anthropic model.


Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

What are you gonna do with the results that are usually slop?


If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.


Hmm, so how are the alternatives? Just in case I get banned for nothing as well. I'm riding CC with Opus all day long these days.

I'm using Google's Antigravity & it works fine for my use cases.

Scamthropic at it again

> Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

This blog post could have been a tweet.

I'm so so so tired of reading this style of writing.


What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?

It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors.

Nothing about this story is complex or interesting enough to require 1000 words to express.


Alas, the 2016 tweet is the 2026 blog post prompt.

Bow down to our new overlords. Don't like it? Banned, with no recourse. Enjoy getting left behind; welcome to the future, old man.

I didn't even get to send 1 prompt to Claude and my "account has been disabled after an automatic review of your recent activities" back in 2024, still blocked.

Even filled in the appeal form, never got anything back.

Still to this day I don't know why I was banned; I've never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.


Since you were forced, are you getting good results from them?

I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.


For writing decent code, absolutely not; maybe a simple bash script, or the obscure flags to a command that I only need to run once and couldn't be bothered to google or dig through the man page for. I'm using smaller models for less coding-related stuff.

Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.

The M-series chips in Macs are crazy, if you have the available memory you can do some cool things with some models, just don't be expecting to one shot a complete web app etc.


You are never gonna hear back from Anthropic; they don't have any support. They are a company that feels like their model is AGI now, so they don't need humans except when it comes to paying.

just use a different email or something

This happened to me too, you need a phone number unfortunately

You can get one for a few bucks

This has been true for a long, long time; there is rarely any recourse against any technology company, and most of them don't even have support anymore.

Well, at least they didn't email the press and call the FBI on you?

> I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

Isn't that the point of capitalism?


That's not what capitalism means. You might be thinking of a free market.

Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.

I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones where someone complains about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating to deal with these either way.

It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.

You mean the throwaway pseudonym you signed up with was banned, right?

Right?


The news is not that they turned off this account. The news is that this user understands very little about the nature of zero-sum context mathematics. The CLAUDE.md in question is a totally useless mess. Anthropic is just saving themselves from the token waste of this strategy on a fixed-rate billing plan.

If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.

(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)


This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.


Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say that I'm surprised that an AI company wouldn't have any real support.

I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.


Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."


I think there's an xkcd alt text about that: https://www.explainxkcd.com/wiki/index.php/1357:_Free_Speech

"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."



