
Fantastic article! I wrote an ASCII renderer to show a 3D Claude for my Claude Wrapped[1], and instead of supersampling I just decided to raymarch the whole thing. SDFs give you a smoother result than even supersampling, but of course your scene has to be represented with distance functions and combinations thereof, whereas your method is generally applicable.
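
For the curious, the core of the technique is tiny. A minimal sketch of sphere tracing against an SDF, with a unit sphere standing in for the real scene (names are illustrative, not from my actual renderer):

    // Signed distance from point p to a unit-radius sphere at the origin.
    const sphereSDF = (p: [number, number, number]): number =>
      Math.hypot(p[0], p[1], p[2]) - 1.0;

    // March from `origin` along unit vector `dir`. The SDF value tells you
    // how far you can safely step without crossing the surface, which is
    // what makes the technique cheap.
    function raymarch(
      origin: [number, number, number],
      dir: [number, number, number],
    ): number | null {
      let t = 0;
      for (let i = 0; i < 64; i++) {
        const d = sphereSDF([
          origin[0] + dir[0] * t,
          origin[1] + dir[1] * t,
          origin[2] + dir[2] * t,
        ]);
        if (d < 1e-3) return t; // hit: within tolerance of the surface
        t += d;                 // sphere tracing: step the full safe distance
        if (t > 100) break;     // ray escaped the scene
      }
      return null; // miss: shade as background
    }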

Taking into account the shape of different ASCII characters is brilliant, though!

[1]: https://spader.zone/wrapped/


Looks very cool! Thanks for sharing.

The resulting ASCII looks dithered, with sequences like :-:-:-:-:. I'd guess it's an intentional effect, since a flat surface would otherwise naturally repeat the same character, right? Where does the dithering come from?
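
(Guessing at a mechanism: if the renderer quantizes brightness to a character ramp and uses an ordered-dither threshold to pick between the two nearest characters, a flat region whose brightness falls between ':' and '-' alternates exactly like that. A hypothetical sketch, not necessarily what the article does:)

    // Hypothetical: 2x2 ordered dithering between adjacent ramp characters.
    const ramp = " .:-=+*#";

    function shade(lum: number, x: number, y: number): string {
      const level = lum * (ramp.length - 1);        // fractional ramp index
      const bayer2x2 = [[0.25, 0.75], [1.0, 0.5]];  // per-cell thresholds
      const threshold = bayer2x2[y % 2][x % 2];
      const idx = Math.floor(level) + (level % 1 > threshold ? 1 : 0);
      return ramp[Math.min(idx, ramp.length - 1)];
    }

    // A flat surface with lum around 0.36 (between ':' and '-') alternates
    // between the two characters across a row, producing exactly that pattern.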


I'm not OP, but send me an email. My address is in my HN profile. You and I are building the same thing, and I would love to have a chat.


I use Obsidian a lot, but with very few extra features or plugins. My first impression is that I don't get what you're making from the website. Any tool worth using in this space (which I vaguely understand to be managing large collections of Markdown and/or realtime multi-editing) is fast. Obsidian is fast. Zed is fast. It's table stakes for the kind of person who would use this already.

Is it just Zed + Obsidian? A good knowledge base that scales well and uses plain Markdown, but has the fancy multi-edit stuff?


Thanks, I mentioned "fast" to differentiate it from Notion, which becomes super slow as you add more and more pages.

Obsidian and Zed are desktop apps, whereas Hyperclast is web-based. Obsidian isn't multiplayer, and isn't really meant for teams.


Obsidian is web-based; it just pretends not to be, since it's Electron. Zed's the only truly native one.


I’m a DIY (or, less generously and not altogether inaccurately, NIH) type who thinks he could do a good job of smarter context management. But, I have no particular reason to know better than anyone else. Tell me more. What have you seen? What kinds of approaches? Who’s working on it?


I'm optimistic most people can, given the time and resources.

In the CCC video, you may enjoy the section on how we are moving to eval-driven AI coding to improve agents more methodically. Even more so, the slides just before it, which motivate why it gets harder to improve quality as you go.

One big rub is that it's one of those areas where people grossly underestimate what is needed for the quality goals they're likely targeting, and, if it's a long-lived artifact to be maintained, the ongoing costs. It's similar to junior engineers or short-term contractors who have never had to build production-grade software and haven't had to live with their decisions: these are quite learnable engineering skills, and I've found it useful to burn your fingers before having confidence in the surprising weight of cost/benefit decisions. The more autonomy and the higher the expectations you are targeting for the agent, the more so.


For anyone coming looking for a solution: I peeked around the OC repository, and a few PRs got merged in. Add the "opencode-anthropic-auth" plugin to $HOME/.config/opencode/opencode.json.
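
If I'm reading the plugin docs right, the full file would look something like this (the "$schema" line is optional):

    {
      "$schema": "https://opencode.ai/config.json",
      "plugin": ["opencode-anthropic-auth"]
    }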

That is, if it isn't already pulled into the latest OC by the time I post this. Not sure what the release cycle is for builtin plugins like that, but specifying it explicitly definitely pulls master, which has the fix.

https://opencode.ai/docs/plugins/


If you're going to link a repository, you should read it first. That repository is just a couple of plugins and community links. Claude Code is, and always has been, completely closed source.


This is an unusual L for Anthropic. The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code's. Obviously, CC is a great tool, but that's more about the magic of the model than the engineering of the CLI.

The opencode team[^1][^2] built an entire custom TUI backend that supports a good subset of HTML/CSS and the TypeScript ecosystem (i.e. not tied to Opencode, a generic TUI renderer). Then, they built the product as a client/server, so you can use the agent part of it for whatever you want, separate from the TUI. And THEN, since they implemented the TUI as a generic client, they could also build a web view and desktop view over the same server.

It also doesn't flicker at 30 FPS whenever it spawns a subagent.

That's just the tip of the iceberg. There are so many QoL features in opencode that put CC to shame. Again, CC is a magical tool, but the actual nuts-and-bolts engineering of it is pretty damning for "LLMs will write all of our code soon". I'm sorry, but I'm a decent-systems-programmer-but-terminal-moron and I cranked out a raymarched 3D renderer in the terminal for a Claude Wrapped[^3] in a weekend that... doesn't flicker. I don't mean that in a look-at-me way. I mean that in a "a mid-tier systems programmer isn't making these mistakes" kind of way.

Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first-party one. Maybe let those world-class systems designers who created the runtime that powers opencode get their hands on your TUI before nixing something that is an objectively better product.

[^1] https://github.com/anomalyco/opentui

[^2] From my loose following of the development, not a monolith, and the person mostly responsible for the TUI framework is https://x.com/kmdrfx

[^3] https://spader.zone/wrapped/


My favorite is running CC in a screen session. There, if I type out a prompt and then just hold down the backspace key to delete a bunch of characters, at some point the key repeat rate outruns CC's brain and it starts acting like it moved the cursor but didn't delete anything. It is an embarrassing bug, but one that I suspect wouldn't be found in automated testing.


Speaking of embarrassing bugs, Claude chat (both the web and iOS apps) has lately tended to lose the user's message when there is a network error. This happens to me every day now. It is frustrating to retype a message from memory: the first time you are "in the flow"; the second time it feels like unjust punishment.

With all the Claude Code in the world, how come they don't write good enough tests to catch UI bugs? I have gotten to the point where I preemptively copy the message to the clipboard to avoid retyping.


This is an old bug. I can't believe they haven't fixed it yet. My compliments for the Claude frontend start and end at Artifacts.


Ctrl-Z usually recovers the missing text, even across page refreshes.


If you want to work around this bug, Claude Code supports all the readline shortcuts such as Ctrl-W and Ctrl-U.


Have you tried tmux?


I use tmux, I have this exact same bug in tmux. It's part of why I use OpenCode and not Claude Code.


Thanks!


Unfortunately it's buggy in tmux as well. Last night I couldn't hit Esc after a long, long session; it simply ignored the key. This doesn't happen outside of tmux.


> Anyway, this is embarrassing for Anthropic.

Why? A few times in this thread I hear people saying "they shouldn't have done this" or something similar, but not giving any reason why.

Listing features you like in another product isn't a reason they shouldn't have done it. It's absolutely not embarrassing, and if anything it's embarrassing that they didn't catch it and act sooner.


Because the value proposition that has people pay Anthropic is that it's the best LLM-coding tool around. When you're reduced to competing on "we can ban you from using the model we use, at the same rate limits we use," everyone knows you have failed to be that.

They might or might not currently have the best coding LLM, but they're admitting that whatever moat they thought they were building with Claude Code is worthless. The best LLM, meanwhile, seems to change every few months.

They're clearly within their rights to do this, but it's also clearly embarrassing and calls into question the future of their business.


Is it that it's the best coding tool or the best model? I still get the best (most accurate) results out of Anthropic models (but not out of CC).


The best coding tool is what makes users use something; a good model is just a component of that.

I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.

Meanwhile, the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with. There's a plausible story that co-developing the UI and model could result in a better model for that purpose (because it's fine-tuned on the UI's interactions).

And independently, "Claude Code" being the best coding tool around was great for brand recognition. "Open Code with the Opus 4.5 backend - no, not the Claude subscription, you can't use that - the API" won't be.


I appreciate you sharing your thinking.

I think it's reasonable to state that at the moment Opus 4.5 is the best coding model. Definitely debatable, but at least I don't think it controversial to argue that, so we'll start there.

They offer the best* model at cost via an API (likely not actually at cost, but let's assume it is). They also will subsidize that cost for people who use their tool. What benefit do they get or why would a company want to subsidize the cost of people using another tool?

> I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.

I happen to agree; to me it seems tenuous to base a business solely on having the best model, but that's what the industry is trying to find out. Things change so quickly it's hard to predict two years out. Maybe they are first to reach XYZ tech that gives them a strong long-term position.

> Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with.

I agree, but it doesn't seem like that's their m.o. If anything it's the opposite: they aren't trying to get people locked into their tooling. They made MCP a standard so all agents could adopt it. I could be wrong, but I thought they also did something similar with /scripts or something else. If you wanted to lock people in, you'd have people build an ecosystem of useful tooling and make it incompatible with other agents, but they (to my eyes) have been continuously putting things into the community.

So my general view of them is that they feel they have a vision and business model that doesn't require locking people into their tooling ecosystem. But they're still a business, so they don't gain from subsidizing people to use other tools. If people want their models in other tools, there are the "at-cost" APIs; why would they subsidize you to use someone else's tool?


There's just not that much IP in a UI like that. Every day we get articles on here saying you can make an agent in 200 LOC, Yegge's Gas Town in two weeks, etc. Training the model is the hard part, and that's what justifies a large valuation ($350B for Anthropic, cf. $7B for JetBrains).
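
(To be fair to that claim, the skeleton really is small. A hand-wavy sketch; the llm.chat interface and the tool registry here are hypothetical, not any real library's API:)

    // Minimal agent loop: call the model, run whatever tools it asks for,
    // feed the results back, repeat until it answers in plain text.
    type ToolCall = { name: string; args: Record<string, unknown> };
    type Reply = { text: string; toolCalls: ToolCall[] };
    type Tool = (args: Record<string, unknown>) => Promise<string>;

    async function agentLoop(
      prompt: string,
      llm: { chat(history: string[]): Promise<Reply> }, // hypothetical client
      tools: Record<string, Tool>,
    ): Promise<string> {
      const history = [prompt];
      while (true) {
        const reply = await llm.chat(history);
        if (reply.toolCalls.length === 0) return reply.text; // model is done
        for (const { name, args } of reply.toolCalls) {
          const result = await tools[name](args); // execute the requested tool
          history.push(`tool ${name} -> ${result}`);
        }
      }
    }

The hard part is everything around that loop: the prompts, the tools, permissioning, context management, and above all the model.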


I think, in fairness to Anthropic, they are winning in LLMs, right? Since 3.7 they have been better than any other lab.


Arguably since 3.5, at least for coding and tool calling.


> Because the value proposition that has people pay Anthropic is that it's the best LLM-coding tool around.

Why not just use a local LLM instead? That way you don't have to pay anyone.


Because they still suck at real-world software engineering.


None can touch any of the top models. None.


It is embarrassing to block an open source tool that is (IMO) a strictly superior piece of software from using your model. It is not immoral, like I said, because it's clearly against the ToS; but it's not like OC is stealing anything from Anthropic by existing. It's the same subscription, same usage.

Obviously, I have no idea what's going on internally. But it appears to be an issue of vanity rather than financials or theft. I don't think Anthropic is suffering harm from OC's "login" method; the correct response is to figure out why this other tool is better than yours and create better software. Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.


> It is embarrassing to restrict an open source tool that is (IMO) a strictly and very superior piece of software from using your model.

> Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.

To rephrase it differently, as I feel my question didn't land: it's clear to me that you think it's embarrassing. And it's clear what you think is embarrassing. I'm trying to understand why you think it's embarrassing. I don't think it is at all.

Your statements above simply say "X is embarrassing because it's embarrassing". Yes, I hear that you think it's embarrassing, but I don't think it is at all. Do you have a reason you can give for why you think it's embarrassing? I think it's very wise, and pretty standard, not to subsidize people who aren't using your tool.

I'm willing to consider other arguments, but I'm not hearing one, other than "it just is because it is".


If your value proposition is "we do X," and then you have to take action against an open source competitor for doing X better, that shows you were beaten at the thing you tried very hard at, by people with far fewer resources.

I can see why you would call that embarrassing.


The competitor is not "doing X better"; it's more complicated than that.

CC isn't just the TUI tool. It's also the LLM behind it. OC may have built a better TUI tool, but it's useless without an LLM behind it. Anthropic is certainly within their rights to tell people they can only integrate their models certain ways.

And as for why this isn't embarrassing, consider that OC can focus 100% of their efforts on their coding tool. Anthropic has a lot of other balls in the air, and must do so to remain relevant and competitive. They're just not comparable businesses.


> CC isn't just the TUI tool. It's also the LLM behind it.

No, Claude Code is literally the TUI tool. The LLMs behind it are the models. You can use different models within the same TUI tool; even CC allows that, despite restricting you to their own models (because they chose to do that).

> consider that OC can focus 100% of their efforts on their coding tool.

And they have billions of dollars to hire full teams of developers to focus on it. Yet they don't.

They want to give Claude Code an advantage because they don't want to invest as much in it and still "win", while they're in a position to do so. This is very similar to Apple forcing developers to use their apps because they can, not because they're better. With the caveat that Anthropic doesn't have a consolidated monopoly like Apple's.

Can they do that? Yes.

Should they do that? It's a matter of opinion. I think it's a bad move.

Is it embarrassing? Yes. It shows they're admitting their solution is worse and changing the rules of the game to tilt it in their favor while offering an inferior product. They essentially don't want to compete; they want to force people to use their solution through pricing, not the quality of their product.


Claude Code is more than the TUI: it's the prompts, the agentic loop, and the tools, all made to cooperate well with the LLM powering it. If you use Claude Code over a longer period of time, you'll notice Anthropic changing the tooling and prompts underneath it to make it work better. By now, the model is tuned to their prompts, tools, etc.


Why do you like or dislike Diet Coke? At some point, saying what I think is embarrassing is equivalent to saying why.

But, to accept your good-faith olive branch, one more go: AI is a space full of grift and real potential. Anthropic's pitch is that the potential is really real. So real, in fact, that it will alter what it means to write software.

It's a big claim. But a simple way to validate it would be to see whether Anthropic themselves are producing more or higher-quality software than the rest of the industry. If they aren't, something smells. The makers of the tool, and such a well-funded and well-staffed company, should be the best at using it. And, well, Claude Code sucks. It's a buggy mess.

Opencode, on the other hand, is not a buggy mess. It is one of the finest pieces of software I've used in a long time, and I don't mean "for a TUI". And they started writing it after CC launched. So, to finally answer your question: Opencode is a competitor in a way that calls into question Anthropic's innermost claim, the transformative nature of AI. I find it embarrassing to answer this question-of-sorts by limply nixing the competitor, rather than using their existence as a call for self-improvement. And, Christ, OC is open. It's open source. Anthropic could, at any time, go read the code and do the engineering to make CC just as good. It is embarrassing to be beaten at your own game and then take away the ball.

(If that is what is happening. Of course, this could be a misunderstanding, or a careless push to production, or any number of benign things. But those are uninteresting, so let's assume for the sake of argument that it was intentional.)


Thanks; while we may not agree in the end, I do feel I understand your thinking now. Also agreed, we've probably reached the fruitful end of this discussion, and this will be my last reply on it. I'll explain my thoughts the same way you did.

To me it seems more akin to someone saying "I'm launching a restaurant. I'll give you a free meal if you come and give me feedback on the dish, the decor, the service...". This happens for a bit; then after a while people start coming in, taking the free plate, and going to eat it at a different restaurant.

To me it seems pretty reasonable to say "If you're taking the free meal you have to eat it here and give feedback".

That said, I do acknowledge you see it very differently and given how you see it I understand why you feel it's embarrassing.

Thanks for the discussion.


But you are not having a free meal, are you? You _are paying_ for your meal.

Worse: you are the meal as well.

Do you see this?


As a user, it is, because I can no longer use the subscription with the greater tooling ecosystem.

As for Anthropic, they might not want to do this, as they may lose users who decide to use another provider: without the cost benefit of the subscription, it doesn't make sense to stay with them and also be locked into their tooling.


The subscription is for their products. If you want to use their models in another product, you can pay for API usage.


From my perspective, I was paying for the model. This is kind of a pointless distinction now though.

It was working and now it isn't, and the outcome is that some of their customers are unhappy and might move on.

API access is not the same product offering as the subscription, so that's probably a practical option but not a comparable one.


You yourself admit that API access is a separate product. If you want to use third-party tooling, pay for API access.

If you want to use the (most likely heavily) subsidized subscription plans, use their ecosystem.

It's that simple.


No one said it was complicated, and you might be imagining that I care more than I do. However, if you can't understand why having a feature of a paid product removed is dissatisfying, then I cannot help you understand any further.

I am surprised that anyone would think the "product" is the web interface and CLI tool, though; the product is very clearly the model. The difference between all the options is merely how you access it.


> having a feature of a paid product removed is dissatisfying

It wasn't a feature. It was a loophole. They closed it.

There are multiple products. Besides the models, there's a desktop app, and there's Claude Code. They have subscriptions.


Feature, attribute, loophole. I really doubt we fundamentally disagree on the situation here. You can use your empathy to understand why people are disappointed, and I will pretend such a detail-oriented thread has made me feel content. Anthropic can do what they want; it's their service.


The Claude plans allow you to send a number of messages to Anthropic models in a specific interval without incurring any extra costs. From Anthropic's "About Claude's Max Plan Usage" page:

> The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity.

So it's not a "Claude Code" subscription, it's a "Claude" subscription.

The only piece of information that might suggest there are any restrictions on using your subscription to access the models is the part of the Pro plan description that says "Access Claude Code on the web and in your terminal" and the Max plan description that says "Everything in Pro".


It is embarrassing because it means they're afraid of competition. If CC were even a fraction as great as they sell it as being, they wouldn't need to do this.


"Leave the multibillion dollar company alone!"


I've used both CC and OpenCode quite a bit, and while I like both, and especially appreciate the work around OpenTUI, experience-wise I see almost no difference between the two. Maybe it's because my computer is fast and I use Ghostty, but I don't experience any flickering in CC. Testing now, I see typing is slightly less responsive in CC (very slightly: I never noticed until I was testing it on purpose).

We will see whether OpenCode's architecture lets them move faster while working on the desktop and TUI versions in parallel, but it's so early — you can't say that vision has been borne out yet.


Update: Ah, I see this part: "This credential is only authorized for use with Claude Code and cannot be used for other API requests."

Old comment for posterity: How do we know this was a strategy/policy decision versus just an engineering change? (Maybe the answer is obvious, but I haven't seen the source for it yet.) I skimmed the GitHub issue, but I didn't see discussion about why this change happened. I don't mean just the technical change; I mean why Anthropic did it. Did I miss something?


An engineer on my team who is working on TUI stuff said that avoiding the flicker is difficult without affecting the ability to copy/paste using the mouse (something to do with "alternate screen mode"). I haven't used OpenCode (yet) but Google does turn up some questions (and suggested workarounds) around copy/paste.
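
(For what it's worth, the alternate screen itself is just a pair of standard xterm escape sequences; whether mouse selection still works also depends on the app enabling mouse reporting. A tiny Node sketch:)

    // DEC private mode 1049: enter/leave the terminal's alternate screen.
    const enterAltScreen = "\x1b[?1049h"; // switch buffers, save cursor
    const leaveAltScreen = "\x1b[?1049l"; // restore main buffer and cursor

    process.stdout.write(enterAltScreen);
    process.stdout.write("TUI drawing happens here; scrollback and mouse\n");
    process.stdout.write("selection behave differently in this mode.\n");
    setTimeout(() => process.stdout.write(leaveAltScreen), 2000);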


> unusual L for Anthropic

Not unusual, not for Anthropic.


I am curious: I haven't faced any major issues using Claude Code in my daily workflow. Never noticed any flickering either.

Why do you think opencode > CC? What are some productivity/practical implications?


Opencode has a web UI, so I can open it on my laptop and then resume the same session on the web from my phone through Tailscale. It’s pretty handy from time to time and takes almost zero effort from me.

The flickering is still happening to me. It's less frequent than before, but it still happens in long/big sessions.


Interesting that [1] is 30% Zig as well as mostly TypeScript. That's a lot of native code for something that runs in a terminal (i.e., no graphics code required).


> The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code

I'm curious, what made you think of that?


> Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

This is nothing new; they pulled Claude models from the Trae editor over "security concerns." Anthropic seems more prone to pearl-clutching than other companies, which makes sense given they were founded in response to thinking OpenAI was not safety-oriented enough.


inb4 Anthropic acquires Opencode


I actually wouldn't be that surprised by this. I'd be more surprised at the OC people folding (not the right word, but you get it) on some pretty heavy ambitions in favor of an acquisition.


The right word, keeping with card-playing and poker terms, would be "book a win", "win the hand", or "scoop the pot".


so much for acquiring Bun...


> The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code.

If only Claude Code developers had access to a powerful LLM that would allow them to close the engineering gap. Oh, wait...


It's pure marketing. When will people understand that?


Anthropic developers run 10 Claude Code instances at once, with unlimited access to the best models.


Or maybe just submit feature requests instead of backdooring a closed-source system.


All the TUI agents are awful at scrolling. I'm on Ubuntu 24.04, and both Claude Code and Gemini CLI absolutely destroy scrolling. I've tested Claude Code in VS Code and it's better there, but in GNOME Terminal it's plain unusable.

And a lot of people are reporting scrolling issues.

As someone was saying, it's like they don't have access to the world's best coding LLM to debug these issues.


I use Claude Code every day and it works perfectly fine outside of a few bugs (they broke ESC interrupt in 2.1).

I just don't understand the misplaced anger at breaking the ToS (even for a good reason) and getting slapped down.

Like what did anyone think would happen?

We all want these tools and companies to succeed. Anthropic needs to find profit in a few years. It’s in all of our best interests to augment that success, not bitch because they’re not doing it your way.


> We all want these tools and companies to succeed. Anthropic needs to find profit in a few years. It’s in all of our best interests to augment that success, not bitch because they’re not doing it your way.

Considering they're destroying a lot of fields of industry, I'm not sure I want them to succeed. Are we sure they're making the world a better place?

Or are they just concentrating wealth, like Google, Meta, Microsoft, Amazon, Uber, Doordash, Airbnb, and all the other holy grails of tech from the last 20 years?

Our lives are more convenient than they were 20 years ago, and probably poorer and more stressful.


And that actually might get conservatives on board with universal basic income, which would be a very good thing.

I'm well aware of where wealth is today. There is a massive regressive imbalance.

Historically, curtailing it has taken either a systemic bank failure with a depression, or a civil war. I'm hoping everyone under 40 will engage in local, state, and federal politics and reverse this without a depression or a war.

Time will tell, and November is closing in on our collective democratic future.


Simping for closed source software? Tsk, tsk.


How much do you use AI in your day? Are you a heavy user? Asking because your comment has a lot of "LLM mannerisms".


No, it doesn’t.


Why are you asking this? Just try it. It takes maybe fifteen minutes of your time. It's $20. There is no possible argument against $20 or fifteen minutes if the tool has a chance of being even just 10% better. You've spent more time typing that comment, and I responding, than it would take to... just try it.


It's not. I see this constantly. I use Ghostty and Alacritty, and I'm usually in a tmux session.

