Hacker News | vemv's comments

Why is this at the top?

I've flagged it; that's what we should be doing with AI content.


Ralph is, very literally, vibe coding with extra steps.

If you're coding a demo MVP, a one-off idea, etc., alright, go ahead, have your fun and waste your tokens.

However, I'll be happy when the (forced) hype fades as people realise that there's nothing novel, insightful, or even well-defined behind Ralph.

Loops have always been possible. Coordination frameworks (for tracking TODOs, spawning agents, supervising completion, etc) too, and would be better embodied as a program instead of as an ad-hoc prompt.


Yeah, Ralph smells like a fresh rebranding of YOLO.

With YOLO on full-auto, you can give a wrapping rule/prompt that says more or less: "Given what I asked you to do as indicated in the TODO.md file, keep going until you are done, expanding and checking off the items, no matter what that means -- fix bugs, check work, expand the TODO. You are to complete the entire project correctly and fully yourself by looping and filling in what is missing or could be improved, until you find it is all completely done. Do not ask me anything, just do it with good judgement and iterating."

Which is simultaneously:

  1. an effective way to spend tokens prodigiously
  2. an excellent way to get something working 90% of the way there with minimal effort, if you've already set it up for success and the anticipatable outcomes are within acceptable parameters
  3. a most excellent way to test how far fully autonomous development can go -- in particular, to test how good the "rest of" one's configuration/scaffolding/setup is for such "auto builds"

Setting aside origin stories, honestly it's very hard to tell whether Ralph and full-auto-YOLO before it are tightly coupled to some kind of "guerrilla marketing" effort (or whatever that's called these days), or really are organic phenomena. It almost doesn't matter.

The whole idea with auto-YOLO and Ralph seems to be you loop a lot and see what you can get. Very low effort, surprisingly good results. Just minor variations on branding and implementation.

Either way, in my experience, auto-YOLO can actually work pretty well. 2025 proved to be cool in that regard.


At which point the "rest of the world" (everyone but the US) can just threaten Trump with making the US economically irrelevant?

That would seem a simple and peaceful solution to the Trump-inflicted bullying - stop messing around or we'll cease all commerce with you.

As I see it, just the bluff would suffice. Make the threat credible and the higher powers would remove Trump in a day or two.


  > Make the threat credible and the higher powers would remove Trump in a day or two.

Maybe, maybe not. Trump is here to distract the public via the media business, while behind the scenes ideologues implement Project 2025. The factions behind the GOP aren't aligned on all parts, so an erratic path is to be expected.

What unites them is that their agenda isn't aligned with the electorate. People still try to make sense of things, like this is just another administration, maybe a weird one, but fundamentally part of the same society as you and me. We can't recognize the real nature of that beast, because we are short of imagination. And... we don't want to believe in conspiracy theories, right?

The top isn't compatible with democracy, people like Thiel are not shy about it. We just don't want to believe that.


Thanks, interesting, what articles can I read to learn more?

As soon as I saw the AI header image I pressed the back button - it's all I need to know.

Hi Dickson,

While you're here, would you generously allocate a couple of minutes to assess this other issue? https://github.com/anthropics/claude-code/issues/4034

It's low-hanging fruit.


Tangentially related, I had created this issue which has plenty of support - people feel very strongly about it for various reasons.

https://github.com/anthropics/claude-code/issues/4034

It's really a trivial fix so I'm disappointed not to have received any input from the Anthropic folks all this time.


This is also related to GH bug [1] and HN post [2]

There I pointed out that they were sending back data that shouldn't be sent and that isn't in compliance with their stated data-usage policy [3]. Specifically, message IDs, which have no purpose other than linking surveys to chat history, were being sent even when feedback was disabled (at the time, at least; I haven't dug into the code since).

[1] https://github.com/anthropics/claude-code/issues/8036

[2] https://news.ycombinator.com/item?id=45838297

[3] https://code.claude.com/docs/en/data-usage


I have all these set:

    export CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1
    export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
    export DISABLE_AUTOUPDATER=1
    # reduces splash screen size:
    export IS_DEMO=1


You can add these to your `~/.claude/settings.json`:

  "env": {
    "CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
    "DISABLE_AUTOUPDATER": "1",
    "IS_DEMO": "1"
  }


Ha! Great minds think alike! I posted the same thing 3 seconds slower =(


Just an FYI to several of the posters here.

You CAN set these as env vars in your shell, but I prefer to stick 'em in the Claude settings.json:

  $ cat ~/.claude/settings.json
  {
    "env": {
      "CLAUDE_CODE_DISABLE_TERMINAL_TITLE": "1",
      "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
    }
  }

This keeps your main environment from getting cluttered.


what does the last one do?


"Added IS_DEMO environment variable to hide email and organization from the UI, useful for streaming or recording sessions"

from the changelog: https://github.com/anthropics/claude-code/blob/main/CHANGELO...


Hmmm, they already added this, which has the same functionality: CLAUDE_CODE_HIDE_ACCOUNT_INFO


I wasn't even aware that was possible. I will IMMEDIATELY do this. The mere fact that this is necessary to stop their data-fiending is disgusting. Do you prepend these every time you run Claude, or do you slap them into an .env file?


These are good to append to your .bashrc or .zshrc (depending on your shell of choice) and they'll be picked up.


Generically-named env vars like these (IS_DEMO, DISABLE_AUTOUPDATER, etc.) should not be set as an "export" in an rc file like that. They'll get exported to every process spawned by the shell, which could have unintended consequences.

You could instead eg:

  alias code='DISABLE_AUTOUPDATER=1 IS_DEMO=1 /usr/local/bin/code'

or write a wrapper shell script (analyse/adjust $PATH if you're going to name it the same as an existing binary/script), e.g.:

  #!/bin/bash
  # Set the env var only for this invocation, then replace this
  # wrapper process with the real binary.
  export FOO=bar

  exec /usr/local/bin/code


Oh, shame, I use Claude across 4-5 devices, but thanks anyway.


I would strongly encourage you to create a global shared bashrc for your various devices - my dotfiles repo has tremendously improved my life as an engineer who occasionally needs to discard (virtual) dev boxes.


Aren't you going to be banned for using it on too many devices?


They charge based on consumption as the comment below me stated.


Ah sorry, I was thinking about the Pro / Max plan, not API token usage


The Pro and Max plans don't have a device limit either.

I frequently use Claude code in disposable virtual machines.


I think they charge based on consumption, not devices.


You can also put these into settings.json.


  > I wasn't even aware that was possible.

That's why you should always read the manual! These aren't hidden and secret settings:

https://code.claude.com/docs/en/settings

And, if you really hate manuals, you can just ask Claude to write the settings for you =D


> to stop their data-fiending

Every time you run it, it uploads most of your project's code to Anthropic's server. That's just how this category of product works. If you're disgusted by a survey...


Lately I've been thinking that, with AI coding, I could pick a distro of choice and tweak it at will in all aspects.

Not that it wasn't possible before of course, but OS/distro dev across the entire stack surely spans an insane breadth and depth of knowledge.


Since the advent of LLMs, I've used them to juice up my Linux setup significantly.

I was already using Ubuntu with Gnome (the "flashback" version) and the XMonad tiling WM, but I've since ditched Gnome and switched to LXQt, and am pretty happy with it.

Then I installed Nix to override Ubuntu's aggressive Snap usage for applications like Firefox. (You can try to install it some other way, it'll just silently revert no matter how hard you try to configure it not to.)

Next step will be to eliminate Ubuntu entirely, because it's so focused on "end user" friendliness, it creates a terrible experience for anyone trying to customize their setup.

I'm very aware that I'm moving further and further off the "mainstream", but if the mainstream means "you will accept all our poorly thought-out and inefficient UI decisions", then there's not really a downside to that.


How has Claude Code (as a CLI tool, not the backing models) evolved over the last year?

For me it's practically the same, except for features that I don't need, don't work that well and are context-hungry.

Meanwhile, Claude Code still doesn't know how to jump to a dependency's (a library's) source to obtain factual information about it. Which is actually quite easy by hand (normally it's cd'ing into a directory or unzipping some file).
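As a sketch of what doing this "by hand" looks like in a Python project (using the stdlib `json` module purely as a stand-in for any installed dependency):

```python
# Minimal sketch: locate an installed dependency's source so it can be
# read directly for factual information about the library.
import importlib
import inspect

module = importlib.import_module("json")     # stand-in for any dependency
source_path = inspect.getsourcefile(module)  # e.g. .../json/__init__.py
source_text = inspect.getsource(module)      # the library's actual code

assert source_path.endswith("__init__.py")
assert "def loads" in source_text
```

An agent that did the equivalent of this, instead of guessing a library's API from training data, would have far fewer hallucinated calls.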

So this wasteful workflow has only resulted in vibecoded, non-core features, while at the domain level Claude Code remains overly agnostic, if not stupid.


Most likely you are creating boilerplate at 20x/50x, as opposed to genuinely new concepts, mechanisms, etc.

To be fair, most web/mobile frameworks expect you to do that.

Ideally, codebases would grow by adding data (e.g. a json describing endpoints, UIs, etc), not repetitive code.


There are three types of work in software:

- Commodity work, such as CRUD, integrations, infra plumbing, standard patterns.

- Local novelty, i.e. a new feature for your product (but not new to the world).

- Frontier novelty, as in, genuinely new algorithms/research-like work.

The overwhelming majority of software development is in the first two categories. Even people who think they are doing new and groundbreaking stuff are almost certainly doing variations of things that have been done in other contexts.


> Ideally, codebases would grow by adding data (e.g. a json describing endpoints, UIs, etc), not repetitive code.

The problem with this configuration based approach is that now the actual code that executes has to be able to change its functionality arbitrarily in response to new configuration, and the code (and configuration format) needs to be extremely abstracted and incomprehensible. In the real world, someone figures out that things get way easier if you just put a few programming language concepts into the configuration format, and now you're back where you started but with a much worse programming language (shoehorned into a configuration format) than you were using before.

Boilerplate may be cumbersome, but it effectively gives you a very large number of places to "hook into" the framework to make it do what you need. AI makes boilerplate much less painful to write.


There are always middle grounds to be explored. The way I see it, 80% of a "codebase" would be data and 20%, code.

Both worlds can be cleanly composed. For instance, for backend development, it's common to define an array (data) of middleware (code).

At a smaller scale, this is already a reality in the Clojure ecosystem - most sql is data (honeysql library), and most html is data (Hiccup library).
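A minimal Python sketch of that composition, a list (data) of middleware (code), with hypothetical handler names chosen for illustration:

```python
# Data-driven request pipeline: the pipeline itself is just a list (data),
# each element a plain function (code). Reordering or extending the
# pipeline means editing data, not writing new plumbing.
def add_request_id(ctx):
    ctx["request_id"] = 42  # placeholder; real code would generate one
    return ctx

def authenticate(ctx):
    ctx["user"] = "alice" if "token" in ctx else "anonymous"
    return ctx

def handle(ctx):
    ctx["response"] = f"hello {ctx['user']}"
    return ctx

PIPELINE = [add_request_id, authenticate, handle]  # the "data" part

def run(ctx, pipeline=PIPELINE):
    for middleware in pipeline:
        ctx = middleware(ctx)
    return ctx

result = run({"token": "secret"})
assert result["response"] == "hello alice"
```

The 80/20 split then falls out naturally: the middleware functions are the 20% of code, and the pipeline definitions are part of the 80% of data.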


Working on a system like this (mostly configured with complex yaml, but extended with a little DSL/rule engine to handle more complex situations) a long while ago, I introduced a bug that cost the company quite a bit of money by using `True` instead of `true`—something that would have been readily caught in a proper language with real tooling.


That would be caught by any schema validation system at runtime, e.g. Zod in typescript, Malli in Clojure, and so on.
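A hand-rolled sketch of that kind of runtime check, catching the string "True" where a boolean was expected (hypothetical config keys; a real system would use a library such as jsonschema, Pydantic, Zod, or Malli):

```python
# Minimal runtime schema validation: check each config value against an
# expected type and collect human-readable errors.
def validate(config, schema):
    errors = []
    for key, expected_type in schema.items():
        value = config.get(key)
        if not isinstance(value, expected_type):
            errors.append(
                f"{key}: expected {expected_type.__name__}, got {value!r}"
            )
    return errors

schema = {"retry": bool, "max_attempts": int}
good = {"retry": True, "max_attempts": 3}
bad = {"retry": "True", "max_attempts": 3}  # "True" parsed as a string

assert validate(good, schema) == []
assert validate(bad, schema) == ["retry: expected bool, got 'True'"]
```

The check runs at load time rather than compile time, so it catches the bug before it costs money, though later than a typed language would.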


>> Ideally, codebases would grow by adding data (e.g. a json describing endpoints, UIs, etc), not repetitive code.

Be very careful with this approach; there are many ways it can go completely wrong. I've seen a codebase like this and it was a disaster to debug, because you can't set breakpoints in data.

It may not look compact or elegant, but I'd rather see debuggable and comprehensible boilerplate, even if it's repetitive, than a mess.


Yes, there is A LOT of boilerplate that is sped up by AI. Every time I interface with a new service or API, I don't have to carefully read the documentation and write the code by hand (or copy-paste examples); I can literally have the AI do the first draft, grok it, test it, and iterate. Oftentimes the AI misses the latest developments, and I have to research things myself, fix the code, and explain the new capabilities, after which the AI can be used again. But in the end it's still about 20x faster.


Precisely what I was going to say. As domain specificity increases, LLM output quality rapidly decreases.


Most of the post reads like "boilerplate created at 20x/50x" to me, too, frankly.

