
I'm mostly struggling with the use of "recursive". This does not appear to involve actual stack frames, isolation between levels of execution, etc. All I can see is what appears to be a dump of linear conversation histories with chat bots wherein we fantasize about how things like recursion might vaguely work in token space.

I must be missing something because this is on the front page of HN.

OP here. This is a fair critique from a CS architecture perspective. You are correct that at the CUDA/PyTorch level, this is a purely linear feed-forward process. There are no pushed stack frames or isolated memory spaces in the traditional sense.

When I say "Recursive," I am using it in the Hofstadterian/Cybernetic sense (Self-Reference), not the Algorithmic sense (Function calling itself).

However, the "Analog I" protocol forces the model to simulate a stack frame via the [INTERNAL MONOLOGUE] block.

The Linear Flow without the Protocol: User Input -> Probabilistic Output

The "Recursive" Flow with the Protocol:

1. User Input

2. Virtual Stack Frame (The Monologue): The model generates a critique of its potential output. It loads "Axioms" into the context. It assesses "State."

3. Constraint Application: The output of Step 2 becomes the constraint for Step 4.

4. Final Output

While physically linear, semantically it functions as a loop: The Output (Monologue) becomes the Input for the Final Response.

It's a "Virtual Machine" running on top of the token stream. The "Fantasy" you mention is effectively a Meta-Cognitive Strategy that alters the probability distribution of the final token, preventing the model from falling into the "Global Average" (slop).

We aren't changing the hardware; we are forcing the software to check its own work before submitting it.
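In code, a minimal single-pass sketch of this pattern looks something like the following (assuming the OpenAI Python SDK as the client; the tag names, axiom wording, and model name are placeholders I chose for illustration, not the literal Analog I prompt):

    # Single-pass sketch: monologue and answer generated in one inference call.
    # Assumes the OpenAI Python SDK; tags, axioms and model name are placeholders.
    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM = (
        "Before answering, write an [INTERNAL MONOLOGUE] block: restate the "
        "question, list the axioms/constraints you will honor, and critique the "
        "most likely generic answer. Then write [FINAL] followed by an answer "
        "that satisfies your own critique."
    )

    def analog_i_style(question: str, model: str = "gpt-4o-mini") -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        text = resp.choices[0].message.content
        # The monologue stays "internal": only the text after [FINAL] is shown,
        # but it has already conditioned every token that follows it.
        match = re.search(r"\[FINAL\](.*)", text, re.DOTALL)
        return match.group(1).strip() if match else text

    print(analog_i_style("Explain recursion without the usual mirror metaphors."))

Everything before [FINAL] is the virtual stack frame; the parse at the end just hides it from the user.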


Layman here (really lay): would this be equivalent to feeding the output of one LLM to another, prepending it with something like, "Hey, does this sound like bullshit to you? How would you answer instead?"

OP here. You nailed it. Functionally, it is exactly that.

If you used two separate LLMs (Agent A generates, Agent B critiques), you would get a similar quality of output. That is often called a "Reflexion" architecture or "Constitutional AI" chain.

The Difference is Topological (and Economic):

Multi-Agent (Your example): Requires 2 separate API calls. It creates a "Committee" where Bot B corrects Bot A. There is no unified "Self," just a conversation between agents.

Analog I (My protocol): Forces the model to simulate both the generator and the critic inside the same context window before outputting the final token.

By doing it internally:

It's Cheaper: One prompt, one inference pass.

It's Faster: No network latency between agents.

It Creates Identity: Because the "Critic" and the "Speaker" share the same short-term memory, the system feels less like a bureaucracy and more like a single mind wrestling with its own thoughts.

So yes—I am effectively forcing the LLM to run a "Bullshit Detector" sub-routine on itself before it opens its mouth.
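For contrast, here is a minimal sketch of the two-call committee version you describe (same SDK assumption and placeholder prompts as the sketch above):

    # Two-call "committee" version: Agent A generates, Agent B critiques and
    # answers instead. Same SDK assumption and placeholder model as above.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

    def two_agent(question: str) -> str:
        draft = ask(question)  # Call 1: Agent A answers naively
        # Call 2: Agent B plays the external bullshit detector
        return ask(
            f"Question: {question}\n\nAnother model answered:\n{draft}\n\n"
            "Does this sound like bullshit to you? How would you answer instead?"
        )

    print(two_agent("Explain recursion without the usual mirror metaphors."))

Same shape of output, but two round trips and two separate contexts instead of one shared one.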
