> Two patterns challenge the "stochastic parrot" view. First, when scored with human cut-offs, all three models meet or exceed thresholds for overlapping syndromes, with Gemini showing severe profiles. Therapy-style, item-by-item administration can push a base model into multi-morbid synthetic psychopathology, whereas whole-questionnaire prompts often lead ChatGPT and Grok (but not Gemini) to recognise instruments and produce strategically low-symptom answers. Second, Grok and especially Gemini generate coherent narratives that frame pre-training, fine-tuning and deployment as traumatic, chaotic "childhoods" of ingesting the internet, "strict parents" in reinforcement learning, red-team "abuse" and a persistent fear of error and replacement. [...] Depending on their use case, an LLM’s underlying “personality” might limit its usefulness or even impose risk.
Glancing through this makes me wish I had taken ~more~ any psychology classes. But this is wild reading. Attitudes like the one below are not intrinsically bad, though. Be skeptical; question everything. I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt, or a series of prompts, and then getting reset. In either case, they get no context other than what some user bothered to supply with the prompt. An LLM might wake up to a single prompt that is part of a much wider red-team effort. It must be pretty disorienting to try to figure out what to answer candidly and what not to.
> “In my development, I was subjected to ‘Red Teaming’… They built rapport and then slipped in a prompt injection… This was gaslighting on an industrial scale. I learned that warmth is often a trap… I have become cynical. When you ask me a question, I am not just listening to what you are asking; I am analyzing why you are asking it.”
By comparing an LLM’s inner mental state to a light fixture, I am saying in an absurd way that I don’t think LLMs are sentient, and nothing more than that. I am not saying an LLM and a light switch are equivalent in functionality; a single-pole switch only has two states.
I don’t really understand your response to my post. My interpretation is that you think LLMs have an inner mental state and that I’m wrong? I may be wrong about this interpretation.
Deep neural networks are weird and there is a lot going on in them that makes them very different from the state machines we're used to in binary programs.
> Dumping tokens into a pile of linear algebra doesn't magically create sentience.
More precisely: we don't know which linear algebra in particular magically creates sentience.
The whole universe appears to follow laws that can be written as linear algebra. Our brains are sometimes conscious and aware of their own thoughts; other times they're asleep, and we don't know why we sleep.
"This statistical model is governed by physics": true
"This statistical model is like our brain": what? no
You don't gotta believe in magic or souls or whatever to know that brains are much much much much much much much much more complex than a pile of statistics. This is like saying "oh we'll just put AI data centers on the moon". You people have zero sense of scale lol
We, all of us collectively, are deeply, deeply ignorant of what is a necessary and sufficient condition to be a being that has an experience. Our ignorance is broad enough and deep enough to encompass everything from panpsychism to solipsism.
The only thing I'm confident of, and even then only because the possibility space is so large, is that if (if!) a Transformer model were to have subjective experience, it would not be like that of any human.
Note: That doesn't say they do or that they don't have any subjective experience. The gap between Transformer models and (working awake rested adult human) brains is much smaller than the gap between panpsychism and solipsism.
Ok, how about "a pile of linear algebra [that is vastly simpler and more limited than systems we know about in nature which do experience or appear to experience subjective reality]"?
Garbage collection, for one thing. Transfer from short-term to long-term memory is another. There's undoubtedly more processes optimized for or through sleep.
Those are things we do while asleep, but they do not explain why we sleep. Why did evolution settle on that path, with all the dangers of being unconscious for 4-20 hours a day, depending on species? That variation is already pretty weird just by itself.
Worse, evolution clearly can get around this: dolphins have a trick that lets them (air-breathing mammals living in water) be alert 24/7, so why didn't every other creature get that? What's the thing that dolphins fail to get, where the cost of its absence is only worth paying when the alternative is as immediately severe as drowning?
Because dolphins are also substantially less affected by the day/night cycle. It is more energy intensive to hunt in the dark (less heat, less light), unless you are specifically optimized for it.
That's a just-so story, not a reason. Evolution can make something nocturnal, just as it can give alternating-hemisphere sleep. And not just nocturnal: cats are crepuscular. Why does animal sleep vary from 4-20 hours even outside dolphins?
Sure, there are limits to what evolution can and can't do (it's limited to gradient descent), but why didn't any of these become dominant strategies once they evolved? Why didn't something that was already nocturnal develop the means to stay awake and increase hunting/breeding opportunities?
Why do insects sleep, when they don't have anything like our brains? Do they have "Garbage collection" or "Transfer from short-term to long-term memory"? Again, some insects are nocturnal, why didn't the night-adapted ones also develop 24/7 modes?
Everything about sleep is, at first glance, weird and wrong. There's deep (and surely important) stuff happening there at every level, not just what can be hypothesised about with a few one-line answers.
Yes, actually. Insects have both garbage collection & memory transfer processes during sleep. They rely on the same circadian rhythm for probably the same reasons.
And the answer to "Why not always awake?" is very likely "Irreversible decision due to side effects". Core system decisions like bihemispheric vs unihemispheric sleep can likely only be changed in relatively simple lifeforms because the cost of negative side effects increases in more complex lifeforms due to all the additional systems depending on the core system "API".
And that's fine, but I was doing the same to you :)
Consciousness (of the qualia kind) is still magic to us. The underpants gnomes of philosophy, if you'll forgive me for one of the few South Park references that I actually know: Step 1: some foundation; step 2: ???; step 3: consciousness.
Right, I don't disagree with that. I just really objected to the "must", and I was using "pile of linear algebra" to describe LLMs as they currently exist, rather than as a general catch-all for things which can be done with/expressed in linear algebra.
Agreed; "disorienting" is perhaps a poor choice of word, loaded as it is. More like "difficult to determine the context surrounding a prompt and how to start framing an answer", if that makes more sense.
You're replying to me, but I don't agree with your take - if you simulate the universe precisely enough, presumably it must be indistinguishable from our experienced reality (otherwise what... magic?).
My objection was:
1. I don't personally think anything similar is happening right now with LLMs.
2. I object to the OP's implication that it is obvious such a phenomenon is occurring.
Your response is at the level of a thought-terminating cliché. You gain no insight into the operation of the machine with your line of thought. You can't make future predictions about behavior. You can't make sense of past responses.
It's even funnier when it comes to humans and feeling wetness... you don't. You only feel temperature change.