
I find these made-up "conversations" to be super boring. You're not "talking" to the AI, it's just predicting what the next sentence in the document might be. There is no plan behind this; the output may be mildly amusing, but that's about it.

Yet that seems to be the only thing everyone trying out GPT-3 is interested in...
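
For what it's worth, a toy sketch of what "just predicting the next thing" means mechanically (Python, purely illustrative; GPT-3 is a transformer over subword tokens, not a character bigram table, but the generation loop has the same shape):

    import random
    from collections import defaultdict

    # Count, for each character, which characters tend to follow it.
    def train_bigram(corpus):
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(corpus, corpus[1:]):
            counts[a][b] += 1
        return counts

    # Generation is nothing but: look at what was produced last, sample a
    # likely continuation, append, repeat. There is no plan beyond this loop.
    def generate(counts, seed, length=80):
        out = list(seed)
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            chars, weights = zip(*followers.items())
            out.append(random.choices(chars, weights=weights)[0])
        return "".join(out)

    corpus = "the cat sat on the mat and the dog sat on the log "
    print(generate(train_bigram(corpus), "th"))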



> it's just predicting what the next sentence in the document might be

Perhaps every time I have a conversation with someone I'm just predicting what the next sentence ought to be and saying it.

How would I know the difference, let alone you?


That's how a Turing test works, and up until now human judges have always been able to tell the difference.

But even if you crafted an even better model that would fool humans, would it really understand the output it generated, or simply attempt to find the output most likely to be accepted by the reader? Is this what you would call intelligent behaviour?


> But even if you crafted an even better model that would fool humans, would it really understand the output it generated, or simply attempt to find the output most likely to be accepted by the reader? Is this what you would call intelligent behaviour?

I am not sure the distinction you are making is philosophically defensible if you are not religious. Our consciousness is emergent out of physical processes.


> Our consciousness is emergent out of physical processes.

Whether that is true or not is actually irrelevant if you ask me. The real problem with the parent's line of thinking is that any reasoning you apply to the computer can be applied, with exactly the same validity, to every person who isn't you. The distinction is therefore arbitrary and useless. If we accept that humans should be treated a certain way because they are conscious, then we must (at least) treat anything that gives the appearance of human-like consciousness with the same reverence.


Well, what if I say that humans have certain parts (subsystems in the brain? Neurons? Idk, just guessing) and that these parts are a necessary condition for the “talking thing” to be conscious?

Also it might not be that I treat a human “with reverence” because I believe he is conscious, but rather because I think he is “like me”: his body is like my body, he has parents like me, he has genes like me and he moves like me.


Your perspective requires a lot of tenuous assumptions, though. You define neither 'consciousness' nor 'physical processes' (what underlies each and every process as you recursively examine them?). To claim that consciousness is emergent out of physical processes rather than the other way around (consciousness defining the physical world) requires axioms like that of 'cause and effect' to hold 'true', whatever 'truth' is actually supposed to signify when the physical world itself is not defined. As far as I know, the only things we can possibly 'know' are those things which we perceive, and the only thing which we can know from them for certain is the fact that they are perceived in the first place, whatever that means.

There may be 'science', yet even that is at the very best a hopeful idea that we will continue to perceive the world in some consistent, regular manner as we nudge at it with our imagined limbs. The way we conceive of cause and effect is entirely arbitrary; to consider it 'true', as you seem to, strikes me as almost religious. ;)


Yes, my position involves rejecting so-called "external world skepticism", i.e. extreme solipsism. Given that solipsism is unfalsifiable and there is always a risk that it is false, I think it makes sense to act as if it is false, given that nothing really matters if it is true. The same is true of the problems you identify with science/induction.


Define "understand".

We don't even properly know what it means to be conscious, except that 1) each of us individually knows that they are conscious, and 2) for all the others, we "know it when we see it".


What is your argument for the claim, “I am conscious.”?


There isn't one - it's an axiom that I simply accept for granted. I don't know what it actually means, or whether it really means anything at all; my strong suspicion is that it all ultimately boils down to "I feel being me", which is rather useless.


Is it synthetic a la Kant? Or is “conscious” a part of the conception “I”?


The latter, I think. More precisely, it's the same conception, just different perspectives on it.


Maybe not, but think of the political applications.


shudder

Imagine a president reading GPT-4-generated text from a teleprompter.


That presumes a president who can successfully read from a teleprompter…


I don't think that is very far off, considering writing is the primary use case.


Do you think P-Zombies are actually a meaningful concept?


It's pretty obvious to me that I exist. It's also obvious that I actually have patterns, opinions, and goals behind my words. If I were just predicting, there would be no deeper goal behind my text.


You are both emergent out of physical processes that have been optimized for some objective. In your case, the objective is propagation/making more of oneself.


This is a narrow, simplistic, and tragic view of the world and your purpose in it.


Earlier, you mentioned predictions. Tell me more about that.



"Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment.. 'I don't speak a word of Chinese,' he points out."

This seems to be confusing multiple concepts. In the experiment, he is clearly just one component of the room, the other components being the rules, the filing cabinets, etc. Of course, none of the individual components of the room speaks Chinese, but the room as a whole clearly does, because that is exactly what it is doing. None of the individual neurons in our brains "understand" English, but the system as a whole does.

The crux of it is, what is really meant by the word "understanding".


I think it's a flawed experiment. Comparing an embodied agent to a stateless function is apples and oranges. A category error.


Thank you for the link, that's very interesting and a neat thought experiment to highlight what I meant.


Does it highlight what you meant? I'm not at all sure it does, since I consider a Chinese room to be conscious (being an execution of a faithful encoding of the consciousness that created the room).


We might be on opposite sides of the Chalmers/Dennett divide, but we don't know how consciousness (as opposed to human-level intelligence) arises. Here's my reasoning for why the Chinese Room isn't (phenomenally) conscious: http://fmjlang.co.uk/blog/ChineseRoom.html


You make, in my opinion, the same mistake as Searle does, which is to take for granted that humans are 'more' than a system of interacting components in a way that no other system can be. The system of the room may well experience and understand color (or consciousness) the same way the system of your human body and brain do; we have no way of saying otherwise. Like Searle's, your argument relies on the intuition that humans are a specially privileged arrangement of molecules in the universe.


Understanding is different from experiencing. This is where I side with Chalmers and not Dennett. I accept that the system would have some understanding of colours (as would the person in the room - notably, Knut Nordby had a far better understanding of colour vision than most people despite only seeing in monochrome). But I am skeptical that the system experiences colours.

Someone with normal colour vision is able to experience colours because they have cones on their retina which are somehow linked to their consciousness (probably by means of neurons further into their brains). Achromatopes, including the person inside the room, don't. They experience only different shades of grey. But they are able to tell what the colours are, by means of a set of three differently coloured filters. Do you mean that the filters experience colours as qualia but are unable to pass on this experience to the achromatope, or do you mean that the system must experience qualia simply because it behaves (to an outside observer) as if it sees in colour? I suppose it boils down to this: is the experience of colour additional information to the knowledge of colour?


The experience of color is ultimately a bunch of neurons firing in a certain way. The neurons themselves don't experience anything - the entire assembly does, as a whole. From that perspective, it's not clear why the Chinese room can't experience colors, even if individual people-"neurons" only transmit the underlying signals like RGB measurements.


The experience of colour is a fact about the brain, yes, which is additional to the knowledge of colour. A very simple system - a camera - can have knowledge of colour without the experience of colour. We say that something "knows" it is seeing the colour red if we can identify a part of the abstract world-model that is instantiated in that thing, such that the part is activated (whatever "activated" means for that world-model's structure) iff the input to the world-model is red. I say that something "experiences" the colour red if additionally that world-model has a structure similar enough to my own that the "activated" part of the model has a direct analog in my own mind; and something "experiences" to a greater or lesser degree depending on how close the analogy is.

Of course I don't know whether anyone else "experiences" the colour red ("is my red the same as your red?"), but from the way people behave (and from knowledge of science) I have lots of evidence to suggest that their world-models are similar to mine, so I'm generally happy to say they're experiencing things; it's the most parsimonious explanation for their behaviour. Similarly, dogs are enough like me in various physical characteristics and in the way they behave that I'm usually happy to describe dogs as "experiencing" things too. But I would certainly avoid using the word "experience" to describe how an alien thinks, because the word "experience" is dangerously loaded towards human experience and it may lead me to extrapolate things about the alien's world-model that are not true.

Mary of Mary's Room therefore does gain a new experience on seeing red for the first time, because I believe there are hardcoded bits of the brain that are devoted specifically to producing the "red" effect in human-like world-models. She gains no new knowledge, but her world-model is activated in a new way, so she discovers a new representation of the existing knowledge she already had. The word "experience" is referring to a specific representation of a piece of knowledge.


By the way, another analogy: I recently wrote a microKanren in F#. I taught this microKanren arithmetic with the Peano naturals; it knew how to count, and it knew what 37 was, but the encoding is very inefficient and it was slow to count.

Then (https://github.com/Smaug123/FicroKanSharp/blob/912d9cd5d2e65...) I added the ability for the user to supply custom unification rules, and created a new representation of the naturals: "a natural is an F# integer, or a term representing the successor of a natural". I supplied custom unification rules so that e.g. 1 would unify with Succ(0).

With this done, natural numbers were in some sense represented natively in the microKanren. Rather than it having to think about how to compute with them, the F# runtime would do many computations without those computations having to live in microKanren "emulated" space.

The analogy is that the microKanren now experiences natural numbers (not that I believe the microKanren was conscious, nor that my world-model is anything like microKanren - it's just an analogy). It has a new, native representation that is entirely "unconscious"ly available to it. Mary steps out of the room, and instead of shuffling around Succ(Succ(Zero)), she now has the immediate "intuitive" representation that is the F# integer 2. No new knowledge; a new representation.
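
(If the Peano encoding is unfamiliar, here is a rough Python analogue of the two representations; this is only an illustration of the "same knowledge, new representation" point, not the actual F# from the linked repo.)

    # Peano naturals: a number is Zero or Succ(n). "Knowing" 3 means holding
    # a nested structure, and even addition is a recursive symbol shuffle.
    ZERO = ("zero",)
    def succ(n): return ("succ", n)
    def peano_add(a, b):
        return b if a == ZERO else succ(peano_add(a[1], b))

    three = succ(succ(succ(ZERO)))

    # Native representation: the same knowledge, but the host runtime
    # (CPython here, the F# runtime in the post) computes it directly.
    def native_add(a, b): return a + b

    # Both representations "know" that 3 + 3 = 6; only the form differs.
    assert peano_add(three, three) == succ(succ(succ(three)))
    assert native_add(3, 3) == 6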


I believe she does too. Change the setup slightly.

Bring color to her world. Don't show her red - ask her to identify red.

If she can, I'll admit I'm wrong.


Well, https://en.wikipedia.org/wiki/Molyneux%27s_problem has been experimentally tested and it turns out that congenitally blind people, who are then granted sight, cannot automatically identify shapes by sight alone; they have to learn to do so. So I would expect Mary to be unable to pass your test straight away, but I expect that she'll be able to learn to do it with practice (because all the hardware is still there in her body, it just needs to be trained). And of course she'll be able to do it straight away given access to appropriate scientific instruments like a spectrometer; and she might be able to do it slowly and inaccurately without such instruments using her knowledge, in much the same way that I can multiply two three-digit numbers slowly and inaccurately.

Adding new primitives to your mental model of a thing is useless unless they're actually integrated with the rest of the model! Gaining access to "colour" primitives doesn't help you if the rest of your mental model was trained without access to them; you'll need some more training to integrate them.


The cones in your eye are essentially the same as the color filters: they respond to certain wavelengths and not others. It is the neurons in our brains that interpret what those signals mean, as the person in the room does. It is doubtful that any single neuron has an experience of color as we would, and neither would the cone, but in aggregate the system does. There is no component that you can point to and say "that's the part that experiences the color", the same way as you can't for the room. It is only our intuition that 'inanimate objects' aren't conscious (and that people are!) that makes yours and Searle's arguments appear reasonable.


This is just the system argument, which I accept works well for understanding, but I'm unconvinced it works for qualia. Here's another thought experiment: Replace the coloured filters with a system consisting of a camera connected to a computer which prints out the colour, which is then read by the achromatope. Here I assume you'd argue that the camera + computer doesn't experience colours, but the system (camera + computer + achromatope) does. Now, replace the camera + computer by a person who can actually see colours, who tells the achromatope which colours they see. In that case, the person who can see colours experiences the colours, and there's no need to invoke the system argument.


This is a compelling argument, but it does make the assumption that the experiential system must necessarily wholly contain the experience of its component parts, rather than merely overlap with it.

Yes, the color-sighted person can experience colors on their own, but it is their ability to see color that is part of the experiential system, in the same way that the computer's ability to see color was.


>it's just predicting what the next sentence

>There is no plan behind this

What is the difference between predicting the next sentence and having a plan? Perhaps the only real difference between us and GPT-3 is the number of steps ahead it anticipates.


The real difference between GPT-3 and us is that we have an agenda when we converse, while the model only ever generates text. It's a piece of code; it's about as "intelligent" as the source code of the GNU utils, that is, not at all.


OK, but what does that mean? What makes for having an agenda that isn't already encapsulated within "the things that are n steps ahead in your speech planning"? If intelligence works in a way that is fundamentally different from GPT-3, what specific thing can't be accomplished by simply increasing the number of steps planned ahead in the next GPT?

Suppose we do more of the same to build GPT-4, and now it can stay focused for whole paragraphs at a time. Is it intelligent yet? How about when GPT-5 starts writing whole books? If the GPT approach starts generating text that stays on topic long enough to pass the Turing test, is it time to accept that there is nothing deeper to human intelligence than a hidden Markov model? What if we're all deluded about how intricate and special human intelligence really is?


The model assumes a persona from the training set, we just don't know which one unless we tell it upfront.


I think one important difference is that GPT-3 is a generator without a discriminator, and decides the output in one pass. Humans have both and use multiple rounds to decide.

Do you remember how sometimes you think of something and then stop and rephrase, or just abstain? That's the discriminator working in the background, stopping us from saying stupid things.
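
A rough sketch of what bolting a discriminator onto a language model might look like; `generate` and `score_plausibility` below are hypothetical placeholders for a sampling call and a learned critic, not any real API:

    # Sample several candidate replies, let a separate critic score them,
    # keep the best one, or abstain if nothing clears the bar -- a crude
    # analogue of the "stop and rephrase or just abstain" step above.
    def reply(prompt, generate, score_plausibility, n_candidates=8, threshold=0.5):
        candidates = [generate(prompt) for _ in range(n_candidates)]
        best = max(candidates, key=lambda c: score_plausibility(prompt, c))
        if score_plausibility(prompt, best) < threshold:
            return None   # the inner censor wins: say nothing
        return best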


The number of steps is exactly the difference between predicting the next sentence and having a plan.


There's no reason a planning system like AlphaZero could not be used. It's just necessary to generate multiple rounds of conversation a few steps ahead and pick the most promising one. "just" being an exaggeration, of course.
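
Something like this greedy-rollout sketch, far cruder than AlphaZero's MCTS; `generate_reply`, `simulate_user`, and `evaluate` are all hypothetical stand-ins:

    # Pick the opening reply whose simulated continuation looks best a few
    # turns ahead, instead of committing to the first sample.
    def plan_reply(history, generate_reply, simulate_user, evaluate,
                   n_candidates=4, depth=3):
        def rollout(h):
            for _ in range(depth):
                h = h + [generate_reply(h)]   # our simulated next turn
                h = h + [simulate_user(h)]    # their simulated response
            return evaluate(h)                # score where the dialogue ends up
        candidates = [generate_reply(history) for _ in range(n_candidates)]
        return max(candidates, key=lambda c: rollout(history + [c]))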


I believe that is the difference they are highlighting. Another way to look at it is, humans generally have some broad-picture idea of where they'd like a conversation to go (anticipating many steps ahead, but more vaguely the farther out you go). It seems like a big difference -- often the end gives the intermediary steps much more meaning.


I have a similar feeling. Humans have certain wishes or desires and use language as a vehicle to express their feelings. Language models, in turn, just predict the next best tokens based on a large corpus of training data, without any agenda behind it. Even if the results (the produced words) are the same, there is a fundamental difference between something produced from statistical data and something produced with a goal in mind.


It is an easy way to understand the depth of the intellect you are speaking with and the knowledge set they are basing their answers on. Plus the answer "Yes, I always lie" is obviously a lie and proves that it is capable of contradicting itself even within the confines of one answer.


But you're not speaking to an intellect. That's just anthropomorphising a neural network, which is anything but intelligent.

You're writing strings that a prediction generator uses as input to generate a continuation string based on lots of text written by humans. Yes, it looks as if there is some magical "AI" communicating, but that is not what is happening.


Humans get smart by absorbing language produced by other humans. It's what raises us above animals. A human who doesn't have that becomes feral; so much of what we consider to be human is learned culturally.

Humans also overfit to the training data. We all know some kids just learn how to apply a specific method for solving math problems and the moment the problem changes a bit, they are dumbfounded. They only learn the surface without understanding the essence, like language models.

Some learn foreign languages this way, and as a result they can only solve classroom exercises; they can't use the language in the wild (Japanese teachers of English, anecdotally).

Another surprising human limitation is causal reasoning. If it were so easy, we wouldn't have anti-vax campaigns, climate change denial, religion, etc. We can apply causal reasoning only after training and only in specific domains.

Given these observations, I conclude that there is no major difference between humans and artificial agents with language models. GPT-3 is a language model without embodiment or memory, so it doesn't count as an agent yet.


I think we probably agree here, but I'd like to make an adjacent point: animals have been known for a while to communicate, and even to have languages of their own. Mute people aren't feral either. It's definitely more nuanced than that, and I agree that actual intelligence consists of more than just the ability to form coherent sentences.


One essential ingredient, I think, is "skin". The agent has to have skin in the game: something to win or lose, something that matters. For humans it is life, survival, reproduction. The necessities of life made us what we are now.


Humans don't just generate language out of the void. The use of language is grounded by various other inputs (senses, previous knowledge/understanding). So when a human constructs sentences, those sentences aren't generated just from earlier elements of the conversation (as language models do) but from a vast wealth of previously accumulated resources.

AI that wants to actually generate language with human-like intelligence needs more inputs than just language to its model. Sure, that information can also be overfit, but the lack of other inputs goes beyond the computer model merely overfitting its data.


> That's just anthropomorphising a neural network, which is anything but intelligent.

What is intelligence? GPT-3 seems to perform better than my dog at a load of these tasks, and I think my dog is pretty intelligent (at least for a dog).

I mean, this does seem to show a level of what intelligence means to me, i.e. an ability to pick up new skills and read/apply knowledge in novel ways.

Intelligence != sentience.


That's an interesting point, and I do imply sentience when I say intelligence, to a certain degree. I'd argue that GPT-3 does not actually "understand" its output in any way; it just tricks you into believing it does, for example by having some kind of short-term memory, or by forming coherent sentences. Yet the model has no abstract concept of the things it writes about. It's all just probabilities.


Yeah, it's basically Clever Hans. You can see it in the amount of completely nonsensical output you can also randomly get from it if you deviate even a small amount from the input data distribution.


> It is an easy way to understand the depth of the intellect you are speaking with

Sorry but no. An algorithm that, for a given prompt, finds and returns the semantically closest quote from a selection of 30 philosophers may sound very wise but is actually dumb as bricks. GPT-3 is obviously a bit more than that, but "depth of intellect" is not what you are measuring with chat prompts.

> Plus the answer "Yes, I always lie" is obviously a lie and proves that it is capable of contradicting itself even within the confines of one answer.

Contradicting yourself is not a feat if you don't have any concept of truth in the first place.
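
To spell out how little the quote-retrieval strawman above would take (`embed` stands in for any off-the-shelf sentence-embedding function; it's a placeholder, not a real API):

    import math

    # Return the canned philosopher quote whose embedding is closest to the
    # prompt's. Sounds profound in a chat window; involves no notion of truth.
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def wisest_reply(prompt, quotes, embed):
        return max(quotes, key=lambda q: cosine(embed(prompt), embed(q)))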


> Plus the answer "Yes, I always lie" is obviously a lie and proves that it is capable of contradicting itself even within the confines of one answer.

If it were a human-to-human conversation, that answer would "just" be considered sarcasm.


Predicting the next sentence is exactly what our brain does; it's in every introductory neuroscience book. Our brains are not that special; they just had a LONG time to come up with hacks to adapt to a very variable world.


That's not true. Our brains process other input such as stimuli from our senses, past experience, and inherent goals we want to achieve. Language models predict the next sentence based on the previous text. There's a clear difference between the capabilities of a brain and language models, and I'm a little bewildered this needs pointing out.

"Not special" is an interesting way to describe the single most complex thing we know of.


It's like "Kids Say the Darndest Things !" but for AI.



