'have been debunked' - quite an emotionally laden phrase to introduce what is in fact an opposing view by another mathematician don't you think?
The hostility the Strong AI camp have for Penrose's views is fascinating - it must be infuriating to have such a respected mathematician and physicist take the time to write a few books refuting the reductionist approach. There certainly seems to be no room for a contrarian around those parts!
I'd recommend reading section 4, "The 'Bare' Gödelian Case". Two particularly relevant points!
"4.5 The many arguments that computationalists and other people have presented for wriggling around Gödel's original argument have become known to me only comparatively recently: perhaps we act and perceive according to an unknowable algorithm; perhaps our mathematical understanding is intrinsically unsound; perhaps we could know the algorithms according to which we understand mathematics, but are incapable of knowing the actual roles that these algorithms play. All right, these are logical possibilities. But are they really plausible explanations?
4.6 For those who are wedded to computationalism, explanations of this nature may indeed seem plausible. But why should we be wedded to computationalism? I do not know why so many people seem to be. Yet, some apparently hold to such a view with almost religious fervour. (Indeed, they may often resort to unreasonable rudeness when they feel this position to be threatened!) Perhaps computationalism can indeed explain the facts of human mentality - but perhaps it cannot. It is a matter for dispassionate discussion, and certainly not for abuse! "
Did you see where he admits he's wrong? "these are logical possibilities". I.e., Penrose made a logical argument, and he didn't cover all the cases. Maybe this one is clearer: http://www.nytimes.com/books/97/04/27/nnp/17540.html
I haven't read the entire article you linked to. Can you elaborate on something?
Godel's Incompleteness Theorem (in essence) said that the first order Peano axioms for the integers are not strong enough to prove all statements in the second order Peano axioms. My understanding is that mathematicians wanted a computable system that would be strong enough to prove all true statements encompassed by the second order system.
Godel showed this can't be done. No computable system can be strong enough to prove all true statements in the second order system. Given that humans can prove statements that are true in a system that isn't reducible to a computable set of axioms, how can computer intelligence ever equal human intelligence? Human intelligence must be fundamentally not a computable system. What's wrong with this reasoning?
Could you elaborate on "... given that humans can prove statements that are true in a system that can't be reducible to a computable set of axioms ..."?
Perhaps with an example?
The true but unprovable statement referred to by the theorem is often referred to as “the Gödel sentence” for the theory. It is not unique; there are infinitely many statements in the language of the theory that share the property of being true but unprovable.
Doesn't the existence of things like the Gödel sentence show that there are things that are true but which humans can't prove, contradicting the claim that this is an important distinction between humans and computers?
I'll try to give an overview of what's going on. In the late 1800s Peano came up with his set of axioms for the integers. This set is second order, which means it isn't computable. Hilbert and others wanted a computable set of axioms so that one could, in theory, remove humans from the discovery of mathematical truth.
So the first order axioms were invented. Godel showed that the first order axioms are not strong enough to prove all true statements about the integers. There are infinitely many distinct models of the first order integers. There is only one model of the second order integers.
In the proof of the Incompleteness Theorem Godel proves a statement about an integer which is not provable in the first order system.
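For readers who want the shape of that statement: by the diagonal lemma one can construct a sentence G that asserts its own unprovability (a standard modern paraphrase, not Gödel's original notation):

```latex
\[
  T \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
```

If the first-order theory T (e.g. Peano arithmetic) is consistent, it proves neither G nor its negation, yet G is true in the standard model of the integers.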
Humans can work in the second order system but not computers. So, from my perspective there is something different about human thought patterns. I'm not an expert and would like to know what is wrong with my reasoning.
Ok, I'll confess that I'm not sure what you're talking about when you mention second order systems. However, Godel's incompleteness theorems (plural) were a lot more general than you're suggesting.
First Theorem:
"Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory."
Second Theorem:
"For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent."
It wasn't that Godel proved that Peano arithmetic was incomplete that shocked mathematicians; it was that he showed that _no formal system_ could ever be complete as long as it remained consistent.
Of course maybe this second order system isn't consistent, in which case there's no problem. After all, I'm fairly convinced that human reason isn't always perfectly consistent, so it makes sense that we can avoid Godel's theorems. Of course, I see no reason we couldn't make a slightly inconsistent computer either.
Definitely his Incompleteness Theorem (I think there are two of them, and also a Completeness Theorem) was more general than what I mentioned, but the shock was its application to arithmetic. The second order axioms are not recursively enumerable (think: computable). It was Hilbert's goal to find a computable system capable of proving (in theory) all true statements. There is no such computable system.
One can find complete axiomatic systems of the integers. It's quite easy. Just take all true statements as the axiom system. The problem is that there is no computable method for determining in such a system whether or not a given statement is an axiom.
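That complete-but-uncomputable system is what logicians call true arithmetic (sketched here in standard notation; this is an aside, not part of the original comment):

```latex
\[
  \mathrm{Th}(\mathbb{N}) \;=\; \{\varphi \;:\; \mathbb{N} \models \varphi\}
\]
```

It is complete by construction, but its set of axioms is not recursively enumerable, so no algorithm can decide whether a given statement belongs to it.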
Again, we humans can work in the second order axiom system but computers can't. So it appears there's a difference between human intelligence and computer intelligence.
How does that prove in any way that the human mind can find all true statements in Mathematics and thus go beyond the limits of computability? Do you have any proof of this? Also, do you have any proof that the human mind is in fact consistent (meaning it can't possibly reach A and ~A at the same time)?
I'm not claiming that humans can discover all true statements in mathematics. I'm saying the Incompleteness Theorem demonstrates that such a task is not computable, and in the proof of the theorem a statement that can't be proven in a computable system is proved. So... can a computer ever reach the same level of reasoning? I'm skeptical.
Solving a particular instance of a noncomputable problem is not the same as solving the problem itself. E.g., the busy beaver problem is noncomputable, but it's trivial to make a machine output solutions for simple cases, in the same way that it's trivial for a human to do so. In fact, you cannot rule out that what we are doing is essentially programming ourselves to solve ad-hoc cases of the general problem (which is computable), as opposed to somehow having in our heads a general way of solving the problem (which we know is not computable).
Basically, what you need to prove to show that humans are intrinsically more powerful than machines is that we have the ability to solve noncomputable problems in a general way, i.e., that we are hypercomputers.
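To make the busy beaver point concrete, here is a minimal sketch (my own illustration, not from the thread) of a machine that "solves" small instances exactly the way we do: by special-casing the known values, with no general method behind it.

```python
# A lookup table handles small instances of a noncomputable problem.
# Sigma(n) is the maximum number of 1s an n-state, 2-symbol halting
# Turing machine can leave on its tape; these small values are proven.
KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

def busy_beaver(n: int) -> int:
    """Return Sigma(n) for the cases we've special-cased.

    There is no general algorithm, so anything outside the table
    simply isn't known to this program (or to anyone, uniformly).
    """
    try:
        return KNOWN_SIGMA[n]
    except KeyError:
        raise ValueError(f"Sigma({n}) is not in our ad-hoc table")

print(busy_beaver(3))  # -> 6
```

The point of the sketch: the machine's success on n = 1..4 tells us nothing about whether it (or we) possesses a general method, which is exactly the distinction the comment above is drawing.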
He's a Platonist, so he believes in the existence of a real world that is neither material nor mental.
His central claim that conscious acts are in some sense noncomputational is prima facie false.
And his concrete solution to problem collects together several other very difficult problems and essentially says, solve one, solve them all. (Great news for his publishers btw.)
For those reasons, it's hard to take seriously because it's outrageously speculative.
I was annoyed at how the article just blithely asserted that wavefunctions collapse, as if that weren't a matter of ongoing debate. In fact, Penrose is remarkable among big-name physicists for being sure that wavefunction collapse occurs.
http://www.hedweb.com/manworld.htm#believes
Patricia Smith Churchland has famously remarked about Penrose's theories that "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."
Pretty much. Penrose, despite being generally brilliant, is not an expert in neuroscience. QM explanations seem like hand waving or saying "we don't really know, and since I know a lot about QM, that must be the reason". It's almost avoiding investigation into the hard problems of neurobiology to find the real, physical processes that bring about human cognition.
The argument that computers can't think derives from the idea that there are noncomputational processes at work in the brain. Essentially, we don't know how certain thoughts arrive in our mind. We can't create an algorithm to mimic our chain-of-thought generator.
But that doesn't mean we can't create a computer that can have similar noncomputational "thoughts".
People must choose, when a computer need not choose. What I mean is, a computer can shut down. A human mind cannot -- and continue to live. When a computer "observes" -- so to speak -- stimuli it cannot handle, or that are beyond its capacity, it does not make random choices about what to do next. We do not trust randomness. Sometimes, however, humans have no option but randomness. This is why in a crowd of 100 each one will react differently to the same stimulus. If it suddenly gets very cold, some will shiver, some will leave, some will get up and jump around.
In most cases, computer systems aren't even allowed to accept input that isn't known to be valid. Minds have to all the time.
When you begin to predict the future (and that's largely what the human mind is: a future-prediction machine), it becomes even more complex. It requires memory. Concoctions from memory, or assumptions. We don't let computers assume.
In many ways, we are holding computers back. Because we are afraid. We are afraid of what they will decide for us. We are afraid of random. We need control. We haven't subjected computers to survival of the fittest.
If we did, then by the law of large numbers, eventually (as I suppose is true of many humans) one will survive in a way we can't explain. We won't know how that computer made all the right decisions the whole time.
We don't know how to program computers to accept any input. White is the maximum color. Black is the darkest. But computers could see much darker than black and much brighter than white. How can we control something like that? We can't. We won't be able to. It will see and know things we can't imagine.
"People must choose, when a computer need not choose."
Perhaps I'm not understanding something, but that seems to be obviously false. A human always has the option of being unsure in the face of non-computable statements like "You can't know that this sentence is true" or such.
Imagine a dataset where the inputs are to be in a range from 1-100, but for some reason unknown to the programmers when the program was written, there is a value of 4,000.
In the real world, a human must decide what to do with that 4,000. A computer would crash or throw an error or something like that even though the data may actually be valid.
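A sketch of that idea in code (the 1-100 range and the stray 4,000 are taken from the example above; the function name is my own): rather than crashing, a program can accept what it understands and set the rest aside for a human to judge.

```python
# The range the programmers assumed when the program was written.
EXPECTED_MIN, EXPECTED_MAX = 1, 100

def process(values):
    """Keep going on unexpected input: accept in-range values and
    flag out-of-range ones for human review instead of crashing."""
    accepted, flagged = [], []
    for v in values:
        if EXPECTED_MIN <= v <= EXPECTED_MAX:
            accepted.append(v)
        else:
            flagged.append(v)  # possibly valid data the spec missed
    return accepted, flagged

accepted, flagged = process([12, 99, 4000, 7])
print(flagged)  # -> [4000]
```

The design choice here mirrors the comment: the 4,000 isn't rejected as invalid, because the data may actually be fine; it's just routed to the part of the system (a person) that can decide.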
This is a classic Dunning-Kruger situation. Penrose is out of his domain of expertise here. He has not bothered to study the theory of mind (e.g., see the lack of relevant references in his book).
His arguments do not stack up, as extensively documented elsewhere.
This is not the first time that a famous physicist/mathematician has got it drastically wrong. Niels Bohr was a vitalist: he believed, along with many people at the time, that there was some magic hidden essence to life that went beyond material things. When told about the discovery of DNA he said, "Yes, but where is the life?"
Thus, Penrose notes, they are true because of their meaning, not because of their syntactic relation to an axiomatic system. This reinforces the thesis of Jerrold Katz that syntactic simples are not semantic simples, and so some truths will depend on semantic contents that cannot be exhaustively expressed as syntax.
If this were true, would it mean that there are aspects in physics which are not mere abstractions of math?
I bought and read "The Emperor's New Mind" in the early 90s, mostly for "knowing your enemy." I have slowly but surely come to mostly agree with Penrose: I think there is something magical and "quantum mechanical" about brain consciousness (including animals).
I believe in eventual real AI, but I would guess that it will not be on current computer hardware.
I only have a BS in Physics (UCSB), so this may be rough: about 10 years ago I went to a Quantum Mechanics and Consciousness conference, and the gist was that there is a quantum effect that is required for consciousness. My dad taught physics at Berkeley, and his friend from Berkeley, Henry Stapp, presented an interesting paper on this theory. The philosopher David Chalmers was also there, and he talked about the hard problem of consciousness: why did evolution favor the development of consciousness and qualia (an inner mental life)?
There are several obvious explanations, like that we evolved consciousness in order to be able to explain ourselves to other humans. But I thought Chalmers believed that the physical world was "causally closed" and that therefore a purely material feedback process like evolution couldn't select for qualia?