> a computer can be programmed to detect instances of the word “betrayal” in scanned texts, but it lacks the concept of betrayal. Therefore, if a computer scans a story about betrayal that happens not to use the actual word “betrayal,” it will fail to detect the story’s theme.
Seems sorta odd to write a book today dismissing 1960s AI technology.
People often cite the Chinese room thought experiment when discussing consciousness/subjective experience, but Searle originally intended it to be about exactly this split between syntax and semantics:
The author seems to use the fact that a computer's intelligence can be deconstructed to 0's and 1's as evidence that the computer can't understand what it's processing. Couldn't the processing in the human brain be similarly deconstructed (albeit not to 1's and 0's)?
Absolutely. The fact that brains can be deconstructed into the presence and absence of ion gradients over space and time, and that those ion gradients do not themselves understand anything, does not demonstrate that brains cannot achieve understanding. Using the analogous argument to claim AI can never be more than dumb pattern matching is fallacious and a serious failure of imagination.
They're apparently not. We can hypothesize without having seen a thing, using previous information.
This is different from outputting a single random value. When unsure, people typically reach for generic or vague terms rather than a confident mismatch, which is very different from what ML does.
We can also output that we're not certain. And we can use surrounding context in a way current ML cannot.
That's not how biology works. The spiking intervals matter, and so do when they occur, the chemical state of the neuron from moment to moment, its shape and composition, and its myelination.
In comparison, DNNs are at an abacus level of simplicity.
Just because something is more complex than what we can currently model mathematically does not mean we cannot model it mathematically.
If we can mathematically model it, then we can use 1's and 0's to do so. What you are suggesting is that how we think and observe the world with our brains cannot be captured mathematically. If that's the case, then math is wrong, or at best an approximation of the universe.
Implication: Consciousness exists in the gap between the universe and math.
I love how nearly every pitfall listed for AI applies equally to humans, who cheerfully see the face of a god in a slice of toast and confuse correlation and causation as easily as they breathe air.
This is exactly one of the things that makes ML, AI, or whatever so interesting. Sometimes it does find correlations which a human might miss because the human thinks there is no way those two things are linked - but in fact they are.
"For example, a correlation may exist between changes in temperature in an obscure Australian town and price changes in the U.S. stock market. A person would know that the two events have no connection."
Indeed, I can imagine a few mechanisms for such a correlation.
1. El Nino / Southern Oscillation affects the temperature in Australia and (especially South) America which would affect farm yields and hence US stock market prices.
2. Extra hot temperatures at a uranium mine near a town in Australia might limit productivity of that mine, causing uranium prices to change.
Just two off the top of my head. The effects might be only very mildly correlated, but not zero.
If you're willing to entertain that many hypotheses, you need a huge amount of data to not be fooled by noise. For instance, there might be 1B hypotheses of the form "temperature in some town is correlated with the price of some stock". You'd need p<0.000000001 to not have false positives. Even the stock market doesn't have enough data for that.
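To make that concrete, here's a minimal Python sketch (the series, the hypothesis count, and the numpy/scipy usage are my own illustrative assumptions, not anything from the book or the parent): it tests many purely random "town temperature" series against one random "stock" series and counts how many clear p < 0.05 by chance alone, versus a Bonferroni-corrected per-test threshold of 0.05/N.

```python
# Sketch of the multiple-comparisons problem: every series here is pure noise,
# yet roughly 5% of the "temperature vs. stock" tests pass p < 0.05 anyway.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_days = 250            # roughly one trading year of daily observations
n_hypotheses = 10_000   # stand-in for "temperature in some town vs. some stock"

stock = rng.normal(size=n_days)                  # random "price changes"
temps = rng.normal(size=(n_hypotheses, n_days))  # random "town temperatures"

p_values = np.array([pearsonr(t, stock)[1] for t in temps])

naive_hits = int((p_values < 0.05).sum())                     # expect ~500
corrected_hits = int((p_values < 0.05 / n_hypotheses).sum())  # expect ~0

print(f"p < 0.05 'discoveries' on pure noise: {naive_hits}")
print(f"Bonferroni-corrected discoveries:     {corrected_hits}")
```

Scale n_hypotheses toward the 1B figure above and the corrected per-test threshold drops to roughly 0.05 / 10^9 ≈ 5e-11, in line with the parent's estimate; the stock market simply doesn't provide enough independent data to clear that bar.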
I'm genuinely confused about whether you're saying ML is better or worse at finding such correlations, because humans are also extremely good at finding patterns where there are none, as this website hilariously demonstrates:
If the book is as "technical" as this article, then I don't think I'll be reading it. The sole argument, it seems, is that computers lack the nebulously defined "concept".
If you don't define it, then you might as well be saying the computer lacks a soul and can therefore never be truly intelligent. What does understanding a concept mean?
It matters because AI researchers have been working to get closer to this kind of understanding for years, and simply saying "it isn't there yet" falls a bit flat. It's like saying climbing a tree isn't flying while ignoring the attempts to fly with kites, wingsuits, or parachutes: we might not have a plane yet, but we are making progress.