Take the results with some salt. No matter how fascinating the results, if the model is wrong then there is no advancement in our knowledge of the brain.
As far as I know, they did not go the long, hard way of actually reconstructing a cortical column from sectioned tissue (as in electron-microscope imaging of serial sections 60 nanometers thick).
What they have done, even if remarkable, is "simply" a simulation of a network built from statistical data gathered from many different studies of cortical columns.
Awesome (in the archaic sense) that the size estimate is only 200x what Google has currently. Seems trivially within reach in, what, only 8 doublings of performance? They guess at 10 years for "one machine", so what, 15-20 years before it's PDA-sized and PDA-priced? An extra full real-thinking-feeling-creating-art-type brain in my laptop?! Holy shit.
I was actually just wondering why this project wasn't happening yet while reading "The Singularity is Near". If you haven't read it, I recommend it. It's hella dry and boring in parts, but overall worth reading if only to make you think about what a startup might look like in only 5 or 10 years.
I think the reason most people are ignoring Kurzweil is that he sounds like a crackpot. I happen to be a guy who owns an autographed copy of Kurzweil's latest book, but I acknowledge that the idea of the technological singularity sounds too good to be true. There have been thousands of cults proclaiming that the end is near; it would be a coincidence of historical proportions if this one happened to be right.
Kurzweil presents many convincing arguments, but the central tenet is that the doubling of processing power keeps up for another two decades. Due to the nature of nature, what we observe as exponential growth in the end invariably turns out to be logistic growth. If this is so, then at any time the doubling of processing power could slow, and this would mean that we would only, at the most, get one more doubling. Such an event would bring the dream of a near-term universe full of computers to a screeching halt.
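A toy illustration of that point: a logistic curve with a carrying capacity is nearly indistinguishable from an exponential early on, which is exactly why the two are hard to tell apart while the growth is happening. The doubling time and ceiling here are arbitrary values chosen for the sketch, not estimates.

```python
import math

def exponential(t, p0=1.0, doubling=2.0):
    """Pure exponential: doubles every `doubling` years, forever."""
    return p0 * 2 ** (t / doubling)

def logistic(t, p0=1.0, doubling=2.0, cap=1e6):
    """Logistic with the same early growth rate but a hard ceiling `cap`."""
    r = math.log(2) / doubling
    return cap / (1 + (cap / p0 - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable...
print(exponential(10), logistic(10))   # both ~32
# ...but near the ceiling the logistic quietly stops doubling.
print(exponential(60), logistic(60))   # ~1.1e9 vs. just under 1e6
```

The unsettling part is that at year 10 an observer inside the logistic world sees flawless exponential data.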
I believe that we should be able to create machines that think, but it is important that we aren't blinded by ideology. A massive "singularity movement" that ends up disappointing people will, in the best case, merely delay the advent of such technology due to funding issues. I think the history of AI has demonstrated this quite clearly.
If Kurzweil's estimates are correct, though, another 8-10 years of "exponential growth" in computer hardware will put full-scale simulation of a human brain into supercomputer territory. And frankly, I doubt that evolution has managed to find the most computationally efficient way to achieve intelligence. There's got to be a way to do this.
The reason "chips" are called "chips" is that they are essentially 2D. As soon as we bump up to 3D (though it will probably be more like a high-surface-area crumpled-up shape, not unlike the brain, for cooling reasons), I expect at least a few more doublings.
The human brain requires about 25 watts of electricity to operate. Simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion. If computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he'll be able to model a complete human brain on a single machine in ten years or less.
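A quick sanity check on those figures. The $0.10/kWh electricity price below is my assumption (it is not stated in the article), but any plausible price gives the same order-of-magnitude conclusion:

```python
# Work backwards from the claimed $3 billion annual electricity bill
# to the implied average power draw, and compare it with the brain's 25 W.
annual_bill_usd = 3e9
price_per_kwh = 0.10            # assumed electricity price, USD/kWh
hours_per_year = 24 * 365

kwh_per_year = annual_bill_usd / price_per_kwh        # 3e10 kWh/year
avg_power_w = kwh_per_year / hours_per_year * 1000    # ~3.4 GW

brain_w = 25.0
print(f"implied draw: {avg_power_w:.2e} W")
print(f"ratio vs. a real brain: {avg_power_w / brain_w:.1e}")
```

So the quoted bill implies a machine drawing a few gigawatts, roughly eight orders of magnitude more power than the organ it simulates.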
I wonder how many megawatts Blue Brain will need to do something as simple as simulating only 1e5 cells.
According to wikipedia,
- the average power consumption of a human cell is around 1e-12 watts.
- 50 years ago ENIAC needed 150 kW to do what you could probably do today in a few µW.
Based on those numbers, I think Markram is way too optimistic about simulating the brain on a single machine in 10 years. If things continue as they have, it's more likely to take closer to 50 years.
I'll pay more attention to Kurzweil's singularity when the difference in power consumption between a tiny fraction of a simulated brain and a real one drops below ten orders of magnitude.
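Putting the numbers from the list above together. The ~100 kW figure for the simulating machine is my assumption (roughly one supercomputer rack-room of that era); the per-cell wattage and cell count are from the comment:

```python
import math

# Biological power budget of the simulated tissue, per the figures above.
cell_power_w = 1e-12        # average human cell, ~1 picowatt
n_cells = 1e5               # cells in the cortical-column simulation
bio_power_w = cell_power_w * n_cells          # 1e-7 W biologically

machine_power_w = 1e5       # assumed ~100 kW for the simulating machine

gap = math.log10(machine_power_w / bio_power_w)
print(f"power gap: {gap:.0f} orders of magnitude")
```

With these assumptions the gap is about twelve orders of magnitude, which is why the ten-orders-of-magnitude threshold above is not a flippant one.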
I wonder how they plan to do parameter estimation? Millions of nonlinear dynamic models running, each with at least 3 free parameters in the simplest of neuron models, with an almost infinite number of possible topologies of the networks. That is quite the search space and neuroscience can provide very little prior knowledge.
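To make the search-space complaint concrete, here is a deliberately toy version: brute-force grid search over the three free parameters of a leaky integrate-and-fire neuron, trying to match a target firing rate. The model, grid values, and target are all my own choices for illustration, not anything Blue Brain uses.

```python
import itertools

def lif_spike_count(tau, v_thresh, v_reset, i_input=1.5, dt=1e-3, t_max=1.0):
    """Spike count of a leaky integrate-and-fire neuron (3 free parameters)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / tau + i_input)      # Euler step: dv/dt = -v/tau + I
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes

# Brute-force grid search for parameters matching a target firing rate.
# Even this toy case needs 800 full simulations -- and it is ONE neuron,
# with a FIXED input current and no network topology at all.
target = 20
taus = [i / 20 for i in range(1, 21)]         # membrane time constants, s
thresholds = [i / 20 for i in range(1, 21)]   # spike thresholds
resets = [0.0, 0.01]                          # post-spike reset voltages
best = min(itertools.product(taus, thresholds, resets),
           key=lambda p: abs(lif_spike_count(*p) - target))
print(best, lif_spike_count(*best))
```

Scale that up to millions of coupled neurons with richer models and free connectivity, and exhaustive search is hopeless; presumably the project leans on the measured biological statistics to pin most parameters down instead.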
I watch with great interest. Is this how general AI will happen? Or is it more like heavier-than-air flight: we will get there, we just need to understand the fundamentals of what makes AI work.
I doubt it. Our modeling capabilities continue to develop much faster than our knowledge of how the brain works.
Of course, neural network modeling can still be of great use. Since we've more or less figured out how the brain does a variety of low-level processing tasks (like coordinating muscle movements, finding the visual depth and texture of observed surfaces, etc.), this knowledge can serve as a starting point for developing exciting technologies at increasingly faster speeds (robotics limbs, computer-guided navigation, etc.). Also, advanced neural networks can serve as rudimentary "existence proofs" for theories of mind and consciousness. For example, a theory that claims that our incredible computing power stems from a recurrent network of thalamo-cortical neurons is strengthened by a computer simulation that exhibits similar large-scale behavior in similar time scales under suitable parameters.
That last sentence is pretty qualified, for good reason. We shouldn't expect to write some code and discover a working brain, at least not until we figure out how it works to begin with.
Any simulation running neurone interactions has the ability to produce sentience without us knowing why or how it happened. However, it would require a phenomenal amount of processing power, far more than the simulated 'brain' itself represents. Human neurones can have up to 10,000 synapses connecting them together (on average 7,000 per neurone), and each synaptic interaction is a multi-variable affair of chemical reactions, catalysts, reinforcers, inhibitors, and stimulants.
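The per-neuron figures above imply staggering whole-brain totals. The ~8.6e10 neuron count and the 10-FLOP-per-synapse-per-step cost below are my own assumptions (the first is the standard whole-brain estimate, the second an arbitrary but conservative guess):

```python
# Scale implied by the synapse figures above.
neurons = 8.6e10                   # assumed whole-brain neuron count
synapses_per_neuron = 7_000        # average cited above
synapses = neurons * synapses_per_neuron       # ~6e14 synapses

# If each synapse needs even 10 floating-point ops per 1 ms timestep:
ops_per_second = synapses * 10 * 1_000         # ~6e18 FLOP/s
print(f"{synapses:.1e} synapses, ~{ops_per_second:.0e} FLOP/s")
```

That is exascale territory before modelling any of the chemistry the comment lists, which is the point: the naive cost is dominated by synapses, not neurons.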
I think our aim should be to simulate the R-Complex part of the triune brain model. This is where our evolutionary intelligence came from and it is the best basis to start. Birds have managed high levels of intelligence without the neocortex present in mammals, so obviously there's more than one route to sentience.
I believe brain density is the key to sentience. Small animals, like birds, lose heat quickly and efficiently, which means their 'processors' can pump out more heat without burning out. A human brain, by contrast, would cook itself if it were 'overclocked', as we're at the upper limit of the communication-efficiency ratio; elephants are above that ratio but get away with large processing and data storage via a form of 'low-energy' brain.
With the speed of electricity in synapses, a crow might be able to achieve 10 times as many messages in the time it takes one of our brains to make one; elephants might only make 0.5 messages a second per synapse. So a crow benefits greatly from its brain density and can solve problems to the same degree as a human infant. However, humans have phenomenal storage in our brains, and a crow simply doesn't have the brain mass to compete, so we have more data to act on than a crow does. Elephants use their brains to remember pretty much everything; they've even been seen taking shortcuts they have 'guessed' between paths they've taken before.
I think the brain is astounding, but to reproduce sentience we should follow evolution; not only because it gives us vast insights into how our brains evolved, but also because it should keep us from ever getting into the irrational-AI situation. If an AI is made by simulating a human brain then it is human, and it would be capable of understanding that, since it would have to be 'taught'. I know I'm human because the society around me teaches me that I think and act in a similar way to everyone else, just with different data.
Well I'm not saying I know for sure, just that in some senses we tend to view algorithms as living in an abstract world of mathematics, but the reality is that an actual computation is a physical occurrence in universe. So what we might end up with is an intelligent chunk of the universe being inside of a computer and not a human, and that might not tell us as much as we'd hoped because the intelligence is an emergent property of the computation and not a defined behavior of the algorithm.
I agree. I suspect that scaling up a simulation of a neuron and expecting it to act like a brain is a bit like scaling up a simulation of an iron atom and expecting it to act like a car.
Ray Kurzweil argues that we can eventually brute-force the solution. However, we can do it more efficiently if we work at a functional level; he gives basic acoustic function as an example ( http://www.kurzweilai.net/articles/art0134.html?printable=1 ).
So if we have brutally powerful machines that can do a human-scale brain, how do we "start" it? Is it going to have to learn by itself, as we do, or is there tech to "read" our minds into it? (Creepy sci-fi style.)
The human brain is mainly just a giant pattern matching organ (filling in gaps, resolving ambiguities, cross-comparing several senses, etc). Simulating how it works is one thing (that's what they've done). "Installing" existing memories, patterns, and connections is a giant step beyond what they're currently doing. This (and probably all) AIs will have to be trained. The hard part will be bootstrapping the first AI to usefulness, but then the next AI is a copy-paste operation away!
Ray Kurzweil notes that brain scan resolution and scan duration are both following Moore's law. So, within 50 years, it may be possible to take a full-resolution, real-time snapshot of a brain. Ray Kurzweil doesn't explore the possibility of snapshots being taken without consent.
Well, if it uses MRI (albeit miniaturised), it should be pretty easy to detect or mess with strong magnetic fields around your head (a tin-foil hat wouldn't do, but something ferrous might!).
Perhaps we should stop saying tin-foil hat and say "ferrous material hat" ;)
Why am I the only person who's terrified rather than excited by every new incremental progression of AI? Do you guys actually look forward to the day when humans are made obsolete?
PS: This scientist seems to be exponentiating incorrectly. In ten years he'll have 32-ish times as much processing power; he needs 10 million times as much to get to the level of the entire brain.
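Spelling out that arithmetic, assuming the doubling-every-two-years pace implied by "32-ish times in ten years":

```python
import math

# Ten years at one doubling every two years:
years = 10
doublings = years / 2
print(2 ** doublings)                 # 32x in ten years

# But a whole-brain simulation needs ~10 million times more:
needed_factor = 1e7
doublings_needed = math.log2(needed_factor)
print(doublings_needed * 2)           # ~46.5 years at the same pace
```

So at that pace the gap closes in roughly 46 years, not 10, unless the per-doubling interval shrinks or the brain turns out to need far less simulation than the 1e7 factor assumes.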
Apprehension about progress in artificial intelligence is entirely natural. In fact, I think apprehension and denial will lead people to continually redefine their notion of intelligent behavior so that current computers are always excluded.
Not so long ago many people said a computer could not beat a grandmaster at chess without being intelligent. Enter Deep Blue. Others have stated computers will never compose music that is emotionally meaningful to humans without being intelligent. Enter Experiments in Musical Intelligence and other widely-acclaimed composition programs.
Until the Turing test is passed, people will be able to plausibly deny any advances in artificial intelligence. No matter how advanced such "brain in a box" models become, they won't pass the Turing test without being embedded in a rich environment with which they can interact.
Can you link to a single good composition program? I have heard Wolfram's and I thought it was crap, and the only decent one I have heard was one which could only emulate old composers.
The program you are thinking of may well be the one I was referring to, although you characterize it as only being able to "emulate old composers". It is true that it learns and extrapolates from musical input, but so do human composers. Its algorithms can learn from anything, and by feeding it a mixture of styles it can actually generate some fairly compelling and new-sounding works. The programmer is also a composer and has trained the algorithm on some of his own works; the output sounds nothing like the old composers.
Every time mankind makes a major breakthrough it makes entire families of professions obsolete. This gives us opportunities to move onto more advanced forms of keeping ourselves busy.
Most of the time when people ask about "obsolete humans" in context of AI, they really mean "programmers" or "doctors" or whatever. Yes, it might. It simply means that there are professions to be invented that will be to programmers as programmers are to janitors today.
Seriously, though, there will be no use for you (or any other human) whatsoever post-AI.
And that's the best-case scenario, the one where AI is being used for the common good. If the people with early access to it try to use it to rule the world and enslave humanity, they'll probably succeed.
I'm not trying to be an irrational doomsday predictor, here; these are just the conclusions that I come to when I work off of the premise "humanity will have access to cheap human-level intelligences".
This sounds like nothing more than promoting the status quo.
Why do you automatically assume that "no use" in this context would turn out to be a bad thing? The way I see this, if we happen to have greater intelligences working for our common good, we would be able to solve any problem better than a human could, including the possible problem of feeling useless. This would be the best-case scenario, and IMHO it would be much better than the world we have today. Possible solutions, from my limited human brain, could include bringing human brains up to the level of the greater intelligence and hence finding new problems to solve, altering human drives so that uselessness isn't a problem any longer, or abolishing AGI entirely or partially. In such a best-case scenario, any problem could be solved better and faster than humans could manage.
I agree with you on the worst or worse-case scenario point. There are huge ethical risks and implications in creating very powerful and capable machines. This means that we need to go into this situation with our eyes open, and make sure that we discuss ethics, transparency and consequences from day one.
Enslaving humanity is a basic human drive, and I like to believe that we can do better than that if we try hard. In a pure cost-benefit analysis, it is obvious that it would be best to use such technology to help out everyone.
>The way I see this, if we happen to have greater intelligences working for our common good, we would be able to solve any problem better than a human could - including a possible problem of feeling useless.
That's a logical error: the existence of a superhuman intelligence might cause more problems than that superhuman intelligence can solve.
When humans solve the personal problem of "feeling useless" they almost without exception do it outside of a vacuum. Their feeling of usefulness tends to stem from the impact that they have on humanity.
We are good at having human experiences. I think AIs will really like sites like ycombinator or reddit: people sharing their views on the world in a format that is very accessible to computers.
Some guy (maybe Kurzweil?) said something along the lines that he was always more afraid of stupidity than of intelligence, and I agree with that a lot. I'm far more afraid of humans destroying the world out of stupidity than of hyperintelligences, which we created and taught, trying to get rid of us.
Your argument assumes that greater intelligence implies greater fitness for all jobs. I'm pretty sure there are jobs for which intelligence is not the primary requisite.
Human bodies may be made obsolete, but I am keeping my fingers crossed. If we have computers that can simulate human brains (and presumably do it more efficiently and with more capabilities), we should eventually have methods of transferring consciousness back and forth, at which point I can implant myself into the computer and have all the advantages. For greater detail on this, see 2001: A Space Odyssey.
The day one of these simulators does something unique and interesting that can't be done with other hardware -- I think we will have turned a very important corner.