Brain simulation
Recently there was a considerable splash (some would unkindly call it hype) about an ongoing simulation project that might produce an "artificial human brain" within 10 years from now:
Artificial brain '10 years away' (7/22/09)
A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.
Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.
He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.
Around two billion people are thought to suffer some kind of brain impairment, he said.
"It is not impossible to build a human brain and we can do it in 10 years," he said.
The project isn't brand new. It was formally initiated in June 2005 by EPFL, the École Polytechnique Fédérale de Lausanne (in Switzerland), in cooperation with IBM. Known as the Blue Brain project, it has been directed from the beginning by Henry Markram of EPFL, and it takes its name from the IBM Blue Gene computer used to perform the simulations.
An initial goal of the project was completed after just 1.5 years, in 2006 – the simulation of a single rat cortical column. While that's a noteworthy accomplishment, keep in mind that a human cortical column contains 6 times as many neurons (60,000) as does the rat equivalent.
In addition, although the architecture of each column is roughly similar to that of any other, the total number of columns in a human cerebral cortex is estimated at about 2 million. (The cerebral cortex is the outermost, and evolutionarily most recent, part of the human brain, where the most sophisticated processing occurs.) The project therefore faces a daunting amount of simulation.
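To get a feel for just how daunting, here's a quick back-of-envelope calculation using the figures above, plus an assumed ~10,000 neurons for the already-simulated rat column (a number not given in this post):

```python
# Back-of-envelope scale-up from the simulated rat column to a full human cortex.
# The rat-column size is an assumption; the other figures are those quoted above.

rat_column_neurons = 10_000        # assumed size of the already-simulated rat column
human_column_neurons = 60_000      # ~6x the rat column, per the figures above
human_cortex_columns = 2_000_000   # estimated number of columns in a human cortex

human_cortex_neurons = human_column_neurons * human_cortex_columns
scale_up = human_cortex_neurons / rat_column_neurons

print(f"Human cortex neurons: ~{human_cortex_neurons:.1e}")     # ~1.2e11
print(f"Scale-up over the simulated column: ~{scale_up:.1e}x")  # ~1.2e7, i.e. more than ten-million-fold
```

And that counts only neurons; each cortical neuron makes on the order of thousands of synaptic connections, which is where most of the simulation work actually goes.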
The present flurry of excitement over the project stems from a presentation by Markram at the recent TEDGlobal 2009 conference.
While a video of Markram's talk doesn't seem to be available online yet, you can find some blogged notes here: Henry Markram at TEDGlobal 2009: Running notes from Session 5.
Here are some additional references on the talk and the project in general:
- Blue Brain Project – official project website
- Swiss scientists aim to build a synthetic brain within a decade (7/23/09) – guardian.co.uk news article
- In Search for Intelligence, a Silicon Brain Twitches (7/14/09) – Wall Street Journal news article
- Scientists Create Artificial Brain – video from the WSJ article
- This Video is Incredible (7/15/09) – blog post about the WSJ article and video
- Artificial human brain within the next 10 years (7/27/09) – CNET Asia news article
- Neurogrid Neuron Chips (5/20/09) – blog post containing another Blue Brain video
- BlueGene Blue Brain Neuron Supercomputer – YouTube video from the preceding item
- Simulated brain closer to thought (4/22/09) – BBC news article
- Inaugural lecture of Prof. Henry Markram – video of 3/4/08 lecture by Markram
- Out of the Blue (3/3/08) – detailed Seed Magazine article by Jonah Lehrer
- Henry Markram - Neuroinformatics 2008 – video of 80-minute lecture
- Henry Markram: Designing the Human Mind – video of 15-minute Markram lecture
Note that the caption on the last video in this list observes that a complete simulation may require a computer 20,000 times as powerful as any currently existing computer. (At current rates of progress, we would see a computer "only" 1000 times as powerful in 10 years.) Such a computer would also need a memory capacity 500 times the size of the Internet. Anyone else sense a discrepancy with the more recent 10-year claim?
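The discrepancy is easy to quantify. Here's a minimal sketch with assumed doubling periods (the "1000 times as powerful in 10 years" figure corresponds to performance doubling roughly once a year):

```python
import math

# How long until computers are 20,000x as powerful, if performance keeps
# doubling at a fixed rate? The doubling periods below are assumptions.

required_factor = 20_000

for doubling_years in (1.0, 1.5, 2.0):
    factor_in_10_years = 2 ** (10 / doubling_years)
    years_needed = math.log2(required_factor) * doubling_years
    print(f"doubling every {doubling_years:.1f} yr: "
          f"{factor_in_10_years:,.0f}x after 10 years, "
          f"{years_needed:.0f} years to reach {required_factor:,}x")
```

On these assumptions, even the most optimistic rate leaves the 20,000x machine more than a decade out, and that is before considering the memory requirement.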
Later additions:
- Scientist: Human brain could be replicated in 10 years (9/7/09) – press release [added 10/17/09]
Competition in the wings?
Before moving on to a skeptical take on the feasibility of what the Blue Brain project proposes to do, it's worth noting that there may be competition in this race – coming, in part, from no less than another part of IBM:
Cognitive Computing Project Aims to Reverse-Engineer the Mind (2/6/09)
"The plan is to engineer the mind by reverse-engineering the brain," says Dharmendra Modha, manager of the cognitive computing project at IBM Almaden Research Center.
In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain’s abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.
The 39-year old Modha, a Mumbai, India-born computer science engineer, has helped assemble a coalition of the country’s best researchers in a collaborative project that includes five universities, including Stanford, Cornell and Columbia, in addition to IBM.
The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.
Sort of makes one wonder what's going on here, no? I don't have much more information on this project beyond the following links about Dr. Modha, which contain further references of their own (see especially the first item):
- Dharmendra S. Modha's home page
- Dharmendra S Modha's Cognitive Computing Blog
- Dharmendra S Modha – Facebook
And now for the skeptical view
You knew it was coming, right?
Here's a video of a debate on this topic (brain simulation in general, not either IBM project in particular) between John Horgan (the skeptic) and Ray Kurzweil (in rebuttal).
Ray Kurzweil and John Horgan debate whether a singularity is near or far
And here's an article by Horgan with the details of his argument:
The Consciousness Conundrum
I don't always agree with Horgan, but on this topic his case seems a little more persuasive than Kurzweil's at this point.
In support of Horgan's point of view, I could mention an article from March 2009 about the work of the Allen Institute for Brain Science. It's written by Jonah Lehrer, who earlier wrote about the Blue Brain project in one of the references cited above:
Scientists Map the Brain, Gene by Gene
One unexpected—even disheartening—aspect of the Allen Institute's effort is that although its scientists have barely begun their work, early data sets have already demonstrated that the flesh in our head is far more complicated than anyone previously imagined. ...
"The brain is just details on top of details on top of details," Hawrylycz says. "You sometimes find yourself asking questions that don't have answers, like 'Do we really need so many different combinatorial patterns of genes?' ...."
"The problem with this data," one researcher told me, "is that it's like grinding up the paint on a Monet canvas and then thinking you understand the painting." The scientists are stuck in a paradox: When they zoom in and map the brain at a cellular level, they struggle to make sense of what they see. But when they zoom out, they lose the necessary resolution. "We're still trying to find that sweet spot," Jones says. "What's the most useful way to describe the details of the brain? That's what we need to figure out." ...
Although the human atlas is years from completion, a theme is beginning to emerge: Every brain is profoundly unique, a landscape of cells that has never existed before and never will again. The same gene that will be highly expressed in some subjects will be completely absent in others.
Additional skeptical views:
- The Limits of Computer Simulations (8/3/09) – blog post
I think I'll need a bit more time to digest the information here...
Tags: neuroscience, brain simulation
2 Comments:
LOGICAL EXTRACTION OF NEOCORTEX STRUCTURE
I do not understand why the neocortex is a mystery to everyone. Its neuron net circuit is repeated throughout the cortex. It consists of excitatory and inhibitory neurons whose individual functions have been known for decades. The neuron net circuit is repeated over layers whose axonal outputs feed forward as inputs to other layers. The neurons of each layer each receive axonal inputs from one or more sending layers, and all they can do is correlate the axonal input stimulus pattern with their axonal connection pattern from those inputs and produce an output frequency related to the resultant PSPs. Axonal growth toward a neuron is definitely the mechanism for permanent memory formation, and it is just what is needed to implement conditioned reflex learning. This axonal growth must be under the control of the glial cells and must be a function of the signals surrounding the neurons.
The cortex is known to be able to do pattern recognition, and the correlation between an axonal input stimulus and an axonal input connection pattern is just what is needed to do pattern recognition. However, pattern recognition needs normalized correlations and a means to compare these correlations so that the largest correlation is recognized by the neurons. Without normalization, the relative values of the PSPs would not be bounded properly and could not be used to determine the best pattern match. In order for the PSPs to be compared, so that the neuron with the maximum PSP fires, the inhibitory neuron is needed. By having a group of excitatory neurons feed an inhibitory neuron that feeds inhibitory axonal signals back to those excitatory neurons, the PSPs of the excitatory neurons are compared: the neuron with the largest PSP fires before the others do as the inhibitory signal decays after each excitatory stimulus, thus inhibiting the other excitatory neurons with smaller PSPs. This inhibitory neuron is needed in order to achieve PSP comparisons, no question about it. For a meaningful comparison, the PSPs must be normalized. Unlikely as it may seem, it turns out that the inhibitory connections, growing by the same rules as the excitatory connections, grow to a value which accomplishes the normalization. That is, as the excitatory axon pattern grows via conditioned reflex rules, the inhibitory axon to each excitatory neuron grows to a value equal to the square root of the sum of the squares of the excitatory connections. This can be shown by a mathematical analysis of a group of mutually inhibiting neurons under conditioned reflex learning. This normalization does not require the neurons to behave differently from what has been known for decades; it only requires that they interact with an inhibitory neuron as described.
Thus, simply by having the inhibitory neurons receive input from neighboring excitatory neurons with large connection strengths (so that when an excitatory neuron fires, the inhibitory neuron fires), and by allowing the inhibitory axonal signals to be included with the excitatory axonal input signals to those excitatory neurons, the neocortex is able to do normalized conditioned reflex pattern recognition as its basic function.
If one thinks about it, layers of mutually inhibiting groups of neurons are all that are needed to explain the neocortex's functions. The layers of neurons are able to exhibit conditioned reflex behavior between sub-patterns, generating new learned behaviors as observed by the human. With layer-to-layer feedback, multi-stable behavior of layers of neurons results, forming short-term memory patterns that become part of the stimulus to other neurons. With normalized correlations, there is always an axonal input stimulus pattern that will excite every excitatory neuron.
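For readers trying to picture the scheme described in this comment, here is a minimal sketch of normalized-correlation, winner-take-all matching. The sizes are arbitrary, and the decaying-inhibition dynamics are replaced by a direct maximum over the normalized correlations:

```python
import numpy as np

# Sketch of the commenter's scheme: each excitatory neuron correlates the input
# pattern with its connection (weight) pattern, and a shared inhibitory neuron
# implements a winner-take-all comparison. The inhibitory normalization
# (dividing by the square root of the sum of squared excitatory weights) is
# applied directly rather than via simulated inhibitory dynamics.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 100, 10
weights = rng.random((n_neurons, n_inputs))          # excitatory connection patterns
stimulus = weights[3] + 0.1 * rng.random(n_inputs)   # noisy version of pattern 3

# Normalized correlation of the stimulus with each neuron's weight pattern.
psp = weights @ stimulus
normalized = psp / np.linalg.norm(weights, axis=1)   # divide by sqrt(sum of squares)

winner = int(np.argmax(normalized))  # the neuron the inhibition would let fire first
print("winning neuron:", winner)     # expected: 3
```

Whether a simplification at this level captures what real cortical circuits are doing is, of course, precisely the kind of question the skeptics above are raising.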
"I do not understand why the neocortex is a mystery to everyone"
If it's so simple, how come it's taking so long to make realistic simulations of even, say, a rat brain?