Feb 03 2009

Singularity University

Skeptics by nature take a conservative approach to scientific conclusions – they should reasonably track with plausibility and evidence. Annoyingly often, however, paranormal enthusiasts and other targets of skeptical critique misinterpret this skeptical conservatism as applying to the generation of new ideas and scientific research.

Quite the opposite is true. Science is often about exuberant exploration, tearing down existing ideas and paradigms, challenging authority, and flights of imaginative speculation. At the leading edge of science is creative chaos which rewards imagination, the ability to think in new ways, and to challenge assumptions. It also rewards risks, which means that most new ideas will be wrong – and that’s OK, even necessary.

Great science balances these two imperatives – rigorous, methodical and conservative testing with rebellious imagination. Without the former, ideas are disconnected from reality and will tend to spin off into fantasy land. Without the latter, ideas are stuck in neutral and progress is stalled.

All too often, when skeptics point out the need for rigorous testing of ideas, they are accused by those who dislike their conclusions of being against new ideas themselves. It’s a deceptive straw man that just won’t go away.

The need for risky and creative imagination is exactly why I applaud the opening of Singularity University. Google and NASA are both supporting this new endeavor of Ray Kurzweil, a futurist and inventor. It’s not technically a real university, but it is a school that will focus on nanotechnology, biotechnology, and artificial intelligence.

Ray Kurzweil is a somewhat controversial figure. He is the author of The Singularity is Near, in which he lays out his ideas on the rate at which information-based technology advances. He argues that information-based systems tend to benefit from a positive feedback loop – the more they progress, the faster they progress. Information creates the ability to gather new information more quickly.

In applying this to current technology, he sees several trends. The most obvious is Moore’s law – the density of transistors on computer chips doubles roughly every 18 months – which translates into roughly a doubling of overall computer power. Advances in computer power feed into advances in other technologies that depend upon information processing and communication, including scientific research itself.
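The arithmetic behind this kind of exponential claim is easy to sketch. Here is a minimal back-of-envelope calculation assuming a clean 18-month doubling period – an idealization, since the real doubling rate has varied over the decades:

```python
# Back-of-envelope sketch of Moore's-law-style doubling.
# Assumes an idealized, constant 18-month doubling period.

def doublings(years, period_months=18):
    """Number of doublings that fit into a span of years."""
    return (years * 12) / period_months

def growth_factor(years, period_months=18):
    """Overall multiplier in computing power after `years`."""
    return 2 ** doublings(years, period_months)

# Over 15 years: 10 doublings, about a thousand-fold increase.
print(growth_factor(15))  # 1024.0
```

This is why small disagreements about the doubling period translate into enormous disagreements about long-range predictions – stretch the period from 18 to 24 months and the 15-year multiplier drops from about 1000x to about 180x.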

Further, he argues that increasingly our technologies are becoming information-based.  The focus of Singularity University on nanotechnology, biotechnology, and artificial intelligence is very deliberate – these are the technologies that will make all other technologies essentially information-based, subjecting them to a Moore’s law type of accelerating progress.

Kurzweil further argues that we are approaching the rapid part of the curve, where it turns up sharply and technological progress explodes. He calls this tipping point in progress the singularity, and argues that we cannot extrapolate human civilization beyond this point because it will be too transformative. For futurists the singularity is like a back curtain we cannot see through – we must wait until we pass through to the other side.

Kurzweil’s ideas are certainly “out there” – meaning that they are highly speculative. Attempting to predict the future has proven to be a fool’s errand – it invites ridicule and is almost a guarantee of error. Kurzweil’s faith in the future strikes many as “irrational exuberance.” He also seems highly invested in the notion that he will personally experience that future.

In my book the most dubious claims made by Kurzweil are for his supplement regimen. He takes about 230 supplements that he believes will extend his life. Not only is compelling evidence for this utterly lacking, it’s likely that aggressive supplementation causes more harm than good. Clinical medicine should strike a more conservative balance than other scientific arenas because of its direct impact on individual and public health.

But – Kurzweil is a provocative thinker with interesting ideas. He is not afraid to take risks, and some of them have paid off. I think he is essentially correct when it comes to artificial intelligence – eventually computers will be able to out-think humans. I am not as confident as he is about predicting the time frame (he thinks that by 2050 computers will outstrip humans), but I think at most he may be off by a few decades – not a long time in the scope of human history.

Scientific research is both an investment and a gamble – because you never know how a specific research program will pan out. It makes sense to spread out our investments and hedge our scientific bets. This partly means that while we may spend the bulk of research dollars on high-probability translational research, it is reasonable to invest progressively smaller percentages on progressively more speculative programs.

I also think that creative and intelligent people should, to some extent, be free to pursue their interests, passions, and hunches. Such research has paid off historically, even though most of it may lead to dead ends. That is the nature of research.

There are limits, of course. There should be some plausibility and rationale behind scientific research.  Not all ideas are worthy of exploration, and some have already been adequately rejected by scientific evidence.

Kurzweil may end up being thought of as the Nikola Tesla of his day – a genuine scientist and eccentric figure with some far out but useful (whether right or wrong) ideas.

______________________

Addendum:

Here is some more information on AI that came up in the comments, but I wanted to put the references in the body of this post:

To get a better idea of where we are, in March of 2008 the Blue Brain project was able to virtually simulate a neocortical column from a rat. (http://seedmagazine.com/news/2008/03/out_of_the_blue.php)  Here is an excerpt:

It took less than two years for the Blue Brain supercomputer to accurately simulate a neocortical column, which is a tiny slice of brain containing approximately 10,000 neurons, with about 30 million synaptic connections between them. “The column has been built and it runs,” Markram says. “Now we just have to scale it up.” Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. “If we build this brain right, it will do everything,” Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? “When I say everything, I mean everything,” he says, and a mischievous smile spreads across his face.

And here is a separate story from two years ago about simulating the complexity of about half a mouse brain at 1/10 speed. (http://news.bbc.co.uk/2/hi/technology/6600965.stm)

Half a real mouse brain is thought to have about eight million neurons, each one of which can have up to 8,000 synapses, or connections, with other nerve fibres.

Modelling such a system, the trio wrote, puts “tremendous constraints on computation, communication and memory capacity of any computing platform”.

The team, from the IBM Almaden Research Lab and the University of Nevada, ran the simulation on a BlueGene L supercomputer that had 4,096 processors, each one of which used 256MB of memory.

Using this machine the researchers created half a virtual mouse brain that had 8,000,000 neurons that had up to 6,300 synapses.
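It’s worth pausing on what those numbers imply about hardware constraints. The following sketch just multiplies out the figures quoted above – the bytes-per-synapse division at the end is my own rough estimate, not something from the article:

```python
# Back-of-envelope check on the mouse-brain simulation figures quoted above.
# Neuron/synapse/hardware numbers come from the BBC story; the final
# bytes-per-synapse estimate is my own rough division, not from the article.

neurons = 8_000_000           # half-mouse-brain model
synapses_per_neuron = 6_300   # upper bound used in the simulation
total_synapses = neurons * synapses_per_neuron

processors = 4_096
mem_per_proc_mb = 256
total_mem_bytes = processors * mem_per_proc_mb * 1024 * 1024  # ~1 TiB total

print(total_synapses)                    # 50,400,000,000 synapses
print(total_mem_bytes / total_synapses)  # ~21.8 bytes of RAM per synapse
```

Roughly 20 bytes per synapse is a very lean representation, which helps explain why the simulation ran at a tenth of real-time speed and why scaling up to a full human brain (with orders of magnitude more synapses) is not just around the corner.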

Saying we will be able to simulate an entire brain over the next few years seems overly optimistic to me, just from a computing-power point of view. But again, there is no fundamental reason why this research will not lead to a virtual brain, once we have the requisite computing power and knowledge of brain connections. I think it’s likely that the first simulated brain will not be precisely human – because we don’t understand the connections precisely enough yet. But we’re getting there, and these computer simulations will actually help us get there.

The question of virtual brain consciousness is a tough one, as I discussed in detail here: http://www.theness.com/neurologicablog/?p=392 
