RAY KURZWEIL: The InstaPundit Interview

I’ve written before about Ray Kurzweil’s new book, The Singularity Is Near: When Humans Transcend Biology, and I thought it might be interesting to get him to expand on his thoughts for InstaPundit readers. Following is an email interview I did with him this past weekend.

GHR: Your book is called “The Singularity is Near” and — as an amusing photo makes clear — you’re spoofing those “The End is Near” characters from the New Yorker cartoons.

For the benefit of those who aren’t familiar with the topic, or who may have heard other definitions, what is your definition of “The Singularity?” And is it the end? Or a beginning?

RK: In chapter 1 of the book, I define the Singularity this way: “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one’s view of life in general and one’s own particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a ‘singularitarian.’”

The Singularity is a transition, but to appreciate its importance, one needs to understand the nature of exponential growth. On the one hand, exponential growth is smooth with no discontinuities, and values remain finite. On the other hand, it is explosive once we reach the “knee of the curve.” The difference between what I refer to as the “intuitive linear” view and the historically correct exponential view is crucial, and I discuss my “law of accelerating returns” in detail in the first two chapters. It is remarkable to me how many otherwise thoughtful observers fail to understand that progress is exponential, not linear. This failure underlies the common “criticism from incredulity” that I discuss at the beginning of the “Response to Critics” chapter.
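To make the contrast concrete, here is a minimal Python sketch of the two views; the growth rates are illustrative assumptions, not figures from the book:

```python
# A minimal sketch contrasting the "intuitive linear" view with the
# exponential view. The rates here are illustrative assumptions.

def linear_projection(current, annual_gain, years):
    """Project capability by adding a fixed gain each year."""
    return current + annual_gain * years

def exponential_projection(current, doubling_time_years, years):
    """Project capability by repeated doubling (the pattern behind
    the 'law of accelerating returns')."""
    return current * 2 ** (years / doubling_time_years)

capability = 1.0  # arbitrary unit of price-performance today
for years in (10, 25, 50):
    lin = linear_projection(capability, annual_gain=1.0, years=years)
    exp = exponential_projection(capability, doubling_time_years=1.0, years=years)
    print(f"{years:>2} yrs: linear x{lin:,.0f} vs exponential x{exp:,.0f}")

# Fifty one-year doublings yield roughly a 10^15-fold increase, while
# the linear view predicts ~50x: the gap beyond the "knee of the curve."
```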

To describe these changes further, within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses, “experience beaming,” and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned. But all of this is just the precursor to the Singularity. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.

GHR: Over what timeframe do you see these things happening? And what signposts might we look for that would indicate we’re approaching the Singularity?

RK: I’ve consistently set 2029 as the date that we will create Turing test-capable machines. We can break this projection down into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Two Japanese efforts targeting 10 quadrillion cps around the end of the decade are already on the drawing board. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but it is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
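As a back-of-envelope check on the supercomputer projection, a few lines of Python suffice; the one-year doubling time is an assumption for illustration, not a figure from the book:

```python
import math

# Rough arithmetic behind the hardware timeline. The doubling time
# below is an illustrative assumption.

BRAIN_CPS = 1e16      # functional equivalent of the brain (from the passage)
SUPER_CPS = 1e14      # supercomputers circa 2005 (from the passage)
DOUBLING_YEARS = 1.0  # assumed years per doubling of supercomputer power

doublings = math.log2(BRAIN_CPS / SUPER_CPS)  # ~6.6 doublings needed
years = DOUBLING_YEARS * doublings
print(f"{doublings:.1f} doublings -> ~{years:.0f} years, i.e., ~{2005 + round(years)}")
# About seven one-year doublings from 2005, roughly matching the
# "end of this decade" projection.
```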

To understand the principles of human intelligence, that is, to achieve the software designs, we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Scanning tools can now see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it is conservative to conclude that we will have effective models for all of the brain.

So at this point, we’ll have a full understanding of the methods of the human brain, which will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will also be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.

By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.

Achieving Turing test-capable nonbiological intelligence will be an important milestone, but this is not the Singularity. This is just creating more human-level intelligence. We already have billions of examples of human-level intelligence. Of course, there will be enormous benefits of machine intelligence with human-level capabilities in that machines will be able to combine the now complementary strengths of human and machine intelligence. Our biological thinking takes place at chemical gradient speeds of a few hundred feet per second, millions of times slower than electronics. And our communication speeds are at the speed of human language, again millions of times slower than what machines are capable of. Of course, our language ability has been very important – other animal species don’t have species-wide knowledge bases at all, let alone exponentially expanding ones, and the ability to share them.
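The speed gap is easy to check with rough numbers; the figures below are common approximations, not values from the book:

```python
# Back-of-envelope check on the biological-vs-electronic speed gap.
# Both figures are rough, commonly cited approximations.

NEURAL_SIGNAL_MPS = 100.0    # ~a few hundred feet per second
ELECTRONIC_SIGNAL_MPS = 2e8  # ~2/3 the speed of light in a conductor

ratio = ELECTRONIC_SIGNAL_MPS / NEURAL_SIGNAL_MPS
print(f"Electronic signaling is ~{ratio:,.0f}x faster")  # ~2,000,000x
```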

In terms of signposts, credible reports of a computer passing the full Turing test will be a very important one, and that signpost will be preceded by non-credible reports of successful Turing tests.

A key insight here is that the nonbiological portion of our intelligence will expand exponentially, whereas our biological thinking is effectively fixed. When we get to the mid-2040s, according to my models, the nonbiological portion of our civilization’s thinking ability will be billions of times greater than the biological portion. Now that represents a profound change.

The use of the term “Singularity” in my book and by the Singularity-aware community is comparable to its use by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied billions and ultimately trillions of trillions fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That’s what I’ve tried to do in this book.

GHR: You look at three main areas of technology, what’s usually called GNR for Genetics, Nanotechnology, and Robotics. But it’s my impression that you regard Artificial Intelligence — strong AI — as the most important aspect. I’ve often wondered about that. I’m reminded of James Branch Cabell’s Jurgen, who worked his way up the theological food chain past God to Koschei The Deathless, the real ruler of the Universe, only to discover that Koschei wasn’t very bright, really. Jurgen, who prided himself on being a “monstrous clever fellow,” learned that “Cleverness was not on top, and never had been.” Cleverness isn’t power in the world we live in now — it helps to be clever, but many clever people aren’t powerful, and you don’t have to look far to see that many powerful people aren’t clever. Why should artificial intelligence change that? In the calculus of tools-to-power, is it clear that a ten-times-smarter-than-human AI is worth more than a ten megaton warhead?

RK: This is a clever – and important – question, which has several aspects to it. One aspect is the relationship between intelligence and power: does power result from intelligence? It would seem that there are many counterexamples.

But to pick this apart, we first need to distinguish between cleverness and true intelligence. Some people are clever or skillful in certain ways but have judgment lapses that undermine their own effectiveness. So their overall intelligence is muted.

We also need to clarify the concept of power, as there are different ways to be powerful. The poet laureate may not have much impact on interest rates (although conceivably a suitably pointed poem might affect public opinion), but s/he does have influence in the world of poetry. The kids who hung out on Bronx street corners some decades back also had limited impact on geopolitical issues, but they did play an influential role in the creation of the hip hop cultural movement with their invention of break dancing. Can you name the German patent clerk who wrote down his daydreams (thought experiments) on the nature of time and space? How powerful did he turn out to be in the world of ideas, as well as in the world of geopolitics? On the other hand, can you name the wealthiest person at that time? Or the U.S. Secretary of State in 1905? Or even the President of the U.S.?

Another important point is that it is possible to put power in the bank, so to speak. Of course, we can literally put money in the bank, and money is power. It generally takes intelligence to create power in the first place – again keeping in mind that there are different types of power. So one can use one’s intelligence to make money and then put it in the bank. Or one can use one’s intelligence to become a famous poet or a famous rap artist, and then people will listen to one’s next creation on the strength of past laurels.

Such stored power can be maintained by organizations as well as individuals – the power of a company or a nation, for example. It takes intelligence to create the power – any kind of power – in the first place, but it can then be stored. A lack of intelligence, though, will cause that power to dissipate: not instantly, but over time, like a slow leak. An organization may have as its nominal leader someone who is not especially intelligent, but there may nonetheless be intelligence around that person. But if the organization truly lacks intelligence, and acts foolishly, it will lose its store of power over time.

A study of history will show that the more technologically sophisticated civilization prevails (and we can certainly consider technology to be a manifestation of intelligence). The rise of India and China in recent history is certainly a manifestation of the intelligence and education of their citizens (more on that later). Israel has little land and no significant natural resources, yet its gross national product is now several times that of Saudi Arabia due to the education and technological sophistication of its citizens.

In short, it is my view that ultimately intelligence prevails, even though the ability to save and store power acts as a “low-pass filter,” to use an engineering term.

The other interesting aspect of your question has to do with the whole issue of promise versus peril. The promise side of the equation is the opportunity for these accelerating technologies to advance complexity, where complexity is meaningful knowledge, including all of the arts and sciences as well as human skills. To take an extreme example of what you refer to as power without intelligence, gray goo certainly represents power – destructive power – and if such an existential threat were to prevail, it would represent a catastrophic loss of complexity. It would be a triumph of raw power over intelligence. A ten-megaton warhead is similar. Note that in such scenarios, the power that might succeed over intelligence is invariably a destructive power.

Now I have been accused of being an optimist on these questions, and I think that accusation has merit. On the other hand, I was also the person who alerted Bill Joy to the dangers of technology, which started with our discussion in a Lake Tahoe bar room in September of 1998. And it would not at all be accurate to say that I am sanguine or dismissive about these dangers. I address them in some detail in chapter 8 of The Singularity Is Near, as you know.

We have an existential threat now in the form of the possibility of a bioengineered malevolent biological virus. With all the talk of bioterrorism, the possibility of a bioengineered bioterrorism agent gets inadequate attention. The tools and knowledge to create a bioengineered pathogen are more widespread than the tools and knowledge to create an atomic weapon, yet such a pathogen could be far more destructive. I’m on the Army Science Advisory Group (a board of five people who advise the Army on science and technology), and the Army is the institution responsible for the nation’s bioterrorism protection. Without revealing anything confidential, I can say that there is acute awareness of these dangers, but there is neither the funding nor the national priority to address them in an adequate way.

The answer is not relinquishment of these advanced technologies, as I argue in the chapter, because in addition to depriving humankind of profound benefits (such as effective treatments for cancer, heart disease, and other diseases), relinquishment would actually make the dangers worse by driving these technologies underground, where responsible practitioners would not have easy access to the tools needed to develop defenses. The real answer is to put more stones on the defensive side of the scale. Along these lines, I’ve testified to Congress (http://www.kurzweilai.net/meme/frame.html?main=/articles/art0556.html) on my proposal for a “Manhattan Project”-style effort to rapidly develop a response system for new biological viruses, whether human-made or natural. For example, we could put in place a system that would quickly sequence a new virus, create an RNAi (RNA interference) medication for it, and then rapidly scale up production. RNAi has been shown to be effective against specific biological viruses, because almost all biological viruses use messenger RNA, which RNAi blocks. In this testimony I also address similar issues for nanotechnology, which are still a couple of decades away.
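As a toy illustration of one step in such a pipeline, the sketch below scans a made-up viral mRNA for 21-nucleotide siRNA target candidates. Real siRNA design uses far more elaborate selection rules; the simple GC-content filter here is only a stand-in:

```python
# Toy sketch: pick candidate siRNA target sites on a sequenced viral
# mRNA. The sequence is invented, and the GC filter is a simplified
# stand-in for real siRNA design rules.

COMPLEMENT = str.maketrans("AUGC", "UACG")

def sirna_candidates(mrna, length=21, gc_min=0.30, gc_max=0.60):
    """Yield (position, target site, antisense guide strand)."""
    for i in range(len(mrna) - length + 1):
        window = mrna[i:i + length]
        gc = (window.count("G") + window.count("C")) / length
        if gc_min <= gc <= gc_max:
            guide = window.translate(COMPLEMENT)[::-1]  # reverse complement
            yield i, window, guide

viral_mrna = "AUGGCUACGUUAGCCGAUAACGGAUUCCAGUAACGUAGCUA"  # hypothetical
for pos, target, guide in sirna_candidates(viral_mrna):
    print(f"pos {pos:2d} target {target} guide {guide}")
```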

The response of some other observers, such as Richard Smalley, is simply to deny that such dangers as self-replicating nanotechnology are feasible. As I point out in the book, he has made the motivation for this denial explicit. And although the existential nanotechnology danger is not yet at hand, denial is not the appropriate strategy.

So, yes, it is possible for the destructive (complexity destroying) powers represented by one of the existential threats I discuss in chapter 8 to prevail. I’m optimistic that they won’t, but less optimistic that we can avoid all painful events. Technology accelerated smoothly through the twentieth (and all prior) centuries, but we certainly didn’t avoid painful episodes.

GHR: It seems to me that one of the characteristics of the Singularity is the development of what might be seen as weakly godlike powers on the part of individuals. Will society be able to handle that sort of thing? The Greek gods had superhuman powers (pretty piddling ones, in many ways, compared to what we’re talking about) but an at-least-human degree of egocentrism, greed, jealousy, etc. Will post-Singularity humanity do better?

RK: Arguably we already have powers comparable to the Greek gods, albeit, as you point out, piddling ones compared to what is to come. For example, you are able to write ideas in your blog and instantly communicate them to just those people who are interested. We have many ways of communicating our thoughts to precisely those persons around the world with whom we wish to share ideas. If you want to acquire an antique plate with a certain inscription, you have a good chance of quickly finding the person who has it. We have increasingly rapid access to our exponentially growing human knowledge base.

Human egocentrism, greed, jealousy, and other emotions that emerged from our evolution in much smaller clans have nonetheless not prevented the smooth, exponential growth of knowledge and technology through the centuries. So I don’t see these emotional limitations halting the ongoing progression of technology.

Adaptation to new technologies does not occur by old technologies suddenly disappearing. The old paradigms persist while new ones take root quickly. A great deal of economic commerce, for example, now transcends national boundaries, but the boundaries are still there, even if now less significant.

But there is reason for believing we will be in a position to do better than in times past. One important upcoming development will be the reverse-engineering of the human brain. In addition to giving us the principles of operation of human intelligence, which will expand our AI toolkit, it will also give us unprecedented insight into ourselves. As we merge with our technology, and as the nonbiological portion of our intelligence begins to predominate in the 2030s, we will have the opportunity to apply our intelligence to improving on – redesigning – these primitive aspects of our nature.

GHR: The term “Singularity” — as applied to technological/social change — was coined by Vernor Vinge, who is both a professor of computer science and a science fiction writer. Since then, the idea has appeared in all sorts of science fiction by Vinge and others. I recently read Charles Stross’s Accelerando, where it’s predicted that once the entire mass of the Solar System has been devoted to computation, it will be taken over by automated sentient legal documents and the equivalent of 419 scams and spambots. I suspect that Stross was trying a bit hard to be clever, but what science-fictional treatments do you find compelling, if any? What do they get right and wrong?

RK: If the computational substrate that manifests our intelligence later in this century becomes taken over by scams and spambots, that would represent an existential failure, comparable to the triumph of a bioengineered biological virus or gray goo. We already have a complex ecology in the substrate represented today by our computers and the Internet. But we don’t see self-replicating software entities dominating and crowding out useful complexity.

With regard to science fiction, it should be pointed out that the science fiction/futurism movies of the most recent decade often represent the written science fiction of a couple of decades earlier. Most science futurism movies make the mistake of taking one future change and applying it to today’s world as if nothing else will change. For example, the movie AI depicts near human-level cyborgs, but everything else, from the coffee maker to the cars, is essentially unchanged. The Matrix movies, although dystopian as is common among science futurism films, do provide a somewhat more comprehensive view of the future nature of virtual reality.

It is difficult for the science fiction genre to deal effectively with the many diverse changes that a realistic depiction of the future would entail. It would require explaining a panoply of changes. It is easier for a writer to concentrate on the literary challenges of one type of change while being able to lean on an otherwise familiar landscape to create the needed human drama.

One science fiction writer who has made effective attempts at depicting the many profound changes that lie ahead is Cory Doctorow. His novel usr/bin/god (which I discuss on pages 271-272) depicts a genetic algorithm that evolves a Turing test-capable AI. The evaluation function is to send each AI program out to interact in chat rooms and determine how long each system can last without being challenged by one of the human participants with a statement like, “what are you, a bot, or something?” This is an interesting idea and may be a good way of finishing the strong AI project once we get close.
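A minimal sketch of that evolutionary setup might look like the following; the bots, the mutation operator, and the stubbed survival-time fitness function are all hypothetical scaffolding, not anything from the story or the book:

```python
import random

# Sketch of a genetic algorithm whose fitness is how long a chatbot
# survives in a chat room before being challenged as a bot. The "bot"
# and its fitness are stubs; a real system would hold actual
# conversations.

POP_SIZE, GENERATIONS = 20, 50

def chat_survival_time(bot):
    """Stubbed fitness: minutes before a human asks
    'what are you, a bot, or something?'."""
    return max(0.0, bot["quality"] + random.gauss(0, 1))

def mutate(bot):
    """Return a slightly varied copy of a bot."""
    return {"quality": bot["quality"] + random.gauss(0, 0.5)}

population = [{"quality": random.random()} for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=chat_survival_time, reverse=True)
    survivors = ranked[: POP_SIZE // 2]  # selection: keep the best half
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=chat_survival_time)
print(f"best bot's estimated survival: {chat_survival_time(best):.1f} min")
```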

GHR: If an ordinary person were trying to prepare for the Singularity now, what should he or she do? Is there any way to prepare? And, for that matter, how should societies prepare, and can they?

RK: In essence, the Singularity will be an explosion of human knowledge made possible by the amplification of our intelligence through its merger with its exponentially growing variant. Creating knowledge requires passion, so one piece of advice would be to follow your passion.

That having been said, we need to keep in mind that the cutting edge of the GNR revolutions is science and technology. So individuals need to be science- and computer-literate. And societies need to emphasize science and engineering education and training. Along these lines, there is reason for concern in the U.S. I’ve attached seven charts I’ve put together (that you’re welcome to use) that show some disturbing trends. Bachelor’s degrees in engineering in the U.S. ran at 70,000 per year in 1985, but had dwindled to around 53,000 by 2000. In China, the numbers were comparable in 1985 but had soared to 220,000 by 2000, and have continued to rise since then. We see the same trend in all other technological fields, including computer science and the natural sciences, and in other Asian countries such as Japan, Korea, and India (India is not shown in these graphs). We see the same trends at the doctoral level as well.

One counterpoint one could make is that the U.S. leads in the application of technology. Our musicians and artists, for example, are very sophisticated in the use of computers. If you go to the NAMM (National Association of Music Merchants) convention, it looks and reads like a computer conference. I spoke recently to the American Library Association, and the presentations were all about databases and search tools. Essentially every conference I speak at, although diverse in topic, looks and reads like a computer conference.

But there is an urgent need in our country to attract more young people to science and engineering. We need to make these topics cool and compelling.