Brave new world: the evolution of mind in the twenty-first century

Article for WIRED Magazine discussion group

October 2, 1999

Ray Kurzweil

Author of The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999)

The intelligence of machines – nonbiological entities – will exceed human intelligence early in this next century. By intelligence, I include all the diverse and subtle ways in which humans are intelligent—including musical and artistic aptitude, creativity, physically moving through the world, and even responding to emotion. By 2019, a $1,000 computer will match the processing power of the human brain – about 20 million billion calculations per second. This level of processing power is a necessary but not sufficient condition for achieving human-level intelligence in a machine. Organizing these resources – the "software" of intelligence – will take us to 2029, by which time your average personal computer will be equivalent to a thousand human brains.

Once a computer achieves a level of intelligence comparable to human intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If I learn French, or read War and Peace, I can’t readily download that learning to you. You have to acquire that scholarship the same painstaking way that I did. My knowledge, embedded in a vast pattern of neurotransmitter concentrations and interneuronal connections, cannot be quickly accessed or transmitted. But we won’t leave out quick downloading ports in our nonbiological equivalents of human neuron clusters. When one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other machines.

As a contemporary example, we spent years teaching one research computer how to recognize continuous human speech. We exposed it to thousands of hours of recorded speech, corrected its errors, and patiently improved its performance. Finally, it became quite adept at recognizing speech (I dictated most of my recent book to it). Now if you want your own personal computer to recognize speech, it doesn’t have to go through the same process; you can just download the fully trained program in seconds.

Ultimately, billions of nonbiological entities can share mastery of all human- and machine-acquired knowledge. Computers are also potentially millions of times faster than human neural circuits, and they have far more reliable memories.

One approach to designing intelligent computers will be to copy the human brain, so these machines will seem very human. And through nanotechnology, which is the ability to create physical objects atom by atom, they will have human-like – albeit greatly enhanced – bodies as well. Having human origins, they will claim to be human, and to have human feelings. And being immensely intelligent, they’ll be very convincing when they tell us these things.

Keep in mind that this is not an alien invasion of intelligent machines. It is emerging from within our human-machine civilization. There will not be a clear distinction between human and machine as we go through the twenty-first century. First of all, we will be putting computers – neural implants – directly into our brains. We’ve already started down this path. We have neural implants to counteract Parkinson’s Disease and tremors from multiple sclerosis. I have a deaf friend who can now hear what I am saying because of his cochlear implant. Under development is a retina implant that will perform a similar function for blind individuals, essentially replacing certain visual processing circuits of the brain. Recently, scientists from Emory University placed a chip in the brain of a paralyzed stroke victim, who can now begin to communicate and control his environment directly from his brain.

In the 2020s, neural implants will not be just for disabled people, and introducing these implants into the brain will not require surgery, but more about that later. There will be ubiquitous use of neural implants to improve our sensory experiences, perception, memory, and logical thinking.

These "noninvasive" implants will also plug us in directly to the World Wide Web. By 2030, "going to a web site" will mean entering a virtual reality environment. The implant will generate the streams of sensory input that would otherwise come from our real senses, thus creating an all-encompassing virtual environment that responds to the behavior of our own virtual body in the virtual environment. This technology will enable us to have virtual reality experiences with other people – or simulated people – without requiring any equipment not already in our heads. And virtual reality will not be the crude experience one finds in today’s arcade games. Virtual reality will be as realistic, detailed, and subtle as real reality. So instead of just phoning a friend, you can meet in a virtual French café in Paris, or take a walk on a virtual Mediterranean beach, and it will seem very real. People will be able to have any type of experience with anyone – business, social, romantic, sexual – regardless of physical proximity.


To see into the future, we need insight into the past. We need to discern all the relevant trends and their interactions. There are a number of common mistakes in attempting prognostication. One common failure is to focus on only one aspect of science and technology, while ignoring other developments that are likely to intersect. Another is to see only one or two iterations of advancement in a technology, and then assume that progress will come to a halt.

Probably the most important failure is neglecting to understand the accelerating nature of technological progress. The next twenty years will see far more change than the previous hundred. The key to an assessment of future trends is timing, determining how much progress can realistically be expected in particular time frames.

One very important trend is referred to as "Moore’s Law." Gordon Moore, a cofounder of Intel and later its Chairman, noted in the mid-1970s that we could squeeze twice as many transistors onto an integrated circuit every twenty-four months. The implication is that computers, which are built from integrated circuits, are doubling in power every two years. Lately, the rate has been even faster.
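
The compounding this implies can be computed directly. Here is a minimal sketch; the twenty-four-month doubling period is the figure cited above, while the function and its name are merely illustrative:

```python
# Growth implied by a fixed doubling period (24 months, per the figure above).
def growth_factor(years, doubling_period_years=2.0):
    """Multiplier in transistor density after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(10))  # a decade of two-year doublings -> 32.0x
print(growth_factor(20))  # two decades -> 1024.0x
```

A decade of such doublings multiplies density roughly thirty-twofold; two decades, a thousandfold.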

After sixty years of devoted service, Moore’s Law will die a dignified death around the year 2019. By that time, transistor features will be just a few atoms in width, and the strategy of ever finer photolithography will have run its course. So, will that be the end of the exponential growth of computing?

Don’t bet on it.

If we plot the speed (in instructions per second) per $1000 (in constant dollars) of 49 famous calculators and computers spanning the entire twentieth century, we note some interesting observations (See Figure).

First, Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. census, to the relay-based machine that Turing’s team used to crack the Nazi Enigma code, to the vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which I used to dictate (and automatically transcribe) this article.

But I noticed something else surprising. When I plotted the 49 machines on a logarithmic graph (where a straight line means exponential growth), I didn’t get a straight line. What I got was another exponential curve. In other words, there’s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.
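
The compounding across these three eras can be checked with a short sketch. The era boundaries and doubling periods are the figures just cited; treating each era's doubling time as constant is an illustrative simplification:

```python
# Cumulative speed multiplier across eras with shrinking doubling times,
# using the doubling periods cited above (piecewise-constant simplification).
eras = [  # (start_year, end_year, doubling_period_in_years)
    (1910, 1950, 3.0),
    (1950, 1966, 2.0),
    (1966, 1999, 1.0),
]

multiplier = 1.0
for start, end, period in eras:
    multiplier *= 2 ** ((end - start) / period)

print(f"{multiplier:.2e}")  # ~2.3e16: sixteen orders of magnitude in under a century
```

More than fifty doublings accumulate over the century, and most of them arrive in the final third of it.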

Why is this happening? The answer has to do with the exponential nature of time, something I call the Law of Time and Chaos. Time does not move in a linear fashion. The rate of a process is always exponentially slowing down or speeding up.

Take the Universe, for example. There were three major paradigm shifts in the first billionth of a second (the emergence of gravity, the emergence of matter, and the emergence of the four fundamental forces). Now the Universe takes billions of years to get everything organized for an epochal event.

The process of evolution moves in the opposite direction. The evolution of life-forms took billions of years to get started. Later on, the emergence of Homo sapiens from our hominid ancestors took only hundreds of thousands of years. The next step in evolution was too fast for DNA-guided protein synthesis, so the focus of evolution moved from the development of life-forms to the creation of technology, which I regard as evolution by other means (although this latter form of evolution should not properly be regarded as a "blind watchmaker"). It is interesting to note that there has been room for only one species on Earth in the ecological niche of technology creators.

In a process, the time interval between salient events expands or contracts along with the amount of chaos. This relationship is the key to understanding the reason that the exponential growth of computing will survive the demise of Moore’s Law. Evolution started with vast chaos, and little effective order, so early progress was slow. But evolution creates ever increasing order. That is, after all, the essence of evolution. Order is the opposite of chaos, so when order in a process increases – as is the case for evolution – time speeds up. I call this important sub-law the "Law of Accelerating Returns," to contrast it with a better known law in which returns diminish.

Computation represents the essence of order in technology. Being subject to the evolutionary process that is technology, it too grows exponentially. We can state this in terms of the speed of the process – it took ninety years to achieve the first MIP (Million Instructions per Second) for a thousand dollars. Now we add an additional MIP per thousand dollars every day.
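
The contrast between those two rates is itself striking. In the sketch below, the ninety-year and one-MIPS-per-day figures are from the text; the rest is back-of-envelope arithmetic:

```python
# Comparing the historical and current rates of adding MIPS per $1,000.
days_per_year = 365.25
historical_rate = 1 / (90 * days_per_year)  # MIPS per $1,000 per day, averaged over ninety years
current_rate = 1.0                          # one additional MIPS per $1,000 every day

# Today's daily gain is roughly 33,000 times the century-long average.
print(current_rate / historical_rate)
```

In other words, we now gain in a single day what once took the better part of a century.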

If we view the exponential growth of computation in its proper perspective, as one example of many of the Law of Accelerating Returns, then we can confidently predict its continuation. Moore’s Law is not just "a set of industry expectations and goals," as Randy Isaac, head of basic science at IBM, contends. It is one manifestation of a basic law that governs the pace of any process through time, including evolutionary processes.

A sixth paradigm will take over from Moore’s Law, just as Moore’s Law took over from discrete transistors, and vacuum tubes before that. There are many new technologies waiting in the wings. Nanotube circuits, for example, which are formed from hexagonal lattices of carbon atoms rolled into tubes, are already working in laboratories. They are capable of forming extremely dense three-dimensional arrays of computing elements. A one inch cube of nanotube circuitry would be a million times more powerful than the human brain. There are more than enough new computing technologies now being researched, including nanotubes, three-dimensional chips, optical computing, crystalline computing, DNA computing, and quantum computing, to keep the Law of Accelerating Returns going for a long time.

So where will this take us?

By the year 2020, your $1,000 personal computer will have the processing power of the human brain – 20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). By 2030, it will take a village of human brains to match $1,000 of computing. By 2060, $1,000 of computing will equal the processing power of all human brains on Earth. Okay, I may be off a year or two (See Figure).
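
The 20-million-billion figure is straightforward to verify; all three inputs are the estimates given in the text:

```python
# Verifying the brain processing-power figure from the estimates above.
neurons = 100e9                       # ~100 billion neurons
connections_per_neuron = 1_000
calcs_per_second_per_connection = 200

total = neurons * connections_per_neuron * calcs_per_second_per_connection
print(f"{total:.0e}")  # 2e+16 calculations per second, i.e. 20 million billion
```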

Of course, this only includes those brains still using carbon-based neurons. While human neurons are wondrous creations in a way, we wouldn’t design computing circuits the same way. Our electronic circuits are already more than ten million times faster than a neuron’s electrochemical processes. Most of the complexity of a human neuron is devoted to maintaining its life support functions, not its information processing capabilities. Ultimately, we will need to port our mental processes to a more suitable computational substrate. Then our minds won’t have to stay so small, being constrained as they are today to a mere hundred trillion neural connections each operating at a ponderous 200 analog calculations per second.

These projections are actually conservative because I considered only the first level of exponential growth. But one level of acceleration is enough. A careful consideration of the law of time and chaos, and its key sublaw, the law of accelerating returns, shows that the exponential growth of computing is not like those other exponential trends that run out of resources. The two resources it needs – the growing order of the evolving technology itself, and the chaos from which an evolutionary process draws its options for further diversity – are essentially without limit in the Universe.


So far, I’ve been talking about the hardware of computing. The software is even more salient. Achieving the computational capacity of the human brain, or even villages and nations of human brains will not automatically produce human levels of capability. It is a necessary but not sufficient condition.

The organization and content of these resources – the software of intelligence – is also critical.

There are a number of compelling scenarios to capture higher levels of intelligence in our computers, and ultimately human levels and beyond. We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and understand written documents. Unlike many contemporary "neural net" machines, which use mathematically simplified models of human neurons, more advanced neural nets are already using highly detailed models of human neurons, including their ability to combine digital and analog forms of information. Although the ability of today’s computers to extract and learn knowledge from natural language documents is limited, their capabilities in this domain are improving rapidly. Computers will be able to read on their own, understanding and modeling what they have read, by the second decade of the twenty-first century. We can then have our computers read all of the world’s literature – books, magazines, scientific journals, and other available material. Ultimately, the machines will gather knowledge on their own by venturing into the physical world, drawing from the full spectrum of media and information services, and sharing knowledge with each other (which machines can do far more easily than their human creators).

Once a computer achieves a human level of intelligence, it will necessarily soar past it. Since their inception, computers have significantly exceeded human mental dexterity in their ability to remember and process information. A computer can remember billions or even trillions of facts perfectly, while we are hard pressed to remember a handful of phone numbers. A computer can quickly search a data base with billions of records in fractions of a second. As I mentioned earlier, computers can readily share their knowledge. The combination of human level intelligence in a machine with a computer’s inherent superiority in the speed, accuracy and sharing ability of its memory will be formidable.


The most compelling scenario for mastering the software of intelligence is to tap the blueprint of the best example we can get our hands on of an intelligent process. There is no reason why we cannot reverse engineer the human brain, and essentially copy its design. It took its original designer several billion years to develop. And it’s not even copyrighted.

The most immediately accessible way to accomplish this is through destructive scanning: we take a frozen brain, preferably one frozen just slightly before rather than slightly after it was going to die anyway, and examine one brain layer – one very thin slice – at a time. We can readily see every neuron and every connection and every neurotransmitter concentration represented in each synapse-thin layer.

Human brain scanning has already started. A condemned killer allowed his brain and body to be scanned and you can access all 10 billion bytes of him on the Internet. He has a 25 billion byte female companion on the site as well in case he gets lonely. This scan is not high enough resolution for our purposes, but then we probably don’t want to base our templates of machine intelligence on the brain of a convicted killer, anyway.

Scanning a frozen brain is feasible today, albeit not yet at a sufficient speed or bandwidth. But the Law of Accelerating Returns will provide the requisite scanning speed, just as it did for the human genome scan.

We also have noninvasive scanning techniques today, including high-resolution magnetic resonance imaging (MRI), optical imaging, near-infrared scanning, and other noninvasive technologies, which are capable in certain instances of resolving individual somas, or neuron cell bodies. Brain scanning technologies are increasing their resolution with each new generation. Future generations will enable us to resolve the connections between neurons. Ultimately we will be able to peer inside the synapses and record the neurotransmitter concentrations.

There are a number of technical challenges in accomplishing all this, including achieving suitable resolution, bandwidth, freedom from vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living – it is easier to get someone deceased to sit still, for one thing. But noninvasively scanning a living brain will ultimately become feasible as MRI, optical, and other scanning technologies continue to improve in resolution and speed.

In fact, the driving force behind the rapidly improving capability of noninvasive scanning technologies is again the Law of Accelerating Returns, because it requires massive computational ability to build the high resolution three-dimensional images. The exponentially increasing computational ability provided by the Law of Accelerating Returns (and for another fifteen to twenty years, Moore’s Law) will enable us to continue to rapidly improve the resolution and speed of these scanning technologies.


Yet another approach to scanning the human brain is to scan it from inside. By 2030, "nanobot" (i.e., nano robot) technology will be viable, and brain scanning will be a prominent application. Nanobots are robots that are the size of human blood cells, or even smaller. Billions of them could travel through every brain capillary and scan every salient neural detail from up close. Using high speed wireless communication, the nanobots would communicate with each other, and with other computers that are compiling the brain scan data base.

We already have technology capable of providing very high resolution scans, so long as the scanner is physically proximate to the neural features. The computational and communication requirements are also essentially feasible today. The primary features that are not yet practical are nanobot size and cost. As I discussed above, we can project the exponentially declining cost of computation. Miniaturization is another readily predictable aspect of the law of accelerating returns. The size of electronics, for example, is shrinking at an exponential rate, currently by a factor of 5.6 per linear dimension per decade. We can expect, therefore, the requisite nanobot technology by around 2030. Because of its ability to place each scanner in very close physical proximity to every neural feature, nanobot-based scanning will be more practical than scanning the brain from outside.
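
A quick sketch shows how that rate implies blood-cell-scale devices on roughly that timescale. The 5.6x-per-decade rate is the figure from the text; the millimeter-to-8-micron example is my own illustration, not from the article:

```python
import math

# Years to shrink a device by a given linear factor,
# assuming the rate cited above: 5.6x per linear dimension per decade.
def years_to_shrink(linear_factor, rate_per_decade=5.6):
    return 10 * math.log(linear_factor) / math.log(rate_per_decade)

# Example: millimeter-scale electronics down to blood-cell scale (~8 microns),
# a ~125x linear reduction -- illustrative numbers only.
print(round(years_to_shrink(125)))  # ~28 years, consistent with "around 2030"
```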


How will we apply the thousands of trillions of bytes of information derived from each brain scan? One approach is to use the results to design more intelligent parallel algorithms for our machines, particularly those based on one of the neural net paradigms. With this approach, we don’t have to copy every single connection. There is a great deal of repetition and redundancy within any particular brain region. Although the information contained in a human brain would require thousands of trillions of bytes of information (on the order of 100 billion neurons times an average of 1,000 connections per neuron, each with multiple neurotransmitter concentrations and connection data), the design of the brain is characterized by a human genome of only about a billion bytes.

Furthermore, most of the genome is redundant, so the initial design of the brain is characterized by at most a few hundred million bytes, about the size of Microsoft Word. Of course, the complexity of our brains greatly increases as we interact with the world. It is not necessary, however, to capture each detail in order to reverse engineer the salient digital-analog algorithms. With this information, we can design simulated nets that operate similarly. There are already multiple efforts under way to scan the human brain and apply the insights derived to the design of intelligent machines. The ATR (Advanced Telecommunications Research) Lab in Kyoto, Japan, for example, is building a silicon brain with 1 billion neurons. Although this is 1% of the number of neurons in a human brain, the ATR neurons operate at much faster speeds.
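
The gap between the grown brain and its genetic design file can be made concrete. The neuron, connection, and genome figures below are the article's; the ten-bytes-per-connection and 300-megabyte values are my own rough assumptions for illustration:

```python
# Rough contrast between the brain's stored state and its compressed "design file".
connections = 100e9 * 1_000      # neurons x connections per neuron (figures above)
bytes_per_connection = 10        # assumed: neurotransmitter levels plus connection data
brain_state_bytes = connections * bytes_per_connection  # ~1e15, "thousands of trillions"

genome_bytes = 1e9               # ~1 billion bytes, mostly redundant
design_bytes = 300e6             # "a few hundred million bytes" after removing redundancy

print(f"{brain_state_bytes / design_bytes:.1e}")  # 3.3e+06: grown brain vs. its design
```

Under these assumptions, the adult brain holds several million times more information than the design that specifies it, which is why copying the design is a far smaller task than copying the state.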

After the algorithms of a region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. For one thing, they can be run on a computational substrate that is already more than ten million times faster than neural circuitry. And we can also throw in the methods for building intelligent machines that we already understand.


Perhaps a more interesting approach than this scanning-the-brain-to-understand-it scenario is scanning-the-brain-to-download-it. Here we copy someone’s brain to map the locations, interconnections, and contents of all the somas, axons, dendrites, presynaptic vesicles, neurotransmitter concentrations, and other neural components and levels. Its entire organization can then be re-created on a neural computer of sufficient capacity, including the contents of its memory.

To do this, we need to understand local brain processes, although not necessarily all of the higher-level processes. Scanning a brain to download it is not as daunting an effort as it may sound. First of all, another scanning project – the human genome scan – also sounded daunting when it was first suggested. At the rate at which we could scan genetic codes twelve years ago, it would have taken thousands of years to complete the project. But like all other technology projects, the human genome scan is governed by the Law of Accelerating Returns. Our ability to sequence the DNA in human genes has been accelerating like all other technology, so it appears that the project will indeed be completed on time, a couple of years from now.

The computationally salient aspects of individual neurons are complicated, but definitely not beyond our ability to accurately model. For example, Ted Berger and his colleagues at Hedco Neurosciences have built integrated circuits that precisely match the digital and analog information processing characteristics of neurons, including clusters with hundreds of neurons. Carver Mead and his colleagues at Caltech have built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits.

A recent experiment at San Diego’s Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called "chaotic computing." Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling amongst them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down, and a stable pattern emerges. This pattern represents the "decision" of the neural network. If the neural network is performing a pattern recognition task (which, incidentally, comprises more than 90% of the activity in the human brain), then the emergent pattern represents the appropriate recognition.

So the question addressed by the San Diego researchers was whether electronic neurons could engage in this chaotic dance alongside biological ones. They hooked up their artificial neurons with those from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (i.e., chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers. This indicates that the researchers’ mathematical model of these neurons was reasonably accurate.

There are many projects around the world creating nonbiological devices that recreate in great detail the functionality of human neuron clusters, and the accuracy and scale of these neuron-cluster replications are rapidly increasing.

As the computational power to emulate the human brain becomes available – we’re not there yet, but we will be there within a couple of decades – projects already under way to scan the human brain will be accelerated, with a view both to understand the human brain in general, as well as providing a detailed description of the contents and design of specific brains. By the third decade of the twenty-first century, we will be in a position to create highly detailed and complete maps of the computationally relevant features of all neurons, neural connections and synapses in the human brain, and to recreate these designs in suitably advanced neural computers.


Now, what will we find when we do this?

We have to consider this question on both the objective and subjective levels. "Objective" means everyone except me, so let’s start with that. Objectively, when we scan someone’s brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent "person" will appear to other observers to have very much the same personality, history and memory as the person originally scanned. That is, once the technology has been refined and perfected. Like any new technology, it won’t be perfect at first. But ultimately, the scans and recreations will be very accurate and realistic.

Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person.

Subjectively, the issue is more subtle and profound, but first we need to reflect on one additional objective issue: our physical self.


Consider how much of our thinking is directed toward our body and its survival, security, nutrition, and image, not to mention affection, sexuality, and reproduction. Many, if not most, of the goals we attempt to advance using our brains have to do with our bodies: protecting them, providing them with fuel, making them attractive, making them feel good, providing for their myriad needs, not to mention desires. Some philosophers maintain that achieving human-level intelligence is impossible without a body. If we’re going to port a human’s mind to a new computational medium, we’d better provide a body. A disembodied mind will quickly get depressed.

There are a variety of bodies that we will provide for our machines, and that they will provide for themselves: bodies built through nanotechnology (an emerging field devoted to building highly complex physical entities atom by atom), virtual bodies (that exist only in virtual reality), and bodies comprised of swarms of nanobots.

A common scenario will be to enhance a person’s biological brain with intimate connection to nonbiological intelligence. In this case, the body remains the good old human body that we’re familiar with, although this too will become greatly enhanced through biotechnology (gene enhancement and replacement) and nanotechnology. A detailed examination of twenty-first century bodies is beyond the scope of this article, but is examined in chapter seven of my recent book The Age of Spiritual Machines.


To return to the issue of subjectivity, consider: is the reinstantiated mind the same consciousness as the person we just scanned? Are these "people" conscious at all? Is this a mind or just a brain?

Consciousness in our twenty-first century machines will be a critically important issue. But it is not easily resolved, or even readily understood. People tend to have strong views on the subject, and often just can’t understand how anyone else could possibly see the issue from a different perspective. Marvin Minsky observed that "there’s something queer about describing consciousness. Whatever people mean to say, they just can’t seem to make it clear."

We don’t worry, at least not yet, about causing pain and suffering to our computer programs. But at what point do we consider an entity, a process, to be conscious, to feel pain and discomfort, to have its own intentionality, its own free will? How do we determine if an entity is conscious, if it has subjective experience? How do we distinguish a process that is conscious from one that just acts as if it is conscious?

We can’t simply ask it. If it says "Hey I’m conscious," does that settle the issue? No, we have computer games today that effectively do that, and they’re not terribly convincing.

How about if the entity is very convincing and compelling when it says "I’m lonely, please keep me company." Does that settle the issue? If we look inside its circuits, and see similar kinds of feedback loops in its brain that we see in a human brain, does that settle the issue?

And just who are these people in the machine, anyway? The answer will depend on who you ask. If you ask the people in the machine, they will strenuously claim to be the original persons. For example, if we scan – let’s say myself – and record the exact state, level, and position of every neurotransmitter, synapse, neural connection, and other relevant details, and then reinstantiate this massive data base of information into a neural computer of sufficient capacity, the person that then emerges in the machine will think that he is (and had been) me. He will say "I grew up in Queens, New York, went to college at MIT, stayed in the Boston area, sold a few artificial intelligence companies, walked into a scanner there, and woke up in the machine here. Hey, this technology really works."

But wait. Is this really me? For one thing, old biological Ray (that’s me) still exists. I’ll still be here in my carbon-cell-based brain. Alas, I will have to sit back and watch the new Ray succeed in endeavors that I could only dream of.


Let’s consider the issue of just who I am, and who the new Ray is a little more carefully. First of all, am I the stuff in my brain and body?

Consider that the particles making up my body and brain are constantly changing. We are not at all permanent collections of particles. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern – the water – is replaced in milliseconds. This argues that we should not associate our fundamental identity with a specific set of particles, but rather with the pattern of matter and energy that we represent. Many contemporary philosophers seem partial to this "identity from pattern" argument.

But wait. If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it (with the nanobots, this will be a feasible scenario). If you then come to me, and say, "good news, Ray, we’ve successfully reinstantiated your mind file, so we won’t be needing your old brain anymore," I may suddenly realize the flaw in the "identity from pattern" argument. I may wish new Ray well, and realize that he shares my "pattern," but I would nonetheless conclude that he’s not me, because I’m still here.

Let’s consider another perplexing scenario. Suppose I replace a small number of biological neurons with nonbiological ones that work the same way (they may provide certain benefits such as greater reliability and longevity, but that’s not relevant to this thought experiment). After I have this procedure performed, am I still the same person? My friends certainly think so. I still have the same self-deprecating humor, the same silly grin – yes, I’m still the same guy.

It should be clear where I’m going with this. Bit by bit, region by region, I ultimately replace my entire brain with essentially identical (perhaps improved) nonbiological equivalents. At each point, I feel the procedures were successful. At each point, I feel that I am the same guy. After each procedure, I claim to be the same guy. My friends concur. There is no old Ray and new Ray, just one Ray, one that never appears to fundamentally change.

But consider this. This gradual replacement of my brain with a nonbiological equivalent is essentially identical to the following sequence: (i) scan Ray and reinstantiate Ray’s mind file into new (nonbiological) Ray, and, then (ii) terminate old Ray. But we concluded above that in such a scenario new Ray is not the same as old Ray. And if old Ray is terminated, well then that’s the end of Ray. So the gradual replacement scenario essentially results in new Ray, with old Ray terminated, even though we never saw him go missing. So what appears to be the continuing existence of just one Ray is really the creation of new Ray and the termination of old Ray.

On yet another hand (we’re running out of philosophical hands here), the gradual replacement scenario is not altogether different from what happens normally to our biological selves, in that our particles are always rapidly being replaced. So are we constantly being replaced with someone else who just happens to be very similar to our old selves?

I am trying to illustrate why consciousness is not an easy issue. If we talk about consciousness as just a certain type of intelligent skill (the ability to reflect on one’s own self and situation, for example), then the issue is not difficult at all, because any skill or capability or form of intelligence that one cares to define will be replicated in nonbiological entities (i.e., machines) within a few decades. With this type of objective view of consciousness, the conundrums do go away. But a fully objective view does not penetrate to the core of the issue, because the essence of consciousness is subjective experience, not objective correlates of that experience.

Will these future machines be capable of having spiritual experiences?

Oh, they’ll certainly claim to. They will claim to be people, and to have the full range of emotional and spiritual experiences that people claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior one associates with these feelings.

How do the claims and behaviors – compelling as they will be – relate to the subjective experience of these reinstantiated people? We keep coming back to the very real but ultimately unmeasurable issue of consciousness.

People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged. If there is one crucial insight to be gained regarding why the issue of consciousness is so contentious, it is the following: there exists no objective test that can absolutely determine its presence. Science is about objective measurement and the logical implications that follow from it, but the very nature of objectivity is that you cannot measure subjective experience – you can only measure correlates of it, such as behavior. It has to do with the very nature of the concepts "objective" and "subjective."

Fundamentally, we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it: i.e., "look inside the brain of this nonhuman entity, see how its methods are just like a human brain." Or, "see how its behavior is just like human behavior." But in the end, these remain just arguments. No matter how convincing the behavior of a reinstantiated person, some observers will refuse to accept the consciousness of an entity unless it squirts neurotransmitters, or is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.

We assume that other humans are conscious, but that is still an assumption, and there is no consensus amongst humans about the consciousness of nonhuman entities, such as other higher non-human animals. The issue will be even more contentious with regard to future nonbiological entities with human-like behavior and intelligence.

From a practical perspective, we’ll accept their claims. Keep in mind that nonbiological entities in the twenty-first century will be extremely intelligent, so they’ll be able to convince us that they are conscious. They’ll have all the subtle cues that convince us today that humans are conscious. And they’ll get mad if we don’t accept their claims.


How will we apply technology that is more intelligent than its creators? One might be tempted to respond "Carefully!" But let’s take a look at some examples.

Consider several examples of nanobot technology, which, based on miniaturization and cost-reduction trends, will be feasible within thirty years. In addition to scanning your brain, the nanobots will also be able to expand your brain.

Nanobot technology will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have technology that lets electronic devices communicate with neurons in both directions without direct physical contact. For example, scientists at the Max Planck Institute have developed "neuron transistors" that can detect the firing of a nearby neuron or, alternatively, cause a nearby neuron to fire. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer. Again, the only aspects of nanobot-based virtual reality that are not yet feasible are size and cost.

When we want to experience real reality, the nanobots just stay in position and do nothing. When we want to enter virtual reality, they suppress all of the inputs coming from the real senses and replace them with the signals that would be appropriate for the virtual environment. We would then cause our muscles and limbs to move as we normally would, but the nanobots again intercept these interneuronal signals, suppress our real limbs from moving, and instead cause our virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.
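The switching logic described above can be sketched in a few lines. This is purely illustrative: every name here (`route_signals`, `VirtualEnv`) is my own invention, a stand-in for behavior the essay only describes in prose.

```python
class VirtualEnv:
    """A hypothetical virtual environment that renders sensory channels
    and receives redirected motor intentions."""
    def __init__(self):
        self.body_log = []
    def render(self, channel):
        return f"virtual-{channel}"
    def move_virtual_body(self, motor_signals):
        self.body_log.append(dict(motor_signals))

def route_signals(sense_inputs, motor_outputs, virtual_env=None):
    """Real reality: pass every signal through untouched.
    Virtual reality: suppress real signals and substitute virtual ones."""
    if virtual_env is None:
        return sense_inputs, motor_outputs         # nanobots stay in position, do nothing
    virtual_senses = {ch: virtual_env.render(ch) for ch in sense_inputs}
    virtual_env.move_virtual_body(motor_outputs)   # virtual limbs move...
    held_still = {ch: 0.0 for ch in motor_outputs} # ...real limbs do not
    return virtual_senses, held_still

senses = {"vision": "photons", "hearing": "air pressure"}
motors = {"left_arm": 0.7}

assert route_signals(senses, motors) == (senses, motors)  # real reality unchanged
env = VirtualEnv()
v_senses, v_motors = route_signals(senses, motors, env)
assert v_senses["vision"] == "virtual-vision"
assert v_motors["left_arm"] == 0.0
assert env.body_log == [{"left_arm": 0.7}]
```

The design point the sketch captures is that the same interception layer serves both modes: doing nothing yields real reality, substituting signals yields virtual reality.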

The web will provide a panoply of virtual environments to explore. Some will be recreations of real places, others will be fanciful environments that have no "real" counterpart. Some indeed would be impossible in the physical world (perhaps, because they violate the laws of physics). We will be able to "go" to these virtual environments by ourselves, or we will meet other people there, both real people and simulated people. Of course, ultimately there won’t be a clear distinction between the two.

Nanobot technology will be able to expand our minds in virtually any imaginable way. Our brains today are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained. Brain implants based on massively distributed intelligent nanobots will ultimately expand our memories a trillion fold, and otherwise vastly improve all of our sensory, pattern recognition and cognitive abilities.

Using nanobots as brain extenders is a significant improvement over the idea of surgically installed neural implants, which are beginning to be used today. Nanobots will be introduced without surgery, essentially just by injecting or even swallowing them. They can all be directed to leave, so the process is easily reversible. They are programmable, in that they can provide virtual reality one minute, and a variety of brain extensions the next. They can change their configuration, and clearly can alter their software. Perhaps most importantly, they are massively distributed and therefore can take up billions or trillions of positions throughout the brain, whereas a surgically introduced neural implant needs to be placed in one or at most a few locations.


Technology has always been a double-edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other. We still live today with sufficient nuclear weapons (not all of which appear to be well accounted for) to end all mammalian life on the planet. The means and knowledge exist in a routine college bioengineering lab to create unfriendly pathogens more dangerous than nuclear weapons. In the twenty-first century, we will see the same two intertwined potentials: a great feast of creativity resulting from human intelligence expanded a trillion-fold, combined with many grave new dangers.

Consider unrestrained nanobot replication. Nanobot technology requires billions or trillions of such intelligent devices to be useful. The most cost effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world. And in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise.
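The arithmetic of the danger is worth making explicit. The toy model below is my own construction: doubling self-replication with a curtailment mechanism, and the runaway that a defect in that mechanism produces.

```python
def replicate(generations, cap=None):
    """Each nanobot copies itself once per generation; a working
    safety mechanism halts replication once `cap` devices exist.
    `cap=None` models a defective mechanism."""
    population = 1
    for _ in range(generations):
        if cap is not None and population >= cap:
            break                          # curtailment kicks in
        population *= 2
        if cap is not None:
            population = min(population, cap)
    return population

# A working cap scales up to a useful trillion devices, then stops.
assert replicate(60, cap=10**12) == 10**12
# With the mechanism defective, doubling never stops: after only
# sixty generations the count exceeds a billion billion.
assert replicate(60) == 2**60
```

Forty doublings suffice to reach a trillion, which is why self-replication is the cost-effective route to useful numbers, and also why the curtailment step is the single point whose failure threatens everything else.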

Other salient concerns include "who is controlling the nanobots?" and "who are the nanobots talking to?" Organizations (e.g., governments, religions, cultural organizations) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an individual or an entire population. These "spy" nanobots could then monitor, influence, and even control our thoughts and actions. In addition to introducing physical spy nanobots, existing nanobots could be influenced through software viruses and other hacking techniques.

My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do today. But there will be a valuable (and increasingly vocal) role for a concerned and constructive Luddite movement (i.e., anti-technologists inspired by the early-nineteenth-century weavers who destroyed labor-saving machinery in protest).


Once brain porting technology has been refined and fully developed, will this enable us to live forever? The answer depends on what we mean by living and dying. Consider what we do today with our personal computer files. When we change from one personal computer to a less obsolete model, we don’t throw all our files away; rather we copy them over to the new hardware. Thus the longevity of our personal computer software is entirely independent of the hardware it runs on. When it comes to our personal mind file, however, when our human hardware crashes, the software of our lives dies with it. This will not continue to be the case once we have the means to store and restore the thousands of trillions of bytes of information represented in our brains.

The longevity of one’s mind file will not be dependent, therefore, on the continued viability of any particular hardware medium. Ultimately software-based humans, albeit vastly extended beyond the severe limitations of humans as we know them today, will live on the web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, and bodies comprised of nanobot swarms.

A software-based human will be free, therefore, from the constraints of any particular thinking medium. Today, we are each confined to a mere hundred trillion connections, but humans at the end of the twenty-first century can grow their thinking and thoughts without limit. We may regard this as a form of immortality, although it is worth pointing out that data and information do not necessarily last forever. Although not dependent on the viability of the hardware it runs on, the longevity of information depends on its relevance, utility, and accessibility. If you’ve ever tried to retrieve information from an obsolete form of data storage in an old obscure format (e.g., a reel of magnetic tape from a 1970 minicomputer), you will understand the challenges in keeping software viable. However, if we are diligent in maintaining our mind file, keeping current backups, and porting to current formats and mediums, then immortality can be attained, at least for software-based humans.
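The maintenance discipline described above – keep backups and re-port the record before any format in the chain becomes unreadable – is the same one an archivist applies to ordinary files. A minimal sketch, with entirely hypothetical format names:

```python
def port(mind_file, target_format, converters):
    """Upgrade a stored record step by step until it reaches the
    current format. A missing converter raises KeyError: the data
    is stranded in an obsolete format, like that 1970 reel of tape."""
    while mind_file["format"] != target_format:
        upgrade = converters[mind_file["format"]]
        mind_file = upgrade(mind_file)
    return mind_file

# Hypothetical chain of formats: v1 -> v2 -> v3.
converters = {
    "v1": lambda m: {"format": "v2", "data": m["data"]},
    "v2": lambda m: {"format": "v3", "data": m["data"]},
}
archived = {"format": "v1", "data": "a lifetime of memories"}
restored = port(archived, "v3", converters)
assert restored == {"format": "v3", "data": "a lifetime of memories"}
```

The essay's point maps directly onto the sketch: immortality for software-based humans is not automatic but conditional on someone continually supplying the next converter in the chain.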

Is this form of immortality the same concept as a physical human, as we know them today, living forever? In one sense it is, because as I pointed out earlier, we are not a constant collection of matter. Only our pattern of matter and energy persists, and even that gradually changes. Similarly, it will be the pattern of a software human that persists and develops.

But is that person based on my mind file, who migrates across many computational substrates, and who outlives any particular thinking medium, really me? We come back to the same questions of consciousness and identity, issues that have been debated since the Platonic dialogues. As we go through the twenty-first century, these will not remain polite philosophical debates, but will be confronted as vital and practical issues.

Is death desirable? A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it, and indeed often consider its intrusion a tragic event. Yet we would find it hard to live without it. We consider death as giving meaning to our lives. It gives importance and value to time. We are concerned that time would become meaningless if there were too much of it.


But I regard the freeing of the human mind from its severe physical limitations of scope and duration as the necessary next step in evolution. Evolution, in my view, represents the purpose of life. That is, the purpose of life – and of our lives – is to evolve.

What does it mean to evolve? Evolution moves towards greater complexity, greater elegance, greater intelligence, greater beauty, greater creativity, greater love. And God has been called all these things, only without any limitation: infinite intelligence, infinite beauty, infinite creativity, and infinite love. Evolution does not achieve an infinite level, but as it explodes exponentially, it certainly moves in that direction. So evolution moves inexorably towards our conception of God. Thus the freeing of our thinking from the severe limitations of its biological form is an essential spiritual quest.

By the second half of this next century, there will be no clear distinction between human and machine intelligence. On the one hand, we will have biological brains vastly expanded through distributed nanobot-based implants. On the other, we will have fully nonbiological brains that are copies of human brains, albeit also vastly extended. And we will have a myriad of other varieties of intimate connection between human thinking and the technology it has fostered.

Ultimately, nonbiological intelligence will dominate because it is growing at a double exponential rate, whereas for all practical purposes biological intelligence is at a standstill. By the end of the twenty-first century, nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although still of human origin. It will continue to be the human-machine civilization taking the next step in evolution.
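"Double exponential" here means the rate of exponential growth is itself growing. A quick numerical illustration (my numbers, chosen only to show the shape of the curve, not the essay's forecasts):

```python
# A doubly exponential sequence: the exponent itself doubles each step.
nonbio = [2 ** (2 ** k) for k in range(1, 7)]   # 4, 16, 256, 65536, 2**32, 2**64

# Biological capacity, by contrast, is effectively a constant
# (the essay's figure: about 20 million billion calc/sec).
bio_capacity = 2 * 10**16

assert nonbio[-1] > bio_capacity                # 2**64 already exceeds it

# The ratios between successive terms themselves explode -- growth
# of the growth rate, which no fixed capacity can keep pace with.
ratios = [b // a for a, b in zip(nonbio, nonbio[1:])]
assert ratios == sorted(ratios) and ratios[0] < ratios[-1]
```

Against any such curve, a fixed quantity is crossed and then left behind by ever-widening margins, which is the sense in which biological intelligence is "at a standstill."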

Before the next century is over, the Earth’s technology-creating species will merge with its computational technology. After all, what is the difference between a human brain enhanced a trillion fold by nanobot-based implants, and a computer whose design is based on high resolution scans of the human brain, and then extended a trillion-fold?

Most forecasts of the future seem to ignore the revolutionary impact of the inevitable emergence of computers that match and ultimately vastly exceed the capabilities of the human brain, a development that will be no less important than the evolution of human intelligence itself some thousands of centuries ago.