Emulation, Simulation, and the Human Brain
On this week’s episode of the EconTalk podcast, Russ Roberts invited Robin Hanson onto the show to discuss his theory of the technological singularity. In a nutshell, Hanson believes that in the next few decades, humans will develop the technologies necessary to scan and “port” the human brain to computer hardware, creating a world in which you can create a new simulated copy of yourself for the cost of a new computer. He argues, plausibly, that if this were to occur it would have massive effects on the world economy, dramatically increasing economic growth rates.
But the prediction isn’t remotely plausible. There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems.
First, a quick note on terminology. Hanson talks about “porting” the human brain, but he’s not using the term correctly. Porting is the process of taking software designed for one platform (say Windows) and modifying it to work with another (say Mac OS X). You can only port software you understand in some detail. The word Hanson is looking for is “emulation.” That’s the process of creating a “virtual machine” running inside another (usually physical) machine. There are, for example, popular video game emulators that allow you to play old console games on your new computer. The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What he means is that we’d emulate the human brain on a digital computer.
But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.
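To make this concrete, here’s a minimal sketch of what emulation actually involves. The three-instruction “machine” below is invented for illustration, but the principle is the same one real emulators rely on: the spec defines everything that matters, so a program that re-implements the spec reproduces the machine’s behavior exactly, with no need to model voltages, heat, or transistor physics.

```python
def emulate(program):
    """Run a program for a hypothetical toy machine defined entirely by spec.

    The entire specification:
      ("LOAD", n)  - put n in the accumulator
      ("ADD", n)   - add n to the accumulator
      ("HALT",)    - stop and return the accumulator
    """
    acc = 0
    for instr in program:
        op = instr[0]
        if op == "LOAD":
            acc = instr[1]
        elif op == "ADD":
            acc += instr[1]
        elif op == "HALT":
            return acc
    return acc

# Because the spec says nothing about the physical details of the
# hardware, the emulator can ignore them entirely and still be
# exactly right: every correct emulator of this spec agrees.
result = emulate([("LOAD", 2), ("ADD", 3), ("HALT",)])
```

The point is that the emulator’s correctness is defined relative to the spec, not relative to any physical measurement of the original device. That’s the luxury a designed system affords, and it’s exactly what a natural system lacks.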
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.
Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to render long-range forecasts useless.
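The snowballing effect is easy to demonstrate with a toy example. The logistic map below is a standard textbook model of chaotic dynamics, not a model of weather or of nerves; the point is only to show how a perturbation of one part in a billion, standing in for a tiny modeling error, grows until the two trajectories bear no resemblance to each other.

```python
def logistic_trajectory(x, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system,
    and return the sequence of states."""
    traj = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

# Two simulations that differ by one part in a billion at the start --
# a stand-in for an unavoidable micro-level modeling error.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)

# Early on, the trajectories agree closely; within a few dozen steps
# the error has grown by many orders of magnitude and the two runs
# diverge completely.
max_gap = max(abs(p - q) for p, q in zip(a, b))
```

An emulator run twice from the same starting state produces identical output forever; a simulation of a chaotic natural system amplifies any imperfection in its model until the output is worthless as a prediction.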
Scientists have been trying to simulate the weather for decades, but the vast improvements in computing power over that period have produced only modest improvements in our ability to predict it. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.