Published June 7, 2016
Twitter has been aflame with a pronouncement from Elon Musk. According to the visionary entrepreneur, the odds are very high that we’re all living in a version of the Matrix.
It’s an old chestnut of computer science speculation. The logic goes something like this: If it’s possible to create consciousness out of a computer, someone, either human or alien, will have done it. If they’ve done it, they’ve probably done it a number of times (since, after all, computers can run arbitrarily many programs). If that’s true, then the number of consciousnesses that are simulated is probably much higher — even orders of magnitude higher — than the number of consciousnesses that aren’t. Therefore, the odds that any given consciousness — you, me, Elon Musk — is simulated are much higher than the odds that it’s not.
It’s a funny thought experiment. But it’s also an interesting peek into the worldview of one of the world’s most influential voices on the fast-advancing field of artificial intelligence. You may know Musk for his talent for building rocket ships and electric cars, but he’s also famously warned about the dangers of artificial intelligence run amok, and has been a leader of a consortium that is committing a billion dollars to building artificial intelligence without the Skynet parts.
Which is why, if Musk wasn’t joking — and there’s little indication he was — we should note that this idea is completely nonsensical. It reveals a deep ignorance about the mind, consciousness and, yes, computers, which is strange for someone recognized as a prophet and an expert on the issue.
The problem lies with the first premise: that it’s possible to generate consciousness out of software. It’s an idea that is widely shared in the tech industry. Last week, I was invited to speak about robots and AI at Brain Bar Budapest, a sort of “South By Southwest on the Danube,” a tech conference and music festival combination that attracted luminaries from the U.S. and Europe alike. The topic that excited everyone the most, both on and off the stage, was when we’d be able to upload our brains into a computer. This question, like the we-all-live-in-the-Matrix idea, relies on the notion that the mind is essentially a piece of very sophisticated software, something that might have been useful as a metaphor some decades ago — until people forgot it was a metaphor.
The fact is, brains and computers are fundamentally different, in ways that are simply unbridgeable.
The idea of uploading your brain to a computer relies on the premise that consciousness and software are the same. The problem is that while you might compare them metaphorically, they are actually categorically different, and for a very simple reason: A piece of software, technically speaking, is simply a list of zeroes and ones. Your brain contains no zeroes and ones, and your mind is not made up of zeroes and ones.
We already build software that can perform many tasks as well as human beings can, and we will build much more of it, but that doesn’t mean the software is conscious, or even “intelligent” in the human sense of the term. Even if you could write software that was an enormously lifelike simulation of a mind, that piece of software would not have self-consciousness the way you or I do.
Famously, when the Lumière brothers gave the first public screening of their invention of the cinema, showing a train arriving at the station, the entire audience fled the theater in terror, thinking they were about to get crushed by a real train. There was no train. A movie or video game character can be enormously lifelike, even lifelike enough to trick us, but it’s still not a conscious actor.
In a valuable essay in the online magazine Aeon, the psychologist Robert Epstein points out that minds and computers simply work in fundamentally different ways. A mind does not store information in a memory bank, or use algorithms. You can use metaphors to analogize them, sure, but a metaphor is just a metaphor. Epstein’s essay is also valuable for its historical survey, which shows that for centuries people trying to understand how the mind works have been analogizing it to the hip technology of the day.
This has important implications. You can build software that can beat every human at chess, and even at Go. But those computers do not actually “play chess” the way humans do. They do not analyze the situation and formulate strategies the way we do. Instead, they rely on raw computing power, searching and scoring enormous numbers of positions and correlating board states with winning moves. This is not a “knock” on those programs, which are enormously impressive human achievements. It is simply to say that they absolutely do not work the way human minds do and that, at least under the current computing paradigm, they never will.
Does this mean we will never get “AI”? Well, first, rest easy: You’re not living in a simulation. Second, it means that whatever form “AI” takes, it will be something very different from our own minds, and will not look at all like what we expect. And third, Musk’s Twitter-friendly speculation suggests that our AI gurus actually know a lot less about what they’re doing than they think they do. Which, depending on your perspective, is either reassuring or terrifying.
Pascal-Emmanuel Gobry is a fellow at the Ethics and Public Policy Center.