“The fact that we cannot expect ever to accommodate in our language a detailed description of Martian or bat phenomenology should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own.” –Thomas Nagel
What is it like to be a smartphone? In all the chatter about the future of artificial intelligence, the question has been glossed over or, worse, treated as settled. The longstanding assumption, a reflection of the anthropomorphic romanticism of computer scientists, science fiction writers, and internet entrepreneurs, has been that a self-aware computer would have a mind, and hence a consciousness, similar to our own. We, supreme programmers, would create machine consciousness in our own image.
The assumption is absurd, and not just because the sources and workings of our own consciousness remain unknown to us and hence unavailable as models for coders and engineers. Consciousness is entwined with being, and being with body, and a computer’s body and (speculatively) being have nothing in common with our own. A far more reasonable assumption is that the consciousness of a computer, should it arise, would be completely different from the consciousness of a human being. It would be so different that we probably wouldn’t even recognize it as a consciousness.
As the philosopher Thomas Nagel observed in “What Is It Like to Be a Bat?,” his classic 1974 article, we humans are unable to inhabit the consciousness of any other animal. We can’t know the “subjective character” of other animals’ experience any more than they can understand ours. We are, however, able to see that, excepting the simplest of life forms, an animal has a consciousness — or at least a beingness. The animal, we understand, is a living thing with a mind, a sensorium, a nature. We know it feels like something to be that animal, even though we can’t know what that something is.
We understand this about other animals because we share with them a genetic heritage. Because they are products of the same evolutionary process that gave rise to us and because their bodies and brains have the same essential biology, the same material substrate, as our own, they resemble us in both their physical characteristics and their behavior. It would be impossible, given this obvious likeness, to see them as anything other than living beings.
There would be no such shared heritage or shared substrate, no such likeness, between ourselves and any artificial intelligence that may spring into being through the workings of a computer or a network of computers. Our relationship to an AI, and its to us, would be characterized by radical unlikeness. Confronted with an AI, we would not only be unable to inhabit its consciousness or otherwise sense the character of its being; we would be unable to recognize that it even has a consciousness or a being. It would remain, in our perception, an inanimate thing that we have constructed.
But, you might ask, wouldn’t its being be an emanation of its programming? That might be true to some extent — though who can say where being comes from? — but even so, the programming would be of no help in understanding the character of a computer’s being. You would not be able to know what it’s like to be an AI by examining the 1s and 0s of its machine code any more than you’d be able to understand your own being by examining the As, Cs, Gs, and Ts of your genetic code. A conscious computer would likely be unaware of the routines of its software — just as we’re unaware of how our DNA shapes our body and being or even of the myriad signals that zip through our nervous system every moment. An intelligent computer may perform all sorts of practical functions, including taking our inputs and supplying us with outputs, without having any awareness that it is performing those functions. Its being may lie entirely elsewhere.
The Turing test, in all its variations, would also be useless in identifying an AI. It merely tests for a machine’s ability to feign likeness with ourselves. It provides no insight into the AI’s being, which, again, could be entirely separate from its ability to trick us into sensing it is like us. The Turing test tells us about our own skills; it says nothing about the character of the artificial being.
All of this raises another possibility. It may be that we are already surrounded by AIs but have no idea that they exist. Their beingness is invisible to us, just as ours is to them. We are both objects in the same place, but as beings we inhabit different universes. Our smartphones may right now be having, to borrow Nagel’s words, “experiences fully comparable in richness of detail to our own.”
Look at your phone. You see a mere tool, there to do your bidding, and perhaps that’s the way your phone sees you, the dutiful but otherwise unremarkable robot that from time to time plugs it into an electrical socket.