Humans are said to be unique in virtue of our beliefs, desires, and language use. Or so we thought. That was until we envisaged the possibility of machines doing what we do and doing it better. A.I. paranoia has fed Hollywood and conspiracy theorists in equal measure, but Alex Rosenberg argues that the problem with A.I. fantasies is our presumption that beliefs and desires can be replicated. They can’t and that’s because they don’t exist, not even in humans. Thus, the threat to human distinctiveness comes not from the possibility of A.I. but the possibility that what we think makes us unique is an illusion:
…we are convinced we have something AI will always lack: We are agents in the world, whose decisions, choices, actions are made meaningful by the content of the belief/desire pairings that bring them about. But what if the theory of mind that underwrites our distinctiveness is built on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be.
Rosenberg cites research by Eric Kandel, John O’Keefe, and May-Britt and Edvard Moser, who concluded that what we call beliefs and desires have no content at all. O’Keefe and the Mosers could tell what location a rat was in by looking at its neuronal firings. Since rats don’t have thoughts that represent the world to the mind, the research concluded that rats don’t need beliefs and desires in order to act. Rosenberg suggests that the brains of rats and humans are so similar in structure that we should conclude that we don’t need beliefs and desires to explain our behavior either. The fact that we think we have beliefs is better explained by our evolved capacity to attribute beliefs and desires to actions. But that is merely an illusion. According to Rosenberg, folk psychology is a theory of mind that explains human behavior: we think we do things because we desire things and have beliefs. Instead, on Rosenberg’s view, humans have an innate capacity to ‘mind-read’ other people by observing their behavior. We think of other people as believing that turning up the heat will warm them up, but there is really no such thing as a belief.
The argument is as follows: rat brains don’t have intentionality, but rats behave as if they do. Human brains are similar in structure to rat brains. Thus, human brains probably don’t have intentionality either, though we behave as if they do. The explanation for the fact that we think our brains have intentionality is that we have an innate mechanism whereby we ascribe intentionality to other brains in order to interpret other human beings’ behavior.
But is rat brain structure similar enough to human brain structure to conclude that we too are belief free? Let’s say it is. What would follow? Well, not the conclusion. Similarity in structure between two things does not determine their other properties. The humble valve has multiple variations while maintaining a similar structure; just small changes to it yield remarkably different properties. ‘Similar structure’ is too vague a concept to determine that because rats can’t think, humans can’t either.
But perhaps Rosenberg’s view comes down to the power of his explanation. Perhaps Rosenberg’s question is: What is the simplest explanation of mental stuff in terms of physical stuff? The answer is simple: all that mental stuff, beliefs and desires, is not really part of the world at all. We think we have such things, but we are simply wrong.
But it is not as if we are looking for an explanation for why we behave the way we do and landing upon beliefs and desires as the best explanation. Instead, we know directly that we have such things as beliefs and desires. If we have direct acquaintance with something, then the evidence for its being there is not that it explains something else. If I see that my money is missing, then I might have a theory about how it went missing (a thief took it, I left it in my car, etc.), or I might see a thief take it. If I see the thief take it, I don’t have a theory to explain how my money went missing. I have direct evidence for how my money went missing. Likewise, I don’t need a theory to explain my behavior if I am directly aware that I behave in a certain way because I believe or desire x, y, z.
In order to defeat such direct evidence, one needs something stronger than a competing theory. If the common view were itself a theory, then a competing theory might better explain the phenomenon. But it isn’t, and so merely providing an explanation that does not include beliefs and desires does not refute the common view. Thus, Rosenberg’s claim that the common view is “quite as much of a dead end as Ptolemaic astronomy” is not true.
On another note, scientific discoveries always risk being a ‘threat’ to human life, but I don’t think they become a threat to human uniqueness. We won’t wake up to discover that A.I. has somehow superseded human thought. My confidence about this is rooted in the simple claim that what makes a human unique is not merely our capacity for rational thought, but our likeness to the one who made us. A.I. might imitate us, but it can’t imitate Him, no matter how much RAM we give it. Thus, our uniqueness as human beings is not up for grabs, either because we make a machine that does what we do or because we discover something that suggests we are not as unique as we once thought.
In sum, I don’t believe Rosenberg. But why should that matter to him? No one believes anything about anything.