In “The Seven Deadly Sins of AI Predictions,” Rodney Brooks argues that we ought to push back against mistaken predictions about artificial intelligence. Optimism about AI's capabilities has led simultaneously to utopian visions of a workless future and to fears of an AI that might destroy us. Brooks's main point is that we should stop falling for the hype.
First, if we don’t know what something will be able to do, we will have a hard time knowing what it won’t be able to do:
If something is magic, it is hard to know its limitations…This is a problem we all have with imagined future technology. If it is far enough away from the technology we have and understand today, then we do not know its limitations. And if it becomes indistinguishable from magic, anything one says about it is no longer falsifiable… [But] nothing in the universe is without limit. Watch out for arguments about future technology that is magical. Such an argument can never be refuted. It is a faith-based argument, not a scientific argument.
Second, Brooks distinguishes between performance and competence. When a person performs a task well, we naturally assume that person has a broader set of accompanying competences. When it comes to assessing a computer, that assumption breaks down:
…suppose a person tells us that a particular photo shows people playing Frisbee in the park. We naturally assume that this person can answer questions like What is the shape of a Frisbee? Roughly how far can a person throw a Frisbee? Can a person eat a Frisbee? Roughly how many people play Frisbee at once? Can a three-month-old person play Frisbee? Is today’s weather suitable for playing Frisbee? …Computers that can label images like “people playing Frisbee in a park” have no chance of answering those questions. Besides the fact that they can only label more images and cannot answer questions at all, they have no idea what a person is, that parks are usually outside, that people have ages, that weather is anything more than how it makes a photo look, etc.
Third, Brooks points out that words we use for human abilities, such as “learning” and “playing,” do not mean the same thing when applied to advances in AI:
When people hear that a computer can beat the world chess champion (in 1997) or one of the world’s best Go players (in 2016), they tend to think that it is “playing” the game just as a human would. Of course, in reality those programs had no idea what a game actually was, or even that they were playing. They were also much less adaptable. When humans play a game, a small change in rules does not throw them off. Not so for AlphaGo or Deep Blue.
Fourth, the assumption that computing power keeps improving exponentially is mistaken. If the iPod's memory had kept doubling every year (the arithmetic is sketched below),
we would expect a $400 iPod to have 160,000 gigabytes of memory. But the top iPhone of today (which costs much more than $400) has only 256 gigabytes of memory, less than double the capacity of the 2007 iPod. This particular exponential collapsed very suddenly once the amount of memory got to the point where it was big enough to hold any reasonable person’s music library and apps, photos, and videos. Exponentials can collapse when a physical limit is hit, or when there is no more economic rationale to continue them.
Similarly, we have seen a sudden increase in performance of AI systems thanks to the success of deep learning. Many people seem to think that means we will continue to see AI performance increase by equal multiples on a regular basis. But the deep-learning success was 30 years in the making, and it was an isolated event.
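To make the point about exponentialism concrete, here is a quick back-of-the-envelope sketch (mine, not from the essay): starting from the 2007 iPod's 160 gigabytes and naively assuming the capacity keeps doubling every year, you land near the 160,000-gigabyte figure Brooks cites, far beyond what any 2017 device actually offered.

    # Illustrative only: extrapolate iPod storage under the naive assumption
    # that capacity keeps doubling every year (the "exponentialism" Brooks criticizes).
    capacity_gb = 160            # top-of-the-line iPod storage in 2007 (from the essay)
    for year in range(2008, 2018):
        capacity_gb *= 2         # assume capacity doubles each year
    print(capacity_gb)           # 163840 GB -- roughly the 160,000 GB Brooks mentions

    # Reality in 2017: the largest iPhone held 256 GB, because the trend collapsed
    # once devices could hold most people's music, apps, photos, and videos.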
Fifth, Hollywood has perpetuated the myth of sudden, unexpected change, in which a powerful AI abruptly turns against the human species. Brooks points out that technological change is seldom that rapid.
Long before there are evil super-intelligences that want to get rid of us, there will be somewhat less intelligent, less belligerent machines. Before that, there will be really grumpy machines. Before that, quite annoying machines. And before them, arrogant, unpleasant machines. We will change our world along the way, adjusting both the environment for new technologies and the new technologies themselves. I am not saying there may not be challenges. I am saying that they will not be sudden and unexpected, as many people think.
Finally, new hardware is deployed in the world far more slowly than we imagine. Brooks provides several examples of industries that still depend on decades-old hardware. For example, he points out that the US Air Force still relies on planes built in 1961 that are expected to remain in service until at least 2040.
A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products. Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.