Apocalyptomania – Why We Should Not Fear an AI Apocalypse
Is the singularity close? Professor Don Howard argues there’s nothing to worry about.
This article originally appeared on Science Matters.
Elon Musk is wrong. In a speech to the 2017 annual meeting of the National Governors Association, Musk warned that artificial intelligence (AI) constitutes an “existential risk” to humankind. Musk has sounded this alarm before, as have other figures, including Stephen Hawking and Bill Gates. The argument is rarely spelled out in detail, but the basic idea is that, as futurist Ray Kurzweil has long been predicting, there will soon come a time, the “singularity,” when AI will outstrip human intelligence, and that, when that happens, super-smart AI will decide that humans are to be treated as pets, or that humans are expendable, or, in the worst-case scenario, that humans represent a threat that must be exterminated. Musk argues that, given the magnitude of the risk, we must begin now to regulate the development of AI in ways that will guarantee human control.
That we should be thoughtful and prudent about how we develop and deploy AI is not controversial. But, ironically, Musk’s alarmist pronouncements make that task harder rather than easier. Let me explain why.
Start with this. Technology forecasting is a wicked hard problem. Our track record in predicting how technology will change human life is wretched. No one foresaw, in 1985, how the internet would radically transform our culture, our economy, or our political system. A famous 1937 report on “Technology Trends and National Policy,” commissioned by President Roosevelt, failed to anticipate nuclear energy, radar, antibiotics, jet aircraft, rocketry, space exploration, computers, microelectronics, and genetic engineering, even though the scientific and technical bases for nearly all of these developments were already in place in 1937.
This article is part of our Professor’s Perspective series—a place for experts to share their views and opinions on current events.
If, instead, we look for guidance at how AI is currently being developed by folks like Musk himself, or rather by his company, Tesla, what stands out is that, while rapid progress is being made, it is all in the form of domain-specific AI. Tesla and Waymo are engineering ever better AI to control self-driving cars. Google is developing AI for machine translation. IBM is designing AI for medical diagnostics. And Google’s DeepMind has built AI that can beat grandmasters at the game of Go. No one is trying to build an all-encompassing, universal AI, however much the fear mongers fantasize about such a future. Why not? For the simple reason that domain-specific AI is what the market demands and what the prudent investment of research dollars dictates. Isn’t it more reasonable to extrapolate this trendline?
Of course, AI will get better and better, but I see no reason to think that future AI won’t also be tailored to specific tasks. There is no efficiency or cost advantage to developing universal AI. God or evolution might have engineered human intelligence in a general form. But the fact that most humans are really bad at performing most tasks – like driving cars, or playing the piano, or slam-dunking a basketball – suggests that specialized intelligence is almost always the better way to go.
Finally, an obsession with a fictional AI apocalypse frustrates rational thinking about our technological future, for two reasons. First, if we assign infinite negative value – existential risk, the extermination of all human life – to an imagined future, however slight the probability of that future, then that infinite risk swamps all other considerations in a rational assessment of risk and benefit.
No matter what the promised benefits if things turn out well, we should not move forward if the risk, however unlikely, is the total annihilation of humankind. But that’s an absurd way to think about the future because every innovation carries with it a tiny, tiny risk of some as yet unimagined cataclysmic consequences. [I have explored the errors of this kind of reasoning about the apocalypse in another blog post on risk analysis and in an editorial on the influenza virus gain-of-function debate.]
Second, as we obsess about an AI apocalypse, we are distracted from the much more important near-term ethical and policy challenges of more sophisticated AI, whether in the domain of autonomous weapons, predictive policing, technological unemployment, or intrusive and pervasive technologies of surveillance. Our intelligence and moral energies are far better spent in grappling with such real, present-day problems with AI.
One is reminded of the fable of Chicken Little, who, when an acorn falls on his head, immediately concludes that the sky is falling. Chicken Little persuades Henny Penny and Ducky Lucky that apocalypse is near. Along comes Foxey Loxey, who offers them all shelter from impending doom in his lair. And then he eats them all alive.
For more with Professor Howard, check out “Albert Einstein: Physicist, Philosopher, Humanitarian” on The Great Courses Plus!