Apocalyptic rhetoric about AI distracts from more immediate, pressing concerns

December 9, 2014

by Professor David Gunkel

Stephen Hawking recently told the BBC that “the development of full artificial intelligence (AI) could spell the end of the human race.”

And Hawking is not alone in this apocalyptic prediction. A similar warning was issued earlier this year by Elon Musk, the visionary innovator behind Tesla Motors and CEO of SpaceX, who recently told students at MIT that AI represents what could be the “biggest existential threat” to the human species. For Musk, who has invested a significant amount of his own money in AI start-ups, the development of machine intelligence is not just playing with fire but “summoning the demon.”

Unfortunately, the real problem here is not with the rather disturbing predictions of Hawking and Musk, but with the apocalyptic tone and mode of thinking they employ. In other words, the problem is not the impending “robot apocalypse” that has been a staple of science fiction since the middle of the twentieth century, but the fact that we—and especially some of our leading scientists and technology innovators—understand and pose the challenge of AI in these rather stark and extreme terms.

If what Hawking and Musk want to do is awaken us to the need to think about our technology and its social impact, then these statements might be a useful form of motivation. My concern, however, is that the apocalyptic rhetoric they employ could distract us from confronting the important critical questions that need to be asked and the debates we should be having right now.

Questions such as:

  • “What position do we want intelligent and socially interactive machines to occupy in our world?”
  • “Should they be regarded as mere instruments of human desire and action?”
  • “Should they have some kind of independent social status of their own?”
  • “Will they or should they be understood as another kind of moral and legal person, as we currently regard other artificial entities like corporations?”

Fear sells. It is sensational, and it makes for dramatic headlines. But it is not necessarily the most responsible way to deal with and respond to the unique challenges of increasingly intelligent and autonomous machines. Clearly AI, social robotics, big data, the Internet of Things, etc. will have both benefits and costs—for individuals, organizations, and the human species.

And there is an invasion of sorts currently underway. The machines are coming. In fact, they are already here. But this “invasion” will not conform to narrative expectations inherited from science fiction. It will not take the form of a dramatic and fundamental existential threat to which the only possible response is either survival or eradication. The important questions are far more complex and mundane. My fear, therefore, is quite different from Hawking’s. What I fear is not the AI apocalypse, but the kind of fear-mongering that is more a distraction from engaged and informed thinking than a useful critical perspective and response.

David Gunkel is an NIU Distinguished Teaching Professor in the Department of Communication. His research examines the philosophical assumptions and ethical consequences of information and communication technology (ICT), and he teaches courses in ICT, cyberculture, and web design and programming. He is also the author of the book “The Machine Question: Critical Perspectives on AI, Robots and Ethics” (MIT Press, 2012).