Drones are everywhere: not necessarily in the skies above our heads just yet, but certainly in the news media, in informal discussions around the office, and front and center in the national consciousness. Until recently, these conversations were largely about battlefield drones, or unmanned aerial vehicles (UAVs), as the military calls them. But that conversation is about to change, and it is going to change because of recent efforts by the United States Department of Defense to develop autonomous weapon systems: drones that are no longer tethered to a human operator but are designed to make life-and-death decisions on their own.

Designing machines for autonomous operation is clearly useful and expedient. The fact that your Roomba can clean the floor without your direct involvement is obviously appealing. But machine autonomy also has a dark side, vividly illustrated in science fiction. Although science fiction exaggerates for dramatic effect, the basic questions raised by these techno-myths already apply to contemporary technology: How much autonomy should we design into these systems? How reliable are machine-generated decisions? Can we, or should we, count on them? And if something does go wrong, who or what is responsible for the error? In other words, who or what is culpable when decision making and real-world action are no longer under human direction and control?
Responding to these questions will require not just the efforts of engineers and roboticists but also those of philosophers, sociologists, legal scholars, policy experts, and informed citizens. What is most important right now is that we begin having these conversations and debates.
Autonomous battlefield drones are no longer science fiction; they are here now. And we have a unique opportunity to decide how, and even whether, to deploy them.