We use various forms of robots in countless and ever-increasing ways. Much attention has been focused on what robots can do for us. In his new book, “Robot Rights,” Northern Illinois University’s David Gunkel turns the tables by asking provocative questions about the accountability and even social standing of robots in the future.
Gunkel is a professor in NIU’s Department of Communication who specializes in the study of information and communication technology with a focus on ethics. In advance of the Sept. 7 release date for “Robot Rights,” he spoke recently with the NIU Newsroom.
Why did you write “Robot Rights”?
We are, it seems, in the middle of a robot apocalypse. The machines are now everywhere and doing virtually everything. We chat with them online, we collaborate with them at work and we rely on their capabilities to manage many aspects of our increasingly complex, data-driven lives. Consequently, the “robot invasion” is not some future catastrophe that will transpire as we have imagined it in our science fiction, with a marauding army of rogue androids taking up arms against the human population. It is an already occurring event, with machines of various configurations and capabilities coming to take up positions in our world through a slow but steady incursion. It looks less like “Westworld” and more like the fall of Rome.
As these various mechanisms take up increasingly influential positions in contemporary culture—positions where they are not necessarily just tools but somewhat interactive social entities in their own right—we will need to ask ourselves interesting and difficult questions. At what point might a robot, an algorithm or another autonomous system be held accountable for the decisions it makes or the actions it initiates? When, if ever, would it make sense to say, “It’s the robot’s fault”? Conversely, when might a robot, an intelligent artifact or other socially interactive mechanism be due some level of social standing or respect? When, in other words, would it no longer be considered nonsense to inquire about the rights of robots? I wrote this book to begin a conversation about this question and to get some traction on providing a workable solution.
Who is the intended audience?
The book is intended for those individuals and communities involved with emerging technology. This includes not only the people and organizations involved in designing, building and marketing AI applications and robots—things like self-driving cars, deep-learning algorithms, social robots, etc.—but also those concerned with the social impact and consequences of releasing these increasingly autonomous machines into the world. We are in the process of re-engineering social reality as we know it. The book speaks to the opportunities, challenges and even fears that we now face and are going to be facing in the next several decades.
What are you hoping the book will accomplish?
Simply put, a reality check. When it comes to robots and the question of their social status, there is a spectrum of responses bounded by two extreme positions. On one end of the spectrum, there are those individuals who find the very question of robot rights to be simply preposterous. On the other end, there are those who think robots and other forms of AI—especially human-like androids—will need to have rights on par with or substantially similar to what is extended to human beings. Both sides get it wrong. The one side is too conservative, while the other is too liberal. The book simply tries to formulate a more realistic and practical way to address the opportunities and challenges of robots and AI in our world.
Many people would say a robot is just a machine—property, not a person. How do you respond?
This is a reasonable and intuitive response, mainly because it is rooted in some rather old and well-established assumptions. Currently, in both law and ethics, we divide up the world into two kinds of entities—persons and things. We have moral and legal obligations to other persons; they can be benefited or harmed by our decisions and actions. But there are no such obligations to things; they are property that can be used, misused and even abused as we see fit. A robot, like any other manufactured artifact or piece of technology, would appear to be just another thing. Case closed, period.
But not so fast. “Person” is a socially constructed moral and legal category that applies to a wide range of different kinds of entities, not just human individuals. In fact, we already live in a world overrun by artificial entities that have the rights (and the responsibilities) of a person—the limited liability corporation. IBM, Amazon, Microsoft and McDonald’s are all legal persons with rights similar to what you and I are granted under the law—the right to free speech, the right to defend ourselves from accusations, the right to religion, etc. If IBM is a legally recognized person with rights, it is possible that Watson—IBM’s AI—might also qualify for the same kind of status and protections. This is precisely what the book seeks to examine and investigate—whether this is possible, necessary and/or expedient.
Where does the term “robot” come from?
The word “robot” came into the world by way of Karel Čapek’s 1920 stage play, “R.U.R.” or “Rossum’s Universal Robots,” a drama that set the stage for both science fiction and science fact. In Čapek’s native Czech language, as in several other Slavic languages, the word robota (or some variation thereof) denotes “servitude or forced labor.” Consequently, the question concerning “robot rights” is something that is rooted in the very origin of the word, and the book grapples with this origin story and its repercussions.
Do you ultimately make a case for or against establishing some type of rights for robots?
The book is not a manifesto that ends with a call to arms for robots to have rights. What we need most at this point in time is a realistic assessment of the social opportunities and challenges that increasingly autonomous systems—like AI and robots—introduce into our world. These artifacts already occupy a weird position that strains against existing categorical distinctions. They are not persons like you and me, even if we are designing and developing artifacts like Sophia, the Hanson Robotics robot that emulates both the appearance and behavior of human beings and that was recently granted honorary citizenship by the Kingdom of Saudi Arabia. At the same time, these things are not just things that we feel free to use and abuse without further consideration. The soldiers who work with EOD (explosive ordnance disposal) robots in the field name their robots, award them medals and promotions for valor, and even risk their own lives to protect the robot from harm. Something is happening that is slowly but surely changing the rules of the game. The book is an attempt to diagnose the opportunities and challenges that we now confront in the face—or the faceplate—of the robot.
The intimacy that we now share with our technology is altering both us and our tools. The book is as much about how we deal with the changing nature of technology as it is about how technology is involved in changing and evolving us. No matter how simple or sophisticated it is, technology is a mirror in which we can see a reflection of ourselves. The book, therefore, develops its argument along two different vectors. On the one hand, it attempts to use the traditions of moral philosophy and legal theory to investigate the social meaning and status of increasingly autonomous and sociable technology. On the other hand, it uses the confrontation with this technology as a way to critically examine some of our most deep-seated assumptions about moral conduct and legal status.
At the end of your book, you propose a different way to conceptualize the social situation of robots. Can you elaborate?
I introduce and prototype a new way to conceptualize the social position and status of technological artifacts. This way of thinking has been called “the relational turn,” which is a term I borrow from my friend and colleague Mark Coeckelbergh at the University of Vienna. Simply put, the relational turn flips the script on the usual way of deciding who (or what) is a legitimate moral and legal subject. Typically we make these decisions based on what something is—whether an entity is conscious, intelligent or can feel pain. In this transaction, what something is determines how it comes to be treated. Many of our moral and legal systems proceed and are organized in this fashion.
The relational turn puts the how before the what. As we encounter and interact with others—whether they are humans, animals, the natural environment or robots—these other entities are first and foremost situated in relationship to us. Consequently, the question of social and moral status does not necessarily depend on what the other is, but on how she/he/it stands before us and how we decide, in “the face of the other,” to respond. Importantly, this alternative is not offered as the ultimate solution or as a kind of “moral theory of everything.” It is put forward as a counterweight, a way to open up new ways of thinking about our moral and legal obligations to others.
Northern Illinois University is a student-centered, nationally recognized public research university, with expertise that benefits its region and spans the globe in a wide variety of fields, including the sciences, humanities, arts, business, engineering, education, health and law. Through its main campus in DeKalb, Illinois, and education centers for students and working professionals in Chicago, Hoffman Estates, Naperville, Oregon and Rockford, NIU offers more than 100 courses of study while serving a diverse and international student body.
Media Contact: Tom Parisi