We find ourselves in a reality where robot ethics, or Roboethics, is becoming an ever-pressing matter. Robot designers are using artificial intelligence techniques to create robots capable of learning and adapting to dynamic environments, and of meeting the demands of rich human-robot interaction. Robots are beginning to surround us and will soon become ubiquitous, yet very little has been done to adapt our society and laws accordingly.
A robot is a built system: a machine whose abilities and features are dictated by its architecture and the programming it has been given. It can sense the environment it is in and process information about it. Robots are perceptive, can reason with data, and can even plan future actions. Most are goal-oriented and adapt to their environment in some way.
In recent years, robots have been exploited for their versatility, being used in space travel, deep-sea exploration, war-zone navigation and even as home assistants or companions. As robots move away from simple industrial tasks into roles that deal directly with humans, a discipline called Human-Robot Interaction (HRI) has emerged to address the issues this new relationship raises. In industry, robots are usually kept far from human employees at all times, with little to no direct interaction unless the robot is inactive. However, as robots make their way further into our everyday lives, HRI issues are becoming more and more common, and the safety of people interacting with robots is being brought into question.
It is crucial, however, that we do not fear advancements in robotics research. Already we can see that public opinion is swayed by the media, with questions like “could a robot be dangerous to mankind?” appearing in film, literature and television. A clear division already exists between public perception and what is currently feasible in robotics research.
What we must do is identify the need for Roboethics, and the best way to deal with the associated challenges. But how could we define ethics with regard to a machine? One can argue that robots endowed with artificial intelligence could eventually become self-aware, and as a result would do whatever it takes to survive. Under these circumstances, is it ethical to abuse such a robot and treat it as a mere machine? Must we alter our interactions with this new creature? What about consciousness? Do robots need some sort of morality code? These and other questions have produced many conflicting answers over time. A morality code for robots should also include human responsibilities, taking into account how humans can be protected from, and interact safely with, robots.
Before we can start to develop and create fully ethical robots, we first need some kind of legal guidelines that robots must abide by. More specifically, we need a legal framework that protects humans from physical and emotional harm. It is also clear that ethics is a vast and abstract concept, one that cannot be reduced to a few set rules governing human-robot interaction. Therefore, rather than trying to simplify and break down the concept of ethics, it seems more logical to start from the simple rules of law, and to expand those legal regulations over time as needed.
The science-fiction stories told by authors like Isaac Asimov are slowly becoming more plausible as robots develop in the twenty-first century. It might still be far-fetched to say that a robot will start talking about “The Master” as in Asimov’s short story Reason, but robotics research is heading straight towards delicate territory, and eventually we will be faced with a robot that deems itself alive. And we need to be prepared to live with such robots.
Images courtesy of the Robotics Lab from the School of Mathematical and Computer Sciences, Heriot-Watt University