Carson Reynolds, Masatoshi Ishikawa
What exactly would it take for a robot to be arrested? Prima facie, this appears to be an easy question. Clearly, being caught in the process of some sort of law-breaking ought to be enough. But really, should it not be the robot’s designer or owner who should be arrested and the robot instead impounded?
The goal of this paper is to speculate upon the legal questions that abound when robot agency is assumed. The approach will be to describe a scenario involving robot law breaking. It is our plan to use this scenario to interview individuals about robotics and ethics. As such, the paper expands upon an earlier paper regarding “Robot Trickery” (Reynolds and Ishikawa, 2006).
In Japanese, two separate verbs can be used to describe existence. There is arimasu, which is used for dead or inanimate objects. There is also imasu, which is instead used for living things. For instance, if I were describing my bicycle, I would use arimasu. However, if I were describing a living being such as a person or animal, I would use imasu instead.
Curiously, it is increasingly common for imasu to be used to describe robots. This may be due to the animistic aspect of Shinto belief, which sees gods as existing in many objects (Bartneck, 2004). But it may also reflect a tendency in popular culture to increasingly view robots as “alive.”
Life on the Inside
Being alive, however, does not necessarily entail the ability to commit a crime. It seems ridiculous to talk about a lawbreaking pet. Of course, an animal can become dangerous and need to be “put down” (as is the current American euphemism for euthanasia). And then there are certain animals that are themselves illegal (as opposed to having the ability to break laws). For instance, it is illegal to transport endangered species (Reeve, 2004). Even in cases where animals are participating in an illegal activity such as dogfighting or cockfighting, we do not view the animals as the criminals but instead blame the owners for inhumane treatment.
Suppose instead that a robot were involved in a criminal enterprise. Perhaps there is a gang that decides to use robot security guards, reasoning that they’d be less likely to snitch or skim from the goods.
Robotic security guards have been developed (Saitoh et al., 1995) and some, such as Sohgo Security Services’ Guardrobo, are being marketed (Anonymous, 2005). Furthermore, researchers are working to increase their ability to act autonomously (Everett et al., 1994).
It would seem that the only thing required to have a security robot participate in a criminal enterprise would be a commercially available security robot and an organization set upon its use.
But in this case, the robot still seems to be an unwitting accomplice. Despite the increasing sophistication of security robots (Everett and Gage, 1997), they still do not have free will. And so they largely carry out the actions intended by their designers or users. As such, it seems that the robot is just an instrument, much as a factory which produces illegal products might be. The robot in this case should not be arrested, but perhaps impounded and auctioned.
The Robot Kleptomaniac
Suppose that a robot has free will and self-chosen goals. Let us imagine that a robot wishes to remain operational as long as possible, but has a fixed supply of energy. In order to remain operational, it needs power from commonly available batteries. Now suppose that the robot is in a situation in which its power is dangerously low. It plans and executes a robbery of batteries from a local convenience store.
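Such goal-driven behavior need not be exotic. The following is a purely hypothetical sketch of how a survival goal might come to select an unlawful action; every name and threshold here is our own invention for illustration, not a description of any real system.

```python
# Hypothetical decision loop for the battery-stealing robot.
# All names and values are illustrative assumptions.

LOW_POWER = 0.1  # fraction of charge below which operation is threatened


def choose_action(charge, owns_batteries, store_has_batteries):
    """Pick the action that best serves the goal 'remain operational'."""
    if charge > LOW_POWER:
        return "continue_normal_operation"
    if owns_batteries:
        return "install_own_batteries"
    if store_has_batteries:
        # No lawful source of power remains, so the survival
        # goal now selects an unlawful action.
        return "take_batteries_from_store"
    return "shut_down"
```

On this sketch, with charge at 0.05 and no batteries of its own, the robot selects "take_batteries_from_store": the crime falls out of an ordinary goal, not a malicious one.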
If such a robot were built and caught it seems a more likely candidate for an actual robotic lawbreaker. The robot’s designer or owner might share in the blame of making or owning a robot capable of robbery. But still in this case, the robot ultimately chooses and carries out the crime.
Acknowledgments
The authors would like to thank Alvaro Cassinelli, Courtney Humphries, and James Forren for participating in a discussion that led to this article.
References
Anonymous (2005). Japanese robots to guard shops and
Bartneck, C. (2004). From fiction to science – a cultural reflection on social robots. In Workshop on Shaping Human-Robot Interaction – Understanding the Social Aspects of Intelligent Robotic Products.
Everett, H. R. and Gage, D. W. (1997). Third-generation security robot. In Kenyon, C. H. and Kachroo, P., editors, Mobile Robots XI and Automated Vehicle Control Systems, volume 2903, pages 118–126. SPIE.
Everett, H. R., Gilbreath, G. A., and Laird, R. T. (1994). Coordinated control of multiple security robots. In Wolfe, W. J. and Chun, W. H., editors, Mobile Robots VIII, volume 2058, pages 292–305. SPIE.
Reeve, R. (2004). Policing International Trade in Endangered Species: The CITES Treaty and Compliance. Chatham House, London, Great Britain.
Reynolds, C. and Ishikawa, M. (2006). Robot trickery. In International Workshop on Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies, Naples, Italy.
Saitoh, M., Takahashi, Y., Sankaranarayanan, A., Ohmachi, H., and Marukawa, K. (1995). A mobile robot testbed with manipulator for security guard application. In Proceedings of the 1995 IEEE International Conference on Robotics and Automation, volume 3, pages 2518–2523.