Service robots are becoming ever more pervasive in society at large. They are present in our apartments and on our streets; they are found in hotels, hospitals, and care homes, in shopping malls, and on company grounds. Their spread raises various challenges. Service robots consume energy and take up space in ever more crowded cities, sometimes causing us to collide with them or stumble over them. They monitor us, communicate with us, and retain our secrets on their data drives; accordingly, they can be hacked, kidnapped, and abused. The first section of the article "Service Robots from the Perspectives of Information and Machine Ethics" by Oliver Bendel presents different types of service robots – such as security, transport, therapy, and care robots – and discusses the moral implications that arise from their existence. Information ethics and machine ethics form the basis for interrogating these moral implications. The second section discusses the draft of a patient declaration by which people can determine whether and how they want to be treated and cared for by a robot. The article is part of the new book "Envisioning Robots in Society – Power, Politics, and Public Space", which collects the talks of the Robophilosophy 2018 conference in Vienna (IOS Press, Amsterdam 2018).
Fig.: A transport robot in Switzerland
In March 2016, the proceedings volume "The 2016 AAAI Spring Symposium Series: Technical Reports" was published by AAAI Press (Palo Alto 2016). The AI conference took place at Stanford University. Speakers on machine ethics (symposium "Ethical and Moral Considerations in Non-Human Agents") included Ron Arkin (Georgia Institute of Technology), Luís Moniz Pereira (Universidade Nova de Lisboa), Peter Asaro (New School for Public Engagement, New York), and Oliver Bendel (Hochschule für Wirtschaft FHNW). On pages 195 to 201 you will find the contribution "Annotated Decision Trees for Simple Moral Machines" by Oliver Bendel. The abstract reads: "Autonomization often follows after the automization on which it is based. More and more machines have to make decisions with moral implications. Machine ethics, which can be seen as an equivalent of human ethics, analyses the chances and limits of moral machines. So far, decision trees have not been commonly used for modelling moral machines. This article proposes an approach for creating annotated decision trees, and specifies their central components. The focus is on simple moral machines. The chances of such models are illustrated with the example of a self-driving car that is friendly to humans and animals. Finally the advantages and disadvantages are discussed and conclusions are drawn." The proceedings volume can be ordered via www.aaai.org.
Fig.: On the Stanford campus