In many cases it is important that an autonomous system acts and reacts adequately from a moral point of view. There are some artifacts of machine ethics, e.g., GOODBOT or LADYBIRD by Oliver Bendel or Nao as a care robot by Susan Leigh Anderson and Michael Anderson. But there is no standardization in the field of moral machines yet. The MOML project, initiated by Oliver Bendel, is trying to work in this direction. In the management summary of his bachelor thesis, Simon Giller writes: "We present a literature review in the areas of machine ethics and markup languages which shaped the proposed morality markup language (MOML). To overcome the most substantial problem of varying moral concepts, MOML uses the idea of the morality menu. The menu lets humans define moral rules and transfer them to an autonomous system to create a proxy morality. Analysing MOML excerpts allowed us to develop an XML schema which we then tested in a test scenario. The outcome is an XML-based morality markup language for autonomous agents. Future projects can use this language or extend it. Using the schema, anyone can write MOML documents and validate them. Finally, we discuss new opportunities, applications and concerns related to the use of MOML. Future work could develop a controlled vocabulary or an ontology defining terms and commands for MOML." The bachelor thesis will be publicly available in autumn 2020. It was supervised by Dr. Elzbieta Pustulka. There will also be a paper with the results next year.
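To make the idea more concrete: a MOML document would presumably encode activated moral rules in XML and be validated against the schema. The following is a minimal, purely hypothetical sketch of such a document and a simple processing step; the element names, attributes, and rules are invented for illustration and are not taken from the actual schema in the thesis.

```python
import xml.etree.ElementTree as ET

# Hypothetical MOML document: all element and attribute names here are
# invented for illustration; the actual schema is defined in the thesis.
MOML_DOC = """
<morality agent="household-robot" owner="alice">
  <rule id="1" active="true">Do not deceive the user.</rule>
  <rule id="2" active="false">Report observations to third parties.</rule>
  <rule id="3" active="true">Warn the user before risky actions.</rule>
</morality>
"""

def active_rules(xml_text):
    """Parse the (hypothetical) MOML document and return activated rules."""
    root = ET.fromstring(xml_text)
    return [rule.text for rule in root.iter("rule")
            if rule.get("active") == "true"]

print(active_rules(MOML_DOC))
# ['Do not deceive the user.', 'Warn the user before risky actions.']
```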
Space travel includes travel and transport to, through, and from space for civil or military purposes. The take-off on Earth is usually done with a launch vehicle. The spaceship, like the lander, can be manned or unmanned. The destination can be the orbit of a celestial body, or a satellite, planet, or comet. Humans have been to the moon several times; now they want to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months. For on the spaceship, he or she was in the best of company. SPACE THEA spoke to him or her every day. When she noticed that he or she had problems, she changed her tone of voice, her voice became softer and more cheerful, and what she said gave the astronaut hope again. How SPACE THEA should sound and what she should say is the subject of a research project that will start in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, students will design a voicebot that shows empathy towards an astronaut. The scenario is a proposal that can also be rejected; perhaps in these times it is more important to have a virtual assistant for crises and catastrophes, in case one is in isolation or quarantine. For now, however, the project in the fields of social robotics and machine ethics is entitled THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA). First results will be available by the end of 2021.
The first phase of the HUGGIE project will start at the School of Business FHNW in March 2020. Oliver Bendel was able to recruit two students from the International Management program. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people, especially customers. HUGGIE should be able to warm itself up in certain places, and it should be possible to change the materials it is covered with. One research question will be: What are the possibilities besides warmth and softness? Are visual stimuli (including on displays), vibrations, sounds, voices, etc. important for a successful hug? HUGGIE could also play a role in crises and disasters, in epidemics and pandemics, and in cases of permanent social distancing. Of course it would be bad if only a robot hugged us, and of course it would be good if humans could hug us every day if we wanted them to – but perhaps in extreme situations a hug by a robot is better than nothing. The HUGGIE project is located in the heart of social robotics and on the periphery of machine ethics. By summer 2020, the students will conduct an online survey to determine the attitudes and expectations of users.
From June 2019 to January 2020 the Morality Menu (MOME) was developed under the supervision of Prof. Dr. Oliver Bendel. With it, you can transfer your own morality to the chatbot called MOBO. First of all, the user must provide various personal details. He or she opens the "User Personality" panel in the "Menu" and can then enter his or her name, age, nationality, gender, sexual orientation, and hair color. These details are important for communication and interaction with the chatbot. In a further step, the user can call up the actual morality menu ("Rules of conduct") via "Menu". It consists of nine rules, which a user (or an operator) can activate (1) or deactivate (0). Rules 1 to 8, depending on how they are activated, result in the proxy morality of the machine (the proxy machine), which usually represents the morality of the user (or the operator). But you can also give the system the freedom to generate its morality randomly; this is exactly what the ninth option does. After the morality menu has been completely set, the dialogue can begin. To do this, the user calls up "Chatbot" in the "Menu". The chatbot MOBO is started. The adventure can begin! A video of the MOBO-MOME is available here.
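The mechanics can be pictured as a simple bit vector over the rules of conduct. Below is a minimal sketch under that assumption; the rule texts are placeholders, not the actual MOBO rules.

```python
import random

# Eight placeholder rules of conduct; the actual MOBO rules differ.
RULES = [
    "may compliment the user",
    "may criticize the user",
    "may talk about feelings",
    "may use threats",
    "may lie to the user",
    "may collect personal data",
    "may give unsolicited advice",
    "may use vulgar language",
]

def proxy_morality(settings, randomize=False):
    """Map menu settings (1 = activated, 0 = deactivated) onto the
    chatbot's behavior, yielding its proxy morality."""
    if randomize:  # the ninth option: a randomly generated morality
        settings = [random.randint(0, 1) for _ in RULES]
    return [rule for rule, on in zip(RULES, settings) if on]

# A user who activates only the first and third rule:
print(proxy_morality([1, 0, 1, 0, 0, 0, 0, 0]))
# -> ['may compliment the user', 'may talk about feelings']
```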
At the request of Prof. Dr. Oliver Bendel, a student at the School of Business FHNW, Alessandro Spadola, investigated in the context of machine ethics whether markup languages such as HTML, SSML, and AIML can be used to transfer moral aspects to machines or websites, and whether there is room for a new language that could be called Morality Markup Language (MOML). He presented his results in January 2020. From the management summary: "However, the idea that owners should be able to transmit their own personal morality has been explored by Bendel, who has proposed an open way of transferring morality to machines using a markup language. This research paper analyses whether a new markup language could be used to imbue machines with their owners' sense of morality. This work begins with an analysis of how a markup language is structured, describes the current well-known markup languages, and analyses their differences. In doing so, it reveals that the main difference between the well-known markup languages lies in the different goals they pursue, which at the same time form the subject that is marked up. This thesis then examines the possibility of transferring personal morality with the current languages available and discusses whether there is a need for a further language for this purpose. As is shown, morality can only be transmitted with increased effort and knowledge of human perception, because it is only possible to transmit it by interacting with the senses of people. The answer to the question of whether there is room for another markup language is 'yes', since none of the languages analysed offer a simple way to transmit morality, and simplicity is a key factor in markup languages. Markup languages all have clear goals, but none have the goal of transferring and displaying morality. The language that could assume this task is 'Morality Markup', and the present work describes how such a language might look." (Management Summary) The promising results are to be continued in the course of the year by another student in a bachelor thesis.
The book chapter "The BESTBOT Project" by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the "Handbuch Maschinenethik", edited by Oliver Bendel. From the abstract: "The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with." (Abstract) The BESTBOT is an immoral machine in a moral one – or a moral machine in an immoral one, depending on the perspective. The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
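How the two recognition channels might interlock can be sketched roughly as follows; the keywords, emotion labels, and decision logic are invented for illustration and merely stand in for the actual text analysis and emotion recognition components of the BESTBOT.

```python
# Minimal sketch of the BESTBOT idea: combine text analysis with
# (simulated) facial emotion recognition to detect user problems.
# Keyword list and decision logic are illustrative assumptions only.

PROBLEM_KEYWORDS = {"hopeless", "alone", "hurt myself"}  # illustrative

def detect_text_problem(message):
    """Stand-in for the chatbot's text analysis."""
    return any(kw in message.lower() for kw in PROBLEM_KEYWORDS)

def react(message, face_emotion):
    """Combine both channels; the real system analyzes webcam images,
    which raises the privacy issues noted in the chapter."""
    if detect_text_problem(message) and face_emotion in {"sad", "fearful"}:
        return "I am worried about you. Here is an emergency helpline: ..."
    if detect_text_problem(message):
        return "That sounds serious. Do you want to talk about it?"
    return "Glad to hear from you!"

print(react("I feel so alone", face_emotion="sad"))
```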
In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, "which resulted in the publication of an edited book on developments in human-robot intimate relationships". This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. "As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year's workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots." (Website Embracing AI) The international workshop "Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives" will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.
"Once we place so-called 'social robots' into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions are currently impossible to gauge." (Website Robophilosophy Conference) With these words the next Robophilosophy conference is announced. It will take place from 18 to 21 August 2020 in Aarhus, Denmark. The CfP raises questions like these: "How can we create cultural dynamics with or through social robots that will not impact our value landscape negatively? How can we develop social robotics applications that are culturally sustainable? If cultural sustainability is relative to a community, what can we expect in a global robot market? Could we design human-robot interactions in ways that will positively cultivate the values we, or people anywhere, care about?" (Website Robophilosophy Conference) In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.
Fig.: The Robot Restaurant in Tokyo (photo: Stefanie Hauske)
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: "Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots." The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.
Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is concerned not only with the practice and history of their development, but also with the history of ideas, ranging from the works of Homer and Ovid to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics, and ethics, including information and machine ethics, are involved. The new platform robophilosophy.com was founded in July 2019 by Oliver Bendel. He invited several authors to write with him about robophilosophy, robot law, information ethics, machine ethics, robotics, and artificial intelligence. All of them have a relevant background. Oliver Bendel studied philosophy as well as information science and wrote his doctoral thesis on anthropomorphic software agents. He has been researching in the fields of information ethics and machine ethics for years.
"AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players …" (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: "A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand." (Wall Street Journal, 11 July 2019) Of course, bluffing should not be equated with cheating – but in this context interesting scientific questions arise. At the conference "Machine Ethics and Machine Law" in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question on a panel: "Should we develop robots that deceive?" Ron Arkin (who works in military research) and Oliver Bendel (who does not) came to the conclusion that we should – but for very different reasons. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines may also be developed, for the sake of important gains in knowledge – but argues for regulating the areas of application (for example, dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled "Superhuman AI for multiplayer poker".
The papers of the CHI 2019 workshop "Conversational Agents: Acting on the Wave of Research and Development" (Glasgow, 5 May 2019) are now listed on convagents.org. The extended abstract by Oliver Bendel (School of Business FHNW) entitled "Chatbots as Moral and Immoral Machines" can be downloaded here. The workshop brought together experts from all over the world who are working on the basics of chatbots and voicebots and are implementing them in different ways. Companies such as Microsoft, Mozilla, and Salesforce were also present. Approximately 40 extended abstracts were submitted. On 6 May, a bagpipe player opened the four-day conference following the 35 workshops. Dr. Aleks Krotoski, Pillowfort Productions, gave the first keynote. One of the paper sessions in the morning was dedicated to the topic "Values and Design". All in all, both classical fields of applied ethics and the young discipline of machine ethics were represented at the conference. More information via chi2019.acm.org.
"In Germany, around four million people will be dependent on care and nursing in 2030. Already today there is talk of a nursing crisis, which is likely to intensify further in view of demographic developments in the coming years. Fewer and fewer young people will be available to the labour market as potential carers for the elderly. Experts estimate that there will be a shortage of around half a million nursing staff in Germany by 2030. Given these dramatic forecasts, are nursing robots possibly the solution to the problem? Scientists from the disciplines of computer science, robotics, medicine, nursing science, social psychology, and philosophy explored this question at a Berlin conference of the Daimler and Benz Foundation. The machine ethicist and conference leader Professor Oliver Bendel first of all stated that many people had completely wrong ideas about care robots: 'In the media there are often pictures or illustrations that do not correspond to reality'." (Die Welt, 14 June 2019) With these words an article in the German newspaper Die Welt begins. Norbert Lossau describes the Berlin Colloquium, which took place on 22 May 2019, in detail. The article is available in English and German. So are robots a solution to the nursing crisis? Oliver Bendel denies this: they can be useful for caregivers and patients, but they do not solve the big problems.
Fig.: The Pepper robot at the Berlin Colloquium (photo: Daimler and Benz Foundation)
Prof. Dr. Susan L. Anderson, one of the most famous machine ethicists in the world, attended the 23rd Berlin Colloquium. She summarized her talk ("Developing Ethics for Eldercare Robots") in the brochure of the event with regard to the technology of care robots as follows: "Ideally, we would like eldercare robots to be able to make correct ethical decisions on their own. This poses many challenges for machine ethicists. There are those who claim that ethics cannot be computed, that ethics is subjective, and/or that it makes no sense to speak of a robot as being an ethical agent. I argue to the contrary, maintaining that it is possible to represent numerically the ethical dilemmas with which an eldercare robot might be presented; and the robot could be given an ethical principle, derived from cases where ethicists agree as to the correct answer, to compute which of the possible actions it could perform at a given moment in time is the best one. It can also explain why it did what it did, if challenged. The robot will not be a full ethical agent, lacking some qualities of human agents; but it is all we need, and even desire." Susan L. Anderson had been invited by Prof. Dr. Oliver Bendel, who is himself a machine ethicist.
Fig.: Susan L. Anderson together with Michael Anderson (photo: Daimler and Benz Foundation)
Prof. Dr. Michael Anderson, one of the most famous machine ethicists in the world, attended the 23rd Berlin Colloquium. He summarized his presentation ("An Ethical Care Robot") in the brochure of the event with regard to the technology of care robots as follows: As with any technology, "its advantages need to be tempered with its possible disadvantages such as fewer employment opportunities and patient isolation". "Further, given the intimate nature of this technology, it is of paramount importance that it behaves in an ethical manner towards its users. To insure ethical behavior from such technology, we maintain that its actions should be guided by a set of ethical values. As it is unrealistic to expect those with the expertise necessary to develop such technology will be equally competent in its ethical dimensions, we also maintain that this set of values be abstracted from a consensus of ethicists. To this end, we propose a case-supported, principle-based behavior paradigm where behavior of autonomous machines is directed by domain-specific ethical principles abstracted from the judgements of ethicists on simple, agreed upon cases. As ethics entails more than simply not taking improper action but choosing the best action in a given situation, we advocate that every action a care robot takes be determined by such ethical principles. As transparency will be important in such systems, ethical principles and the cases from which they have been derived have the added benefit of serving as support for why a particular action was chosen over another. To show the feasibility of the proposed paradigm, we have developed a principle in the domain of elder care and instantiated it in a SoftBank Robotics NAO robot situated in a simulated eldercare environment." Michael Anderson had been invited by Prof. Dr. Oliver Bendel, who is himself a machine ethicist.
Fig.: Michael Anderson in Berlin (photo: Daimler and Benz Foundation)
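The case-supported, principle-based paradigm can be pictured as a weighting over ethically relevant duties, abstracted from agreed-upon cases, that scores candidate actions. The following is a heavily simplified sketch under that reading; the duty names, weights, and actions are invented for illustration and do not reproduce the Andersons' actual principle.

```python
# Toy sketch of principle-based action selection for an eldercare robot.
# Each action is rated on ethically relevant duties (invented values);
# the weights stand in for a principle abstracted from ethicists'
# judgements on agreed-upon cases.

DUTY_WEIGHTS = {"prevent_harm": 3.0, "respect_autonomy": 2.0, "do_good": 1.0}

ACTIONS = {
    "remind_medication": {"prevent_harm": 0.8, "respect_autonomy": -0.2, "do_good": 0.5},
    "notify_caregiver":  {"prevent_harm": 0.9, "respect_autonomy": -0.7, "do_good": 0.4},
    "do_nothing":        {"prevent_harm": -0.6, "respect_autonomy": 0.9, "do_good": 0.0},
}

def best_action(actions=ACTIONS, weights=DUTY_WEIGHTS):
    """Choose the action with the highest duty-satisfaction score; the
    per-duty breakdown can later serve as an explanation (transparency)."""
    scores = {
        name: sum(weights[d] * v for d, v in duties.items())
        for name, duties in actions.items()
    }
    return max(scores, key=scores.get)

print(best_action())  # -> 'remind_medication' with these toy numbers
```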
The 23rd Berlin Colloquium of the Daimler and Benz Foundation took place on 22 May 2019. It was devoted to care robots, not only from familiar but also from novel perspectives. The scientific director, Prof. Dr. Oliver Bendel, had invited two of the most famous machine ethicists in the world, Prof. Dr. Michael Anderson and Prof. Dr. Susan L. Anderson. Together with Vincent Berenz, they had programmed a Nao robot with a set of values that determine its behavior while helping a person in a simulated elder-care facility. A paper on this appeared some time ago in the Proceedings of the IEEE. For the first time they presented the results of this project to a European audience, and their one-hour presentation with the subsequent twenty-minute discussion can be considered a defining moment of machine ethics. Other internationally renowned scientists were present, such as the Japan expert Florian Coulmas. He discussed artifacts from Japan and put into perspective the frequently heard claim that the Japanese consider all things to be animate. Several media reported on the Berlin Colloquium, for example Neues Deutschland.
Fig.: The Andersons at the Berlin Colloquium (photo: Daimler and Benz Foundation)
"In the last five years, work on software that interacts with people via typed or spoken natural language, called chatbots, intelligent assistants, social bots, virtual companions, non-human players, and so on, increased dramatically. Chatbots burst into prominence in 2016. Then came a wave of research, more development, and some use. The time is right to assess what we have learned from endeavoring to build conversational user interfaces that simulate quasi-human partners engaged in real conversations with real people." (Website Conversational Agents) The CHI 2019 workshop "Conversational Agents: Acting on the Wave of Research and Development" (Glasgow, 5 May) brings together people "who developed or studied various conversational agents, to explore themes that include what works (and hasn't) in home, education, healthcare, and work settings, what we have learned from this about people and their activities, and social or ethical possibilities for good or risk" (Website Conversational Agents). Oliver Bendel will present three chatbots developed between 2013 and 2018 in the discipline of machine ethics: GOODBOT, LIEBOT, and BESTBOT. More information via convagents.org.
Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g., by programmed meta-rules and rules. The machine is thus capable of certain actions and not others. However, another approach is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine. The machine then behaves, down to the details, in the same way as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article "The Morality Menu", the author introduces the idea of the morality menu in the context of two concrete machines. Then he discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.
Service robots are becoming ever more pervasive in society at large. They are present in our apartments and on our streets. They are found in hotels, hospitals, and care homes, in shopping malls, and on company grounds. With their spread, various challenges arise. Service robots consume energy; they take up space in ever more crowded cities, sometimes leading us to collide with them and stumble over them. They monitor us, they communicate with us and retain our secrets on their data drives. Moreover, they can be hacked, kidnapped, and abused. The first section of the article "Service Robots from the Perspectives of Information and Machine Ethics" by Oliver Bendel presents different types of service robots – such as security, transport, therapy, and care robots – and discusses the moral implications that arise from their existence. Information ethics and machine ethics form the basis for interrogating these moral implications. The second section discusses the draft of a patient declaration, by which people can determine whether and how they want to be treated and cared for by a robot. The article is part of the new book "Envisioning Robots in Society – Power, Politics, and Public Space", which reproduces the talks of the Robophilosophy 2018 conference in Vienna (IOS Press, Amsterdam 2018).
CONVERSATIONS 2018 was a cross-disciplinary workshop to advance chatbot research. It took place on 26 October in St. Petersburg, at the School of Journalism and Mass Communications. "Chatbots enable users to interact with digital services in natural language, through text or voice dialogue. Customer service and digital assistants are application areas where chatbots already have substantial impact. Also in other application areas, such as health and fitness, education, and information services, a similar impact is foreseen." (Website Workshop) The event was open to all scientists and practitioners with an interest in chatbot research and design. Among the organizers were experts such as Asbjørn Følstad, SINTEF (Norway), and Symeon Papadopoulos, Centre for Research and Technology Hellas (Greece). Members of the program committee were, among others, Adam Tsakalidis, University of Warwick (UK), Despoina Chatzakou, Aristotle University of Thessaloniki (Greece), Eleni Metheniti, Saarland University (Germany), Frank Dignum, Utrecht University (Netherlands), and Oliver Bendel, School of Business FHNW (Switzerland). Further information on conversations2018.wordpress.com/.
"Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth's environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we're not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals." (Anthropocene, Oct 10, 2018) "Machines can disturb, frighten, injure, and kill animals," says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. "Animal-friendly machines are needed." (Anthropocene, Oct 10, 2018) In the article "Will smart machines be kind to animals?" the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. It is based on his paper "Towards animal-friendly machines" (Paladyn) and an interview that journalist Brandon Keim conducted with Oliver Bendel. More via www.anthropocenemagazine.org/2018/10/animal-friendly-ai/.
Semi-autonomous machines, autonomous machines, and robots inhabit closed, semi-closed, and open environments. There they encounter domestic animals, farm animals, working animals, and/or wild animals, which could be disturbed, displaced, injured, or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. Each was linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis for developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded actions that are reliably repeated at a certain level. The article "Towards animal-friendly machines" by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
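An annotated decision tree of this kind can be pictured as a chain of checks, each carrying its moral justification as an annotation. The sketch below is a hypothetical example for a robot mower that spares animals; the conditions, actions, and annotations are invented for illustration and are not taken from the FHNW prototypes.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One decision point; the annotation documents the ethical
    assumption or justification behind this check."""
    annotation: str
    condition: Optional[Callable[[dict], bool]] = None
    if_true: Optional["Node"] = None
    if_false: Optional["Node"] = None
    action: Optional[str] = None  # set on leaves only

def decide(node, obs, trace):
    """Walk the tree, collecting the justifications along the path."""
    trace.append(node.annotation)
    if node.action is not None:
        return node.action
    nxt = node.if_true if node.condition(obs) else node.if_false
    return decide(nxt, obs, trace)

tree = Node(
    "Animals must not be injured by the machine.",
    condition=lambda obs: obs["animal_detected"],
    if_true=Node(
        "Even uncertain detections warrant caution (benefit of the doubt).",
        condition=lambda obs: obs["distance_cm"] < 50,
        if_true=Node("Stopping is the safest option near an animal.",
                     action="stop and wait"),
        if_false=Node("At a safe distance, slowing down suffices.",
                      action="slow down"),
    ),
    if_false=Node("No animal present; normal operation is permissible.",
                  action="continue mowing"),
)

trace = []
print(decide(tree, {"animal_detected": True, "distance_cm": 30}, trace))
# -> 'stop and wait'; `trace` now holds the justifications for the decision.
```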