Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. Each of them was linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis for developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, consistent actions at a certain level. The article “Towards animal-friendly machines” by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
Fig.: In Australia
The BESTBOT was developed at the School of Business FHNW from March to August 2018. Its predecessor projects were the GOODBOT (2013) and the LIEBOT (2016). Prof. Dr. Oliver Bendel has been doing research in the young discipline of machine ethics for several years. In cooperation with robotics and artificial intelligence (AI), this discipline designs and produces moral machines. At the beginning of 2018 Bendel presented his paper “From GOODBOT to BESTBOT” at Stanford University, which laid the foundation for the BESTBOT project. David Studer programmed the chatbot in Java. Prof. Dr. Bradley Richards assisted him in technical matters. Like the LIEBOT, the BESTBOT is a networked system that exploits search engines and dictionaries. It analyzes the user’s text input with text-based emotion recognition software. At the same time, face recognition is used, again with emotion recognition. For example, if users state that they are doing well but their faces reveal something else, the chatbot addresses this contradiction. It recognizes both small and big worries. Like the GOODBOT, the BESTBOT can escalate over several levels and provide a suitable emergency number. Like its predecessor, it makes clear that it is only a machine. A special feature is that it cites the sources of its factual claims. The BESTBOT will be presented at conferences in 2019.
Fig.: The chatbot uses face recognition
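The cross-check between what users write and what their faces show could be sketched as follows. This is a hypothetical illustration under simplified assumptions, not the actual BESTBOT code; the emotion labels and method names are invented for the example.

```java
// Hypothetical sketch of the BESTBOT's contradiction check: compare the
// emotion derived from the text input with the emotion read from the face.
// Labels ("joy", "sadness", ...) and method names are illustrative assumptions.
public class ContradictionCheck {

    static boolean isPositive(String emotion) {
        return emotion.equals("joy") || emotion.equals("neutral");
    }

    // A contradiction arises when text and face point in opposite directions.
    public static boolean isContradictory(String textEmotion, String faceEmotion) {
        return isPositive(textEmotion) != isPositive(faceEmotion);
    }

    public static String respond(String textEmotion, String faceEmotion) {
        if (isContradictory(textEmotion, faceEmotion)) {
            return "You write that you are fine, but your face suggests "
                    + faceEmotion + ". Would you like to talk about it?";
        }
        return "Good to hear that text and face agree.";
    }

    public static void main(String[] args) {
        // A user claims to be fine while looking sad.
        System.out.println(respond("joy", "sadness"));
    }
}
```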
LADYBIRD, the animal-friendly vacuum cleaning robot, was conceived in 2014 by Oliver Bendel, introduced at Stanford University (AAAI Spring Symposia) in 2017 and then implemented as a prototype at the School of Business FHNW. In the context of the project, a menu was proposed with which the user can adjust the morals of the vacuum cleaning robot. As the name implies, it spares ladybirds. It should also let spiders live. But if users want certain insects to be sucked in, they could define this via the menu. It is important that LADYBIRD remains animal-friendly overall. The idea was to specify a proxy morality in detail via the menu: the vacuum cleaning robot as a proxy machine does what its owner would do. In 2018 the moral menu (MOME) for LADYBIRD was created as a design study. It can be combined with other approaches and technologies. In this way, users could learn how others have decided and how the personal morality they have transferred to the machine is assessed. They could also be warned and enlightened if they want to suck in not only vermin but also spiders.
Fig.: Moral menu for LADYBIRD
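The proxy morality behind such a moral menu can be sketched in a few lines. The species names, defaults and method names below are illustrative assumptions, not the actual MOME design.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a moral menu (MOME): the owner sets per-species
// rules, and the robot acts as a proxy for this personal morality.
public class MoralMenu {
    private final Map<String, Boolean> spare = new HashMap<>();

    public MoralMenu() {
        // Defaults keep the machine animal-friendly overall.
        spare.put("ladybird", true);
        spare.put("spider", true);
        spare.put("vermin", false);
    }

    // A menu setting transfers part of the owner's morality to the machine.
    public void setSpare(String species, boolean value) {
        spare.put(species, value);
    }

    // Unknown creatures are spared by default.
    public String decide(String species) {
        return spare.getOrDefault(species, true) ? "evade" : "vacuum";
    }
}
```

A warning step, as described above, could then be triggered whenever the owner switches a creature such as the spider from "evade" to "vacuum".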
The international workshop “Understanding AI & Us” will take place in Berlin (Alexander von Humboldt Institute for Internet and Society) on 30 June 2018. It is hosted by Joanna Bryson (MIT), Janina Loh (University of Vienna), Stefan Ullrich (Weizenbaum Institute Berlin) and Christian Djeffal (IoT and Government, Berlin). Birgit Beck, Oliver Bendel and Pak-Hang Wong are invited to the panel on the ethical challenges of artificial intelligence. The aim of the workshop is to bring together experts from the field of research reflecting on AI. The event is funded by the Volkswagen Foundation (VolkswagenStiftung). The project “Understanding AI & Us” furthers and deepens the understanding of artificial intelligence (AI) in an interdisciplinary way. “This is done in order to improve the ways in which AI-systems are invented, designed, developed, and criticised.” (Invitation letter) “In order to achieve this, we form a group that merges different abilities, competences and methods. The aim is to provide space for innovative and out-of-the-box-thinking that would be difficult to pursue in ordinary academic discourse in our respective disciplines. We are seeking ways to merge different disciplinary epistemological standpoints in order to increase our understanding of the development of AI and its impact upon society.” (Invitation letter)
Fig.: The Humboldt Box in Berlin
“Sex robots are coming, but the argument that they could bring health benefits, including offering paedophiles a ‘safe’ outlet for their sexual desires, is not based on evidence, say researchers. The market for anthropomorphic dolls with a range of orifices for sexual pleasure – the majority of which are female in form, and often boast large breasts, tiny waists and sultry looks – is on the rise, with such dummies selling for thousands of pounds a piece.” (Guardian, 5 June 2018) These are the opening words of an article in the Guardian, the well-known British daily newspaper, published on 5 June 2018. It quotes Susan Bewley, professor of women’s health at King’s College London, and Oliver Bendel, professor at the School of Business FHNW. Oliver Bendel is not in favor of a ban on the development of sex robots and love dolls. However, he can imagine that their area of application could be limited. He calls for empirical research in the field. The article can be accessed via www.theguardian.com/science/2018/jun/04/claims-about-social-benefits-of-sex-robots-greatly-overstated-say-experts.
Fig.: A love doll
Machine ethics researches the morality of semi-autonomous and autonomous machines. The School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW realized a project to implement a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, the mentioned meta rule was reversed. This web-based chatbot, implemented in 2016, could lie systematically. It was an example of a simple immoral machine. A follow-up project in 2018 is going to develop the BESTBOT, considering the restrictions of the GOODBOT and the opportunities of the LIEBOT. The aim is to develop a machine that can detect problems of users of all kinds and react in an adequate way. To achieve this, it will use approaches of face recognition. The paper “From GOODBOT to BESTBOT” describes the preconditions and findings of the GOODBOT project and the results of the LIEBOT project and outlines the subsequent BESTBOT project. A reflection from the perspective of information ethics is included. Oliver Bendel presented his paper on 27 March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series). The PDF is available here.
The book chapter “Co-robots from an Ethical Perspective” by Oliver Bendel was published in March 2018. It is included in the book “Business Information Systems and Technology 4.0” (Springer). The abstract: “Cooperation and collaboration robots work hand in hand with their human colleagues. This contribution focuses on the use of these robots in production. The co-robots (to use this umbrella term) are defined and classified, and application areas, examples of applications and product examples are mentioned. Against this background, a discussion on moral issues follows, both from the perspective of information and technology ethics and business ethics. Central concepts of these fields of applied ethics are referred to and transferred to the areas of application. In moral terms, the use of cooperation and collaboration robots involves both opportunities and risks. Co-robots can support workers and save them from strains and injuries, but can also displace them in certain activities or make them dependent. Machine ethics is included at the margin; it addresses whether and how to improve the decisions and actions of (partially) autonomous systems with respect to morality. Cooperation and collaboration robots are a new and interesting subject for it.” The book can be ordered here.
Fig.: The cover of the book (photo: Springer)
The tentative schedule of the AAAI 2018 Spring Symposium on AI and Society at Stanford University (26 – 28 March 2018) has been published. On Tuesday Emma Brunskill from Stanford University, Philip C. Jackson (“Toward Beneficial Human-Level AI … and Beyond”) and Andrew Williams (“The Potential Social Impact of the Artificial Intelligence Divide”) will each give a lecture. Oliver Bendel will have two talks, one on “The Uncanny Return of Physiognomy” and one on “From GOODBOT to BESTBOT”. From the description on the website: “Artificial Intelligence has become a major player in today’s society and that has inevitably generated a proliferation of thoughts and sentiments on several of the related issues. Many, for example, have felt the need to voice, in different ways and through different channels, their concerns on: possible undesirable outcomes caused by artificial agents, the morality of their use in specific sectors, such as the military, and the impact they will have on the labor market. The goal of this symposium is to gather a diverse group of researchers from many disciplines and to ignite a scientific discussion on this topic.” (AAAI website)
Fig.: On the campus of Stanford University
In a few days the book “Love and Sex with Robots”, edited by David Levy and Adrian D. Cheok, will be published. From the information on the Springer website: “This book constitutes the refereed proceedings of the Third International Conference on Love and Sex with Robots, LSR 2017, held in December 2017, in London, UK. The 12 revised papers presented together with 2 keynotes were carefully reviewed and selected from a total of 83 submissions. One of the biggest challenges of the Love and Sex with Robots conference is to engage a wider scientific community in the discussions of the multifaceted topic, which has only recently established itself as an academic research topic within, but not limited to, the disciplines of artificial intelligence, human-computer interaction, robotics, biomedical science and robot ethics etc.” Included are contributions by Oliver Bendel (“SSML for Sex Robots”), Sophie Wennerscheid (“Posthuman desire in robotics and science fiction”) and Dr. Rebekah Rousi (“Lying cheating robots – robots and infidelity”). The book can be pre-ordered via www.springer.com/de/book/9783319763682. Already on the market is the book with the same title, which contains the contributions of the LSR 2016 at Goldsmiths.
Fig.: The cover of the new book (photo: Springer)
“Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space” is the third event in the Robophilosophy Conference Series, which focuses on robophilosophy, a new field of interdisciplinary applied research in philosophy, robotics, artificial intelligence and other disciplines. The main organizers are Prof. Dr. Mark Coeckelbergh, Dr. Janina Loh and Michael Funk. Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Hiroshi Ishiguro (Intelligent Robotics Laboratory, Osaka University, Japan), Guy Standing (Basic Income Earth Network and School of Oriental and African Studies, University of London, UK), Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee), Robert Trappl (Head of the Austrian Research Institute for Artificial Intelligence, Austria), Simon Penny (Department of Art, University of California, Irvine), Raja Chatila (IEEE Global Initiative for Ethical Considerations in AI and Automated Systems, Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France), Josef Weidenholzer (Member of the European Parliament, domains of automation and digitization) and Oliver Bendel (Institute for Information Systems, FHNW University of Applied Sciences and Arts Northwestern Switzerland). The conference will take place from 14 to 17 February 2018 in Vienna. More information via conferences.au.dk/robo-philosophy/.
Fig.: Creating artificial beings
Prof. Dr. Oliver Bendel was invited to give a lecture at the ISAIM special session “Formalising Robot Ethics”. “The International Symposium on Artificial Intelligence and Mathematics is a biennial meeting that fosters interactions between mathematics, theoretical computer science, and artificial intelligence.” (Website ISAIM) Oliver Bendel will present selected prototypes of moral and immoral machines and will discuss a project planned for 2018. The GOODBOT is a chatbot that responds in a morally adequate way to problems of its users. It is based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, and AIML was used as well. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT. Technically, everything is still open. The ISAIM conference will take place from 3 to 5 January 2018 in Fort Lauderdale, Florida. Further information is available at isaim2018.cs.virginia.edu/.
Fig.: The hand of Nao
The Digital Europe Working Group Conference Robotics will take place on 8 November 2017 at the European Parliament in Brussels. The keynote address will be given by Mariya Gabriel, European Commissioner for Digital Economy and Society. The speakers of the first panel are Oliver Bendel (Professor of Information Systems, Information Ethics and Machine Ethics at the School of Business FHNW, via video conference), Anna Byhovskaya (policy and communications advisor, Trade Union Advisory Council of the OECD) and Malcolm James (Senior Lecturer in Accounting & Taxation, Cardiff Metropolitan University). The third panel will be moderated by Mady Delvaux (Member of the European Parliament). The speaker is Giovanni Sartor (Professor of Legal Informatics and Legal Theory at the European University Institute). The poster can be downloaded here. Further information is available at www.socialistsanddemocrats.eu/events/sd-group-digital-europe-working-group-robotics.
Fig.: A detail of the poster
Adrian David Cheok is director of the Mixed Reality Lab which “aims to push the boundaries of research into interactive new media technologies through the combination of technology, art, and creativity” (Website Mixed Reality Lab). He is editor of several academic journals and of the book “Love and Sex with Robots” (together with Kate Devlin and David Levy) which was published in 2017. In a current press release, he presents an electric smell machine for internet and virtual smell. “Here we are excited to introduce the world’s first computer controlled digital device developed to stimulate olfactory receptor neurons with the aim of producing smell sensations purely using electrical pulses. Using this device, now we can easily stimulate the various areas of nasal cavity with different kinds of electric pulses. During the initial user experiments, some participants experienced smell sensations including floral, fruity, chemical, and woody. In addition, we have observed a difference in the ability of smelling odorants before and after the electrical stimulation. These results suggest that this technology could be enhanced to artificially create and modify smell sensations. By conducting more experiments with human subjects, we are expecting to uncover the patterns of electrical stimulations, that can effectively generate, modify, and recall smell sensations. This invention can lead to internet and virtual reality digital smell.” (Press Release, 10 August 2017) More via imagineeringinstitute.org/press-release-electric-smell-machine-for-internet-virtual-smell/.
Fig.: Towards the digital smell
There are more and more service robots in “open” spaces: safety and surveillance robots, transport and delivery robots, information and navigation robots, and entertainment and toy robots. They are on their way in places that many of us share and that are public. This poses various challenges. The article “Service Robots in Public Spaces: Ethical and Sociological Considerations” by Prof. Dr. Oliver Bendel (School of Business, University of Applied Sciences and Arts Northwestern Switzerland FHNW) addresses these challenges – from the moral and social points of view – and proposes solutions on the ethical, technical and organizational level, among others, as well as offering assistance for roboticists and for legislative and political bodies. The article was published in Telepolis (25 June 2017). It is Oliver Bendel’s twelfth contribution since 2008 to Germany’s oldest online magazine (founded in 1996).
Fig.: The K5 robot can be seen in Stanford
Harper’s Magazine, based in New York at 666 Broadway, has reviewed Herman Melville’s “Moby Dick” as well as the latest discoveries of Thomas Edison. It has also repeatedly devoted itself to women’s rights. Winston Churchill and Theodore Roosevelt wrote contributions for the famous magazine. A self-description on the website reads: “Harper’s Magazine, the oldest general-interest monthly in America, explores the issues that drive our national conversation, through long-form narrative journalism and essays, and such celebrated features as the iconic Harper’s Index. With its emphasis on fine writing and original thought Harper’s provides readers with a unique perspective on politics, society, the environment, and culture.” (Website Harper’s Magazine) The new issue, published in mid-May 2017, devotes space to machine ethics under the title “Machine Yearning”. It quotes questions on sex robots posed by Oliver Bendel, business information systems specialist and ethicist, which were put into writing in the book “Love and Sex with Robots” (Springer, 2017). The article can be accessed online via harpers.org/archive/2017/06/machine-yearning/ and is also available as a PDF.
Fig.: View over Manhattan
The LADYBIRD project starts in March 2017. More and more autonomous and semi-autonomous machines make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral machines. In this context, Prof. Dr. Oliver Bendel developed various design studies and thus submitted proposals for their appearance and functions. He focused on animal-friendly machines which make morally sound decisions, and on chatbots with specific skills. The project is about a service robot which shall spare beneficial insects – a vacuum cleaner called LADYBIRD. An annotated decision tree modelled for this objective and a set of sensors will be used. Three students will work on the practice project at the School of Business FHNW from March to October. Since 2013, the project initiator Oliver Bendel has published several articles on LADYBIRD and other animal-friendly machines, e.g., “Ich bremse auch für Tiere” (“I also brake for animals”) in the Swiss magazine inside.it. The robot will be the third prototype in the context of machine ethics at the School of Business FHNW. The first one was the GOODBOT (2013), the second one the LIEBOT (2016). All these machines can be described as simple moral machines.
Fig.: The robot should spare the ladybird
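An annotated decision tree of the kind mentioned above can be translated into code quite directly. The branches and annotations below are a simplified assumption for illustration, not the project's actual tree; in the prototype the conditions would come from the robot's sensors.

```java
// Hypothetical sketch: an annotated decision tree for LADYBIRD. Each branch
// carries its ethical annotation as a comment; the conditions are simplified.
public class LadybirdDecisionTree {

    public static String decide(boolean objectDetected, boolean isInsect) {
        if (!objectDetected) {
            // Annotation: nothing in the robot's path, no moral conflict.
            return "continue";
        }
        if (isInsect) {
            // Annotation: beneficial insects such as ladybirds must be spared.
            return "stop";
        }
        // Annotation: ordinary dirt may be vacuumed.
        return "vacuum";
    }

    public static void main(String[] args) {
        // A ladybird is detected in front of the robot.
        System.out.println(decide(true, true));
    }
}
```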
Inverse is an American online magazine, launched by Dave Nemetz, co-founder of Bleacher Report. It is based in San Francisco, California. According to the New York Observer, Inverse “aims to capture the millennial dude market with quirky takes on subjects like tech, games (video and board) and space” (Observer, October 20, 2015). Gabe Bergado asked the ethicist Oliver Bendel about sex robots and love dolls. One issue was: “Why are more companies and people becoming more interested in sex robots?” The reply to it: “Companies want to earn money. But I don’t think that this is a big market. Sex robots will remain a niche product. But, of course, sex toys are popular, and together with virtual and mixed reality, this could be the biggest thing next year. Some people and media are sensation-seeking. That’s why they are interested in sex robots and in men and women who fall in love with machines.” Another question was: “How should governments be reacting to the development of sex robots? Should there be regulations?” The answer is short and clear again: “Sex robots will remain a niche product, and I don’t think there will be many needs for regulation. Adults can do whatever they want to do, provided that they do not affect or disturb others in extreme ways. But I’m against child-like robots in brothels. Perhaps this should be banned.” The ethicist concluded with respect to sex robots: “It is interesting for a philosopher to research them and our relationship with them.” The final version of the interview was published in the article “Sex Robots Can’t Automate Emotional Intimacy” (January 28, 2017) which is available via www.inverse.com/article/27001-sex-robots-virtual-reality-engineering-oliver-mendel-interview.
Fig.: This could be a love doll
“Should we develop robots that deceive?” This question was discussed by Ron Arkin (Georgia Tech), Oliver Bendel (School of Business FHNW), Jaap Hage (Maastricht University) and Mojca Plesnicar (Inštitut za kriminologijo pri Pravni fakulteti v Ljubljani) on the panel of the conference “Machine Ethics and Machine Law”. Ron Arkin has repeatedly done research for the Pentagon on robots that deceive machines and persons and protect persons and facilities. He presented animals that deceive other animals and outlined how this behavior can be transferred to robots. Oliver Bendel, the father of the LIEBOT, formulated six theses. The fifth was: “We should … develop deceptive and lying machines to be able to fight against deceptive and lying machines.” And he concluded: “After all, deceptive and lying machines are important for the improvement of artificial intelligence.” After the statements by Jaap Hage and Mojca Plesnicar, the panelists discussed with each other and with the audience, guided and moderated by Amit Kumar Pandey (SoftBank Robotics). The conference on 18 and 19 November 2016 in Krakow was probably, after the symposium “Ethical and Moral Considerations in Non-Human Agents” (21 to 23 March at Stanford), the most important international conference on machine ethics in 2016. Over the two days, speakers included Ron Arkin, Oliver Bendel (together with Kevin Schwegler), Elizabeth Kinne, Lily Frank, Joanna Bryson and Kathleen Richardson. A proceedings volume with the extended abstracts is available.
Fig.: After the panel discussion
The conference “Machine Ethics and Machine Law” on 18 and 19 November in Krakow is likely to be, after the symposium “Ethical and Moral Considerations in Non-Human Agents” (21 to 23 March at Stanford), the most important international conference on machine ethics in 2016. The keynotes on 19 November will be given by Ron Arkin (“Lethal Autonomous Robots and the Plight of the Noncombatant”) and Amit Pandey (“Contemporary Issues on Ethical Intelligence of a Socially Intelligent Consumer Robot”). The subsequent panel discussion deals with the question: “Should we develop robots that deceive?” Ron Arkin, Oliver Bendel, Jaap Hage and Mojca Plesnicar will discuss it. In the afternoon sessions, Oliver Bendel, Lily Frank and Kathleen Richardson, among others, will speak. On 19 November Jaap Hage will hold his keynote on the question “Under which circumstances can we hold a machine responsible for its acts?” and Ewa Lukasik hers on “Human Autonomy and the Hazards of Principle Agency in an Era of Expanding AI”. Another panel discussion and talks by Luís Moniz Pereira, Georgi Stojanov and others will follow. The entire program can be viewed on the conference website (machinelaw.philosophyinscience.com/technical-program/); the proceedings volume with the abstracts can also be found there.
Fig.: Is Obelix as friendly as he acts?
Prior to the hearing in the Parliament of the Federal Republic of Germany on 22 June 2016 from 4 – 6 pm, the contracted experts had sent their written comments on ethical and legal issues with respect to the use of robots and artificial intelligence. The video for the hearing can be accessed via www.bundestag.de/dokumente/textarchiv/2016/kw25-pa-digitale-agenda/427996. The documents of Oliver Bendel (School of Business FHNW), Eric Hilgendorf (University of Würzburg), Norbert Elkman (Fraunhofer IPK) and Ryan Calo (University of Washington) were published in July on the website of the German Bundestag. Answering the question “Apart from legal questions, for example concerning responsibility and liability, where will ethical questions, in particular, also arise with regard to the use of artificial intelligence or as a result of the aggregation of information and algorithms?” the US scientist explained: “Robots and artificial intelligence raise just as many ethical questions as legal ones. We might ask, for instance, what sorts of activities we can ethically outsource to machines. Does Germany want to be a society that relegates the use of force, the education of children, or eldercare to robots? There are also serious challenges around the use of artificial intelligence to make material decisions about citizens in terms of minimizing bias and providing for transparency and accountability – issues already recognized to an extent by the EU Data Directive.” (Website German Bundestag) All documents (most of them in German) are available via www.bundestag.de/bundestag/ausschuesse18/a23/anhoerungen/fachgespraech/428268.
Fig.: More and more hybrid beings and processes are emerging