Tags: Moral Machines

The LADYBIRD Project

The LADYBIRD project starts in March 2017. More and more autonomous and semi-autonomous machines make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral machines. In this context, Prof. Dr. Oliver Bendel has developed various design studies, thereby submitting proposals for the appearance and functions of such machines. He has focused on animal-friendly machines that make morally sound decisions and on chatbots with specific capabilities. The project is about a service robot that is supposed to spare beneficial insects, a vacuum cleaner called LADYBIRD. An annotated decision tree modelled for this purpose and a set of sensors will be used. Three students will work on the practice project at the School of Business FHNW from March to October. Since 2013, the principal, Oliver Bendel, has published several articles on LADYBIRD and other animal-friendly machines, e.g., "Ich bremse auch für Tiere" ("I also brake for animals") in the Swiss magazine inside.it. The robot will be the third prototype in the context of machine ethics at the School of Business FHNW; the first was the GOODBOT (2013), the second the LIEBOT (2016). All these machines can be described as simple moral machines.
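The post does not spell out the modelling in detail; as a rough illustration, an annotated decision tree for LADYBIRD could be sketched in Python as follows. All class names, sensor readings and actions are hypothetical and not taken from the project.

from dataclasses import dataclass

@dataclass
class Node:
    # A decision node; the annotation records the moral reason behind the branch.
    question: str
    yes: object  # another Node or an action string
    no: object   # another Node or an action string
    annotation: str

tree = Node(
    question="insect_detected",
    annotation="Detection via image/pattern sensors; errs on the side of caution.",
    yes=Node(
        question="is_ladybird",
        annotation="Ladybirds are beneficial insects and should be spared.",
        yes="PAUSE_SUCTION_AND_EVADE",
        no="CONTINUE_CLEANING",
    ),
    no="CONTINUE_CLEANING",
)

def decide(node, readings):
    # Walk the tree with boolean sensor readings and return an action string.
    while isinstance(node, Node):
        node = node.yes if readings.get(node.question, False) else node.no
    return node

print(decide(tree, {"insect_detected": True, "is_ladybird": True}))
# -> PAUSE_SUCTION_AND_EVADE

In this sketch, the morally relevant branch is explicit: the robot only continues vacuuming after it has ruled out that the detected object is a beneficial insect.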

Fig.: The robot should spare the ladybird

Considerations in Non-Human Agents

The proceedings of the 2016 AAAI Spring Symposia were published in March 2016 ("The 2016 AAAI Spring Symposium Series: Technical Reports"). The symposium "Ethical and Moral Considerations in Non-Human Agents" was dedicated to the discipline of machine ethics. Ron Arkin (Georgia Institute of Technology), Luís Moniz Pereira (Universidade Nova de Lisboa), Peter Asaro (New School for Public Engagement, New York) and Oliver Bendel (School of Business FHNW) spoke about moral and immoral machines. The contribution "Annotated Decision Trees for Simple Moral Machines" (Oliver Bendel) can be found on pages 195 to 201. The abstract reads: "Autonomization often follows after the automization on which it is based. More and more machines have to make decisions with moral implications. Machine ethics, which can be seen as an equivalent of human ethics, analyses the chances and limits of moral machines. So far, decision trees have not been commonly used for modelling moral machines. This article proposes an approach for creating annotated decision trees, and specifies their central components. The focus is on simple moral machines. The chances of such models are illustrated with the example of a self-driving car that is friendly to humans and animals. Finally the advantages and disadvantages are discussed and conclusions are drawn." The proceedings can be ordered via www.aaai.org.
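The "annotations" in such a tree can be read as short moral justifications attached to individual branches. A minimal sketch of what one annotated branch for the human- and animal-friendly car mentioned in the abstract might record could look as follows; the field names and the concrete rule are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class AnnotatedBranch:
    condition: str      # observation that triggers the branch
    action: str         # what the car is supposed to do
    justification: str  # the moral reason recorded in the annotation
    assumptions: list = field(default_factory=list)

brake_for_animal = AnnotatedBranch(
    condition="small animal detected on the road ahead",
    action="brake, unless this endangers passengers or following traffic",
    justification="Spare animals whenever humans are not put at risk.",
    assumptions=["species and size estimated by sensors",
                 "distance to the following vehicle is known"],
)

print(brake_for_animal.justification)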

Fig.: Oliver Bendel, Cindy Mason, Luís Moniz Pereira and others in Stanford