Should Robots Manipulate the Customer?

Under the supervision of Prof. Dr. Oliver Bendel, Liliana Margarida Dos Santos Alves wrote her master's thesis "Manipulation by humanoid consulting and sales hardware robots from an ethical perspective" at the School of Business FHNW. The background is that social robots and service robots like Pepper and Paul have been doing their job in retail for years. In principle, they can use the same sales techniques as salespeople, including manipulative ones. The young scientist submitted her comprehensive study in June 2021.

According to the abstract, the main research question (RQ) is "to determine whether it is ethical to intentionally program humanoid consulting and sales hardware robots with manipulation techniques to influence the customer's purchase decision in retail stores" (Alves 2021). To answer this central question, five sub-questions (SQ) were defined and answered on the basis of an extensive literature review and a survey conducted with potential customers of all ages: "SQ1: How can humanoid consulting and selling robots manipulate customers in the retail store? SQ2: Have ethical guidelines and policies, to which developers and users must adhere, been established already to prevent the manipulation of customers' purchasing decisions by humanoid robots in the retail sector? SQ3: Have ethical guidelines and policies already been established regarding who must perform the final inspection of the humanoid robot before it is put into operation? SQ4: How do potential retail customers react, think and feel when being confronted with a manipulative humanoid consultant and sales robot in a retail store? SQ5: Do potential customers accept a manipulative and humanoid consultant and sales robot in the retail store?" (Alves 2021) To answer the main research question (RQ), the sub-questions SQ1 to SQ5 were worked through step by step.

In the end, the author comes to the conclusion "that it is neither ethical for software developers to program robots with manipulative content nor is it ethical for companies to actively use these kinds of robots in retail stores to systematically and extensively manipulate customers negatively in order to obtain an advantage". "Business is about reciprocity, and it is not acceptable to systematically deceive, exploit and manipulate customers to attain any kind of benefit." (Alves 2021)

The book "Soziale Roboter", which will be published in September or October 2021, contains an article on social robots in retail by Prof. Dr. Oliver Bendel. In it, he also mentions the study by Liliana Margarida Dos Santos Alves.

Fig.: Should robots act like salespeople?

Deceptive Machines

"AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players …" (The Verge, 11 July 2019) According to the article, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: "A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand." (Wall Street Journal, 11 July 2019) Of course, bluffing does not have to be equated with cheating, but interesting scientific questions arise in this context.

At the conference "Machine Ethics and Machine Law" in Krakow in 2016, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question "Should we develop robots that deceive?" on a panel. Ron Arkin (who works in military research) and Oliver Bendel (who does not) both came to the conclusion that we should, but for very different reasons. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed for the sake of an important gain in knowledge, but argues for regulating the areas of application (for example, dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled "Superhuman AI for multiplayer poker".

Fig.: Pluribus is good at bluffing