The ACI, the world’s leading conference on animal-computer interaction, took place from 5 to 8 December 2022 in Newcastle upon Tyne. The proceedings were published in the ACM Digital Library on 30 March 2023. They include the paper „A Face Recognition System for Bears: Protection for Animals and Humans in the Alps“ by Oliver Bendel and Ali Yürekkirmaz. From the abstract: „Face recognition, in the sense of identifying people, is controversial from a legal, social, and ethical perspective. In particular, opposition has been expressed to its use in public spaces for mass surveillance purposes. Face recognition in animals, by contrast, seems to be uncontroversial from a social and ethical point of view and could even have potential for animal welfare and protection. This paper explores how face recognition for bears (understood here as brown bears) in the Alps could be implemented within a system that would help animals as well as humans. It sets out the advantages and disadvantages of wildlife cameras, ground robots, and camera drones that would be linked to artificial intelligence. Based on this, the authors make a proposal for deployment. They favour a three-stage plan that first deploys fixed cameras and then incorporates camera drones and ground robots. These are all connected to a control centre that assesses images and developments and intervenes as needed. The paper then discusses social and ethical, technical and scientific, and economic and structural perspectives. In conclusion, it considers what could happen in the future in this context.“ The proceedings can be accessed via dl.acm.org/doi/proceedings/10.1145/3565995.
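A minimal sketch of the control-centre logic outlined in the abstract could look like the following Python snippet: sightings from fixed cameras, camera drones, or ground robots are assessed centrally, and an operator is alerted only when an individual has been identified with sufficient confidence. The class names, the confidence threshold, and the example identifiers are illustrative assumptions, not part of the published concept.

```python
from dataclasses import dataclass
from enum import Enum


class Source(Enum):
    FIXED_CAMERA = "fixed camera"  # stage 1 of the proposed deployment
    CAMERA_DRONE = "camera drone"  # stage 2
    GROUND_ROBOT = "ground robot"  # stage 3


@dataclass
class Sighting:
    source: Source
    bear_id: str | None  # None if no known individual was recognised
    confidence: float    # confidence of the (assumed) recognition model


def control_centre(sighting: Sighting, threshold: float = 0.8) -> str:
    """Assess a sighting and decide whether to intervene, as the control
    centre in the proposed system would (with a human operator in the loop)."""
    if sighting.bear_id is None or sighting.confidence < threshold:
        return "log only: no reliable identification"
    return f"alert operator: bear {sighting.bear_id} sighted via {sighting.source.value}"


if __name__ == "__main__":
    print(control_centre(Sighting(Source.FIXED_CAMERA, "bear_17", 0.93)))
    print(control_centre(Sighting(Source.CAMERA_DRONE, None, 0.0)))
```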
Face recognition is a problematic technology, especially when it is used for the surveillance of humans. However, it also has potential, for example with regard to the recognition of (individual) animals. In 2021, Prof. Dr. Oliver Bendel announced the topic „ANIFACE: Animal Face Recognition“ at the Hochschule für Wirtschaft FHNW and left open whether it should focus on wolves or bears. Ali Yürekkirmaz accepted the assignment and, in his final thesis, designed a system that could identify individual bears in the Alps – without electronic collars or implanted microchips – and initiate appropriate measures. The idea is that suitable camera and communication systems are in place in certain areas. Once a bear has been identified, the system determines whether it is considered harmless or dangerous. The responsible authorities or the affected people themselves are then informed. Walkers can be warned about the recordings – and it is also technically possible to protect their privacy. An expert interview with a representative of KORA yielded important insights into wildlife monitoring and specifically the monitoring of bears, and a survey captured the attitudes of parts of the population. Building on the work of Ali Yürekkirmaz, which was submitted in January 2022, an algorithm for bears could be developed and an ANIFACE system implemented and evaluated in the Alps.
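The decision step described above – identified bear, risk assessment, notification, and privacy protection for bystanders – could be sketched in Python roughly as follows. The risk registry, the action strings, and the blurring step are illustrative assumptions based on this summary of the thesis, not an actual implementation.

```python
# Hypothetical registry mapping identified bears to a risk status.
RISK_REGISTRY = {
    "bear_001": "harmless",
    "bear_002": "dangerous",
}


def handle_identification(bear_id: str) -> list[str]:
    """Return the actions the system would take for an identified bear."""
    actions = []
    status = RISK_REGISTRY.get(bear_id, "unknown")
    if status == "dangerous":
        actions.append("notify responsible authorities")
        actions.append("warn people in the affected area")
    elif status == "harmless":
        actions.append("log sighting for monitoring")
    else:
        actions.append("forward images to experts for assessment")
    # Privacy of bystanders: human faces in the recordings could be
    # blurred before any image leaves the camera system.
    actions.append("blur human faces in stored footage")
    return actions


print(handle_identification("bear_002"))
```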
Face recognition for humans is very controversial, especially when it comes to surveillance or physiognomy. However, there are also other possible applications, for example in relation to animals. At the moment, individual animals are mainly tracked with the help of chips and transmitters. However, these disturb some of the animals. Moreover, the question is whether one should interfere with living beings in this way. In addition, animals are constantly being born that escape monitoring. The project „ANIFACE: Animal Face Recognition“ will develop a concept for a facial recognition system that can identify individual bears and wolves. These animals are spreading more and more in Switzerland and need to be monitored in order to protect them and the people (and agriculture) affected. Facial recognition can be used to identify the individual animals and also to track them if there are enough stations, which of course must be connected with each other. An interesting sidebar would be emotion recognition for animals. The system could find out how bears and wolves are feeling and then trigger certain actions. The project was applied for in July 2021 by Prof. Dr. Oliver Bendel, who has already designed and implemented several animal-friendly machines with his teams. In August, it will be decided whether he can start the work.
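Tracking individuals across connected stations, as described above, could work by comparing face embeddings computed at each station against a shared gallery of known animals. The sketch below illustrates this idea with random vectors standing in for learned embeddings; the names, the similarity threshold, and the embedding dimension are assumptions for illustration only.

```python
import numpy as np

# Gallery of known individuals: name -> reference embedding (random vectors
# as stand-ins for embeddings produced by a trained network).
rng = np.random.default_rng(0)
GALLERY = {name: rng.normal(size=128) for name in ("wolf_F07", "bear_M12")}


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(embedding: np.ndarray, threshold: float = 0.8) -> str | None:
    """Return the best-matching individual, or None if no match is confident."""
    best_name, best_score = None, -1.0
    for name, ref in GALLERY.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


# A station reports a new observation; because the stations share the
# gallery, the same individual can be followed from station to station.
observation = GALLERY["bear_M12"] + rng.normal(scale=0.05, size=128)
print(identify(observation))
```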
The „Reclaim Your Face“ alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of its goals is to establish transparency: „Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?“ (Website RYF) Another is to draw red lines: „Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.“ (Website RYF) Finally, the initiative demands respect for humans: „Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.“ (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, for example in the paper „The Uncanny Return of Physiognomy“ presented at the 2018 AAAI Spring Symposia or in the chapter „Some Ethical and Legal Issues of FRT“ published in the book „Face Recognition Technology“ in 2020. More information at reclaimyourface.eu.
The book chapter „The BESTBOT Project“ by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the „Handbuch Maschinenethik“, edited by Oliver Bendel. From the abstract: „The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.“ (Abstract) The BESTBOT is an immoral machine in a moral one – or a moral machine in an immoral one, depending on the perspective. The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
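The core idea summarised in the abstract – combining text analysis with facial emotion recognition and reacting to a detected problem – could be sketched as follows. The keyword list, the emotion labels, and the responses are illustrative assumptions, not the project’s actual rules or code.

```python
# Hypothetical stand-in for the chatbot's text analysis.
NEGATIVE_KEYWORDS = {"sad", "hopeless", "alone", "hurt"}


def analyse_text(message: str) -> str:
    words = set(message.lower().split())
    return "problem" if words & NEGATIVE_KEYWORDS else "neutral"


def react(message: str, facial_emotion: str) -> str:
    """Combine both channels and respond; escalate if either signals distress."""
    if analyse_text(message) == "problem":
        # Users with serious problems are pointed to human help.
        return "You seem to be troubled. Would you like the number of a help line?"
    if facial_emotion in {"sad", "fearful"}:
        # Discrepancy between words and face: ask a careful follow-up question.
        return "You say you are fine, but you look sad. Do you want to talk about it?"
    return "Glad to hear it. How can I help you today?"


print(react("I feel alone and hopeless", facial_emotion="neutral"))
```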
Face recognition is the automated recognition of a face or the automated identification, measuring and description of features of a face. In the 21st century, there are increasing attempts to link it to the pseudoscience of physiognomy, which has its origins in ancient times. From the appearance of persons, conclusions are drawn about their inner self, and attempts are made to identify character traits, personality traits and temperament, or political and sexual orientation. Biometrics plays a role in this context. It was founded in the eighteenth century, when physiognomy, under the lead of Johann Caspar Lavater, had its dubious climax. In the paper „The Uncanny Return of Physiognomy“, the basic principles of this topic are elaborated; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition are subjected to a fundamental critique in this context, including the examples mentioned above. Oliver Bendel presented his paper on 27 March 2018 at Stanford University („AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“, AAAI 2018 Spring Symposium Series). The entire volume can be downloaded via AAAI.
A new entry by Oliver Bendel in Gabler’s Wirtschaftslexikon explains what face recognition is. „Face recognition is the automated recognition of a face in the environment or in an image (one that already exists or is created for the purpose of face recognition), or the automated recognition, measuring and description of features of a face in order to determine the identity of a person … or their gender, health, origin, age, sexual orientation or emotional state …“ The perspective of ethics is also taken: „Information ethics asks about the violation of informational autonomy, business ethics about the opportunities and risks of using face recognition in connection with consulting and advertising. To protect themselves, individuals can modify their appearance or manipulate the systems, which information ethics in turn would treat under the term of informational self-defence.“ The entry was published on 5 October 2017 and can be accessed via wirtschaftslexikon.gabler.de/Definition/gesichtserkennungssoftware.html.