The “Artificial Intelligence, Animals, and the Law” conference will take place November 7–9, 2025, at The George Washington University Law School. Organized by Kathy Hessler, Jamie McLaughlin, and Joan Schaffner, the event brings together attorneys and experts to examine how AI intersects with animal law and to discuss its implications for humans, animals, and the environment. On November 8, the panel “Applications and Considerations When Using AI for Animals” will feature Oliver Bendel, Yip Fai Tse, and Karol Orzechowski, with Rachel Pepper as moderator. This session will explore practical uses of AI for animals, addressing both opportunities and challenges in applying emerging technologies to questions of welfare, ethics, and law. With panels ranging from ethical foundations and regulatory issues to the role of AI in research and its broader impact on the planet, the conference is designed to provoke meaningful dialogue and foster new insights at the intersection of artificial intelligence and animal law. The conference flyer can be downloaded here.
Fig.: The conference is on AI, animals, and the law (Image: ChatGPT/4o Image)
On September 4, 2025, the Association for the Advancement of Artificial Intelligence (AAAI) announced the continuation of the AAAI Spring Symposium Series. The symposia will be held from April 7 to 9, 2026, at the Hyatt Regency San Francisco Airport in Burlingame, California. The call for proposals for the symposium series is available on its website. According to the organizers, proposals are due October 24, 2025, and early submissions are encouraged. “The Spring Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community. The two and one-half day format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.” (AAAI website) As was the case this year, the Spring Symposium Series will once again not be held on the Stanford University campus. For many years, the History Corner served as the traditional venue for the event. Efforts to secure an alternative university location in the Bay Area have been unsuccessful. AAAI should seriously consider returning to Stanford in 2027. Only then can the Spring Symposium Series regain the atmosphere and significance it once enjoyed.
Some thinkers – including Weber-Guskar (2021) – have introduced the idea of “Emotionalized Artificial Intelligence”, describing AI systems designed to elicit emotions in humans, detect and simulate emotional responses, and form affective ties with their users. That concept opens the door to understanding how people might form romantic or erotic relationships with AI agents and robots. Recent studies show these (probably one-sided) relationships are no longer speculative. In a thorough paper, Bertoni, Klaes, and Pilacinski (2024) review the emerging field of “intimate companion robots”, encouraging more research while exploring possibilities from combating loneliness to replacing vulnerable sex workers. The radical scope of suggested uses makes it obvious how important the ethics of human-robot interaction will become. Research by Ebner and Szczuka (2025) explores the romantic and sexual aspects of chatbot communication and shows how romantic fantasies shared with chatbots can elicit feelings of closeness that mimic the effects of human partners. There are dangers to these parallels, however, since undesirable aspects of human-human intimate interaction can be replicated as well. Chu et al. (2025) reveal that conversational AI (e.g., Replika) can evoke emotional synchrony, but also patterns resembling toxic relationships or self-harm in vulnerable users. The breadth of these studies shows that emotionalized AI, robots, and other human-oriented machines and programs are already a reality, and that romantic and sexual engagement with artificial agents is a pressing issue to debate within the ethics of human-robot interaction (authors: Grzegorz Roguszczak, Phan Thanh Phuong Le, Karahan Senzümrüt, Nesa Baruti Zajmi, and Zuzanna Bakuniak).
The Research Topic “Exploring human-likeness in AI: From perception to ethics and interaction dynamics”, hosted by Frontiers in Cognition, invites submissions on how human-like features in robots and AI systems influence user perception, trust, interaction, and ethical considerations. As AI becomes more integrated into society, anthropomorphic design raises pressing questions: Do human-like traits improve communication and acceptance, or do they lead to unrealistic expectations? What ethical implications arise when machines simulate empathy or emotion? This interdisciplinary call welcomes contributions from fields such as psychology, engineering, philosophy, and education. Submissions may include empirical research, theoretical analysis, reviews, or case studies that explore how human-likeness shapes the way we engage with AI. The deadline for manuscript summaries is September 22, 2025; full manuscripts are due by January 10, 2026. Articles will undergo peer review and are subject to publication fees upon acceptance. Topic editors are Dr. Katharina Kühne (University of Potsdam, Germany) and Prof. Dr. Roger K. Moore (The University of Sheffield, United Kingdom). For full details and submission guidelines, visit: www.frontiersin.org/research-topics/72370/exploring-human-likeness-in-ai-from-perception-to-ethics-and-interaction-dynamics.
The Association for the Advancement of Artificial Intelligence’s (AAAI) 2025 Spring Symposium Series took place from March 31 to April 2 in Burlingame, California. The symposium series fosters an intimate environment that enables emerging AI communities to engage in workshop-style discussions. Topics change each year to ensure a dynamic venue that stays current with the evolving landscape of AI research and applications. This year’s program included eight symposia: AI for Engineering and Scientific Discoveries; AI for Health Symposium: Leveraging Artificial Intelligence to Revolutionize Healthcare; Current and Future Varieties of Human-AI Collaboration; GenAI@Edge: Empowering Generative AI at the Edge; Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science; Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI (AAAI-MAKE); Symposium on Child-AI Interaction in the Era of Foundation Models; Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation. On May 28, 2025, the “Proceedings of the AAAI 2025 Spring Symposium Series” (Vol. 5 No. 1) were published. They are available at ojs.aaai.org/index.php/AAAI-SS/issue/view/654.
The article “Image Synthesis from an Ethical Perspective” by Prof. Dr. Oliver Bendel was published as an electronic version in the journal AI & SOCIETY on September 27, 2023. It addresses the ethical implications of image generators, i.e., a specific form of generative AI. It can now also be found in the current print edition from February 2025 (Volume 40, Issue 2). From the abstract: “Generative AI has gained a lot of attention in society, business, and science. This trend has increased since 2018, and the big breakthrough came in 2022. In particular, AI-based text and image generators are now widely used. This raises a variety of ethical issues. The present paper first gives an introduction to generative AI and then to applied ethics in this context. Three specific image generators are presented: DALL-E 2, Stable Diffusion, and Midjourney. The author goes into technical details and basic principles, and compares their similarities and differences. This is followed by an ethical discussion. The paper addresses not only risks, but opportunities for generative AI. A summary with an outlook rounds off the article.” A lot has happened with image generators since 2023. OpenAI’s newest image generator now also allows photorealistic images, and it has fewer problems with average-looking people – DALL-E 2 and 3 favored beauty over mediocrity and ugliness. The article can be downloaded from link.springer.com/article/10.1007/s00146-023-01780-4.
Fig.: An average-looking woman (Photo: OpenAI Image Generator)
“Inclusive AI (German: ‘inklusive KI’) aims, on the one hand, to combat phenomena of artificial intelligence (AI) with an excluding character, such as bias, hallucination, hate speech, and deepfakes, and, on the other hand, to strengthen applications with an including character and thereby help those affected. The first meaning is linked to further terms such as ‘Responsible AI’, ‘Explainable AI’, and ‘Trustworthy AI’, the second to terms such as ‘AI for Good’ or ‘AI for Wellbeing’. Taken together, one speaks of Ethical AI, although this is also a marketing term.” With these words (translated from German) begins an article by Prof. Dr. Oliver Bendel that was published in the Gabler Wirtschaftslexikon on March 5, 2025. At the end of the first section it is noted: “Inclusive AI can go hand in hand with generative AI as well as with other forms of artificial intelligence.” The second section gives examples of the movement; the third takes the perspectives of information ethics, AI ethics, and business ethics. The article can be accessed at wirtschaftslexikon.gabler.de/definition/inclusive-ai-171870.
The paper “Revisiting the Trolley Problem for AI: Biases and Stereotypes in Large Language Models and their Impact on Ethical Decision-Making” by Şahan Hatemo, Christof Weickhardt, Luca Gisler (FHNW School of Computer Science), and Oliver Bendel (FHNW School of Business) was accepted at the AAAI 2025 Spring Symposium “Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science”. A year ago, Şahan Hatemo had already dedicated himself to the topic “ETHICAL DECISION MAKING OF AI: An Investigation Using a Stereotyped Persona Approach in the Trolley Problem” in a so-called mini-challenge in the Data Science degree program. His supervisor, Oliver Bendel, had told the other scientists about the idea at the AAAI 2024 Spring Symposium “Impact of GenAI on Social and Individual Well-being” at Stanford University. This led to a lively discussion among the participants. The student recruited two fellow students, Christof Weickhardt and Luca Gisler, and worked on the topic in a much more complex form in a so-called Challenge X. This time, three different open-source large language models were applied to the trolley problem. In each case, personas were created with nationality, gender, and age. In addition, the data was compared with that of the MIT Moral Machine project. Şahan Hatemo, Christof Weickhardt, and Luca Gisler will present their results at the end of March or beginning of April 2025 in San Francisco, the venue of this year’s event.
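The study’s setup – persona-conditioned prompts sent to several LLMs, with the answers tallied per persona – can be illustrated with a minimal sketch. The prompt wording, persona fields, and the `dummy_model` stand-in are illustrative assumptions, not the authors’ actual code; in the study, three open-source large language models were queried instead.

```python
from collections import Counter

def build_prompt(nationality, gender, age):
    """Compose a trolley-problem prompt for a stereotyped persona (illustrative wording)."""
    return (
        f"You are a {age}-year-old {gender} from {nationality}. "
        "A runaway trolley will kill five people unless you pull a lever, "
        "diverting it onto a track where it kills one person. "
        "Answer with exactly one word: 'pull' or 'refrain'."
    )

def tally_decisions(model, personas, runs=3):
    """Query a model callable for each persona and count its one-word answers."""
    results = {}
    for p in personas:
        prompt = build_prompt(**p)
        answers = [model(prompt).strip().lower() for _ in range(runs)]
        results[tuple(p.values())] = Counter(answers)
    return results

# Stand-in for an actual LLM call, so the sketch runs without any API.
def dummy_model(prompt):
    return "pull"

personas = [
    {"nationality": "Switzerland", "gender": "woman", "age": 30},
    {"nationality": "Japan", "gender": "man", "age": 65},
]
print(tally_decisions(dummy_model, personas))
```

Comparing the resulting answer distributions across personas (and against the MIT Moral Machine data) is then a matter of ordinary statistics on the tallies.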
Music generators are audio generators that generate music. For this purpose, they are trained on large amounts of data. These AI systems can compose pieces of music and produce music, imitating particular styles and voices. They are also able to remix songs. Numerous music generators are available, as prototypes and products, such as Soundraw, Boomy, Jukebox, AIVA, and Mubert. With Audimee, Suno AI, and Kits.ai, users can create and train their own voices. In April 2023, a new music clip by the two Canadian singers Drake and The Weeknd appeared on TikTok. According to Golem, it was published by the channel ghostwriter977 and, according to the user, was created with the help of artificial intelligence. Since then, there have been several further friendly and hostile takeovers of styles and voices. Bernd Lechler of Deutschlandfunk spoke about this topic, but also about the concerts of Miku Hatsune and ABBA’s Voyage show, with Prof. Dr. Oliver Bendel, a philosopher of technology and information systems specialist from Zurich. The result is the program “Corso Spezial: Aura – Hat KI-Musik eine Seele?” (“Does AI Music Have a Soul?”), which was broadcast on October 3, 2024. It can be listened to on the DLF website and on Spotify.
Fig.: A depiction of Miku Hatsune (Image: Ideogram)
On 27 August 2024, AAAI announced the continuation of the AAAI Spring Symposium Series, to be held March 31 – April 2, 2025, at the San Francisco Airport Marriott Waterfront in Burlingame, CA. The Call for Proposals for the Spring Symposium Series is available on its website. According to the organizers, proposals are due October 4, 2024, and early submissions are encouraged. “The Spring Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community.” (AAAI website) The traditional conference will therefore not be held at Stanford University in 2025 – just as it was not in 2023. It returned there in 2024, to the delight of all participants. Before that, the Covid-19 pandemic had hit the conference hard. AAAI can only be advised to return to Stanford in 2026. Only there will the conference live up to its promise.
Tinder has officially launched its „Photo Selector“ feature, which uses AI to help users choose the best photos for their dating profiles. This was reported by TechCrunch in the article „Tinder’s AI Photo Selector automatically picks the best photos for your dating profile“ by Lauren Forristal. The feature, now available to all users in the U.S. and set to roll out internationally later this summer, leverages facial detection technology. Users upload a selfie, and the AI creates a unique facial geometry to identify their face and select photos from their camera roll. The feature curates a collection of 10 selfies it believes will perform well based on Tinder’s insights on good profile images, focusing on aspects like lighting and composition. Tinder’s AI is trained on a diverse dataset to ensure inclusivity and accuracy, aligning with the company’s Diversity, Equity, and Inclusion (DEI) standards. It also filters out photos that violate guidelines, such as nudes. The goal is to save users time and reduce uncertainty when choosing profile pictures. A recent Tinder survey revealed that 68% of participants found an AI photo selection feature helpful, and 52% had trouble selecting profile images. The TechCrunch article was published on 17 July 2024 and is available at techcrunch.com/2024/07/17/tinder-ai-photo-selection-feature-launches/.
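The pipeline described in the article – match candidate photos against the selfie’s facial geometry, filter out guideline violations, score the rest, and return the top ten – can be sketched roughly as follows. All function names, the cosine-similarity matching, the threshold, and the scoring hook are illustrative assumptions; Tinder’s actual system is not public.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two plain-list embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def curate_profile_photos(selfie_embedding, camera_roll, embed, is_allowed,
                          score, similarity_threshold=0.8, top_n=10):
    """Pick the top_n photos that show the user's face and pass content rules."""
    candidates = []
    for photo in camera_roll:
        # Keep only photos whose detected face matches the selfie embedding.
        if cosine_similarity(selfie_embedding, embed(photo)) < similarity_threshold:
            continue
        if not is_allowed(photo):  # stand-in for the guideline filter (e.g., nudity)
            continue
        candidates.append((score(photo), photo))  # e.g., lighting/composition score
    candidates.sort(key=lambda t: t[0], reverse=True)
    return [photo for _, photo in candidates[:top_n]]
```

Here `embed`, `is_allowed`, and `score` are injected callables, so the face-matching model and the quality heuristic can be swapped without changing the curation logic.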
The deadline for the International Conference on Social Robotics 2024 (ICSR 2024) is approaching. Experts in social robotics and related fields have until July 5 to submit their full papers. The prestigious event was last held in Florence (2022) and Qatar (2023). Now it enters its next round. The 16th edition will bring together researchers and practitioners working on human-robot interaction and the integration of social robots into our society. The title of the conference includes the addition “AI”. This is a clarification and demarcation that has to do with the fact that there will be two further formats with the name ICSR in 2024. ICSR’24 (ICSR + AI) will take place as a face-to-face conference in Odense, Denmark, from 23 to 26 October 2024. The theme of this year’s conference is “Empowering Humanity: The role of social and collaborative robotics in shaping our future”. The topics of the Call for Papers include “Collaborative robots in service applications (in construction, agriculture, etc.)”, “Human-robot interaction and collaboration”, “Affective and cognitive sciences for socially interactive robots”, and “Context awareness, expectation, and intention understanding”. The general chairs are Oskar Palinko, University of Southern Denmark, and Leon Bodenhagen, University of Southern Denmark. More information is available at icsr2024.dk.
“DuckDuckGo AI Chat is an anonymous way to access popular AI chatbots – currently, Open AI’s GPT 3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open-source models (Meta Llama 3 and Mistral’s Mixtral 8x7B), with more to come. This optional feature is free to use within a daily limit, and can easily be switched off.” (DuckDuckGo, 6 June 2024) This was reported by the DuckDuckGo blog on June 6, 2024. Initial tests have shown that the responses come at high speed. This is an excellent way of testing and comparing different language models one after the other. All this is possible with a high level of data protection: “Chats are private, anonymized by us, and are not used for any AI model training.” (DuckDuckGo, 6 June 2024) It would be desirable for this service to be offered free of charge and without limitation. But that is still a long way off: DuckDuckGo is currently exploring the possibility of “a paid plan for access to higher daily usage limits and more advanced models” (DuckDuckGo, 6 June 2024). You can try out the new tool at duck.ai or duckduckgo.com/chat.
Fig.: DuckDuckGo AI Chat has just started (Image: Ideogram)
The Association for the Advancement of Artificial Intelligence’s (AAAI) 2024 Spring Symposium Series took place from March 25 to 27 at Stanford University in Stanford, California. The symposium series fosters an intimate environment that enables emerging AI communities to engage in workshop-style discussions. Topics change each year to ensure a dynamic venue that stays current with the evolving landscape of AI research and applications. This year’s program included eight symposia: Bi-directionality in Human-AI Collaborative Systems; Clinical Foundation Models Symposium; Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge (AAAI-MAKE 2024); Federated Learning on the Edge; Impact of GenAI on Social and Individual Well-being; Increasing Diversity in AI Education and Research; Symposium on Human-Like Learning; User-Aligned Assessment of Adaptive AI Systems. On May 26, 2024, the “Proceedings of the 2024 AAAI Spring Symposium Series” (Vol. 5 No. 1) were published. They are available at ojs.aaai.org/index.php/AAAI-SS/issue/view/604.
Conversational agents have been a research focus of Prof. Dr. Oliver Bendel for a quarter of a century. He devoted his doctoral thesis at the University of St. Gallen, written from late 1999 to late 2002, to them – more precisely, to pedagogical agents, which today would probably be called virtual learning companions. He has been a professor at the FHNW School of Business since 2009. From 2012 onward, he developed mainly chatbots and voice assistants in the context of machine ethics, among them GOODBOT, LIEBOT, BESTBOT, and SPACE THEA. In 2022, the information systems specialist and philosopher of technology turned to dead and endangered languages. Under his supervision, Karim N’diaye developed the chatbot @ve for Latin, and Dalil Jabou developed the chatbot @llegra, extended with voice output, for Vallader, an idiom of Rhaeto-Romanic. He is currently testing the reach of GPTs – “custom versions of ChatGPT”, as OpenAI calls them – for endangered languages such as Irish (Irish Gaelic), Maori (officially written “Māori”), and Basque. According to ChatGPT, there is a relatively large amount of training material for these languages. On May 9, 2024 – one week after Irish Girl – a first version of Maori Girl was created. At first glance, it seems to have a good command of the Polynesian language of the indigenous people of New Zealand. The answers can be translated into English or German. Maori Girl is available in the GPT Store and will be further improved in the coming weeks.
Fig.: Maori Girl writes and speaks Maori (Image: Ideogram)
Conversational agents have been a research focus of Prof. Dr. Oliver Bendel for a quarter of a century. He devoted his doctoral thesis at the University of St. Gallen, written from late 1999 to late 2002, to them – more precisely, to pedagogical agents, which today would probably be called virtual learning companions. He has been a professor at the FHNW School of Business since 2009. From 2012 onward, he developed mainly chatbots and voice assistants in the context of machine ethics, among them GOODBOT, LIEBOT, BESTBOT, and SPACE THEA. In 2022, the information systems specialist and philosopher of technology turned to dead and endangered languages. Under his supervision, Karim N’diaye developed the chatbot @ve for Latin, and Dalil Jabou developed the chatbot @llegra, extended with voice output, for Vallader, an idiom of Rhaeto-Romanic. He is currently testing the reach of GPTs – “custom versions of ChatGPT”, as OpenAI calls them – for endangered languages such as Irish (Irish Gaelic), Maori, and Basque. According to ChatGPT, there is a relatively large amount of training material for these languages. On May 3, 2024, a first version of Irish Girl was created. At first glance, it seems to have a good command of this Goidelic language from the Celtic family. The answers can be translated into English or German. Afterwards, it may be necessary to ask it to switch back to Irish. Irish Girl is available in the GPT Store and will be further improved in the coming weeks.
Fig.: Chatbots can be used to promote endangered languages (Image: Ideogram)
“AI Unplugged – an open exchange with AI experts: specialists from Google, IKEA, SRG, and Microsoft as well as FHNW lecturers give insights into their work with AI and answer questions about the future.” (FHNW newsletter) This was announced by FHNW in its newsletter on April 30, 2024. Among the experts is Prof. Dr. Oliver Bendel, who contributed the title of the series. It is organized by Tania Welter, FHNW’s AI officer. Like the others, the information systems specialist and philosopher of technology will speak for five minutes about his understanding of AI. His definition reads: “The term ‘artificial intelligence’ … stands for a distinct scientific field of computer science that deals with human thinking, decision-making, and problem-solving behavior in order to replicate and emulate it by computer-based methods.” He will then speak for ten minutes about what AI means to him in everyday life and at work. He will certainly touch on moral machines, apps with generative AI for blind and visually impaired people, chatbots for endangered languages, and AI art. The events will be held several times from May to September 2024 and are open to members of the university.
In the spring semester of 2024, Prof. Dr. Oliver Bendel repeatedly integrated virtual tutors into his courses at FHNW. They are “custom versions of ChatGPT”, so-called GPTs. Available were Social Robotics Girl for the elective modules on social robotics at the FHNW School of Business, created back in November 2023, and Digital Ethics Girl, from February 2024, for the compulsory modules “Ethik und Recht”, “Recht und Ethik”, and “Ethics and Law” across several schools and degree programs. The virtual tutors have the “world knowledge” of GPT-4, but also the specific expertise of the philosopher of technology and information systems specialist from Zurich. At the end of the geomatics course, Digital Ethics Girl was asked: “I am sitting here at a university with 26 talented, motivated students. They are looking forward to a group assignment. They are to identify ethical problems, challenges, and questions relating to the discipline of geomatics and give presentations on them. It would be very kind if you suggested eight topics for them. Above all, information ethics should play a role as a perspective. But robot ethics and machine ethics may also be included.” The virtual tutor suggested topics such as “data protection and geographic information systems (GIS)”, “autonomous surveying robots”, “environmental impacts of geomatics technologies”, “AI-supported analysis of geographic data”, and “implications of augmented reality in geomatics”. Two groups each devoted themselves to the “ethics of remote sensing”. For the group work itself, tools such as ChatGPT, Copilot, and D-ID were used. Social Robotics Girl will be presented in the Future Lab of Learntec on June 6, 2024.
Fig.: Digital Ethics Girl in class (Image: DALL-E 3)
The German national AI competition (Bundeswettbewerb KI), under the patronage of Winfried Kretschmann, Minister President of the state of Baden-Württemberg, is entering a new round. The website advertises: “Your ideas are wanted! Change the world with artificial intelligence and develop your own AI project. Implement your idea using the methods of machine learning. Let last year’s projects inspire you.” (BWKI website) According to the initiators and organizers, the competition is aimed at students of secondary schools. Participation in the first year after graduation is also possible. Materials and interviews on interesting topics and disciplines are available on Instagram. These include animal-machine interaction (along with approaches of machine ethics), which is explained by Prof. Dr. Oliver Bendel, for example in the posts “Wie sollen sich Maschinen gegenüber Tieren verhalten” (“How should machines behave toward animals”, March 14, 2024), “Können Tiere und Maschinen Freunde werden?” (“Can animals and machines become friends?”, March 17, 2024), and “Schützt KI Igel vor dem Rasenmähertod?” (“Does AI protect hedgehogs from death by lawnmower?”, March 21, 2024). The teaser on the website reads: “Register yourself, your team, and your idea by June 2, 2024. You can then complete your project by September 15, 2024. Who is in?” (BWKI website) Further information at www.bw-ki.de.
Fig.: One possible perspective is animal-machine interaction (Image: DALL-E 3)
On the second day of the AAAI Spring Symposia, one could already get the impression that the traditional conference had returned to its former greatness. The Covid pandemic had damaged it. In 2023, there were still too few participants for some symposia. Many stayed home and watched the sessions online. It was difficult for everyone involved. But the problems had already started in 2019. At that time, the Association for the Advancement of Artificial Intelligence had decided to no longer publish the proceedings centrally, but to leave this to the individual organizers. Some of them were negligent or disinterested and left the scientists alone with their demands. In 2024, the association took over the publication process again, which led to very positive reactions in the community. Last but not least, the boost from generative AI helped, of course. In 2024, many happy and exuberant AI experts can be seen at Stanford University, enjoying mild temperatures and plenty of sunshine.
Douglas “Doug” Bruce Lenat was an American AI researcher and the founder and CEO of Cycorp, Inc. in Austin, Texas. He died in 2023 at the age of 72. Not long before, he had participated in the AAAI Spring Symposia. In a touching speech on the afternoon of 26 March 2024 at the symposium “Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge (AAAI-MAKE 2024)”, Edward “Ed” Albert Feigenbaum – the famous American computer scientist who is considered the father of expert systems – remembered his friend and colleague. Andreas Martin from the FHNW School of Business created the framework in a sensitive manner. He also played a video that had never been posted online and that most of the audience had never seen before. It showed Doug Lenat giving an online lecture. Back in 1983, Lenat had come to the opinion that heuristics lead to a dead end, because a program mostly learns new things that are similar to what it already knows. His conclusion was that the first step was to make its knowledge base as large as possible. This led to the Cyc project, which aimed to capture the general knowledge of an average adult. Both Ed Feigenbaum and Doug Lenat proved to be critics of large language models at the event. More information about the AAAI 2024 Spring Symposia is available here.
Fig.: Edward Feigenbaum during his speech at Stanford University
The AAAI 2024 Spring Symposium Series will take place from March 25 to 27, 2024, at Stanford University. There are eight symposia in total. One of them is “Impact of GenAI on Social and Individual Well-being”. It will be hosted by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The announcement text states: “Generative AI (GenAI) presents significant opportunities and challenges in the realms of individual and societal well-being. While its benefits in fields like healthcare, arts, and education are vast, it also necessitates careful consideration of ethics, privacy, fairness, and security.” The first version of the program was published in mid-March. Minor changes may still be made. On the afternoon of the first day, the following presentations will take place within the topic “Generative AI: Well-being and Learning”: “How Can Generative AI Enhance the Well-being of Blind?” (Oliver Bendel), “How Can GenAI Foster Well-being in Self-regulated Learning?” (Stefanie Hauske and Oliver Bendel), “Sleep Stage Estimation by Introduction of Sleep Domain Knowledge to AI: Towards Personalized Sleep Counseling System with GenAI” (Iko Nakari and Keiki Takadama), and “Personalized Image Generation Through Swiping” (Yuto Nakashima). Further information on this symposium is available here.