Inclusive AI and Inclusive Robotics at CES 2026

At CES 2026, some of the most compelling examples of Inclusive AI and Inclusive Robotics came not from consumer gadgets but from European assistive technologies designed to expand human autonomy, as FAZ and other media outlets reported in January 2026. These innovations show how AI-driven perception and robotics can be centered on accessibility – and still scale beyond niche use cases.

The Romanian startup Dotlumen exemplifies Inclusive AI with its “.lumen Glasses for the Blind,” a wearable system that replaces a guide dog with real-time, on-device intelligence. Using multiple cameras, sensors, and GPU-based computer vision, the glasses interpret sidewalks, obstacles, and spatial structures and translate them into intuitive haptic signals. The company calls this approach “Pedestrian Autonomous Driving” – a concept that directly bridges human navigation and mobile robotics. Notably, the same algorithms are now being adapted for autonomous delivery robots, underscoring the overlap between assistive AI and broader robotic autonomy.

A complementary approach comes from the French company Artha (Seehaptic), whose haptic belt uses AI-powered scene understanding to convert visual space into tactile feedback. By shifting navigation cues from sound to touch, the system reduces cognitive load and leverages sensory substitution – an inclusive design principle with implications for human-machine interfaces in robotics.

Together, these technologies illustrate a European model of Inclusive AI: privacy-preserving, embodied, and focused on real-world autonomy. What begins as assistive technology increasingly becomes a foundation for the next generation of intelligent, human-centered robotics.
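The reports describe these systems only at a high level, but the core idea of sensory substitution – encoding direction by which actuator vibrates and proximity by how strongly it vibrates – can be shown with a minimal sketch. The following Python snippet is purely illustrative and is not Dotlumen’s or Artha’s actual software; the Obstacle format, the actuator count, and the range cutoff are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """A detected obstacle, e.g. from an on-device vision model (hypothetical format)."""
    bearing_deg: float   # angle relative to the wearer's heading, -90 (left) .. +90 (right)
    distance_m: float    # estimated distance in meters

NUM_ACTUATORS = 8        # vibration motors spread across a belt (assumed layout)
MAX_RANGE_M = 5.0        # beyond this distance, obstacles produce no feedback (assumed)

def haptic_frame(obstacles: list[Obstacle]) -> list[float]:
    """Map obstacles to per-actuator vibration intensities in [0, 1].

    Sensory substitution in miniature: direction is encoded by *which*
    actuator vibrates, urgency (proximity) by *how strongly* it vibrates.
    """
    intensities = [0.0] * NUM_ACTUATORS
    for obs in obstacles:
        if obs.distance_m >= MAX_RANGE_M or obs.distance_m <= 0:
            continue
        # Map the bearing (-90..+90 degrees) onto an actuator index.
        frac = (obs.bearing_deg + 90.0) / 180.0
        idx = min(NUM_ACTUATORS - 1, max(0, int(frac * NUM_ACTUATORS)))
        # Closer obstacles vibrate more strongly; keep the strongest signal per motor.
        strength = 1.0 - obs.distance_m / MAX_RANGE_M
        intensities[idx] = max(intensities[idx], strength)
    return intensities

# Example: a wall fragment close on the left, a pole farther away, slightly right.
print(haptic_frame([Obstacle(-60.0, 1.2), Obstacle(15.0, 3.5)]))
```

In a real device this mapping would run continuously on each perception frame, with filtering and prioritization far beyond this sketch; the point here is only how a spatial scene can be compressed into a low-cognitive-load tactile signal.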

Fig.: A scene from the Cybathlon (Photo: ETH Zürich, CYBATHLON/Alessandro della Bella)