As the boundaries between human experience and artificial intelligence continue to blur, designers are rethinking how we interact with technology on an emotional level. In the rapidly evolving landscape of domestic robotics, the conversation is no longer just about efficiency or automation—it’s about connection. While robots are becoming more capable, their integration into intimate human spaces still sparks discomfort for many. This friction doesn’t stem from technological limitations alone, but from the absence of a natural, empathetic bridge between user and machine. That’s the context in which Mimic emerges—not just as a wearable, but as a reimagined interface for trust-building with humanoid companions.
Mimic is a next-gen wearable designed to bridge the emotional and psychological gap between humans and humanoid robots. In a future where household humanoids are commonplace, Mimic lets users teach robots actions drawn from the users' own behavior data. Initially, humanoids perform only basic functions, but over time they learn to understand and personalize tasks for the user. Through this process, users actively teach their humanoids, forming emotional bonds and overcoming psychological barriers.
Designer: Dohyuk Joo
By wearing the device and performing desired actions, users provide real-time data that the robot uses to learn and adapt. This hands-on approach fosters a sense of control and agency, reducing feelings of unease and promoting a more harmonious relationship between humans and robots. The device is lightweight and ergonomically designed to be worn during daily activities. Its intuitive interface ensures that users can seamlessly integrate it into their routines without disruption.
By letting users wear the device and physically demonstrate the tasks they want performed, Mimic shifts the paradigm from command-based interaction to embodied teaching, and the robot keeps adapting as the user's behavior changes over time. For example, if a user is preparing their own version of Korean stew, Mimic enables the humanoid to grasp more than just the recipe: it interprets the specific ingredients, motions, and subtle preferences involved. This context-aware learning gives the robot a far deeper understanding of the user's world, moving beyond generic presets.
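To make the embodied-teaching idea concrete, here is a minimal sketch in Python of how such a system might work. It assumes a hypothetical `WearableFrame` schema and segments a recorded demonstration into discrete steps wherever the hand pauses; none of the names or thresholds below come from the Mimic concept itself.

```python
from dataclasses import dataclass

@dataclass
class WearableFrame:
    """One timestamped sample from the wearable (hypothetical schema)."""
    t: float               # seconds since the demonstration started
    hand_pose: tuple       # (x, y, z) wrist position in metres
    grip_force: float      # normalised grip strength, 0..1

def segment_demonstration(frames, pause_s=0.8, still_speed=0.02):
    """Split a demonstration into steps at sustained pauses in hand motion.

    A step boundary is declared when motion resumes after the wrist has
    stayed nearly still (speed < still_speed m/s) for at least pause_s.
    """
    steps, current = [], [frames[0]]
    still_start = None
    for prev, cur in zip(frames, frames[1:]):
        dt = max(cur.t - prev.t, 1e-6)
        dist = sum((a - b) ** 2 for a, b in zip(cur.hand_pose, prev.hand_pose)) ** 0.5
        if dist / dt < still_speed:
            still_start = prev.t if still_start is None else still_start
        else:
            if still_start is not None and prev.t - still_start >= pause_s:
                steps.append(current)   # a long pause just ended: close the step
                current = []
            still_start = None
        current.append(cur)
    steps.append(current)
    return steps
```

Cutting at natural pauses is one plausible way to turn a continuous demonstration, like the stew example above, into discrete sub-tasks a robot could later replay and refine.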
The device comprises two integrated modules: the Vision tracker and the Hand tracker. The Vision tracker collects visual and auditory data using an array of infrared sensors, tracking cameras, and vision-based AI. It maps the environment with Lidar and captures head movement using a 6-DOF camera alongside a front-facing RGB camera, forming a dynamic world model. It’s engineered to adjust comfortably to different body types through an adaptable rail system, ensuring seamless wearability.
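As a rough illustration of what "forming a dynamic world model" could involve, the Python sketch below fuses a head pose with camera-frame object detections into a shared world frame. The `HeadPose` and `WorldModel` classes and the detection format are invented for the example, and the rotation is simplified to yaw only; a real system would use the full 6-DOF pose.

```python
import math
from dataclasses import dataclass

@dataclass
class HeadPose:
    """6-DOF head pose; only yaw rotation is kept here for brevity."""
    x: float; y: float; z: float      # metres, world frame
    yaw: float                        # radians; roll and pitch omitted

def camera_to_world(pose, obj_xyz):
    """Map a detection from the head-camera frame into the world frame."""
    ox, oy, oz = obj_xyz
    c, s = math.cos(pose.yaw), math.sin(pose.yaw)
    return (pose.x + c * ox - s * oy,
            pose.y + s * ox + c * oy,
            pose.z + oz)

class WorldModel:
    """Accumulates object observations into a persistent map (hypothetical)."""
    def __init__(self):
        self.objects = {}             # label -> last known world position

    def update(self, pose, detections):
        # detections: {label: (x, y, z) in camera frame}, e.g. from the
        # front-facing RGB camera combined with lidar depth.
        for label, xyz in detections.items():
            self.objects[label] = camera_to_world(pose, xyz)

# Example: objects stay anchored in the world even as the head turns.
wm = WorldModel()
wm.update(HeadPose(0, 0, 1.6, 0.0), {"pot": (1.0, 0.0, -0.4)})
wm.update(HeadPose(0, 0, 1.6, math.pi / 2), {"stove": (0.8, 0.0, -0.5)})
print(wm.objects)
```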
Meanwhile, the Hand tracker—worn on the forearm—records nuanced physical data like grip strength, arm angle, and muscle activity through electromyography (EMG) sensors and precision tracking cameras. By analyzing electrical signals and motion patterns, it decodes how users interact with objects on a tactile level. This fusion of data is processed through deep learning to construct a comprehensive behavioral profile. Even passive moments become learning opportunities, as Mimic uses 360-degree and dual-facing cameras to observe environmental structure and object purpose, enabling the humanoid to understand intent even when it’s unstated.
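The EMG-decoding step can likewise be sketched at toy scale. The snippet below reduces an EMG window to its root-mean-square amplitude and assigns a grip type with a nearest-centroid rule; this is a deliberately simple stand-in for the deep-learning pipeline the concept describes, and every label and number in it is hypothetical.

```python
import math

def emg_rms(window):
    """Root-mean-square of an EMG window: a common proxy for muscle effort."""
    return math.sqrt(sum(v * v for v in window) / len(window))

class GripClassifier:
    """Nearest-centroid classifier over (EMG RMS, wrist angle) features."""
    def __init__(self):
        self.centroids = {}           # grip label -> feature centroid

    def fit(self, samples):
        # samples: {label: [(rms, angle_deg), ...]} from calibration wear
        for label, feats in samples.items():
            n = len(feats)
            self.centroids[label] = tuple(sum(f[i] for f in feats) / n
                                          for i in range(2))

    def predict(self, feat):
        return min(self.centroids,
                   key=lambda lbl: math.dist(feat, self.centroids[lbl]))

clf = GripClassifier()
clf.fit({
    "power_grip": [(0.82, 40.0), (0.78, 35.0)],   # e.g. lifting a pot
    "pinch_grip": [(0.31, 10.0), (0.35, 12.0)],   # e.g. picking up garlic
})
window = [0.7, -0.8, 0.75, -0.72]                 # hypothetical EMG samples
print(clf.predict((emg_rms(window), 38.0)))       # -> "power_grip"
```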
Through these layers of sensory input and contextual modeling, Mimic doesn’t just teach tasks—it conveys intuition. It represents a new approach to robotics, one that leverages embodied cognition to break down psychological resistance and cultivate emotional resonance. As our homes evolve alongside technology, Mimic points to a future where machines don’t just serve—they relate, reflect, and respond.