Robot Kindergarten - Learning from Human Observation

The ability to learn tasks from observation is a key feature of the factory of the future, which aims to enable highly automated and intelligent manufacturing systems that adapt to changing demands. Our goal is to develop methods that allow time- and cost-efficient re-programming of factory processes by personnel without robotics expertise.

Skill Learning

Skills can be taught to the robot by demonstration via vision-based tracking of human motion. The system analyzes human actions in different everyday environments, which makes it possible, for instance, to assist a human with difficult tasks.

©Technical University of Munich. TUM School of Computation, Information and Technology. Chair of Robotics, Artificial Intelligence and Real-time Systems; Machine Vision and Perception Group. Munich Institute of Robotics and Machine Intelligence (MIRMI).

Teaching Movements

A new movement is taught by simply showing it to the robot and labeling it afterwards. In the demo, we show the robot how to write the letter “h”.
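
A minimal sketch of how such a labeled demonstration might be stored (the class and names here are illustrative assumptions, not the actual system, which records trajectories from vision-based hand tracking):

```python
import numpy as np

class MovementLibrary:
    """Stores demonstrated movements as labeled, time-stamped trajectories."""

    def __init__(self):
        self._movements = {}

    def record(self, label, timestamps, positions):
        # Normalize time to [0, 1] so the movement can later be
        # rescaled to any execution duration.
        t = np.asarray(timestamps, dtype=float)
        t = (t - t[0]) / (t[-1] - t[0])
        self._movements[label] = (t, np.asarray(positions, dtype=float))

    def get(self, label):
        return self._movements[label]

# A made-up hand-tracked 2D trajectory for the letter "h".
lib = MovementLibrary()
lib.record("h",
           timestamps=[0.0, 0.5, 1.0, 1.5, 2.0],
           positions=[[0, 0], [0, 2], [0, 1], [1, 1], [1, 0]])
```

Normalizing time at recording means the demonstration's original speed does not constrain later execution.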

Detection and Execution

The robot can detect previously taught movements and repeat them, scaled in time and amplitude. In the demo, the letter “a” is correctly detected and repeated by the robot.
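
The detect-and-repeat step could be sketched as follows, assuming movements are stored as normalized-time trajectories; nearest-neighbor matching after resampling stands in here for the actual recognition method:

```python
import numpy as np

def resample(t, p, n=50):
    """Resample a trajectory to n evenly spaced points in normalized time."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    t_new = np.linspace(t[0], t[-1], n)
    return np.column_stack([np.interp(t_new, t, p[:, d])
                            for d in range(p.shape[1])])

def classify(t_query, p_query, library):
    """Return the label of the stored movement closest to the query."""
    q = resample(t_query, p_query)
    best, best_dist = None, np.inf
    for label, (t, p) in library.items():
        dist = np.mean(np.sum((q - resample(t, p)) ** 2, axis=1))
        if dist < best_dist:
            best, best_dist = label, dist
    return best

def replay(t, p, duration=1.0, amplitude=1.0):
    """Rescale a known movement in time and amplitude before execution."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    t_exec = (t - t[0]) / (t[-1] - t[0]) * duration  # scale in time
    p_exec = p[0] + amplitude * (p - p[0])           # scale about the start point
    return t_exec, p_exec

# Tiny library: a diagonal stroke ("a") and a vertical stroke ("h").
library = {
    "a": ([0.0, 1.0], [[0, 0], [1, 1]]),
    "h": ([0.0, 1.0], [[0, 0], [0, 1]]),
}
label = classify([0.0, 0.5, 1.0], [[0, 0], [0.5, 0.55], [1, 1]], library)
```

Resampling to a common length makes demonstrations of different durations directly comparable, which is what allows the replayed movement to be stretched or shrunk independently of how fast it was shown.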

Skill Sequence Extraction

The developed framework aims to extract a parametrized skill sequence from a video recording of a task demonstration executed by a human demonstrator.
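
To make “parametrized skill sequence” concrete, the following toy sketch maps detected hand-object events to a list of parametrized skills; the event types and skill names are illustrative assumptions, not the framework's actual vocabulary:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str                      # e.g. "reach", "grasp", "release"
    params: dict = field(default_factory=dict)

# Illustrative mapping from detected hand-object events to skill names.
EVENT_TO_SKILL = {
    "hand_approaches": "reach",
    "hand_closes": "grasp",
    "hand_opens": "release",
}

def events_to_sequence(events):
    """Turn a chronologically ordered list of detected events into skills."""
    return [Skill(EVENT_TO_SKILL[e["type"]], {"object": e["object"]})
            for e in events if e["type"] in EVENT_TO_SKILL]

seq = events_to_sequence([
    {"type": "hand_approaches", "object": "cube"},
    {"type": "hand_closes",     "object": "cube"},
    {"type": "hand_opens",      "object": "box"},
])
```

Keeping parameters (here, the manipulated object) separate from the skill name is what lets the same sequence later be re-grounded in a new environment.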

Skill Sequence Execution

Here, we aim to use the skill sequence extracted by the above LfD framework to execute the task in a new environment. To this end, we implement intelligent robot skills that can handle limitations of the robot hardware, e.g. a 2-finger gripper instead of a 5-finger hand, as well as obstacles.
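
One way to sketch this hardware-aware execution step; the primitive names and the parallel-jaw fallback are assumptions for illustration:

```python
def execute(sequence, primitives):
    """Run a (name, params) skill sequence on the primitives a robot offers.

    The whole sequence is checked up front, so the robot never stops
    mid-task because a later skill is unsupported by its hardware.
    """
    missing = sorted({name for name, _ in sequence if name not in primitives})
    if missing:
        raise NotImplementedError(f"unsupported skills: {missing}")
    return [primitives[name](**params) for name, params in sequence]

# Hypothetical primitives for a 2-finger gripper: a demonstrated
# 5-finger "grasp" is mapped to a parallel-jaw pinch.
primitives = {
    "reach":   lambda obj: f"reach {obj}",
    "grasp":   lambda obj: f"pinch {obj} with parallel jaws",
    "release": lambda obj: f"open jaws over {obj}",
}
log = execute([("reach", {"obj": "cube"}), ("grasp", {"obj": "cube"})],
              primitives)
```

The dispatch table is where hardware differences are absorbed: the same abstract sequence runs on any robot that provides an implementation for each skill name.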

Tactile Perception

The physical parameters of the demonstration cannot be determined by purely passive observation. We give the robot a sense of touch through additional sensors in the gripper and develop algorithms with which the robot can identify the physical parameters of its environment by itself.
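
As a minimal example of identifying a physical parameter from tactile data, the sketch below fits Hooke's law to noisy force/displacement readings from a squeeze; the data is synthetic and the actual sensors and models are more involved:

```python
import numpy as np

def estimate_stiffness(displacements, forces):
    """Least-squares fit of Hooke's law f = k * x to noisy tactile readings."""
    x = np.asarray(displacements, dtype=float)
    f = np.asarray(forces, dtype=float)
    return float(x @ f / (x @ x))   # closed-form solution for one parameter

# Synthetic squeeze: true stiffness 200 N/m plus sensor noise.
rng = np.random.default_rng(0)
x = np.linspace(0.001, 0.01, 20)          # jaw displacements in meters
f = 200.0 * x + rng.normal(0.0, 0.05, x.size)  # measured forces in newtons
k = estimate_stiffness(x, f)
```

Averaging over many readings in this way is what makes the estimate robust to per-sample sensor noise.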

Telepresence by Hand Movement

The robot is teleoperated using vision-based detection of the operator's hand movement.
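
A bare-bones sketch of how detected hand motion might be mapped to robot commands; the offset-and-scale mapping is an assumption, and a real telepresence system adds filtering, orientation, and safety limits:

```python
def hand_to_robot(hand_pos, hand_origin, robot_origin, scale=0.5):
    """Map a tracked hand position into a robot end-effector position.

    The hand's displacement from its tracked origin is scaled into the
    robot's workspace, so small hand motions produce proportionally
    small robot motions.
    """
    return [r + scale * (h - h0)
            for h, h0, r in zip(hand_pos, hand_origin, robot_origin)]

# Hand moved 20 cm right and 10 cm up from its tracked origin.
cmd = hand_to_robot(hand_pos=[0.2, 0.1, 0.0],
                    hand_origin=[0.0, 0.0, 0.0],
                    robot_origin=[0.5, 0.0, 0.3])
```

The scale factor trades off precision against reach: values below 1 give fine control, values above 1 let a small hand workspace cover a large robot workspace.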

Contact