Session 1: Autonomy and Interaction in Robotics

Future robot generations will be capable of making and executing decisions independently. New technologies such as embodied AI, agentic AI, and powerful machine learning methods will play a key role in this context. What do we need to know about autonomous robots?

Prof. Dr. Seth Hutchinson of Northeastern University in Boston, USA, states that mobile robots will closely cooperate with people in future work environments. A new generation of robots will, among other things, enhance the quality of life for people requiring care. Prof. Hutchinson outlines a software framework for these robots built on safety, cooperation, and adaptability.

Prof. Dr. Sethu Vijayakumar, Professor of Robotics at the University of Edinburgh, addresses the trade-off between the ever-increasing autonomy of future robot generations and the desire for control and safety. In his talk, he presents powerful machine learning technology and outlines a balanced compromise between autonomy and control.

Prof. Dr. Antonio Bicchi, IIT Genoa / University of Pisa and Editor-in-Chief of the International Journal of Robotics Research (IJRR), focuses his talk on the development from human-robot interaction to human-robot integration. In this context, a robot can be thought of as a kind of physical ‘prosthetic device’ fitted with extensive sensor technology. It is smart enough to ‘guess’ what humans want and acts accordingly, in both healthcare and industrial environments.

Dr. Felix Ocker and his team from the Honda Research Institute Europe present ‘decisive’ robots that, apart from being capable of completely independent orientation, can also act and decide autonomously, i.e. without receiving commands from humans. They can even explain how they came to their decisions. This is possible thanks to agentic AI – an artificial intelligence still in its infancy.

Talks and speakers in this session

"Mobile Manipulation: Safety, Cooperation, Adaptation"

Mobile manipulators, working cooperatively with people, have the potential to significantly enhance quality of life for older adults and others with physical limitations or mental impairment. However, in order to gain widespread acceptance in domestic settings, these robotic systems must be safe, effective, and adaptable. In this talk, we describe recent results in safe control of mobile manipulators using control barrier functions and control Lyapunov functions, both defined in the task (or operational) space. Our approach is implemented in an optimization-based framework, which allows easy extension to task-level control in dynamic settings.
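To illustrate the control-barrier-function idea mentioned in the abstract, here is a minimal sketch under simplifying assumptions, not the speaker's actual system: for a 1D single integrator x' = u with safe set h(x) = x - x_min >= 0, the optimization "stay close to the nominal input subject to the barrier constraint" has a closed-form solution. All function names and gains below are hypothetical.

```python
# Hypothetical sketch of a CBF-style safety filter for the 1D single
# integrator x' = u. The safe set is h(x) = x - x_min >= 0, and the
# barrier condition h' >= -alpha * h reduces to the input bound
# u >= -alpha * (x - x_min). In this scalar case, minimally modifying
# the nominal input subject to that bound is just max(u_nom, bound).

def safe_control(x, x_goal, x_min, k=1.0, alpha=2.0):
    u_nom = -k * (x - x_goal)        # nominal (CLF-like) proportional law
    u_bound = -alpha * (x - x_min)   # CBF constraint: u >= u_bound
    return max(u_nom, u_bound)       # minimal modification of u_nom

def simulate(x0, x_goal, x_min, dt=0.01, steps=500):
    x, traj = x0, [x0]
    for _ in range(steps):
        x += dt * safe_control(x, x_goal, x_min)
        traj.append(x)
    return traj

# The goal (-1.0) lies outside the safe set x >= 0, so the filter
# keeps the state at the safety boundary instead of violating it.
traj = simulate(x0=1.0, x_goal=-1.0, x_min=0.0)
```

In the full task-space setting the abstract describes, this closed form would be replaced by a quadratic program over the robot's dynamics; the scalar version only conveys the shape of the constraint.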

Seth Hutchinson is a professor at Northeastern University. He was previously the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology, where he was also Professor and KUKA Chair for Robotics in the School of Interactive Computing (2018-2024). He is professor emeritus at the University of Illinois at Urbana-Champaign, where he was a faculty member from 1990 to 2017. He received his Ph.D. from Purdue University.

Hutchinson served as President of the IEEE Robotics and Automation Society, as Editor-in-Chief of the IEEE Transactions on Robotics, and as the founding Editor-in-Chief of the RAS Conference Editorial Board. He has more than 300 publications on the topics of robotics and computer vision, and is a coauthor of two books on robotics. He is a Fellow of the IEEE.

"From Automation to Autonomy: Machine Learning for Next-generation Robotics"

The new generation of robots works much more closely with humans and other robots, and interacts significantly with the surrounding environment. As a result, the key paradigms are shifting from isolated decision-making systems to ones that involve shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions.
This talk will briefly introduce powerful machine learning technologies, ranging from robust multi-modal sensing and shared representations to scalable real-time learning, adaptation, and compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control.
This also raises a fundamental question: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with?
Domains where this debate is relevant include the deployment of robots in extreme environments, self-driving cars, asset inspection, repair and maintenance, factories of the future, and assisted living technologies including exoskeletons and prosthetics, to list a few.
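The shared-control paradigm the abstract describes, autonomy devolved to the platform while the human issues high-level commands, can be caricatured by a simple arbitration rule. The following is a hypothetical sketch; the blending function and all names are assumptions, not the speaker's method.

```python
# Hypothetical sketch of shared control: blend a human's command with
# the robot's autonomous correction. The arbitration weight `alpha`
# sets exactly the autonomy/control trade-off discussed above.

def shared_control(u_human, u_auto, alpha):
    """Linear arbitration: alpha = 0 gives full human control,
    alpha = 1 gives full autonomy."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [(1.0 - alpha) * h + alpha * a for h, a in zip(u_human, u_auto)]

# The human steers straight ahead; the autonomy nudges the vehicle
# sideways, e.g. away from an obstacle it has sensed.
u = shared_control(u_human=[1.0, 0.0], u_auto=[0.8, 0.5], alpha=0.5)
# u blends the two commands, here roughly [0.9, 0.25]
```

Real systems make `alpha` state-dependent (higher autonomy near danger, more human authority otherwise), which is where the learning technologies in the talk come in.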

Sethu Vijayakumar is the Professor of Robotics at the University of Edinburgh, UK, and the Founding Director of the Edinburgh Centre for Robotics. He has pioneered the use of large-scale machine learning techniques in the real-time control of several iconic robotic platforms such as the SARCOS and the HONDA ASIMO humanoids, KUKA-LWR robot arm and iLIMB prosthetic hand. One of his projects (2016) involved a collaboration with NASA Johnson Space Centre on the Valkyrie humanoid robot being prepared for unmanned robotic pre-deployment missions to Mars. Professor Vijayakumar holds the Royal Academy of Engineering (RAEng) - Microsoft Research Chair at Edinburgh and is also an Adjunct Faculty of the University of Southern California (USC), Los Angeles. He has published over 250 peer reviewed and highly cited articles [H-index 50, Citations > 13,000 as of 2025] on topics covering robot learning, optimal control, and real-time planning in high dimensional sensorimotor systems. He has been appointed to grant review panels for the EU (FP7, H2020), DFG-Germany and NSF-USA. He is a Fellow of the Royal Society of Edinburgh, a judge on BBC Robot Wars and winner of the 2015 Tam Dalyell Prize for excellence in engaging the public with science – including his role in the UK wide launch of the BBC micro:bit initiative (2016) for STEM education. Professor Vijayakumar helps shape and drive the national Robotics and Autonomous Systems (RAS) agenda in his role as a Programme Director (Human-AI Interfaces and Robotics) at The Alan Turing Institute, the United Kingdom’s national institute for data science and Artificial Intelligence.

"From Cobotics and Human-Robot Interaction to Human-Robot Integration"

For many years, the name of the game in advanced, human-centric robotics has been human-robot interaction. In recent years, we have witnessed a further deepening of the relationship between humans and technology. Robotic technologies have been providing definite advances in assisting people in need of physical help, including rehabilitation and prosthetics. Working in fields where humans are placed right at the center of the technology, on the other hand, is helping refocus our robotics research itself. In prosthetics, the goal is to have an artificial limb that moves naturally and intelligently enough to perform the task the user intends, without requiring their attention. By abstracting this idea, a robot of the future can be thought of as a physical "prosthesis" of its user, with sensors, actuators, and intelligence enough to interpret and execute the user's intention, translating it into a sensible action of which the user remains the owner.

In the talk I will present how human-robot integration reaches beyond prosthetics and rehabilitation applications into industrial environments. Examples include exoskeletons and supernumerary limbs that augment human capabilities, and shared-autonomy robotic avatars in which the robot executes the human's intended actions while the human perceives the context of those actions and their consequences.

Antonio Bicchi is a Senior Scientist at the Italian Institute of Technology in Genoa and the Chair of Robotics at the University of Pisa. He graduated from the University of Bologna in 1988 and was a postdoctoral scholar at the M.I.T. Artificial Intelligence Laboratory. He teaches Robotics and Control Systems in the Department of Information Engineering (DII) of the University of Pisa, and has led the Robotics Group at the Research Center "E. Piaggio" of the University of Pisa since 1990. He is the head of the SoftRobotics Lab for Human Cooperation and Rehabilitation at IIT in Genoa. Since 2013 he has served as Adjunct Professor at the School of Biological and Health Systems Engineering of Arizona State University.
Since January 2023, he has been the Editor-in-Chief of the International Journal of Robotics Research (IJRR), the first scientific journal in robotics. He was the founding Editor-in-Chief of the IEEE Robotics and Automation Letters (2015-2019), which rapidly became the top robotics journal by number of submissions. He organized the first WorldHaptics Conference (2005), today the premier conference in the field. He is a co-founder and President of the Italian Institute of Robotics and Intelligent Machines (I-RIM).
His main research interests are in robotics, haptics, and control systems. He has published more than 500 papers in international journals, books, and refereed conference proceedings. His research on human and robot hands has been generously supported by the European Research Council. He originated and today serves as the scientific coordinator of the JOiiNT Lab, an advanced tech-transfer lab with leading-edge industries in the Kilometro Rosso Innovation District in Bergamo, Italy.
Antonio Bicchi is the recipient of the Robotics and Automation Society Pioneer Award in 2025.

"When robots take initiative: Agentic AI for human-robot cooperation"

As robots move beyond rigid, rule-based behavior, a new class of intelligent agents is emerging—robots that take initiative, act with purpose, and collaborate with humans. This talk explores the shift toward agentic AI, highlighting key design patterns and recent breakthroughs that have the potential to make autonomy not just possible, but useful. From language model-based robots that offer support only when truly needed, to systems that plan longer-term tasks using extensible tool libraries, agentic AI is reshaping how robots perceive, decide, and interact. Drawing on real-world prototypes and research at the Honda Research Institute, this talk gives an overview of how robots can detect needs, choose when to act, and explain their behavior—critical steps toward meaningful cooperation. This talk will close with an outlook on what’s working, what’s next, and where agentic AI is already finding traction.
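The decision loop the abstract describes, detecting a need, choosing whether to act, and explaining the choice, could look schematically like the following. Everything here (tool names, threshold, observation format) is a hypothetical illustration, not Honda's actual system.

```python
# Illustrative sketch of an agentic decision loop: the agent acts only
# when a detected need crosses an urgency threshold, chooses a tool
# from an extensible library, and records a human-readable rationale.

TOOLS = {
    "fetch_object": lambda need: f"fetching {need}",
    "open_door": lambda need: "opening the door",
}

def decide_and_act(observation, threshold=0.7):
    """Return (action_result, explanation); action_result is None
    when the agent decides not to intervene."""
    need, urgency, tool_name = observation
    if urgency < threshold:
        return None, f"Urgency {urgency:.2f} is below {threshold}; not intervening."
    tool = TOOLS.get(tool_name)
    if tool is None:
        return None, f"No tool available for '{tool_name}'; deferring to the human."
    return tool(need), f"Chose '{tool_name}' because urgency {urgency:.2f} >= {threshold}."

result, why = decide_and_act(("water glass", 0.9, "fetch_object"))
# result == "fetching water glass"; `why` explains the decision
```

In a language-model-based agent, the hand-written threshold and tool choice would be replaced by model calls, but the pattern of gated initiative plus recorded rationale is the same.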

Felix Ocker is a Senior Scientist at Honda Research Institute Europe, researching AI systems that enable autonomous and collaborative behavior. With a background in engineering and a Ph.D. from the Technical University of Munich, his research has spanned knowledge representation, multi-agent systems, and semantic integration for automated production. His current work explores agentic AI - intelligent agents capable of proactive, goal-directed behavior - and how these capabilities can enhance human-robot interaction. Felix has authored numerous publications and is co-inventor on patents related to memory, tool use, and theory of mind in intelligent systems. He regularly contributes to the academic community through peer reviewing, technical committees, and invited talks at both industrial and scientific venues.

© Messe München

Session Chair

"Autonomy and Interaction in Robotics" will be hosted by Session Chair Prof. Dr. Lorenzo Masia, professor in “Intelligent BioRobotic Systems” and Director of the Munich Institute for Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM).


Session Chair

"Autonomy and Interaction in Robotics" will be hosted by Session Chair Prof. Dr. Angela Schoellig, member of the Board of Directors at the Munich Institute of Robotics and Machine Intelligence (MIRMI), Coordinator of the Robotics Institute Germany (RIG), and professor in "Safety, Performance and Reliability of Learning Systems" at the Technical University of Munich (TUM).