
Session 4: Learning, Understanding and Interaction

“You, human. I, robot.”—of communication and interaction with machines

If information exchange between machines is the future, then communication between robots and humans is the distant future. At least if it is to take place at a level amounting to social interaction. What will future deep learning processes be like? Will artificial intelligences have teachers instead of programmers in the future? And how will interaction between humans and machines change our world?

The following researchers will present their latest findings:

Prof. Wolfram Burgard, University of Technology Nuremberg
Melonee Wise, Zebra Technologies
Prof. Jens Kober, Delft University of Technology
Prof. Giorgio Metta, Italian Institute of Technology (IIT)
Session Chair: Prof. Dr. Stefan Leutenegger, Technical University of Munich

Closing speech—the munich_i Summit comes to an end:

Prof. Seth Hutchinson, Georgia Institute of Technology

There is always scope for improvement: In his presentation, AI pioneer Prof. Wolfram Burgard of the University of Technology Nuremberg will show how robots optimize their AI using deep learning approaches. Burgard is an internationally recognized expert on autonomous driving. From 2019 to 2021, he was Vice President for Automated Driving Technology and Machine Learning at the Toyota Research Institute in Los Altos, CA, USA.

Do robots need to understand their environment to work efficiently? Melonee Wise, Vice President of Robotics Automation at Zebra Technologies, will explore this question. Her audience will surely be excited about this one: According to the online magazine Business Insider, Melonee Wise is one of the eight CEOs whose activities will have the greatest impact on the way we work.

Will programmers be teachers in the future? Prof. Jens Kober of Delft University of Technology conducts research on AI learning techniques. In his opinion, the acquisition and improvement of motor skills is one of the greatest challenges in robotics. He has developed learning strategies to address this challenge and wants to discuss them with colleagues and the audience.

iCub is watching you! Prof. Giorgio Metta of the Italian Institute of Technology (IIT) works on social robot-human interaction. In his presentation, he will talk about his experience with the humanoid robot iCub. The robot supports research at the IIT, where algorithms have recently been developed for processing external contact information and for contactless interaction via vision and sound.

Closing presentation and AI.Society special show:

The closing presentation will be given by Prof. Seth Hutchinson of the Georgia Institute of Technology, President of the IEEE Robotics and Automation Society and a pioneer of vision-based robot control.

Prof. Dr.-Ing. Sami Haddadin, Director of the Munich Institute of Robotics and Machine Intelligence (MIRMI), and holder of the Chair for Robotics and System Intelligence at the Technical University of Munich (TUM), will wrap up the event and summarize the results.

A joint tour of the munich_i special show and the AI.Society dialog platform at automatica provides additional practical insights into future trends in robotics and artificial intelligence.

Speakers and presentations in this session

“Probabilistic and Deep Learning Approaches for Intelligent Robots and Automated Driving”

For autonomous robots and automated driving, the capability to robustly perceive environments and execute their actions is the ultimate goal. The key challenge is that no sensors and actuators are perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology to deal with state estimation problems. I will furthermore discuss how this approach can be extended using state-of-the-art technology from machine learning to deal with complex and changing real-world environments.
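The probabilistic approach the abstract refers to rests on recursive Bayesian state estimation. As a generic illustration (not code from the talk, and with all noise values invented for the example), a minimal one-dimensional Kalman filter fuses imperfect motion commands with noisy position measurements:

```python
# Minimal 1D Kalman filter: a generic sketch of probabilistic state
# estimation under imperfect sensors and actuators. All variances and
# readings below are made-up example values.

def predict(mean, var, motion, motion_var):
    """Motion update: executing a command adds actuator uncertainty."""
    return mean + motion, var + motion_var

def correct(mean, var, measurement, meas_var):
    """Measurement update: fuse a noisy sensor reading (Bayes' rule for Gaussians)."""
    k = var / (var + meas_var)           # Kalman gain: how much to trust the sensor
    new_mean = mean + k * (measurement - mean)
    new_var = (1 - k) * var
    return new_mean, new_var

# The robot starts with a very vague belief about its position.
mean, var = 0.0, 100.0
commands = [1.0, 1.0, 1.0]               # intended forward motions
readings = [1.2, 1.9, 3.1]               # noisy position measurements

for u, z in zip(commands, readings):
    mean, var = predict(mean, var, u, motion_var=0.5)
    mean, var = correct(mean, var, z, meas_var=0.8)

# The variance shrinks with each fused measurement: the belief sharpens.
print(f"estimated position = {mean:.2f}, uncertainty (variance) = {var:.3f}")
```

The same predict/correct structure underlies the particle filters and learned state estimators used on real robots; only the belief representation changes.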

Wolfram Burgard is a Professor of Computer Science at the University of Technology Nuremberg, Germany, where he heads the research group for Robotics and Artificial Intelligence. From 2019 until 2021 he was VP for Automated Driving Technology and Machine Learning at the Toyota Research Institute in Los Altos, CA, USA. Wolfram Burgard is known for his contributions to robot navigation, perception, manipulation, and learning.


Hightech-Session 4: Learning, Understanding and Interaction

“Do robots need to understand the world to be effective?”

A look into how robots perceive the world that they work in and the impact of their understanding of the world when working with people and human-operated machines.

Melonee Wise is the Vice President of Robotics Automation at Zebra Technologies. Melonee joined Zebra through the acquisition of Fetch Robotics where she was the CEO. Melonee was the second employee at Willow Garage where she led a team of engineers developing next-generation robot hardware and software, including ROS, the PR2, and TurtleBot. Melonee serves as the Chair of the IFR Service Robot Group, as a robotics board member for A3, and on the MHI Roundtable Advisory Committee. Melonee has received the MIT Technology Review’s TR35 and has been named to the Silicon Valley Business Journal’s Women of Influence and 40 Under 40, the Robotics Business Review RBR50, and as one of eight CEOs changing the way we work by Business Insider.


“Robots Learning (Through) Interactions”

The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. We will discuss various learning techniques we developed that enable robots to have complex interactions with their environment and humans. Complexity arises from dealing with high-dimensional input data, non-linear dynamics in general and contacts in particular, multiple reference frames, and variability in objects, environments and tasks.

A human teacher is always involved in the learning process, either directly (providing data) or indirectly (designing the optimization criterion), which raises the question: How to best make use of the interactions with the human teacher to render the learning process efficient and effective?

We will discuss various methods we have developed in the fields of supervised learning, imitation learning, reinforcement learning, and interactive learning. All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (sorting products).
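As a generic, hypothetical illustration of the interactive-learning idea described above (not code from the talk): a learner rolls out its own policy, queries a simulated teacher for corrections on the states it actually visits, and refits from the aggregated corrections, in the spirit of interactive imitation-learning schemes such as DAgger:

```python
import random

# Hypothetical toy problem: learn a linear controller a = w*x that
# imitates a teacher, querying the teacher on the states the learner
# itself visits -- the key idea behind interactive imitation learning.

def teacher(x):
    return -0.5 * x                      # the teacher drives the state toward 0

random.seed(0)
data = []                                # aggregated (state, teacher action) pairs
w = 0.0                                  # learner's initial (useless) policy

for _ in range(5):                       # a few interaction rounds
    x = random.uniform(-2, 2)
    for _ in range(10):                  # roll out the *learner's* policy
        data.append((x, teacher(x)))     # ask the teacher what it would do here
        x = x + w * x                    # state evolves under the learner's action
    # refit w by least squares on all data collected so far
    w = sum(x * a for x, a in data) / sum(x * x for x, _ in data)

print(f"learned gain w = {w:.2f}")       # converges to the teacher's -0.5
```

Collecting labels on the learner's own state distribution, rather than only on the teacher's demonstrations, is what makes such interactive schemes sample-efficient and robust to compounding errors.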

Jens Kober is an associate professor at TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD in Engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the Georges Giralt PhD Award, given annually for the best PhD thesis in robotics in Europe, the 2018 IEEE RAS Early Academic Career Award, and an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.


“Physical and Social Human-robot Interaction”

This talk covers two main research directions based on the iCub humanoid robot. The iCub is a humanoid robot designed to support research in embodied AI. The iCub is being used at the Italian Institute of Technology as a model platform to develop the technology of future interactive service robots. In particular, I will describe our work in the field of physical and social interaction. For example, through extensive use of machine learning, we developed algorithms to interpret and use external contact information in a variety of tasks as well as contactless cues – vision, sound – to ease interaction between the user and the robot.

Giorgio Metta is the Scientific Director of the Istituto Italiano di Tecnologia (IIT). He holds an MSc cum laude (1994) and a PhD (2000) in electronic engineering, both from the University of Genoa. From 2001 to 2002, he was a postdoctoral associate at the MIT AI Lab. He was previously with the University of Genoa and, from 2012 to 2019, Professor of Cognitive Robotics at the University of Plymouth (UK). He was a member of the board of directors of euRobotics aisbl, the European reference organization for robotics research. Giorgio Metta served as Vice Scientific Director of IIT from 2016 to 2019 and coordinated IIT's participation in two of the Ministry of Economic Development's Competence Centers for Industry 4.0 (ARTES4.0, START4.0). He was one of the three Italian representatives at the 2018 G7 forum on Artificial Intelligence and, more recently, one of the authors of the Italian Strategic Agenda on AI. He coordinated the development of the iCub robot for more than a decade, making it the de facto reference platform for research in embodied AI; more than 40 of the robots are now in laboratories as far afield as Japan, China, Singapore, Germany, Spain, the UK and the United States. His research activities are in the fields of biologically motivated and humanoid robotics, in particular developing humanoid robots that can adapt and learn from experience. He is the author of more than 300 scientific publications and has worked as principal investigator and research scientist in about a dozen international research and industrial projects.


Session Chair

The session Learning, Understanding and Interaction will be hosted by Session Chair Prof. Dr. Stefan Leutenegger, who holds the Chair of Machine Learning for Robotics at the Technical University of Munich (TUM).


Technologies have the potential to assist humans and improve our quality of life. However, it is essential to design technologies with a focus on benefits for the many, not the few. Questions of social justice and equity need to move to the center of technology development, particularly in fields such as AI and robotics. To this end, we must foster interdisciplinary collaboration between the social sciences and AI research and address social, ethical and political questions in an integrated way from the very start of technology development.

“Closing Speech”

Seth Hutchinson is the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology, where he is also Professor and KUKA Chair for Robotics in the School of Interactive Computing. His research in robotics spans the areas of planning, sensing, and control. He has published widely on these topics, and is coauthor of the books "Robot Modeling and Control," published by Wiley, and "Principles of Robot Motion: Theory, Algorithms, and Implementations," published by MIT Press.

Hightech-Summit: Closing Speech

munich_i
Session 1: Next Generation Robotics

What will future robot generations look like, what tasks will they perform, and how will they differ from the robots we know today?

Session 2: Networking Intelligence

Future robots will be integrated into smart and self-learning systems – in industrial applications, medical engineering and pharmaceutical research.

Session 3: Industrial Perception and Autonomy

The full potential of robotics cannot be exploited without smart environmental perception. The evolution of machine vision is a prerequisite for autonomy.
