
Session 3: Generative AI in Robotics and Big Data

Google DeepMind, Meta, Microsoft, Nvidia: Researchers from the four globally leading companies provide insights into their development activities with a special focus on the fields of generative AI and Big Data. This is an industry session with top-level speakers – don’t miss it!

Dr. Francesco Nori, Head of Google DeepMind Robotics, presents Google’s activities relating to the development of truly general AI, which will enable robots to navigate their environment, complete new tasks independently, and perform complex manipulation and locomotion tasks.

Dr. Franziska Meier, research scientist at Facebook Artificial Intelligence Research (FAIR@Meta), and her team are developing methods that make robots truly autonomous. Her talk will include examples from FAIR research that focus on providing robots with general skills for a wide range of applications.

Dr. Yashraj Narang (NVIDIA) presents NVIDIA’s initiatives to build and leverage GPU-based simulators to accelerate and scale robotics research.

Dr. Katja Hofmann heads the Game Intelligence team at Microsoft Research. She presents the latest findings in the development of machine learning architectures for modelling complex 3D environments and human gameplay data. Robots equipped with this technology are capable of collaboratively solving complex problems.

This session starts with a digital impulse by Prof. Eckehard Steinbach (MIRMI/TUM) and Prof. Alin Albu-Schäffer (DLR), both part of the munich_i Pioneers Committee.

Talks and speakers in this session

"Google Deepmind Robotics"

Google DeepMind is working on some of the world’s most complex and interesting research challenges, with the ultimate goal of building AI responsibly to benefit humanity. We ultimately want to develop an AI capable of dealing with a variety of environments. A truly general AI needs to be able to act in the real world and to learn tasks on real robots. Robotics at Google DeepMind aims at endowing robots with the ability to learn how to perform complex manipulation and locomotion tasks. This talk will give an introduction to Google DeepMind with a specific focus on robotics, control, and reinforcement learning.

Francesco Nori (born in Padova, 1976) received his D.Eng. from the University of Padova in 2002. As a visiting student at UCLA in 2002, he began his research into computational vision and motion tracking. His Ph.D., completed in Padova in 2005, addressed biologically inspired modular control. After a postdoc at the University of Genova’s LiraLab in 2006, he joined the Italian Institute of Technology (IIT) in 2007, becoming a Tenure Track Researcher in 2015. His main work at IIT concerned whole-body motion control exploiting multiple compliant contacts. A key contributor to the development of the iCub robot, Nori was responsible for its control and for whole-body force regulation using tactile feedback. He led major European projects, serving as coordinator of H2020 An.Dy and FP7 CoDyCo and as principal investigator of Koroibot. He moved to DeepMind in 2017, where his interests bridge robotics and AI, with broad applications in manipulation and locomotion. After DeepMind merged with Google Brain in 2023, Nori became Director of Robotics at Google DeepMind, leading major research initiatives with a large team.


Hightech Summit Session 3: Generative AI in Robotics and Big Data

"Towards robots that generalize and adapt efficiently"

While there has been major investment in developing large-scale robot learning algorithms, achieving true autonomy remains an open research challenge. Key ingredients towards this goal are a robot’s ability to generalize and efficiently adapt to unseen scenarios and new task variations, the ability to recover from failures, and to efficiently learn an entirely new skill from human demonstrations. In this talk, I’ll present examples of FAIR robotics research towards the goal of learning general representations for a wide spectrum of robotics applications.

Franziska Meier is a research scientist at FAIR@Meta AI in Menlo Park. She leads the Cortex team, which is focused on learning and evaluating foundation models for robotics. Her research interests lie at the intersection of machine learning and robotics, with a special interest in lifelong learning. She holds a Ph.D. from the University of Southern California, where she was advised by Stefan Schaal.



"Accelerating Robotics Research through Simulation"

Simulators have played a crucial role in modern science, and they are essential for the future of robotics. In this presentation, we will discuss several of NVIDIA’s initiatives to build and leverage GPU-based simulators to accelerate and scale robotics research. Specifically, we will explore how physics simulators can be used to generate and augment data for imitation learning, how they enable scalable training of robotic control policies via reinforcement learning, and how video foundation models can be integrated to complement classical physics simulation. Throughout the session, we will highlight applications in tabletop manipulation, robotic assembly, and humanoid robotics. Attendees will gain insights into the latest advancements in simulation technologies and their impact on robotics research and development.

I am a robotics research manager at NVIDIA Research. I lead the Simulation and Behavior Generation (SBG) team within the Seattle Robotics Lab (SRL), which is directed by Dieter Fox. My team currently focuses on learned simulators, automated data generation, reinforcement learning, imitation learning, force and tactile sensing, high-performance control, and sim-to-real transfer. We are exploring applications to multi-step tabletop manipulation, robotic assembly, dexterous manipulation with anthropomorphic hands, and humanoids. Previously, I have worked on projects in cardiovascular biomechanics, precision machine design, human gait biomechanics, surgical robotics, soft robotics, finite element simulation, solid/fluid/structural mechanics, parameter inference, self-supervised learning, differentiable simulation, deformable-object manipulation, contact simulation, and machine-learning infrastructure.



"What can we learn from human gameplay data?"

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in areas such as video games and embodied AI. This talk focuses on recent advances of my team, the Game Intelligence team at Microsoft Research, towards scalable machine learning architectures that effectively model human gameplay data and complex 3D environments.

Katja Hofmann is a Senior Principal Researcher at Microsoft Research, where she leads the Game Intelligence team. Her research focuses on machine learning for games, with the mission of driving advances in AI capabilities that unlock novel applications in gaming and beyond. She and her team share the belief that games will drive a transformation of how people interact with AI technology. Her long-term goal is to develop systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.



© Messe München

Session Chair

"Generative AI in Robotics and Big Data" will be hosted by Session Chair Prof. Dr. Patrick van der Smagt, Head of AI at Foundation Robotics.


"Generative AI in Robotics and Big Data" will be hosted by Session Chair Prof. Dr. Achim Lilienthal, Deputy Director of MIRMI and holder of the Chair of Perception for Intelligent Systems at TUM School of Computation, Information and Technology at Technical University of Munich (TUM).