
Session 3: Generative AI in Robotics and Big Data

Google DeepMind, Meta, Microsoft, Nvidia: researchers from these four world-leading companies offer insights into their development activities, particularly in generative AI and big data. A top-class industry session not to be missed.

Dr. Francesco Nori, Head of Google DeepMind Robotics, presents Google's work toward a truly general AI that enables robots to take on highly complex manipulation and transport tasks. The goal is to enable robots to orient themselves in their environment, carry out new tasks autonomously, and master complex manipulation.

Dr. Franziska Meier, research scientist and research manager at Facebook Artificial Intelligence Research (FAIR@Meta), and her team develop methods that give robots true autonomy. Her talk presents examples from FAIR research focused on equipping robots with general capabilities for a broad range of applications.

Dr. Yashraj Narang, robotics research manager at NVIDIA Research, shows how GPU-based simulators can be built and leveraged to accelerate and scale robotics research.

Dr. Katja Hofmann leads the Game Intelligence team at Microsoft Research. She reports on recent research results in developing machine-learning architectures that model complex 3D environments as well as human behavior in gaming. Robots equipped with this technology can collaboratively solve complex problems.

The session opens with a keynote by Prof. Eckehard Steinbach (MIRMI/TUM) and Prof. Alin Albu-Schäffer (DLR), both members of the munich_i Pioneers Committee.

Speakers and talks in this session

"Google DeepMind Robotics"

Google DeepMind is working on some of the world’s most complex and interesting research challenges, with the ultimate goal of building AI responsibly to benefit humanity. Our aim is to develop an AI capable of dealing with a variety of environments. A truly general AI needs to be able to act in the real world and to learn tasks on real robots. Robotics at Google DeepMind aims at endowing robots with the ability to learn how to perform complex manipulation and locomotion tasks. This talk will give an introduction to Google DeepMind with a specific focus on robotics, control, and reinforcement learning.

Francesco Nori (born in Padova, 1976) received his D.Eng. from the University of Padova in 2002. As a visiting student at UCLA (2002), he began research in computational vision and motion tracking. His Ph.D. at Padova (2005) addressed biologically inspired modular control. After a postdoc at Genova's LiraLab (2006), he joined the Italian Institute of Technology (IIT) in 2007, becoming a Tenure Track Researcher in 2015. At IIT, his main research concerned whole-body motion control exploiting multiple compliant contacts. A key contributor to the development of the iCub robot, Nori led its control and whole-body force regulation using tactile sensing. He coordinated the major European projects H2020 An.Dy and FP7 CoDyCo, and was PI of FP7 Koroibot. In 2017 he moved to DeepMind, where his interests merged robotics and AI, with applications spanning manipulation and locomotion. After DeepMind merged with Google Brain in 2023, Nori became Director of Robotics at Google DeepMind, leading major research efforts with a large team.

Hightech Summit Session 3: Generative AI in Robotics and Big Data

"Towards robots that generalize and adapt efficiently"

While there has been major investment in developing large-scale robot learning algorithms, achieving true autonomy remains an open research challenge. Key ingredients toward this goal are a robot’s ability to generalize and efficiently adapt to unseen scenarios and new task variations, the ability to recover from failures, and the ability to efficiently learn an entirely new skill from human demonstrations. In this talk, I’ll present examples of FAIR robotics research toward the goal of learning general representations for a wide spectrum of robotics applications.

Franziska Meier is a research scientist at FAIR@Meta AI in Menlo Park. She leads the Cortex team, which focuses on learning and evaluating foundation models for robotics. Her research interests lie at the intersection of machine learning and robotics, with a special interest in lifelong learning. She received her Ph.D. from the University of Southern California, where she was advised by Stefan Schaal.


"Accelerating Robotics Research through Simulation"

Simulators have played a crucial role in modern science, and they are essential for the future of robotics. In this presentation, we will discuss several of NVIDIA’s initiatives to build and leverage GPU-based simulators to accelerate and scale robotics research. Specifically, we will explore how physics simulators can be used to generate and augment data for imitation learning, how they enable scalable training of robotic control policies via reinforcement learning, and how video foundation models can be integrated to complement classical physics simulation. Throughout the session, we will highlight applications in tabletop manipulation, robotic assembly, and humanoid robotics. Attendees will gain insights into the latest advancements in simulation technologies and their impact on robotics research and development.

I am a robotics research manager at NVIDIA Research. I lead the Simulation and Behavior Generation (SBG) team within the Seattle Robotics Lab (SRL), which is directed by Dieter Fox. My team currently focuses on learned simulators, automated data generation, reinforcement learning, imitation learning, force and tactile sensing, high-performance control, and sim-to-real transfer. We are exploring applications to multi-step tabletop manipulation, robotic assembly, dexterous manipulation with anthropomorphic hands, and humanoids. Previously, I have worked on projects in cardiovascular biomechanics, precision machine design, human gait biomechanics, surgical robotics, soft robotics, finite element simulation, solid/fluid/structural mechanics, parameter inference, self-supervised learning, differentiable simulation, deformable-object manipulation, contact simulation, and machine-learning infrastructure.


"What can we learn from human gameplay data?"

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in areas such as video games and embodied AI. This talk focuses on recent advances of my team, the Game Intelligence team at Microsoft Research, towards scalable machine learning architectures that effectively model human gameplay data and complex 3D environments.

Katja Hofmann is a Senior Principal Researcher at Microsoft Research, where she leads the Game Intelligence team. Her research focuses on machine learning for games, with the mission of driving advances in AI capabilities that unlock novel applications in gaming and beyond. She and her team share the belief that games will drive a transformation of how people interact with AI technology. Her long-term goal is to develop systems that learn to collaborate with people, empower their users, and help solve complex real-world problems.



Session Chair

"Generative AI in Robotics and Big Data" is chaired by Prof. Dr. Patrick van der Smagt, Head of AI at Foundation Robotics.


Session Chair

"Generative AI in Robotics and Big Data" is also chaired by Prof. Dr. Achim Lilienthal, Director of Strategy & Partnerships at MIRMI and holder of the Chair of Perception for Intelligent Systems at the TUM School of Computation, Information and Technology, Technical University of Munich (TUM).