The deployment of robots and other assistive systems in healthcare and eldercare poses critical technical challenges as well as acceptance issues that need to be addressed. Hightech Summit Session 3 covers these topics with top-level expertise.
Prof. Dr. Sunil Agrawal, Columbia University, USA, provides an overview of this specific field of collaborative robotics. The internationally award-winning expert in robotics for medical rehabilitation is known for his exoskeletons that help stroke patients learn to walk again.
Prof. Dr.-Ing. Dr. med. h.c. Robert Riener, ETH Zurich, Switzerland, conducts research in a related field. He presents an approach to human-machine interaction in rehabilitation based on exoskeletons.
Finally, Dr. Alex Mihailidis, AGE-WELL/University of Toronto, Canada, talks about the increasing adoption of AI and robotics in eldercare and provides a glimpse of the future.
This session starts with a digital opening address by Klaus Holetschek, Bavarian Minister of State for Health and Care.
Google DeepMind is working on some of the world’s most complex and interesting research challenges, with the ultimate goal of building AI responsibly to benefit humanity. We ultimately want to develop an AI capable of dealing with a variety of environments. A truly general AI needs to be able to act in the real world and to learn tasks on real robots. Robotics at Google DeepMind aims at endowing robots with the ability to learn how to perform complex manipulation and locomotion tasks. This talk will give an introduction to Google DeepMind, with a specific focus on robotics, control, and reinforcement learning.
Francesco Nori (born in Padova, 1976) received his D.Eng. from the University of Padova in 2002. As a visiting student at UCLA in 2002, he began his research in computational vision and motion tracking. He received his Ph.D. from Padova in 2005 with a thesis on biologically inspired modular control. After a postdoc at the LiraLab in Genova (2006), he joined the Italian Institute of Technology (IIT) in 2007, becoming a Tenure Track Researcher in 2015. His main research at IIT concerned whole-body motion control exploiting multiple compliant contacts. A key contributor to the development of the iCub robot, Nori was responsible for its control and whole-body force regulation exploiting tactile information. He coordinated the European projects H2020 An.Dy and FP7 CoDyCo, and was PI of FP7 Koroibot. He joined DeepMind in 2017, where his research interests span robotics and AI, with applications to both manipulation and locomotion. After DeepMind merged with Google Brain in 2023, Nori became Director of Robotics at Google DeepMind, where he leads several large research projects together with many valued colleagues.
While there has been major investment in developing large-scale robot learning algorithms, achieving true autonomy remains an open research challenge. Key ingredients towards this goal are a robot’s ability to generalize and efficiently adapt to unseen scenarios and new task variations, the ability to recover from failures, and the ability to efficiently learn an entirely new skill from human demonstrations. In this talk, I’ll present examples of FAIR robotics research towards the goal of learning general representations for a wide spectrum of robotics applications.
Franziska Meier is a research scientist at FAIR@Meta AI in Menlo Park. She leads the Cortex team, which is focused on learning and evaluating foundation models for robotics. Her research interests lie at the intersection of machine learning and robotics, with a special interest in lifelong learning. She holds a Ph.D. from the University of Southern California, where she was advised by Stefan Schaal.
Abstract will follow.
Yashraj “Yash” Narang is a research manager in NVIDIA’s robotics research lab, led by Dieter Fox. Yash’s work focuses on the intersection of robotics, AI, and simulation, and he leads a team conducting research in sim-to-real transfer, data generation, reinforcement learning, imitation learning, and robotics simulation. Prior to NVIDIA, he completed a PhD in materials science and mechanical engineering at Harvard University and a master’s in mechanical engineering at MIT.
Abstract will follow soon.
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in areas such as video games and embodied AI. This talk focuses on recent advances of my team, the Game Intelligence team at Microsoft Research, towards scalable machine learning architectures that effectively model human gameplay data and complex 3D environments.
Katja Hofmann is a Senior Principal Researcher at Microsoft Research, where she leads the Game Intelligence team. Her research focuses on machine learning for games, with the mission of driving advances in AI capabilities that unlock novel applications in gaming and beyond. She and her team share the belief that games will drive a transformation of how people interact with AI technology. Her long-term goal is to develop systems that learn to collaborate with people, empowering their users and helping them solve complex real-world problems.
"Generative AI in Robotics and Big Data" will be hosted by Session Chair Prof. Dr. Patrick van der Smagt, Head of AI at Foundation Robotics.