Industry meets research: munich_i offers you the unique opportunity to experience the world's leading technology experts in robotics and AI in a single day. Each of them is considered a visionary in this innovative international industry and helps set new technological standards. In other words: they build the necessary bridges between high tech and practical applications. For you this means first-hand knowledge from pioneers.
Sunil K. Agrawal received a Ph.D. degree in Mechanical Engineering from Stanford University in 1990. He is currently a Professor and Director of the Robotics and Rehabilitation (ROAR) Laboratory at Columbia University, located on both the engineering and medical campuses of the University. Dr. Agrawal has published more than 500 journal and conference papers, three books, and 18 U.S. patents. He is a Fellow of the ASME and AIMBE. His honors include an NSF Presidential Faculty Fellowship from the White House in 1994, a Bessel Prize from Germany in 2003, and a Humboldt US Senior Scientist Award in 2007. He received the 2016 Machine Design Award from ASME for “seminal contributions to design of robotic exoskeletons for gait training of stroke patients” and the 2016 Mechanisms and Robotics Award from the ASME for “cumulative contributions and being an international leading figure in mechanical design and robotics”. He is a recipient of several Best Paper awards at ASME- and IEEE-sponsored robotics conferences. He has directed 35 PhD theses and currently supervises the research of 5 PhD students in the ROAR laboratory. He is the founding Editor-in-Chief of the journal “Wearable Technologies” from Cambridge University Press. He was the Conference Chair for IEEE BioRob2020, organized in New York City.
Neural disorders, old age, and traumatic injuries limit the ability of humans to perform activities of daily living. Robotics can be used to characterize and retrain human neuromuscular responses. Columbia University Robotics and Rehabilitation (ROAR) Laboratory designs innovative robots and performs scientific studies to improve everyday human functions such as standing, walking, stair climbing, reaching, head turning, and others. Human experiments have targeted individuals with stroke, cerebral palsy, Parkinson’s disease, ALS, and elderly subjects with various disorders. The talk will provide an overview of some of these robotic technologies and scientific studies performed with them.
Prof. Aude Billard is Head of the LASA laboratory in the School of Engineering at the Swiss Federal Institute of Technology Lausanne (EPFL) and holds a B.Sc. and M.Sc. in Physics from EPFL (1995) and a Ph.D. in Artificial Intelligence (1998) from the University of Edinburgh. Aude Billard’s research spans the fields of machine learning and robotics with a particular emphasis on learning from sparse data and performing fast and robust retrieval. This work finds application in robotics, human-robot / human-computer interaction, and computational neuroscience. Aude Billard leads the Swiss National Thematic Network Innovation Booster on Robotics, a half-million fund in support of industrial-academic partnerships, and is the current president-elect of the IEEE Robotics and Automation Society.
Today, many would like to deploy robots everywhere: in the streets, as cars, wheelchairs, and other mobility devices; in our homes, to cook, clean, and entertain us; on the body, to replace a lost limb or to augment its capabilities. For these robots to become reality, they need to depart from their ancestors in one dramatic way: They must escape from the comfortable, secluded, and largely predictable industrial world. In the past decades, robotics has made leaps forward in the design of increasingly complex robotic platforms to meet these challenges. In this endeavor, it has benefited from advances in optimization for solving high-dimensional constrained problems. These methods are powerful for planning in slow-paced tasks and when the environment is known. Advances in machine learning for analyzing vast amounts of data have often offered powerful solutions for real-time control, but they often fall short of providing explicit guarantees on the learned model. The alternative is to develop machine learning methods that retain the theoretical guarantees traditional in control theory.
A key issue faced by robotics today is to endow robots with the necessary reactivity to adapt their path in time-critical situations. Online reactivity is not just a matter of ensuring that there is a good-enough central processing unit on board the robot. It requires inherently robust control laws that can provide multiple solutions. Methods that combine machine learning and control theory do not require large datasets and allow robots to learn control laws from only a handful of examples, while generalizing to the entire state space.
Marc Carrel-Billiard is a Senior Managing Director at Accenture and the Global Lead of Accenture Technology Innovation. In this role, Marc leads Accenture’s Technology Innovation practices.
Each practice of Technology Innovation is a critical component of the Accenture Innovation architecture, which helps our clients drive continuous innovation at scale in their organizations.
Marc has been with Accenture for more than 25 years and has worked across all five industry groups we serve. Before taking on the leadership of Technology Innovation, Marc was the global lead for our Emerging Technology practice. He has held several global leadership roles within Accenture’s technology group, and he has worked across several cutting-edge areas of IT including voice recognition, knowledge-based systems, neural networks, and quantum computing.
Marc is one of a select group of Accenture-certified Master Technology Architects. He is also a certified Solution Architect and Select Quality Assurance director.
Marc has broad software engineering and delivery experience, particularly in areas such as component and object-oriented technologies. Marc holds an MS in Computer Science from Institut d’Informatique d’Entreprise / CNAM, France (Minor: Mathematics).
Marc was born and grew up in Savigny-sur-Orge, France. He is married with three grown-up kids. After living in various parts of the world (USA, Netherlands, …), Marc now lives in the South of France with his wife.
Before joining Accenture in 1998, Marc was part of IBM Global Services and has published several articles and books from 1985 to 1998.
At Accenture we believe that we have arrived at an exciting frontier of technology innovation for businesses, one where we’re not just digitizing but starting to put that digital foundation to work. Fusing digital and physical is not only generating new products and services, it’s the force behind a new era of scientific research. Next to Cloud, AI, and the metaverse, leveraging robotics will be a critical connection for converging today’s separate digital and physical realities into a shared reality that brings atoms and bits together.
This talk will give a glimpse into Accenture’s client-facing projects and investments in the underlying science and technology tools and capabilities, from digital engineering and manufacturing as the new industry standard, to utilizing digital twins and robotics technologies. These activities range across our R&D Labs, Open Innovation, Ventures, Liquid Studios, and Next Gen Robotics Incubation groups, and are supported by close cooperation with academia.
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden and director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software, the largest Swedish national research program on fundamental multidisciplinary research on the societal and human impact of AI. She is a member of the Royal Swedish Academy of Engineering Sciences (IVA) and a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the Global Partnership on AI (GPAI), the World Economic Forum’s Global Artificial Intelligence Council, the Executive Committee of the IEEE Initiative on Ethically Aligned Design, ALLAI (the Dutch AI Alliance), and the EU’s High Level Expert Group on Artificial Intelligence; she is the leader of UNICEF’s guidance for AI and children and a member of the UNESCO expert group on the implementation of the AI recommendations. She is the author of “Responsible Artificial Intelligence: developing and using AI in a responsible way”.
Responsible Artificial Intelligence (AI) is not an option but the only possible way to go. It involves understanding AI's nature, design choices, societal role, and ethical considerations. AI extends human capabilities but requires addressing challenges in education, jobs, and biases. Recognizing the societal role of AI is vital, understanding that it is not an autonomous entity but rather dependent on human responsibility and decision-making. Recent AI developments, including foundation models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who are developing and deploying AI systems.
In this talk, I will further discuss the need for a relational perspective on AI that emphasizes acceptance, trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility requires designing AI systems with values in mind, and implementing regulations, governance, monitoring, agreements, and norms.
Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation.
Toshio Fukuda received Dr. Eng. from the University of Tokyo, Tokyo, Japan, in 1977.
He is currently Professor Emeritus at Nagoya University (2013), a visiting professor at Nagoya University (2013-), and was Professor at Meijo University (2013-2022) and Professor at Waseda University (2019-). His research field is bio-robotics, especially micro- and nano-robotics.
Dr. Fukuda was IEEE President (2020), IEEE Director of Division X, Systems and Control (2017-2018), and IEEE Region 10 Director (2013-2014), and served as President of the IEEE Robotics and Automation Society (1998-1999), Director of IEEE Division X, Systems and Control (2001-2002), Co-founding Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics (2000-2002), Editor-in-Chief of the ROBOMECH Journal, Springer (2013-), and Editor-in-Chief of the Journal of Cyborg and Bionic Systems (2018-). He was Founding President of the IEEE Nanotechnology Council (2002-2003, 2005). He was elected a member of the Science Council of Japan (2008-2013). He organized many conferences as founding chair, such as the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS, 1988), System Integration International (SII, 2008), and Cyborg and Bionic Systems (CBS, 2017).
Dr. Fukuda received the IEEE Robotics and Automation Pioneer Award (2004), the IEEE Robotics and Automation Technical Field Award (2010), an Honorary Doctorate from Aalto University School of Science and Technology (2010), membership in the Japan Academy of Engineering (2013), the Friendship Award of the State Administration of Foreign Experts Affairs of the PR China (2014), the Medal of Honor with Purple Ribbon (2015), foreign membership in the Chinese Academy of Sciences (2017), the Chunichi Culture Award (2019), and The Order of the Sacred Treasure, Gold Rays with Neck Ribbon (2022).
He is an IEEE Fellow (1995), SICE Fellow (1995), JSME Fellow (2001), and RSJ Fellow (2004).
Robotics and AI can take on the mega-trend problems facing the world in 2050 and beyond, such as the aging society, climate change, and energy and food security. Intelligent robotic technology can support a wide variety of human life: in the near future, aged people will be able to live independently and comfortably with less assistance from others, and robots/AI will help humans find new solutions and discoveries across many applications, including design and manufacturing in automation. Today’s asymmetry in function between human and robot will thus change dramatically to a more symmetric relationship: easy to use and mutually dependent. Such symmetric AI robots will help people find new scientific and technological solutions and discoveries in many fields, driving evolutionary changes not only in manufacturing but also in the design of devices and systems in automation, and even in the structure of our society itself. To make such an ambitious goal realizable, AI robots must have the capabilities of co-evolution and self-organization. I will present a new initiative on AI and robotics, the Moon Shot Program, which takes on these mega-trend problems. Robotics and AI will thus greatly change the structure and architecture of the world itself in 2050 and beyond.
Ryan Gariepy is co-founder and CTO of both Clearpath Robotics and OTTO Motors. In addition, he serves on the board of the Open Source Robotics Foundation, is a co-founder of ROSCon, and also co-founded and co-chairs the Canadian Robotics Council. Ryan is also an advisor to several startups and venture capital groups, and helped found the Next Generation Manufacturing Canada initiative. He is a regular speaker, panelist, and expert guest on topics including robotics, AI, and technology policy. Ryan completed both a B.A.Sc. degree in Mechatronics Engineering and a M.A.Sc. degree in Mechanical Engineering at the University of Waterloo, and has over seventy pending patents in the field of autonomous systems.
From vacuums to quadrupeds to self-driving cars, robots are becoming increasingly physically capable, intelligent and cost-effective. As with any emerging industry, the earliest innovators didn't have the luxury of decades of fundamental knowledge and best practices available to them. They built from the ground up and learned the hard way what not to do. Today, we're entering a new era of robotics. The most successful robotics companies of the next decade won't be the ones building from scratch. They'll build on existing platforms that have been hardened to solve very specific problems, including problems in autonomy, fleet management, simulation, and more throughout the robotics stack.
In this presentation, the audience will learn how robotics development has been done recently, what is changing, and what is coming in the next decade from an expert with fifteen years of experience in robot development & deployment across a variety of industries. Market expectations surrounding robotic capabilities, security and privacy, and robustness and safety are becoming increasingly difficult for new entrants to match. Nevertheless, a variety of market forces are making building robots cheaper and easier than ever before, and demand for robotics has never been higher!
Just as a new software company today wouldn’t build their own cloud computing platform, and instead would use AWS, the next generation of robotics companies are not going to start with a hodgepodge of ROS nodes and custom circuit boards. It is highly likely that some of the world’s largest robotics companies haven’t even been founded yet!
Dr. Norbert Gaus, Executive Vice President at Siemens Technology, is responsible for Research & Pre-Development at Siemens. After earning a degree in electrical engineering from the Technical University of Munich, he worked initially as a research assistant at the German Aerospace Center. During that time he earned a Ph.D. in engineering from Ruhr University Bochum. He joined Siemens AG Technology in 1991. Between 1994 and 2001, Gaus held various positions in the Information and Communication Networks Group with the rank of Vice President. In the following four years, he served as President and CEO of Siemens Corporate Research Inc., Princeton, NJ, USA. From 2005 until he assumed his current position, he held various executive positions in Siemens Healthcare, ultimately as CEO of the Customer Solutions Division (Sales and Service for Siemens Healthcare, HIS Business Unit). Since May 2015 he has been in his current position at Siemens Technology.
The field of industrial manufacturing is at the center of the digital transformation. Rapid technical developments coincide with global events impacting the way we develop and produce. Increasing demands for flexibility and resilience in our supply chains require new ways of collaborating across the various parts of the value creation network. Recent developments in robotics and AI allow us to envision hyper-flexible production systems and autonomous factories that address these challenges and create a sustainable manufacturing landscape that produces on demand and close to consumption. The talk will show how industry ensures the availability of high-quality data as the basis of end-to-end digital manufacturing processes. Examples demonstrate how Digital Twins and advanced algorithms can be used to automate the automation and to compute complete autonomous manufacturing processes instead of engineering them manually. Finally, the talk will show use cases that leverage the combination of robotics and AI for flexible, autonomous processes in manufacturing and logistics.
Torsten is Chief Technology Officer at Intrinsic.
He is the founder and former CEO of Reflexxes GmbH, a startup working on research and development of real-time motion generation software. In 2014, Reflexxes was acquired by Google, where Torsten became the Head of a Robotics Software Division. This included coordinating robotics and machine learning research activities between DeepMind, Google Research, Boston Dynamics, and X.
He is also a Co-founder and former CEO of Loom Vision GmbH. Loom Vision's focus was on software and certification of machine learning and robotics software. In 2018, the team of Loom Vision GmbH joined X.
From 2017 to 2022, Torsten was a Professor of Computer Science at Karlsruhe Institute of Technology (KIT).
Torsten has worked as a research consultant for Volkswagen AG, KUKA Roboter GmbH, Connyun GmbH, Manz Automation AG, Auris Surgical Robotics, Inc., Redwood Robotics, Inc., and Google, Inc. Torsten is a board member at Fischerwerke and Pictet Asset Management.
Torsten is an editor or an associate editor of multiple IEEE conference proceedings, books, and book series, and the Multimedia Editor of the Springer Handbook of Robotics.
Among other awards, Torsten received the 2022 IEEE RAS George Saridis Leadership Award, the 2018 IEEE RAS Distinguished Service Award, and the 2014 IEEE RAS Early Career Award. He is an IEEE Fellow.
Robotics can be defined as the intelligent connection between perception and action. Adding cameras and sensors to robot arms or mobile robots makes the task of robot programming challenging (and often economically non-viable). I will show a few examples of how non-expert robotics application developers can use computer-vision and machine-learning algorithms without writing a single line of code and without any knowledge of training and inference.
Compared to robot perception, creating utility value through machine learning for robot actuation is more challenging - especially when physics plays a role in your application (e.g., dealing with unknown or changing friction, modeling contact forces, unknown and changing process parameters, dynamically changing environments). While much less generic, I will show examples of how a novice application programmer can implement applications using force/torque-controlled industrial robots.
I will demonstrate how these concepts can be applied across applications, largely agnostic to robot and sensor hardware, using a new software platform.
Dr. Stephan Martin is a Principal Business AI Strategist at SAP. He works on the AI strategy of SAP and its holistic execution. Dr. Martin joined SAP in 2016 and has gathered broad experience in customer business process innovation with AI and Data Science. Prior to his career at SAP, he held various academic positions in applied mathematics. He was a temporary professor at RWTH Aachen in 2014 and a postdoctoral researcher at Imperial College London from 2012-2016.
Everyone is talking about Generative AI – its A(I)ge has just begun. Applications such as ChatGPT create new content like text, images or even code. But what does this technology bring to the enterprise? Generative AI is emerging as a powerful tool for various business processes and industries. Learn about SAP's thoughts on the future of AI in business.
Professor Mihailidis is the Associate Vice-President for International Partnerships at the University of Toronto, and the Scientific Director of the AGE-WELL Network of Centres of Excellence, which focuses on the development of new technologies and services for older adults. He is a Professor in the Department of Occupational Science and Occupational Therapy (U of T) and in Biomedical Engineering (U of T), with a cross appointment in the Department of Computer Science (U of T).
Professor Mihailidis has been conducting research in the field of technology to support older adults for the past 20 years, having published over 200 journal papers, conference papers, and abstracts in this field. Dr. Mihailidis is also very active in the rehabilitation engineering profession, currently as the Past-President for RESNA (Rehabilitation Engineering and Assistive Technology Society of North America). He was also named a Fellow of RESNA in 2014, which is one of the highest honours within this field of research and practice, and a Fellow in the Canadian Academy of Health Science (CAHS) in 2021 for his contributions to the health and well-being of older Canadians.
In 2022, Dr. Mihailidis was recognized by the UN as one of the Healthy Ageing 50 – 50 leaders working to transform the world to be a better place in which to grow older.
Professor Mihailidis received a B.A.Sc. in Mechanical Engineering from University of Toronto in 1996, a M.A.Sc. in Biomedical Engineering in 1998 from the University of Toronto, and a PhD in Bioengineering (Rehabilitation Engineering) in 2002 from the University of Strathclyde (Glasgow, Scotland).
The field of AgeTech has been emerging for the past several decades, with the development of new technologies and approaches that can support older adults. These technologies range from simple devices and apps to more complex systems like smart homes and robotics. A key aspect in developing this field has been the establishment of consortiums and networks, like the AGE-WELL Network of Centres of Excellence in Canada. This presentation will discuss the area of AgeTech, where the field currently sits, and more importantly where the field is going. It will also present examples of technologies that are currently being developed by the AGE-WELL network and strategies that are being established in Canada to help move AgeTech innovations to the marketplace and into the hands of those people who need these solutions—older adults and their caregivers.
Dr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and a director of the Center for Robot-Assisted Search and Rescue. Her research focuses on artificial intelligence, robotics, and human-robot interaction for emergency management. She has deployed ground, aerial, and marine robots to over 30 disasters in five countries including the 9/11 World Trade Center, Fukushima, Hurricane Harvey, and the Surfside collapse. She is an ACM and IEEE Fellow, a TED speaker, and the author of over 200 papers and four books including the award-winning Disaster Robotics. Her contributions to robotics have been recognized with numerous awards including the ACM Eugene L. Lawler Award for Humanitarian Contributions and the Motohiro Kisoi Rescue Engineering Awards.
Small ground, aerial, and marine robots have been used for disaster response since 2001, but why aren’t they used more often? Why hasn’t AI revolutionized search and rescue? The barriers stem from the collision between the unique constraints of emergency management and the narrow focus of start-up culture and investment. Fortunately, robots, disasters, and high tech can mix if technologists turn to systems thinking.
Michael Pfeiffer is the Head of AI Research at Bosch Research and the Bosch Center for Artificial Intelligence (BCAI) in Renningen. BCAI is the leading AI research lab in European industry, and works in fields such as perception for automated driving, reinforcement learning, embedded AI, dynamics modelling, manipulation robotics, neuro-symbolic AI, and natural language processing, targeting real-world applications in the broad product spectrum of Bosch. BCAI collaborates closely with top academic institutions worldwide, including Carnegie Mellon University, Tsinghua University, Cyber Valley, and the University of Amsterdam. From 2018 to 2021 Dr. Pfeiffer was a senior manager for embedded deep learning at BCAI. Prior to joining Bosch in 2016 he was a group leader at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. He holds an undergraduate degree in Technical Mathematics and a PhD in Computer Science from Graz University of Technology in Austria. In his research Michael Pfeiffer has combined machine learning and computer vision with insights from computational neuroscience. His research has resulted in novel concepts for training spiking neural networks, allowing fast and efficient applications for event-based vision and neuromorphic engineering. At Bosch his research focus has been on novel deep-learning-based concepts for processing automotive radar data, and on developing hardware-aware automated machine learning tools and concepts that allow efficient implementations of powerful deep learning algorithms on embedded hardware.
While the whole world is amazed by the rapid progress and new opportunities through AI, there is still little trust in AI for safety-critical applications in the real, physical world. Whether in automated driving, robotics, IoT, manufacturing, or quality control, the key enabling technologies are safe and robust AI methods for perception and planning. In my talk I will present recent research results at Bosch, in which we try to close the gap between innovative AI research on benchmark data and the demands of real-world applications. Crucial ingredients are the synthesis of realistic data for corner cases, the detection of outliers in the data and of model weaknesses, and finally the training and deployment of neural networks that run reliably and with high performance on constrained, embedded hardware platforms. Combining deep domain knowledge and safety expertise with cutting-edge methods research is the key to success for industrial applications of AI.
As the Head of Market Development, Sameer manages a worldwide team of market development managers who lead the effort to foster the adoption of MathWorks products for Technical Computing and Model-Based Design across different market segments.
He has over thirty years of experience applying MathWorks products in various application areas and, prior to joining MathWorks, Sameer has engaged in the research and development of complex control systems through his work at Visteon, Caterpillar, and Tata Motors. His efforts have been documented through United States patents and in global publications.
Sameer has a Bachelor’s Degree in Mechanical Engineering from the University of Mumbai and received his Ph.D. in Mechanical Engineering from Duke University, specializing in robotic controls and artificial intelligence. He also holds an MBA from the University of Michigan.
Robotics and AI are laying the foundation for breakthroughs in industry and society by realizing smart factories, advancing healthcare, transforming mobility, and more. Growing system complexity and the expectation of faster design iterations accompany these advances and are best addressed through the systematic use of models and data – known as Model-Based Design. This talk highlights how the same foundational principles of Model-Based Design can be applied across flexible manufacturing, surgical robotics, and autonomous driving. This allows reusing experience and sharing best practices across these different domains, resulting in faster realization of the benefits of these transformative technologies.
Robert Riener is full professor for Sensory-Motor Systems at the Department of Health Sciences and Technology, ETH Zurich, and full professor of medicine at the University Hospital Balgrist, University of Zurich. His work focuses on the investigation of the sensory-motor interactions between humans and machines and the development of user-cooperative rehabilitation robots, exoskeletons, and virtual reality technologies. Riener is the initiator and organizer of the CYBATHLON, which was awarded the European Excellence Award, the Yahoo Sports Technology Award, and two categories of the REIMAGINE Education Award. Riener has published more than 500 peer-reviewed journal and conference articles and 36 books and book chapters, and he has filed 26 patents. He has received 26 personal distinctions and awards. In 2018 Riener obtained an honorary doctoral degree from the University of Basel.
The aging of our population is increasingly challenging our healthcare system because of the shortage of funding and personnel. Robots and AI can help solve these challenges, for example, by performing interventions more accurately and faster, or by taking over tasks that are too exhausting for the medical staff or too dangerous for the patient. Robotics and AI can also help to predict and prevent certain diseases or disease symptoms. This talk provides an overview of current and future robotic systems applied to rehabilitation.
Christian is passionate about Robotics and Technology with more than 25 years’ experience in Industrial Robotics and a deep knowledge in collaborative robotics. He studied at the Karlsruhe Institute of Technology and holds a Diplom degree in Physics.
Christian worked in R&D at KUKA Roboter GmbH, where he was responsible for collaborative systems and perception. He was also responsible for “Sunrise”, a robotics operating system for the KUKA Light Weight Robot LBR.
He is now with Festo, where he leads a highly talented robotics team with the aim of entering the collaborative robot market with an innovative pneumatically driven robot – the “Festo Cobot”. The “Festo Cobot” was shown at the Hanover Fair and the Automatica fair in 2022 and attracted considerable interest from the robotics community and potential customers alike.
He loves to develop small robots and flight physics engines in his limited spare time.
Nature gives us important and valuable impulses for technology. Neural networks are copies of nature’s incredible brain design, and we have even created a new discipline called bionics – the science of copying and adapting nature’s solutions, not only but also for industrial automation. One of the emerging fields in robotics is “robots for humans”, so-called cobots – machines that can interact and collaborate with humans. The goal must be to design such machines in a way that they are intrinsically accepted by their human companions. Therefore, one focus must be to build in intrinsic compliance; another focus must address easy – or natural – interaction between human and machine using more sensors and artificial intelligence technology.
The Festo Cobot – pCobot – is such a design approach for an industrial co-worker. We will show how we use state-of-the-art controlled pneumatics technology to achieve more “human behavior” without losing the required capability of an industrial machine. Controlling such a machine is a challenge, and here the use of AI and ML is also being evaluated. Also important for “robots for humans” is ease of use – especially in unstructured environments – and here AI and ML are also the key to lowering the entry hurdle for using cobots. “Driver assistance systems” for robots will lead the way to increasingly autonomous systems that will one day be not just a robot but rather a companion to the human. “Bin picking” is an excellent example of simplifying setup and operation, and it already shows the advantage of AI and ML in extending the limits of today’s automation. This talk will show a different design approach for industrial cobots and will also cover why AI and ML are key technologies for robotics – whether to create a more natural human-robot interaction or to increase the use of robots in service or industrial applications.
Jeremy Wyatt is Director of Applied Science at Amazon Robotics. He leads a team that invents and deploys robotic manipulation systems to handle packages and products. The robotic systems which he has contributed to currently pick tens of millions of packages every week to fulfill customer orders. He was previously Professor of Robotics and Artificial Intelligence at the University of Birmingham, in the UK. He has authored 120 papers on topics in Robotics and Artificial Intelligence, including machine learning, robot vision, robotic manipulation, probabilistic AI and robotic task planning. He earned his PhD from the University of Edinburgh.
Amazon has challenging robotic manipulation problems in terms of scale, item variability and process variability. In the past four years we have invented our own approaches to a variety of pick and place tasks. I will talk, at a high level, about the visual perception, grasp learning, failure detection, continual learning and automated A/B testing that we have used to deploy hundreds of robots that pick millions of packages every day. I will also cover the even more challenging problem of manipulating the hundreds of millions of products we have in our catalogue, including fast pick and place, damage prevention and stowing to dense storage.
Meet the experts: The munich_i Hightech Summit speakers are some of the most relevant international robotics and AI visionaries. Read up on the people who have contributed their pioneer’s knowledge to the past events. In addition to their expert profiles, we have summarized their most important theses and findings in short videos.