How will humans and smart robots work together in the future – not just in industrial environments, but in the private domain, medical technology, and disaster control as well? The speakers in this session present examples and address acceptance and ethical issues.
AI-enabled robots will solve some of humankind’s most pressing issues, says Prof. Dr.-Ing. Toshio Fukuda of Nagoya University, Japan. He heads one of the Japanese government’s Moonshot R&D programs on smart robots and supports his claim with examples from his research.
Prof. Dr. Aude Billard, a Swiss national who has been honored for her work in human-robot interaction, is the President-Elect of the global IEEE Robotics and Automation Society (RAS). She presents machine learning methods for fast and reactive robot control – an essential precondition for robot deployments outside of industrial environments.
The presentation by Prof. Dr. Robin R. Murphy, Texas A&M University, USA, focuses on her specific field of research: the development of AI-based robots for disaster control. And she knows what she’s talking about: her robots have been deployed in disaster response operations in five countries.
Prof. Dr. Virginia Dignum heads a group conducting research into the social and ethical impact of AI at Umeå University, Sweden. She calls for responsible AI and advocates a multidisciplinary approach to handling societal impacts. In this context, the required regulation becomes a stepping stone for innovation, to the benefit of humanity.
Robotics and AI can take on mega-trend problems such as the aging society, climate change, and the world’s energy and food supply in the future, toward 2050 and beyond. Intelligent robotic technology can support a wide variety of human life: in the near future, elderly people will be able to live independently and comfortably with less assistance from others, and robots and AI will help humans find new solutions and discoveries in many wider applications, including design and manufacturing in automation and many others.
Today’s asymmetry in function between humans and robots will thus change dramatically into a more symmetric relationship: easy to use and mutually dependent on equal terms. Such symmetric AI robots will help people find new scientific and technological solutions and discoveries in many fields, so that there will be evolutionary changes in our society, not only in manufacturing but also in the design of devices and systems for automation, and even in the structure of society itself.
To make such an ambitious goal achievable, AI robots need the capabilities of co-evolution and self-organization. I will present a new initiative on AI and robotics, the Moonshot Program, which takes on these mega-trend problems.
In this way, robotics and AI will greatly change the structure and architecture of the world itself in the future, toward 2050 and beyond.
Toshio Fukuda received his Dr. Eng. degree from the University of Tokyo, Tokyo, Japan, in 1977.
He is currently Professor Emeritus of Nagoya University (2013), visiting professor at Nagoya University (2013-), Professor at Meijo University (2013-2022), and Professor at Waseda University (2019-). His main field is bio-robotics, especially micro- and nano-robotics.
Dr. Fukuda was IEEE President (2020), IEEE Director of Division X, Systems and Control (2017-2018), and IEEE Region 10 Director (2013-2014), and served as President of the IEEE Robotics and Automation Society (1998-1999), Director of IEEE Division X, Systems and Control (2001-2002), Co-founding Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics (2000-2002), Editor-in-Chief of the ROBOMECH Journal, Springer (2013-), and Editor-in-Chief of the Journal of Cyborg and Bionic Systems (2018-). He was Founding President of the IEEE Nanotechnology Council (2002-2003, 2005). He was elected a member of the Science Council of Japan (2008-2013). He has organized many conferences as founding chair, including the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS, 1988), System Integration International (SII, 2008), and Cyborg and Bionic Systems (CBS, 2017).
Dr. Fukuda’s honors include the IEEE Robotics and Automation Pioneer Award (2004), the IEEE Robotics and Automation Technical Field Award (2010), an honorary doctorate from the Aalto University School of Science and Technology (2010), membership in the Japan Academy of Engineering (2013), the Friendship Award of the State Administration of Foreign Experts Affairs of the PR China (2014), the Medal of Honor with Purple Ribbon (2015), foreign membership in the Chinese Academy of Sciences (2017), the Chunichi Culture Award (2019), and the Order of the Sacred Treasure, Gold Rays with Neck Ribbon (2022).
He is an IEEE Fellow (1995), SICE Fellow (1995), JSME Fellow (2001), and RSJ Fellow (2004).
Today, many would like to deploy robots everywhere: in the streets, as cars, wheelchairs, and other mobility devices; in our homes, to cook, clean, and entertain us; on the body, to replace a lost limb or to augment its capabilities. For these robots to become reality, they must depart from their ancestors in one dramatic way: they must escape from the comfortable, secluded, and largely predictable industrial world. In the past decades, robotics has made leaps forward in the design of increasingly complex robotic platforms to meet these challenges. In this endeavor, it has benefited from advances in optimization for solving high-dimensional constrained problems. These methods are powerful for planning in slow-paced tasks and when the environment is known. Advances in machine learning for analyzing vast amounts of data have offered powerful solutions for real-time control, but they often fall short of providing explicit guarantees on the learned model. The alternative is to develop machine learning methods that retain the theoretical guarantees traditionally provided by control theory.
A key issue faced by robotics today is to endow robots with the reactivity needed to adapt their path in time-critical situations. Online reactivity is not just a matter of ensuring that there is a good-enough central processing unit on board the robot. It requires inherently robust control laws that can provide multiple solutions. Methods that combine machine learning and control theory do not require large datasets and allow robots to learn control laws from only a handful of examples, while generalizing to the entire state space.
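The abstract does not name a specific algorithm, but the general idea of learning a control law that keeps control-theoretic guarantees can be illustrated with a minimal sketch: assume a simple linear dynamical-system policy x_dot = A(x - x*), fit A to a handful of demonstrations, and then adjust A so that the target x* is provably a globally stable attractor (a basic Lyapunov argument). The NumPy formulation, function names, and toy data below are illustrative assumptions, not the speaker’s actual method.

```python
import numpy as np

def fit_stable_linear_ds(X, X_dot, target, margin=1e-2):
    """Fit x_dot = A (x - target) by least squares, then adjust A so that its
    symmetric part is negative definite.  With V(x) = ||x - target||^2 as a
    Lyapunov function, every trajectory of the learned law then converges to
    the target, regardless of where the robot starts.

    X, X_dot : (N, d) demonstrated states and velocities.
    target   : (d,) attractor, e.g. a grasp or placement point (hypothetical).
    """
    E = X - target                              # errors relative to the attractor
    W, *_ = np.linalg.lstsq(E, X_dot, rcond=None)
    A = W.T                                     # so that x_dot ~= A @ (x - target)
    # Stability projection: push eigenvalues of the symmetric part below -margin.
    S = 0.5 * (A + A.T)
    eigvals, eigvecs = np.linalg.eigh(S)
    eigvals = np.minimum(eigvals, -margin)
    return eigvecs @ np.diag(eigvals) @ eigvecs.T + 0.5 * (A - A.T)

# A handful of noisy demonstrations pointing toward the origin in 2-D.
rng = np.random.default_rng(0)
target = np.zeros(2)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
X_dot = -X + 0.05 * rng.normal(size=X.shape)    # demonstrated velocities

A = fit_stable_linear_ds(X, X_dot, target)

def policy(x):
    """Reactive control law: commanded velocity for the current state."""
    return A @ (x - target)

print(policy(np.array([0.5, -0.3])))            # velocity command at a query state
```

The projection step is what distinguishes this from plain regression: however few or noisy the demonstrations, the returned policy cannot diverge, which is the kind of explicit guarantee on the learned model that the abstract refers to.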
Prof. Aude Billard is Head of the LASA laboratory in the School of Engineering at the Swiss Federal Institute of Technology Lausanne (EPFL) and holds a B.Sc. and M.Sc. in Physics from EPFL (1995) and a Ph.D. in Artificial Intelligence (1998) from the University of Edinburgh. Aude Billard’s research spans the fields of machine learning and robotics, with a particular emphasis on learning from sparse data and performing fast and robust retrieval. This work finds application in robotics, human-robot/human-computer interaction, and computational neuroscience. Aude Billard leads the Swiss National Thematic Network Innovation Booster on Robotics, a half-million fund in support of industrial-academic partnerships, and is the current President-Elect of the IEEE Robotics and Automation Society.
Small ground, aerial, and marine robots have been used for disaster response since 2001, but why aren’t they used more often? Why hasn’t AI revolutionized search and rescue? The barriers stem from the collision between the unique constraints of emergency management and the narrow focus of start-up culture and investment. Fortunately, robots, disasters, and high tech can mix if technologists turn to systems thinking.
Dr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and a director of the Center for Robot-Assisted Search and Rescue. Her research focuses on artificial intelligence, robotics, and human-robot interaction for emergency management. She has deployed ground, aerial, and marine robots to over 30 disasters in five countries, including the 9/11 World Trade Center, Fukushima, Hurricane Harvey, and the Surfside collapse. She is an ACM and IEEE Fellow, a TED speaker, and the author of over 200 papers and four books, including the award-winning Disaster Robotics. Her contributions to robotics have been recognized with numerous awards, including the ACM Eugene L. Lawler Award for Humanitarian Contributions and the Motohiro Kisoi Rescue Engineering Award.
Responsible Artificial Intelligence (AI) is not an option but the only possible way forward. It involves understanding AI’s nature, design choices, societal role, and ethical considerations. AI extends human capabilities but requires addressing challenges in education, jobs, and biases. Recognizing the societal role of AI is vital: it is not an autonomous entity but rather depends on human responsibility and decision-making. Recent AI developments, including foundational models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who develop and deploy AI systems.
In this talk, I will further discuss the need for a relational perspective on AI that emphasizes acceptance, trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility means designing AI systems with values in mind and implementing regulations, governance, monitoring, agreements, and norms.
Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation.
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, and director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software, the largest Swedish national research program on fundamental multidisciplinary research into the societal and human impact of AI. She is a member of the Royal Swedish Academy of Engineering Sciences (IVA) and a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the Global Partnership on AI (GPAI), the World Economic Forum’s Global Artificial Intelligence Council, the Executive Committee of the IEEE Initiative on Ethically Aligned Design, ALLAI (the Dutch AI Alliance), the EU’s High-Level Expert Group on Artificial Intelligence, and the UNESCO expert group on the implementation of the AI recommendations, and she led UNICEF’s guidance on AI and children. She is the author of “Responsible Artificial Intelligence: developing and using AI in a responsible way”.
The session Humans, AI and Robots! will be hosted by Session Chair Prof. Dr. Stefan Leutenegger, who holds the Chair of Machine Learning for Robotics at TU Munich.
Technologies have the potential to assist humans and improve our quality of life. However, it is essential to design technologies with a focus on the benefits for the many, not the few. Questions of social justice and equity need to move to the center of technology development, particularly in fields such as AI and robotics. To this end, we must foster interdisciplinary collaborations between the social sciences and AI research and address social, ethical, and political questions in an integrated way from the earliest stages of technology development.