{"id":9560,"date":"2024-09-17T09:31:57","date_gmt":"2024-09-17T13:31:57","guid":{"rendered":"https:\/\/www.purdue.edu\/newsroom\/?p=9560"},"modified":"2024-09-17T10:52:40","modified_gmt":"2024-09-17T14:52:40","slug":"raising-robots-teaching-robots-things-humans-learn-including-navigation-movement-dance-spatial-reasoning","status":"publish","type":"post","link":"https:\/\/www.purdue.edu\/newsroom\/2024\/Q3\/raising-robots-teaching-robots-things-humans-learn-including-navigation-movement-dance-spatial-reasoning","title":{"rendered":"Raising robots: Teaching robots things humans learn, including navigation, movement, dance, spatial reasoning"},"content":{"rendered":"\n<p>WEST LAFAYETTE, Ind. \u2014 Being a baby is harder than it looks. Born into the world knowing almost nothing, they spend their first few years getting a PhD in navigating a physical environment.<\/p>\n\n\n\n<p>Likewise, artificial intelligence programs come into the world as a stream of code, knowing nothing, with no experience and no abilities. But while babies learn about physics from the moment they\u2019re born, robots have to be programmed to understand innately human traits.<\/p>\n\n\n\n<p>\u201cA robot needs to interact with the world,\u201d said <a href=\"https:\/\/www.cs.purdue.edu\/people\/faculty\/bera89.html\" target=\"_blank\" rel=\"noreferrer noopener\">Aniket Bera<\/a>, associate professor of computer science in Purdue University\u2019s <a href=\"https:\/\/www.purdue.edu\/science\/\" target=\"_blank\" rel=\"noreferrer noopener\">College of Science<\/a> and an AI expert. \u201cHuman brains learn through experience and by extrapolating their experiences to new situations. Our brain learns properties are transferable. Machine learning models don\u2019t currently do this. That\u2019s the gap we\u2019re addressing. 
This is very foundational research.\u201d<\/p>\n\n\n\n    <div  class=\"purdue-home-quick-links-static \">\n        <div class=\"tagged-header-container\">\n\n            <h2 class=\"tagged-header\"><span>ADDITIONAL INFORMATION<\/span><\/h2>\n        \n        <\/div>\n\n       <ul class=\"quick-links-content\">\n                                        <li class=\"quick-link__item\">\n                                                                <a class=\"quick-link__link\"\n                                    href=\"https:\/\/www.purdue.edu\/newsroom\/2023\/Q1\/youve-got-to-have-heart-computer-scientist-works-to-help-ai-comprehend-human-emotions\/ \" target=\"_blank\">\n                                    You\u2019ve got to have heart: Computer scientist works to help AI comprehend human emotions                                <\/a>\n                            <\/li>\n                                                <li class=\"quick-link__item\">\n                                                                <a class=\"quick-link__link\"\n                                    href=\"https:\/\/www.purdue.edu\/newsroom\/2023\/Q2\/approaching-artificial-intelligence-how-purdue-is-leading-the-research-and-advancement-of-ai-technologies\/ \" target=\"_blank\">\n                                    Approaching artificial intelligence: How Purdue is leading the research and advancement of AI technologies                                <\/a>\n                            <\/li>\n                            <\/ul>\n\n<\/div>\n\n\n\n\n<p>As babies begin exploring the world around them, they touch their parents\u2019 hair and nose. They graduate to pushing food off high chairs, putting toys in a bucket (and dumping them out!), sorting shapes into the correct compartments, and splashing in the tub. Babies walk. They navigate. And babies dance.<\/p>\n\n\n\n<p>Bera is trying to teach robots to do those things, too. 
Helping robots understand the physical world improves their functionality and navigation, making them more practical in situations from wilderness rescue to food delivery.<\/p>\n\n\n\n<p>To do that, AI programs must first be trained to recognize shapes, to understand how to form sentences, to understand queries and responses, and to acknowledge their names and commands. AI is a foundational component of the <a href=\"https:\/\/www.purdue.edu\/computes\/institute-for-physical-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">Institute for Physical Artificial Intelligence<\/a>, a <a href=\"https:\/\/www.purdue.edu\/computes\/\" target=\"_blank\" rel=\"noreferrer noopener\">Purdue Computes initiative.<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Elementary, my dear robot<\/h2>\n\n\n\n<p>Most people don\u2019t know how the math behind fluid dynamics works, or parabolic motion, or inertia. But they do know that water will splash out when a heavy rock is dropped into a full bucket, how to catch a ball, and when to push a child on a swing so that they go as high as possible.<\/p>\n\n\n\n<p>To robots, everything is math, and scientists must teach robots how physics works, which is more complicated than it sounds. Humans extrapolate to exist \u2014 robots do not naturally extrapolate from what they know to new situations.<\/p>\n\n\n\n<p>Humans assume the floor will still be there when they get out of bed each morning; they don\u2019t check. If gravity worked yesterday, humans assume it will today. If one book falls when pushed off a table, humans don\u2019t need to test it to know all the others will. And humans assume the same goes for every other object in existence.<\/p>\n\n\n\n<p>\u201cTrying to model the movement of objects, liquids and gases is challenging,\u201d Bera said. \u201cScientists have traditionally used mathematical models, but they\u2019re very computationally heavy and data-intensive. 
We are building machine learning AI models that can understand the physics of situations, such as how a ball will move in an environment. Lots of AIs can recognize a ball, but far fewer understand what happens when a ball moves. Babies play with blocks to intrinsically learn physics. Their towers fall many times, but each time they\u2019re learning.\u201d<\/p>\n\n\n\n<p>Even once an AI is trained to understand how a small ball moves, it can\u2019t extrapolate the dynamics of the system to other shapes \u2014 or even to similar shapes with different colors or textures.<\/p>\n\n\n\n<p>Using math to teach robots and computer programs to extrapolate is a massive challenge and requires new and unique approaches to programming and machine learning.<\/p>\n\n\n\n<p>\u201cWhen you try to get an AI program to apply acquired knowledge to a different situation than the one it was trained on, it fails miserably,\u201d Bera said. \u201cThis is also a problem with equity and ethics. Early facial recognition software struggled because it was trained on predominantly white faces. When it was tested on non-white faces, it couldn\u2019t identify them. That\u2019s because your models don\u2019t transfer from one situation to another. They need to understand what a face is, what hair looks like, what a nose is, not just a collection of pixels. 
They need to go beyond pixels to understanding the whole to be able to apply knowledge in new areas or new applications.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"493\" src=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/bera-dancing.jpg\" alt=\"Two researchers in motion capture suits dancing together\" class=\"wp-image-9548\" title=\"\" srcset=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/bera-dancing.jpg 876w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/bera-dancing-300x169.jpg 300w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/bera-dancing-768x432.jpg 768w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><figcaption class=\"wp-element-caption\">Bera and his team used motion capture sensors to record dancers\u2019 movements, which they later used to help program robots with more natural movements \u2014 including dance. (Purdue University photo\/Jonathan Poole)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Dancing the robot<\/h2>\n\n\n\n<p>Robots can do lots of things humans typically can\u2019t do. Translate any language into any other language, for instance. Track thousands of satellites at once. Solve process and math problems faster than most humans and almost all other animals.<\/p>\n\n\n\n<p>But they can\u2019t do something even 1-year-old humans and parrots on YouTube can do, and that\u2019s dance.<\/p>\n\n\n\n<p>Is that a problem? Dancing robots aren\u2019t in high demand, generally. 
But as it turns out, the ability to dance says important things about a being\u2019s \u2014 human or otherwise \u2014 coordination, cognition and perception.<\/p>\n\n\n\n<p>Robots don\u2019t need to dance, of course, but the programming introduces fluidity and competence of movement, helping them grasp how other physical objects, including potentially their own robot bodies, move in a physical environment.<\/p>\n\n\n\n<p>Teaching robots to understand the way humans naturally move also makes them more predictable. If they understand normal human movements, they can guess how humans in a crowd will move. That ability to extrapolate future movement lowers the chance of a robot colliding with a human and raises the chance that it can quickly reach someone who needs help or efficiently get to where it needs to be.<\/p>\n\n\n\n<p>\u201cModeling human motion is a very complex problem,\u201d Bera said. \u201cEspecially in dancing, everything moves very dynamically and rhythmically. Dance incorporates the content of the lyrics of the song, the emotions of the melody. Generating natural humanlike motions is a very difficult proposition.\u201d<\/p>\n\n\n\n<p>How does a robot learn to <a href=\"https:\/\/www.youtube.com\/watch?v=wvEwsc0Qmnw\" target=\"_blank\" rel=\"noreferrer noopener\">define dancing<\/a>? By watching dancers. To that end, Bera and his team have filmed dozens of people performing a diverse range of dance forms, from traditional cultural styles such as Latin, ballroom and folk to TikTok dances.<\/p>\n\n\n\n<p>Using motion capture technology, they are analyzing the dance movements, then using that data to train a machine learning model. Their goal is not only to teach an AI to dance by itself, but also to dance as a pair \u2014 potentially even with a human partner.<\/p>\n\n\n\n<p>\u201cDuet forms \u2014 where people dance together \u2014 are an order of magnitude more difficult than single dances,\u201d Bera said. 
\u201cWe are analyzing how couples interact, how they anticipate each other\u2019s moves and how they react to the music. This is dancing as directed research; all these insights go to make robots safer and more helpful to humans.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Where we\u2019re going, we don\u2019t need roads<\/h2>\n\n\n\n<p>GPS can tell drivers to turn left, turn right, follow street signs, and check databases for traffic and construction updates. What it can\u2019t do is help when there\u2019s no road. While some navigation software can give directions over roadless terrain by using words like \u201cnorth\u201d and coordinates on a map, it can\u2019t adapt on the fly, and it certainly can\u2019t adapt around moving targets, including shifting sands and human crowds.<\/p>\n\n\n\n<p>That\u2019s a problem for robots whose remit includes rescuing trapped humans in unstable landscapes or quickly navigating a crowd. In dynamic, even dangerous, environments, robots need to know how to stay safe and how to keep other people safe.<\/p>\n\n\n\n<p>Like your robot vacuum continually butting into the same desk it has run into the last hundred times it cleaned your floors, robots don\u2019t typically understand how their bodies move in space and how to solve physical problems. They think about obstacles differently than humans do.<\/p>\n\n\n\n<p>\u201cWhen robots move through the environment, they have to worry about the terrain, but they also have to navigate around humans,\u201d Bera said. \u201cRobots need to be able to find and predict gaps in a crowd and understand how people will move before they do it. We humans do this subconsciously all the time, but robots need to be taught.\u201d<\/p>\n\n\n\n<p>Part of the issue is that robots don\u2019t see the same way humans or other animals do. Picture the robot vacuum again, terrified of a \u201ccliff\u201d that is actually a stark shadow on the floor. 
Biological sight relies not only on images from the eye but also on interpretation by the brain, a process scientists still don\u2019t fully understand. The poor robots only have cameras \u2014 which means they need to use as many as possible to get a comprehensive and correct view of their environment.<\/p>\n\n\n\n<p>Part of Bera\u2019s newest research project involves braiding multiple visual inputs together in a robot\u2019s brain to give it a better understanding of its surroundings. Those camera angles could include the robot\u2019s own \u201ceyes\u201d as well as cameras on drones or remote cameras held by humans or attached to other robots. Robots live in the real world, not the idealized ones that exist inside cleanrooms and dry labs.<\/p>\n\n\n\n<p>\u201cRobots need to be able to quickly grasp and map their environment, especially if they\u2019re on a planet\u2019s surface,\u201d Bera said. \u201cNatural terrain is unpredictable, and it can shift. Different cameras give different viewpoints, and the robot needs to be able to integrate all that information.\u201d<\/p>\n\n\n\n<p>In addition, Bera is programming AI models to make decisions based on goals. If speed is of the essence, say in delivering vaccines or rescue materials, a robot might take the most direct route. But in other situations, the robot may need to weigh keeping itself safe. If its mission isn\u2019t time-sensitive or urgent over the short term, keeping the robot\u2019s mechanism in good working order is also a goal.<\/p>\n\n\n\n<p>If a robot is trying to deliver a pizza to someone in a building\u2019s basement, falling down a stairwell isn\u2019t the best way to go. 
The pizza might get there, but it\u2019ll be squashed along with the robot.<\/p>\n\n\n\n<p>In other settings, including extraplanetary research and even exploration of extreme environments on Earth, robots need to plan around natural conditions like weather.<\/p>\n\n\n\n<p>\u201cIf it\u2019s raining, and the robot isn\u2019t in a hurry, can it plot a course that keeps it under trees and cover?\u201d Bera said. \u201cIn a windstorm on Mars, can it figure out how to use boulders and crater rims to stay safe?\u201d<\/p>\n\n\n\n<p>Teaching humans to correctly weigh and assess risk is challenging enough. Trying to get a robot or AI to do the same thing requires helping them understand not only the physical and mathematical components, but also more nebulous factors, including long-term goals, probability and understanding of humans.<\/p>\n\n\n\n<p>\u201cIf robots understand human emotions, behavior, motions and environment, that understanding will help them be more effective, more beneficial and more successful in the long term,\u201d Bera said.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">About Purdue University<\/h2>\n\n\n\n<p>Purdue University is a public research institution demonstrating excellence at scale. Ranked among the top 10 public universities and with two colleges in the top four in the United States, Purdue discovers and disseminates knowledge with a quality and at a scale second to none. More than 105,000 students study at Purdue across modalities and locations, including nearly 50,000 in person on the West Lafayette campus. Committed to affordability and accessibility, Purdue\u2019s main campus has frozen tuition 13 years in a row. 
See how Purdue never stops in the persistent pursuit of the next giant leap \u2014 including its first comprehensive urban campus in Indianapolis, the\u202fMitch Daniels School of Business, Purdue Computes and the One Health initiative\u202f\u2014 at\u202f<a href=\"https:\/\/www.purdue.edu\/president\/strategic-initiatives\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.purdue.edu\/president\/strategic-initiatives<\/a>.<\/p>\n\n\n<div id=\"note\" class=\"post-content__attribution \">\n    <div class=\"columns\"> \n                    <div class=\"column\"> \n                <p class=\"post-content__source\">\n                    <strong>Media contact:<\/strong> Brittany Steff, <a href=\"mailto:bsteff@purdue.edu\">bsteff@purdue.edu<\/a>                <\/p>\n            <\/div>\n                    <\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>WEST LAFAYETTE, Ind. \u2014 Being a baby is harder than it looks. Born into the world knowing almost nothing, they spend their first few years getting a PhD in navigating a physical environment. 
Likewise, artificial intelligence programs come into the<\/p>\n","protected":false},"author":25,"featured_media":9546,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[263],"tags":[],"department":[],"source":[29],"purdue_today_topic":[],"coauthors":[77],"class_list":["post-9560","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-purdue-computes","source-purdue-news"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9560","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/users\/25"}],"replies":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/comments?post=9560"}],"version-history":[{"count":1,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9560\/revisions"}],"predecessor-version":[{"id":9561,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9560\/revisions\/9561"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/media\/9546"}],"wp:attachment":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/media?parent=9560"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/categories?post=9560"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/tags?post=9560"},{"taxonomy":"department","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/department?post=9560"},{"taxonomy":"source","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/source?post=9560
"},{"taxonomy":"purdue_today_topic","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/purdue_today_topic?post=9560"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/coauthors?post=9560"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}