{"id":9530,"date":"2024-09-16T13:08:37","date_gmt":"2024-09-16T17:08:37","guid":{"rendered":"https:\/\/www.purdue.edu\/newsroom\/?p=9530"},"modified":"2024-09-16T13:08:38","modified_gmt":"2024-09-16T17:08:38","slug":"autonomous-vehicles-could-understand-their-passengers-better-with-chatgpt-research-shows","status":"publish","type":"post","link":"https:\/\/www.purdue.edu\/newsroom\/2024\/Q3\/autonomous-vehicles-could-understand-their-passengers-better-with-chatgpt-research-shows","title":{"rendered":"Autonomous vehicles could understand their passengers better with ChatGPT, research shows"},"content":{"rendered":"\n<p>WEST LAFAYETTE, Ind. \u2014 Imagine simply telling your vehicle, \u201cI\u2019m in a hurry,\u201d and it automatically takes you on the most efficient route to where you need to be.<\/p>\n\n\n\n<p>Purdue University engineers have found that an autonomous vehicle (AV) can do this with the help of ChatGPT or other chatbots made possible by artificial intelligence algorithms called large language models.<\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2312.09397\">The study<\/a>, to be presented Sept. 25 at the <a href=\"https:\/\/ieee-itsc.org\/2024\/\">27th IEEE International Conference on Intelligent Transportation Systems<\/a>, may be among the first experiments testing how well a real AV can use large language models to interpret commands from a passenger and drive accordingly.<\/p>\n\n\n\n<p><a href=\"https:\/\/engineering.purdue.edu\/CCE\/People\/ptProfile?resource_id=271311\">Ziran Wang<\/a>, an assistant professor in Purdue\u2019s <a href=\"https:\/\/engineering.purdue.edu\/CCE\">Lyles School of Civil and Construction Engineering<\/a> who led the study, believes that for vehicles to be fully autonomous one day, they\u2019ll need to understand everything that their passengers command, even when the command is implied. 
A taxi driver, for example, would know what you need when you say that you\u2019re in a hurry without you having to specify the route the driver should take to avoid traffic.<\/p>\n\n\n\n<p>Although today\u2019s AVs come with features that allow you to communicate with them, they need you to be clearer than would be necessary if you were talking to a human. In contrast, large language models can interpret and give responses in a more humanlike way because they are trained to draw relationships from huge amounts of text data and keep learning over time.<\/p>\n\n\n\n<p>\u201cThe conventional systems in our vehicles have a user interface design where you have to press buttons to convey what you want, or an audio recognition system that requires you to be very explicit when you speak so that your vehicle can understand you,\u201d Wang said. \u201cBut the power of large language models is that they can more naturally understand all kinds of things you say. I don\u2019t think any other existing system can do that.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conducting a new kind of study<\/h2>\n\n\n\n<p>In this study, large language models didn\u2019t drive an AV. Instead, they were assisting the AV\u2019s driving using its existing features. 
Wang and his students found that, by integrating these models, an AV could not only better understand its passenger but also personalize its driving to the passenger\u2019s satisfaction.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"493\" src=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avgadgets.jpg\" alt=\"A student sits in a vehicle\u2019s driver\u2019s seat, hands in lap and surrounded by several gadgets hooked up to the vehicle\u2019s interior\" class=\"wp-image-9463\" title=\"\" srcset=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avgadgets.jpg 876w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avgadgets-300x169.jpg 300w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avgadgets-768x432.jpg 768w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><figcaption class=\"wp-element-caption\">Purdue PhD student Can Cui sits for a ride in the test autonomous vehicle. A microphone in the console picks up his commands, which large language models in the cloud interpret. The vehicle drives according to instructions generated from the large language models. (Purdue University photo\/John Underwood)<\/figcaption><\/figure>\n\n\n\n<p>Before starting their experiments, the researchers trained ChatGPT with prompts that ranged from more direct commands (e.g., \u201cPlease drive faster\u201d) to more indirect commands (e.g., \u201cI feel a bit motion sick right now\u201d). 
As ChatGPT learned how to respond to these commands, the researchers gave the large language models parameters to follow, requiring them to take into consideration traffic rules, road conditions, the weather and other information detected by the vehicle\u2019s sensors, such as cameras and light detection and ranging (lidar).<\/p>\n\n\n\n<p>The researchers then made these large language models accessible over the cloud to an experimental vehicle with <a href=\"https:\/\/www.sae.org\/blog\/sae-j3016-update\">level four autonomy as defined by SAE International<\/a>. Level four is one level away from what the industry considers to be a fully autonomous vehicle.<\/p>\n\n\n\n<p>When the vehicle\u2019s speech recognition system detected a command from a passenger during the experiments, the large language models in the cloud reasoned about the command using the parameters the researchers defined. Those models then generated instructions for the vehicle\u2019s drive-by-wire system \u2014 which is connected to the throttle, brakes, gears and steering \u2014 regarding how to drive according to that command.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"493\" src=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avtrunk.jpg\" alt=\"A student and professor on either side of a vehicle\u2019s open trunk look at electronic devices and wiring installed inside\" class=\"wp-image-9464\" title=\"\" srcset=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avtrunk.jpg 876w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avtrunk-300x169.jpg 300w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avtrunk-768x432.jpg 768w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><figcaption class=\"wp-element-caption\">The trunk of the test autonomous vehicle contains a drive-by-wire system that allows large language models in the cloud to 
assist the vehicle with responding to a passenger\u2019s commands. Pictured from left to right: Purdue PhD student Zichong Yang and Purdue assistant professor Ziran Wang. (Purdue University photo\/John Underwood)<\/p>\n\n\n\n<p>For some of the experiments, Wang\u2019s team also tested a memory module they had added to the system, which allowed the large language models to store data about the passenger\u2019s historical preferences and learn how to factor them into a response to a command.<\/p>\n\n\n\n<p>The researchers conducted most of the experiments at a proving ground in Columbus, Indiana, which used to be an airport runway. This environment allowed them to safely test the vehicle\u2019s responses to a passenger\u2019s commands while driving at highway speeds on the runway and handling two-way intersections. They also tested how well the vehicle parked according to a passenger\u2019s commands in the lot of Purdue\u2019s Ross-Ade Stadium.<\/p>\n\n\n\n<p>While riding in the vehicle, the study participants used both commands that the large language models had learned and ones that were new. Based on their survey responses after their rides, the participants reported less discomfort with the decisions the AV made than is typical, according to data on how people tend to feel when riding in a level four AV with no assistance from large language models.<\/p>\n\n\n\n<p>The team also compared the AV\u2019s performance to baseline values created from data on what people, on average, consider a safe and comfortable ride, such as how much time the vehicle allows for a reaction to avoid a rear-end collision and how quickly it accelerates and decelerates. 
The researchers found that the AV in this study outperformed all baseline values while using the large language models to drive, even when responding to commands the models hadn\u2019t already learned.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"493\" src=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avstudents.jpg\" alt=\"A student sits in a car\u2019s rear with a keyboard in his lap watching a mounted screen while another student sits in the driver\u2019s seat\" class=\"wp-image-9465\" title=\"\" srcset=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avstudents.jpg 876w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avstudents-300x169.jpg 300w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avstudents-768x432.jpg 768w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><figcaption class=\"wp-element-caption\">While study participants sat in the driver\u2019s seat of the test autonomous vehicle and spoke commands, a Purdue researcher sat in the back to monitor the large language models and feeds from the vehicle\u2019s cameras. Pictured from back to front of the vehicle: Purdue master\u2019s student Yupeng Zhou and PhD student Can Cui. (Purdue University photo\/John Underwood) <\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Future directions<\/h2>\n\n\n\n<p>The large language models in this study averaged 1.6 seconds to process a passenger\u2019s command, which is considered acceptable in non-time-critical scenarios but should be improved upon for situations when an AV needs to respond faster, Wang said. 
This is a problem that affects large language models in general and is being tackled by the industry as well as by university researchers.<\/p>\n\n\n\n<p>Although not the focus of this study, it\u2019s known that large language models like ChatGPT are prone to \u201challucinate,\u201d which means that they can misinterpret something they learned and respond in the wrong way. Wang\u2019s study was conducted in a setup with a fail-safe mechanism that allowed participants to safely ride when the large language models misunderstood commands. The models improved in their understanding throughout a participant\u2019s ride, but hallucination remains an issue that must be addressed before vehicle manufacturers consider implementing large language models into AVs.<\/p>\n\n\n\n<p>Vehicle manufacturers also would need to do much more testing with large language models on top of the studies that university researchers have conducted. Regulatory approval would additionally be required for integrating these models with the AV\u2019s controls so that they can actually drive the vehicle, Wang said.<\/p>\n\n\n\n<p>In the meantime, Wang and his students are continuing to conduct experiments that may help the industry explore the addition of large language models to AVs.<\/p>\n\n\n\n<p>Since their study testing ChatGPT, the researchers have evaluated other public and private chatbots based on large language models, such as Google\u2019s Gemini and Meta\u2019s series of Llama AI assistants. So far, they\u2019ve seen ChatGPT perform the best on indicators for a safe and time-efficient ride in an AV. 
Published results are forthcoming.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"493\" src=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avdriving.jpg\" alt=\"A white, Purdue-branded midsize crossover vehicle drives through a parking lot\" class=\"wp-image-9466\" title=\"\" srcset=\"https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avdriving.jpg 876w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avdriving-300x169.jpg 300w, https:\/\/www.purdue.edu\/newsroom\/wp-content\/uploads\/2024\/09\/wang-avdriving-768x432.jpg 768w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><figcaption class=\"wp-element-caption\">The test autonomous vehicle drives itself as part of a demonstration in the parking lot of Purdue\u2019s Ross-Ade Stadium. (Purdue University photo\/John Underwood)<\/figcaption><\/figure>\n\n\n\n<p>Another next step is seeing if it would be possible for large language models of each AV to talk to each other, such as to help AVs determine which should go first at a four-way stop. Wang\u2019s lab also is starting a project to study the use of large vision models to help AVs drive in extreme winter weather common throughout the Midwest. These models are like large language models but trained on images instead of text. The project will be conducted with support from the <a href=\"https:\/\/engineering.purdue.edu\/STSRG\/research\/CCAT\/P_CCAT\" target=\"_blank\" rel=\"noreferrer noopener\">Center for Connected and Automated Transportation (CCAT)<\/a>, which is funded by the <a href=\"https:\/\/www.transportation.gov\/policy\/ost-r\/rdt\" target=\"_blank\" rel=\"noreferrer noopener\">U.S. 
Department of Transportation\u2019s Office of Research, Development and Technology<\/a> through its <a href=\"https:\/\/www.transportation.gov\/content\/university-transportation-centers\" target=\"_blank\" rel=\"noreferrer noopener\">University Transportation Centers program<\/a>. Purdue is one of the CCAT\u2019s university partners.<\/p>\n\n\n\n<p>The experiments Wang\u2019s lab conducted on integrating large language models into an AV were supported by gift funding from Toyota Motor North America. Wang is the assistant director of the <a href=\"https:\/\/engineering.purdue.edu\/ICON\">Institute for Control, Optimization and Networks<\/a> at Purdue, which is affiliated with the university\u2019s <a href=\"https:\/\/www.purdue.edu\/computes\/institute-for-physical-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">Institute for Physical Artificial Intelligence<\/a>, a <a href=\"https:\/\/www.purdue.edu\/computes\/\" target=\"_blank\" rel=\"noreferrer noopener\">Purdue Computes<\/a> initiative.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">About Purdue University<\/h2>\n\n\n\n<p>Purdue University is a public research institution demonstrating excellence at scale. Ranked among the top 10 public universities in the United States, and with two colleges in the national top four, Purdue discovers and disseminates knowledge with a quality and at a scale second to none. More than 105,000 students study at Purdue across modalities and locations, including nearly 50,000 in person on the West Lafayette campus. Committed to affordability and accessibility, Purdue\u2019s main campus has frozen tuition 13 years in a row. 
See how Purdue never stops in the persistent pursuit of the next giant leap \u2014 including its first comprehensive urban campus in Indianapolis, the Mitch Daniels School of Business, Purdue Computes and the One Health initiative \u2014 at <a href=\"https:\/\/www.purdue.edu\/president\/strategic-initiatives\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.purdue.edu\/president\/strategic-initiatives<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Paper<\/h2>\n\n\n\n<p><em>Personalized autonomous driving with large language models: field experiments<\/em><br>27th IEEE International Conference on Intelligent Transportation Systems<br>DOI: <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2312.09397\" target=\"_blank\" rel=\"noreferrer noopener\">10.48550\/arXiv.2312.09397<\/a><\/p>\n\n\n<div id=\"note\" class=\"post-content__attribution \">\n    <div class=\"columns\"> \n                    <div class=\"column\"> \n                <p class=\"post-content__source\">\n                    <strong>Media contact:<\/strong> Kayla Albert, 765-494-2432, <a href=\"mailto:wiles5@purdue.edu\">wiles5@purdue.edu<\/a>                <\/p>\n            <\/div>\n                            <div class=\"column is-narrow\">                 \n                <div class=\"post-content__editor-note\">\n                    <p class=\"post-content__editor-note--header\">Note to journalists:<\/p>\n                    <p>    \n                        High-resolution photos and b-roll showing a demonstration of these experiments are available on <a href=\"https:\/\/drive.google.com\/drive\/folders\/1mDnnPeEbr7OnCPfQCtX8UXdY5xw-zbC3?usp=sharing\">Google Drive<\/a>. 
<a href=\"https:\/\/newsroom.ap.org\/detail\/Tellself-drivingcarswheretogowithChatGPT\/3eb45bed8bca4a279f50b68a4b3c4b2b\" data-type=\"link\" data-id=\"https:\/\/newsroom.ap.org\/detail\/Tellself-drivingcarswheretogowithChatGPT\/3eb45bed8bca4a279f50b68a4b3c4b2b\" target=\"_blank\" rel=\"noreferrer noopener\">A video of Ziran Wang<\/a>\u00a0talking about this research is available to media who have an Associated Press subscription.                    <\/p>\n                <\/div>\n            <\/div>\n            <\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>WEST LAFAYETTE, Ind. \u2014 Imagine simply telling your vehicle, \u201cI\u2019m in a hurry,\u201d and it automatically takes you on the most efficient route to where you need to be. Purdue University engineers have found that an autonomous vehicle (AV) can<\/p>\n","protected":false},"author":25,"featured_media":9461,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[263],"tags":[],"department":[],"source":[29],"purdue_today_topic":[],"coauthors":[131],"class_list":["post-9530","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-purdue-computes","source-purdue-news"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9530","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/users\/25"}],"replies":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/comments?post=9530"}],"version-history":[{"count":2,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9530\/revisions"}],"predecessor-version":[{"id":9541,"href":"https:\/\/www.
purdue.edu\/newsroom\/wp-json\/wp\/v2\/posts\/9530\/revisions\/9541"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/media\/9461"}],"wp:attachment":[{"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/media?parent=9530"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/categories?post=9530"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/tags?post=9530"},{"taxonomy":"department","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/department?post=9530"},{"taxonomy":"source","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/source?post=9530"},{"taxonomy":"purdue_today_topic","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/purdue_today_topic?post=9530"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.purdue.edu\/newsroom\/wp-json\/wp\/v2\/coauthors?post=9530"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}