{"id":2171,"date":"2024-04-08T12:00:26","date_gmt":"2024-04-08T16:00:26","guid":{"rendered":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/?p=2171"},"modified":"2026-01-29T12:42:51","modified_gmt":"2026-01-29T17:42:51","slug":"multi-fidelity-training-the-key-to-affordable-and-accurate-ai-models","status":"publish","type":"post","link":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/2024\/04\/08\/multi-fidelity-training-the-key-to-affordable-and-accurate-ai-models\/","title":{"rendered":"Multi-Fidelity Training: The Key to Affordable and Accurate AI Models"},"content":{"rendered":"<div  class=\"section has-padding-top-large \">\n    <div class=\"container\">\n                \n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>Artificial Intelligence seems like it\u2019s everywhere these days. From writing emails to stopping shoplifters it is prevalent in almost every one of our daily lives, and many of us don\u2019t even know it. Even as I write this article, a small AI agent acts to ensure that I spell the word \u2018discombobulated\u2019 without getting the letters\u2026well, discombobulated. My research is focused on computer vision systems, the ones you see guiding autonomous cars or recognizing your face to log onto your phone or computer. But how do we train these AI models? Data, and lots of it.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Let\u2019s use the example of teaching a toddler to explain training an AI. If you were to train a toddler what a cat looks like, you\u2019d most likely print out a picture of one and show it to them. Imagine this cat in the photo has a long tail, four legs, and a nice black coat of fur. 
The issue is, when our toddler sees a dog later that day with all of those features, our trainee might confuse it with one of its feline friends. That\u2019s why we\u2019re going to train our student with more data, a lot more data. Pictures of cats of all sizes and colors: big ones, small ones, hairless ones, maybe even a few with funny sweaters expressing their disdain for Mondays. Only after all this training can we be confident that our cat-identifying professional is fit for the role. It\u2019s the same with AI-based image recognition systems. The COCO dataset, one of the most widely used training sets, comes with over 140,000 images that together train a single AI to recognize only 80 types of objects. That\u2019s a lot of data for engineers to gather for their next computer vision project.&nbsp;&nbsp;<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:5%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-full is-resized is-style-default\"><img loading=\"lazy\" decoding=\"async\" width=\"667\" height=\"754\" src=\"https:\/\/dev.www.purdue.edu\/academics\/ogsps\/professional-development\/wp-content\/uploads\/sites\/4\/2025\/09\/Robert_Seif.png\" alt=\"Photo of the article author, Robert Seif.\" class=\"wp-image-1216\" style=\"aspect-ratio:1;object-fit:cover;width:300px\" srcset=\"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-content\/uploads\/sites\/4\/2025\/09\/Robert_Seif.png 667w, https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-content\/uploads\/sites\/4\/2025\/09\/Robert_Seif-265x300.png 265w\" sizes=\"auto, (max-width: 667px) 100vw, 667px\" \/><figcaption class=\"wp-element-caption\"><sub>Robert Seif, M.S. 
student in the Department of Mechanical Engineering<\/sub><\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>So, what if our entire data source wasn\u2019t all real photos of our target? If we trained our toddler on a few cartoon kitties, it would still get the point across, right? Simulated data is significantly easier, cheaper, and faster to generate and manipulate for AI models, and it will have a big role in training them in the future. This is the essence of multi-fidelity training and testing.&nbsp;<\/p>\n\n\n\n<p>Multi-fidelity. It\u2019s the term the industry uses when AI trainers combine high-fidelity (and high-quality) data with low-fidelity (and low-cost!) data to bring a new AI into creation. The question is, how much low-fidelity data can we substitute into high-fidelity training sets before our identification abilities greatly suffer? In my research with the Design Engineering Lab at Purdue (DELP), we explored this issue in an effort to ensure safety in the next generation of autonomous vehicles being released worldwide.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The cost of controlling classified data is estimated to be over $50 billion a year for the US government alone. Meanwhile, the cost of finding a clip from almost any highway in the world is having to watch a 15-second ad on YouTube. This is what makes it the right decision to cut a significant percentage of potentially controlled (and costly) data from my training set in exchange for only a small decrease in accuracy. The simulated data is significantly cheaper and faster to acquire than the alternatives, and with the funds saved, the lost 4% in accuracy can be made up elsewhere. This is the value of these multi-fidelity datasets.&nbsp;&nbsp;<\/p>\n\n\n\n<p>AI has become an indispensable tool for a variety of industries, from education to automotive to defense. The key to these models is data, more than we could ever fathom. These datasets need to be robust and obtainable if we are to build these models before the next tech craze takes over. 
My research, in collaboration with DELP, shows how multi-fidelity training can revolutionize the most critical of tasks. Embracing this multi-fidelity method will allow us to significantly reduce the costs of training without sacrificing the end accuracy and safety of the project. Applications for this approach are limitless, and as the technology advances, multi-fidelity training will undoubtedly play a pivotal role in shaping the future of AI.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">About the Author:&nbsp;<\/h3>\n\n\n\n<p>Robert Seif was a master\u2019s student in the Department of Mechanical Engineering, having obtained his undergraduate degree in Mechanical Engineering at Purdue as well. His research&nbsp;on creating methods to safely test and evaluate ground-breaking AI systems to ensure their best use won ASME&#8217;s <em>Best Paper<\/em>&nbsp;award for AI\/ML approaches in 2024. Outside of his work with Purdue University, he is a co-founder and advisor to both AI and non-AI startups, and currently lives in the San Francisco Bay Area looking for his next big project.<\/p>\n\n    <\/div>\n<\/div>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n<div  class=\"section  page-layout-wide\">\n    <div class=\"container\">\n                \n\n<div class=\"wp-block-columns page-layout-columns columns is-multiline is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column column is-full-tablet page-layout-main is-layout-flow wp-block-column-is-layout-flow\">\n<p>Want to participate in the competition?<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/innovated-graduate-research-magazine\/how-to-submit\/\" target=\"_blank\" rel=\"noreferrer noopener\">How to 
Submit<\/a><\/div>\n\n\n\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"\/academics\/ogsps\/professional-development\/innovated-graduate-research-magazine\/\">Back to Magazine<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column column is-one-quarter-desktop is-full-tablet is-full-mobile page-layout-sidebar is-layout-flow wp-block-column-is-layout-flow\">\n<p><\/p>\n<\/div>\n<\/div>\n\n    <\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[19],"tags":[],"class_list":["post-2171","post","type-post","status-publish","format-standard","hentry","category-engineering"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/posts\/2171","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/comments?post=2171"}],"version-history":[{"count":2,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/posts\/2171\/revisions"}],"predecessor-version":[{"id":2175,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/posts\/2171\/revisions\/2175"}],"wp:attachment":[{"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/media?parent=2171"}],"wp:term":[{"tax
onomy":"category","embeddable":true,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/categories?post=2171"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.purdue.edu\/academics\/ogsps\/professional-development\/wp-json\/wp\/v2\/tags?post=2171"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}