From diagnostics to eventual robot docs, Purdue Public Health professor discusses AI enhancements in health care administration

Cody Mullen

Written by: Tim Brouk, tbrouk@purdue.edu

A recent study found that health care staffing overall has finally returned to pre-pandemic, pre-“Great Resignation” levels as of this summer, but some areas remain significantly understaffed. For example, U.S. nursing care facilities still have more than 11% fewer workers than before COVID-19.

To close such gaps and prepare for another event like the COVID-19 pandemic, health care providers are turning to artificial intelligence (AI) to assist nurses, physicians and business staff, according to Cody Mullen, Purdue University clinical associate professor in the Department of Public Health.

From interpreting MRI, CT and X-ray scans to supporting patients with mental health concerns, Mullen sees AI’s integration into hospitals and clinics as a collaborative and useful tool that shouldn’t come at the cost of human jobs.

“How can AI better support both clinician decision-making processes and patient decision-making processes but then also the business operations?” Mullen asked. “How can AI reframe the care continuum?”

Mullen said AI in some form has been used in health care for more than 20 years for computer-informed decision-making, but machine learning models have become far more sophisticated and accurate in the past few years. It’s a hot-button, mainstream topic no longer relegated to technology blogs.

Mullen spoke on how AI is enhancing health care administration during a recent webinar hosted by Purdue University Online. His research predicts more AI use in the next few years and, further in the future, possible “robot doctors” working alongside clinicians to care for patients.

What are some examples of how AI can help health care facility staff?

One is diagnostics. We are starting to see AI read raw images. We saw that with the new ChatGPT model that came out a few weeks ago. … You add lab values and other data, and it helps providers triage more easily. In an understaffed emergency room or hospital where not every patient will have human-to-human contact immediately, how do we ethically determine who needs care first versus who doesn’t?

It can speed up the process and lead to fewer potential errors. ChatGPT doesn’t get tired, whereas a physician at the end of a 12-hour shift could. A small abnormality on an image could be missed after someone has been reading imaging and dictating for the last 11 hours. The AI model can flag the area that needs to be looked at specifically: “This doesn’t look right.”

How can AI assist in behavioral or mental health facilities?

We’re starting to see AI have dialogues with patients, via text or spoken word, around their behavioral health needs. AI is starting to be on demand and available whenever the patient wants to access it. We must ask, though: Is it as effective? Is it ethical? What happens if the patient indicates self-harm ideation or needs immediate human interaction? But again, with limited resources, how can we make sure people get the best possible care?

We are seeing that the younger generation, Gen Z especially, prefers text. They’re not as comfortable talking about their emotions. There’s chat-based therapy already, with a human on the other end. What if ChatGPT or other AI models can step in and support? Also, AI’s voice-to-text understanding is so good that, in theory, via Zoom you could have a computer-generated model that someone talks to and that provides the initial therapy protocol for the provider to review.

Would jobs be endangered by AI in these health care facilities?

I don’t think we will (see jobs lost to AI) in the near 5-10-year range. We’re seeing a lot of retirements. We’ve seen some dips in the number of people going into the health care profession over the last several years. There’s such a shortage of high-quality health care workers that AI is another tool. The human is still going to be involved, but how do you make them more effective?

I think we as a society will just have to wrestle with how health care delivery is going to change in our country. The current model of someone being ill and being in the hospital for 10 days is not sustainable. So how do we utilize technology to the best of our ability to improve patient outcomes, increase patient satisfaction and reduce provider burnout? Currently, AI can’t feel a patient’s thyroid. It’s very much the computer on the desk, but I’m sure at some point there will be a robot that will be able to get those measurements. I would say that’s years away, but I’m amazed at how much technology has changed just in the last 10 years.

How are rural clinics adopting AI?

I think resource allocation is always going to be different. Since AI is new, it’s going to be expensive. Also, we still have rural clinics that do not have the internet connection, speed or bandwidth that current AI requires.

It’s important that rural and underserved facilities are not left out. We’re at a turning point in figuring out what AI can do. We also need to figure out how to make sure everyone has access to it.

With AI in health care, how fruitful will new research opportunities be for you and your colleagues?

It’s an exciting time for health care, as AI researchers want to work in the health care space but don’t always know its nuances. So it’s a great opportunity to collaborate and bring different mindsets together. As more and more computer science and AI researchers get into the health care space, they’re able to challenge health care people and ask, “Why? Why do you do it that way?” It’s going to be a fun time to see how these researchers will be able to make a beneficial impact.

Learn more about Purdue Online graduate programs here.

