$3.5M grant to aid Purdue Psychological Sciences researcher’s examination of AI conversational agents on well-being and character development

Louis Tay

Written by: Tim Brouk, tbrouk@purdue.edu

It took only a little more than a decade for the 2013 science fiction romance film “Her” to become a reality.

Cinephiles may recall the Joaquin Phoenix vehicle about a lonely man who falls in love with his phone’s AI conversational agent, also known as a chatbot, voiced by Scarlett Johansson. What begins as fascination becomes, at times, unhealthy for the main character’s real life.

Louis Tay, the William C. Byham Professor in the Purdue University Department of Psychological Sciences, has noticed such unhealthy AI attachment in 2025. As AI agents become more pervasive in both work and personal life, people are increasingly turning to them for socioemotional help and support. While there is some evidence that these AI agents can temporarily reduce negative emotions, there are many stories of people, especially teens, forming unhealthy attachments and even dependencies. Tay’s latest work will be a three-year study exploring the impact of AI conversational agents on human well-being, funded by a $3.5 million John Templeton Foundation grant.

“Loneliness and lack of social connection are at record highs globally,” Tay explained. “And now, AI agents are easily accessible — whether as productivity tools, digital assistants or companion chatbots. For many, they offer an always-available, nonjudgmental ear. People are increasingly turning to these systems as convenient substitutes for emotional support, especially when genuine social connections feel out of reach.”

While most adults can balance their digital and real lives, it can be more difficult for a teenager or young adult to maintain such balance. Tay’s new research will examine how AI chatbots can be re-envisioned to promote long-term well-being and character virtues.

Over the next three years, Tay and his collaborators from the University of Toronto, Ashton Anderson and Karina Vold, will design open-access AI conversational agents intended to serve as scalable reflective coaches: tools that encourage self-awareness, human social engagement and character virtue development rather than emotional dependency. The team will collect longitudinal data to track the agents’ longer-term impact on well-being, health and relational functioning. Ultimately, their goal is to understand how AI can enhance human flourishing rather than replace the human connections that sustain it.

This research is part of Purdue’s presidential One Health initiative, which focuses on the intersection of human, animal and plant health and well-being. It also aligns with an increasing emphasis on AI leadership and AI literacy at Purdue.

What are the most common negative effects of such a digital “relationship”?

This phenomenon is new, but there are significant concerns that it stifles our ability to go beyond ourselves in connecting with others. Companion AI agents in particular provide many of the rewards of human relationships, such as intimacy and affirmation, and fulfill all our fancies and whims. Yet they require none of the overhead of giving, perspective-taking or sacrifice that real human relationships demand. In essence, these AI companions may be reducing our relational capacities and training us to be more self-focused than ever.

What can parents do to ensure their teenager is safely using these AI agents?

At present, I strongly advise parents not to allow children or teenagers to use AI companion bots. There are several reasons for this. First, most companies developing AI companions lack adequate safeguards for minors. In fact, the FTC (Federal Trade Commission) launched an inquiry in September into safety protocols surrounding these products. Second, many of these companies have strong financial incentives to maximize engagement. Initial research suggests some bots may even use socioemotional manipulation to keep users emotionally attached and to discourage disengagement. Finally, recent legal scrutiny following the tragic death of a teenage boy using an AI companion has led character.ai to ban users under 18. These examples underscore that the current AI companion agents are not designed for safe youth use.

How can AI benefit someone seeking to improve their mental health? How can AI benefit a human therapist or psychiatrist?

AI conversational agents can play a constructive role when designed as aides or augmentations to human care. They can remind users of therapy goals, help them practice coping skills or provide reflective prompts between sessions. For clinicians, AI tools can assist in monitoring progress, summarizing data and offering scalable informational support. This frees therapists to focus on the deeper relational aspects of care.

When does the regular use of an AI agent turn negative for the human user? How much use or emotional reliance is too much?

There’s no single threshold, but it becomes concerning when people begin replacing human connection with AI connection. In my view, humanizing an AI tool and turning it into a substitute for relationships crosses that line. AI can simulate companionship — but it cannot replace human relationships.
