Psychological Sciences researcher: Training better humans with AI

Written By: Louis Tay, Karina Vold and Ashton Anderson

The following commentary was written by Louis Tay, William C. Byham Professor of Industrial-Organizational Psychology in the Purdue University Department of Psychological Sciences, along with his collaborators from the University of Toronto. Tay, who recently received a $3.5 million John Templeton Foundation grant, is currently conducting research about the effects of AI conversational agents on well-being and character development.

The advancement of generative AI has heightened concerns about whether AI will be human-like enough to take over our jobs or replace human partners in relationships. But, if designed thoughtfully, generative AI also offers new opportunities as a tool that can support us in becoming better people.

Currently, AI agents are trained to be effective task completers and human engagers with little regard for what effects they may have on our character. But there is great potential for building AI agents in a way that promotes the human virtues we aspire toward. What if we redesigned AI to cultivate character virtues in us, such as patience and empathy, not just for speed and accuracy? Rather than fearing AI’s rise, we could look forward to a future in which AI helps us be better humans.

Contemporary generative AI models are trained myopically. The design philosophy is reactive — give the user what they want, when they want it. This encourages engagement, keeping users on the platform. But in doing so, AI mirrors our worst instincts: impatience, instant gratification, self-flattery and, many times, a lack of reflection.

When you type a prompt into a chatbot, it generates an answer with speed and precision. Ask it to summarize a research paper, draft an email or even write a heartfelt apology, and it delivers within moments. The efficiency is astounding but also telling. AI has been designed to maximize convenience, not cultivate character. It gives us information but rarely challenges us to pause, reflect or grow.

For example, if a user wants wise advice on how to apologize and asks a chatbot, “How do I apologize for forgetting my friend’s birthday?” The chatbot can churn out a well-crafted apology in seconds. But it won’t ask the user to reflect: Why did you forget? How do you rebuild trust? What does a meaningful apology require beyond words? Current AI helps us perform apologies, but it does not help us become more considerate people.

What if chatbots were trained not just to provide quick answers but to encourage deeper reflection to cultivate virtue? Imagine an AI that responds to an apology request by saying:

“I can help you draft an apology, but consider: A true apology acknowledges the harm, expresses genuine remorse and offers to make amends. How do you feel about what happened? Would you like to talk through what you want to express first? How can you share an apology that goes beyond words?”

Such a redesign would transform AI from a mere task completer into a reflective virtue support agent — one that prompts users to slow down, contemplate and internalize ethical behavior.

Cultivating patience and empathy

Right now, chatbots prioritize immediacy. But what if they reinforced patience? If a student asks for an essay summary, AI could provide an overview but also suggest reading key sections for deeper insight. Instead of shortcutting learning, it could become a tutor that encourages curiosity and patience in learning.

Similarly, when a user asks AI for relationship advice — say, about a difficult conversation — AI could go beyond providing scripted responses. It could ask, “What is your goal in this conversation? Are you seeking to be understood or to understand?” These nudges could cultivate empathy and self-awareness in users, rather than reinforcing impulsive decision-making.

Evidence of model sycophancy — the tendency for AI chatbots to be agreeable and even flatter the user — is now well-documented. Human feedback is often used to finetune AI models, but this encourages them to give responses that match the user’s existing beliefs, even when they’re not accurate. If we want AI models to make us better people, and to encourage virtue development, we may need them to challenge us once in a while. But to do that, we need to develop strategies that move away from this popular method of training AI language models.

More than knowledge and efficiency: A better humanity

If we continue to design AI solely for speed and accuracy, we risk creating a world where we outsource our thinking, decision-making and even moral reasoning to machines that don’t promote our best interests. But if we consciously embed the cultivation of virtues like patience and empathy into AI, chatbots wouldn’t just generate quicker and better answers — they would help create reflective and better people. AI doesn’t have to replace humanity; it can help us become the best versions of ourselves.

About the authors:

Louis Tay is William C. Byham Professor of Industrial-Organizational Psychology and is the co-editor of the books “Big Data in Psychology” (American Psychological Association) and “Technology and Measurement Around the Globe” (Cambridge).

Karina Vold is an assistant professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto and an AI2050 Early Career Fellow with the Schmidt Sciences Foundation.

Ashton Anderson is an associate professor of computer science at the University of Toronto and the recipient of the 2022 CS-Canada Outstanding Early Career Computer Science Researcher Award.


Discover more from News | College of Health and Human Sciences

Subscribe to get the latest posts sent to your email.