Teammate or tool? Purdue Psychological Sciences professor investigates human perceptions of AI in the workplace
Written By: Rebecca Hoffa, rhoffa@purdue.edu

Alexandra Harris-Watson, assistant professor in the Purdue University Department of Psychological Sciences, is interested in where the boundary lies between people viewing AI as a tool in the workplace and viewing it as a teammate or collaboration partner. Companies such as Asana and Microsoft have already positioned their AI agents as teammates, while others see them as merely another tool in the toolbox.
“When do we start to think of AI as a social agent?” Harris-Watson said. “A lot of work for a long time has focused on AI as a tool. Now, we’re moving into the space where people are increasingly conceptualizing it as a teammate or as many other things: a mentor, a coach, a companion, a friend, a coworker. I am interested in these social aspects, and that is where I see it affecting the workplace more broadly. If you’re no longer working with human team members, what happens when you think about AI team members? How does that change our perceptions around what it means to work?”

Alexandra Harris-Watson (Photo provided)
For Harris-Watson, an important question centers on the use of the word “teammate” in AI contexts. When is that framing helpful, and when is it harmful? This is where her background in industrial-organizational psychology becomes particularly valuable.
“Using these terms could backfire by having people resist it or feel like the organization is devaluing them, which could have downstream consequences for people where they become less committed,” said Harris-Watson, whose recent work has suggested teammate labels may make people less willing to work with AI agents. “I think given the enthusiasm for this label, we need to make sure we understand the actual consequences, so organizations can implement it appropriately and responsibly and make sure that the huge amounts of money that are being invested into these things are actually going to pay off.”
While her work has evolved to focus on human-AI teams, at the heart of Harris-Watson’s research is a passion for understanding both individual differences and collaborative relationships.
“Really my interests are at the intersection of those things: How can we take the unique pieces of what makes us ‘us’ and how do we think about that in relation to other people?” Harris-Watson said. “So how do my individual differences interact with your individual differences in these interpersonal relationships that are central to pretty much everything we do?”
Because of her background, Harris-Watson noted it is critical for psychologists to be involved in understanding how humans work with AI to ensure organizations are using models and frameworks that are most beneficial to their workforce.
“My interest in the space is really driven by extending what I know about individual differences and teams to make sure that the advancement of human-AI teaming is happening in a way that is scientifically accurate and using the models that we already know make sense and work within the human psychology of teams,” Harris-Watson said.
Recently, conversations have begun about introducing AI CEOs or managers at various organizations. Harris-Watson said she thinks it’s important for psychologists and other social scientists to be involved in those conversations, whether or not they agree with the idea.
“There have also been some attempts to position AI as a leader that I think are philosophically very interesting while also perhaps problematic in some ways,” Harris-Watson said. “That conversation is happening, whether or not individuals are comfortable with it. I think it’s important for psychologists to be involved in those spaces, thinking about what that looks like and what the consequences are.”
In 2023, Harris-Watson published a study in Computers in Human Behavior applying the psychological framework humans use to identify someone as ‘friend’ or ‘foe’ to AI. The team created a simulation in which participants believed they were teaming up with AI when, in fact, a human was on the other end. This allowed the team to test whether warmth — whether a collaborator is likable — and competence — how capable they are of doing the work — held up as important predictors of effective AI collaborations.
“We generally know that in humans, we are always going to prefer ‘likable’ to ‘competent,’” Harris-Watson said. “We like working with people we like. Although competence is good in certain tasks — we would all love to work with people who are likable and competent — given the choice, we’re going to prioritize likability. So, we were interested in extending that to how we’re going to interact with something like AI.”
The findings suggested people still use impressions of warmth and competence when reasoning about AI; however, unlike in human relationships, competence comes out on top in AI pairings.
“Generally speaking, what we find is that these models do work, but with some important caveats,” Harris-Watson said. “We can extend models of human perception to AI social perception. Warmth and competence still hold, but the patterns are a little bit different. In AI, competence is a lot more important than likability, but likability still matters. So, we’re still going to prioritize working with AI that feels likable in some way in many contexts, but competence is overall more important. We’re seeing a slight twist on how we’re going to respond and want to interact with these things.”
Currently, Harris-Watson’s work in the College of Health and Human Sciences is examining what qualifies as an AI teammate in work environments, from chatbots to robots and beyond.
“The word ‘teammate’ has been thrown around a lot by all kinds of different organizations about AI, but we’re very interested in when that term actually applies,” Harris-Watson said. “The reaction I get talking to everyday people around this is usually negative. People are scared of AI teammates because it elicits some really fundamental psychological processes around thinking about your own job security, whether it devalues you as a worker in some way to have this other thing be elevated to the status of a worker or a teammate.”
Ultimately, Harris-Watson and her team are looking at possible strategies organizations can apply when integrating AI within their teams to ensure positive responses.
“We’re really interested in when that label is appropriate and what the consequences of those labels are,” Harris-Watson said. “I imagine most organizations are using these terms because they think it will be a good thing — they think it’s going to improve cooperation, coordination or integration, or they think there will be some sort of net gain. But research has not shown that yet. It has not shown calling AI ‘teammates’ and trying to integrate them as full teammates is in fact beneficial. We know from social psychology that even small shifts in framing and the way we talk about things can have really dramatic impacts on how people respond to and think about those things.”
In this way, Harris-Watson sees the solution not just as making AI as robust as possible but as ensuring it is introduced responsibly, with a focus on human perception rather than only the AI’s technical capabilities.
“From a psychology perspective, we know that to some extent what matters is just what we think of AI, not what it’s actually capable of,” Harris-Watson said. “We’re pointing out here that perception matters. This could also explain then why we have really divergent reactions and responses to these things and why some people are very excited about it and some are not.”