Researchers from Carnegie Mellon University’s Tepper School of Business are exploring how AI can be used to support teamwork rather than replace teammates.
Anita Williams Woolley is a professor of organizational behavior. She researches collective intelligence, or how well teams perform together, and how artificial intelligence could change workforce dynamics. Now, Woolley and her colleagues are helping to figure out exactly where and how AI can play a positive role.
“I’m always interested in technology that can help us become a better version of ourselves individually,” Woolley said, “but also collectively, how can we change the way we think about and structure work to be more effective?”
Woolley collaborated with technologists and others in her field to develop Collective HUman-MAchine INtelligence (COHUMAIN), a framework that seeks to understand where AI fits within the established boundaries of organizational social psychology.
The researchers behind the 2023 publication of COHUMAIN caution against treating AI like any other teammate. Instead, they see it as a partner that works under human direction, with the potential to strengthen existing capabilities or relationships. “AI agents could create the glue that is missing because of how our work environments have changed, and ultimately improve our relationships with one another,” Woolley said.
The research that makes up the COHUMAIN architecture emphasizes that while AI integration into the workplace may take shape in ways we don’t yet understand, it won’t change the fundamental principles behind organizational intelligence, and AI likely can’t fill all of the same roles as humans.
For instance, while AI might be great at summarizing a meeting, it’s still up to people to sense the mood in the room or pick up on the wider context of the discussion.
Organizations have the same needs as before, including a structure that allows them to tap into each human team member’s unique expertise. Woolley said that artificial intelligence systems may best serve in “partnership” or facilitation roles rather than managerial ones, such as a tool that nudges peers to check in with each other or offers the user an alternate perspective.
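Woolley doesn’t describe a specific product, but a facilitation agent of this kind can be pictured in a few lines. The Python sketch below is purely illustrative: every name, the seven-day threshold, and the interaction-log format are assumptions, not anything taken from the research.

```python
# Hypothetical sketch of a peer check-in "nudge" agent. It facilitates
# rather than manages: it only surfaces suggestions; humans decide.
from datetime import datetime, timedelta

CHECKIN_GAP = timedelta(days=7)  # invented threshold for this sketch

def suggest_checkins(last_contact: dict[frozenset, datetime],
                     now: datetime) -> list[str]:
    """Return gentle prompts for pairs who haven't talked lately."""
    nudges = []
    for pair, last in last_contact.items():
        if now - last > CHECKIN_GAP:
            a, b = sorted(pair)
            nudges.append(f"{a} and {b} haven't connected in a while; "
                          f"consider a quick check-in.")
    return nudges

# Example: a tiny team's interaction log (entirely made up)
log = {
    frozenset({"Ana", "Ben"}): datetime(2025, 10, 1),
    frozenset({"Ana", "Caro"}): datetime(2025, 10, 28),
}
print(suggest_checkins(log, datetime(2025, 10, 30)))
```

The design point is that the agent only surfaces a suggestion; whether anyone acts on it stays entirely with the humans.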
Safety and risk
With so much collaboration happening through screens, AI tools might help teams strengthen connections between coworkers. But those same tools also raise questions about what’s being recorded and why.
“People have a lot of sensitivity, rightly so, around privacy. Often you have to give something up to get something, and that is true here,” Woolley said.
The level of risk that users feel, both socially and professionally, can change depending on how they interact with AI, according to Allen Brown, a Ph.D. student who works closely with Woolley. Brown is exploring where this tension shows up and how teams can work through it. His research focuses on how comfortable people feel taking risks or speaking up in a group.
Brown said that, in the best case, AI could help people feel more comfortable speaking up and sharing new ideas that might not be heard otherwise. “In a classroom, we can imagine someone saying, ‘Oh, I’m a little worried. I don’t know enough for my professor, or how my peers are going to judge my question,’ or, ‘I think this is a good idea, but maybe it isn’t.’ We don’t know until we put it out there.”
Because interactions with AI leave a digital record that may or may not be kept permanently, one concern is that people might not know which of those interactions will later be used to evaluate them.
“In our increasingly digitally mediated workspaces, so much of what we do is being tracked and documented,” Brown said. “There’s a digital record of things, and if I’m made aware that, ‘Oh, all of a sudden our conversation might be used for evaluation,’ we actually see this significant difference in interaction.”
Brown found that even when people thought their comments might be monitored or professionally judged, they still felt relatively secure talking to another human being. “We’re talking together. We’re working through something together, but we’re both people. There’s kind of this mutual assumption of risk,” he explained.
The study found that people felt more vulnerable when they thought an AI system was evaluating them. Brown wants to understand how AI can be used to create the opposite effect—one that builds confidence and trust.
“What are those contexts in which AI could be a partner, could be part of this conversational communicative practice within a pair of individuals at work, like a supervisor-supervisee relationship, or maybe within a team where they’re working through some topic that might have task conflict or relationship conflict?” Brown said. “How does AI help resolve the decision-making process or enhance the resolution so that people actually feel increased psychological safety?”
Creating a more trustworthy AI
At the individual level, Tepper researchers are also studying how the way an AI explains its reasoning affects how people use and trust it. Zhaohui (Zoey) Jiang and Linda Argote are examining how people react to different kinds of AI systems: ones that explain their reasoning (transparent AI) and ones that don’t explain how they make decisions (black box AI).
“We see a lot of people advocating for transparent AI,” Jiang said, “but our research reveals an advantage of keeping the AI a black box, especially for a high ability participant.”
One reason, she explained, is that skilled decision-makers tend to be overconfident in their own judgment and distrustful of the AI’s.
“For a participant who is already doing a good job independently at the task, they are more prone to the well-documented tendency of AI aversion. They will penalize the AI’s mistake far more than the humans making the same mistake, including themselves,” Jiang said. “We find that this tendency is more salient if you tell them the inner workings of the AI, such as its logic or decision rules.”
People who struggle with decision-making actually improve their outcomes when using transparent AI models that expose a moderate amount of complexity in their decision-making process. “We find that telling them how the AI is thinking about this problem is actually better for less-skilled users, because they can learn from AI decision-making rules to help improve their own future independent decision-making,” Jiang said.
While transparency is proving to have its own use cases and benefits, Jiang said the most surprising findings concern how people perceive black box models. “When we’re not telling these participants how the model arrived at its answer, participants judge the model as the most complex. Opacity seems to inflate the sense of sophistication, whereas transparency can make the very same system seem simpler and less ‘magical,’” she said.
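The article doesn’t detail the experimental software, but the transparent-versus-black-box contrast is straightforward to sketch. Below, one and the same model is wrapped two ways in Python using scikit-learn; the dataset and all function names are illustrative, not the researchers’ materials. The opaque version returns only a recommendation, while the transparent version also surfaces the decision rules a less-skilled user could learn from.

```python
# Illustrative sketch only, not the researchers' actual study software.
# One underlying model, presented two ways: opaque vs. transparent.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    data.data, data.target)

def black_box_advice(sample):
    """Recommendation only: the user sees what, never why."""
    return data.target_names[model.predict([sample])[0]]

def transparent_advice(sample):
    """Recommendation plus the decision rules behind it, which a
    less-skilled user could study and learn from."""
    rules = export_text(model, feature_names=list(data.feature_names))
    return black_box_advice(sample), rules

print(black_box_advice(data.data[0]))            # opaque: answer only
label, rules = transparent_advice(data.data[0])  # answer + readable rules
print(rules)
```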
Each kind of model has its own use cases. While it isn’t yet cost-effective to tailor an AI to each human partner, future systems may be able to adapt how they present their reasoning to help people make better decisions, she said.
“It can be dynamic in a way that it can recognize the decision-making inefficiencies of that particular individual that it is assigned to collaborate with, and maybe tweak itself so that it can help complement and offset some of the decision-making inefficiencies.”
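Jiang is describing a future capability rather than an existing system. A minimal sketch of the idea might track a user’s recent unaided accuracy and open or close the explanation channel accordingly; everything below, including the 0.8 skill cutoff and all names, is a hypothetical illustration.

```python
# Hypothetical sketch of a self-adapting AI partner, following Jiang's
# description; the 0.8 skill cutoff and every name here are invented.
from collections import deque

class AdaptiveAdvisor:
    def __init__(self, model, explain_fn, window=20):
        self.model = model                   # any object with .predict()
        self.explain_fn = explain_fn         # returns readable reasoning
        self.history = deque(maxlen=window)  # user's recent solo outcomes

    def record_user_outcome(self, correct: bool):
        """Log whether the user's own, unaided decision was right."""
        self.history.append(correct)

    def advise(self, sample):
        """High-ability users get a plain answer (black box, to blunt
        AI aversion); less-skilled users also see the reasoning they
        might learn from."""
        prediction = self.model.predict([sample])[0]
        skill = sum(self.history) / len(self.history) if self.history else 0.0
        if skill >= 0.8:   # hypothetical cutoff for "high ability"
            return prediction, None
        return prediction, self.explain_fn(sample)
```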

