How to Foster Psychological Safety When AI Erodes Trust on Your Team
Briefly

"There may be something unsettling happening on your team. Despite expected productivity gains from integrating AI tools, overall team performance appears to be declining. People are starting to second-guess themselves, and trust is eroding in ways that are hard to pinpoint. is a corporate scientist and first-ever chief science advocate at 3M who is the author of Jayshree Seth The Heart of Science book trilogy published by Society of Women Engineers (SWE)."
"With a PhD in Chemical Engineering and 81 patents, she is a TEDx speaker and award-winning innovator who was named to the 2025 Thinkers50 Radar. Jayshree is currently leading use cases for gen AI in R&D at 3M. is the Novartis Professor of Leadership and Management at the Harvard Business School and the author of Amy C. Edmondson Right Kind of Wrong: The Science of Failing Well. She is an expert on team learning and psychological safety."
Despite the productivity gains expected from integrating AI tools, overall team performance can decline. Team members begin to second-guess decisions and lose confidence, and trust deteriorates in subtle, hard-to-pinpoint ways. Causes include unclear roles, inconsistent tool adoption, lack of transparency around AI outputs, and weakened psychological safety. Overreliance on AI and insufficient learning practices amplify errors and inhibit team learning. Restoring performance requires diagnosing social and technical frictions, clarifying norms for AI use, rebuilding psychological safety, and implementing transparent workflows and training. Ongoing monitoring and iterative adjustments help align AI tools with team effectiveness.
Read at Harvard Business Review