Generative AI can make a manager’s email look sharp, but it might also chip away at trust. A new study from the University of Florida and the University of Southern California finds that while tools like ChatGPT, Gemini, Copilot, and Claude improve the professionalism of workplace writing, heavy reliance on them can make supervisors seem less sincere and less trustworthy to their teams.
The research, published in the International Journal of Business Communication, surveyed 1,100 professionals and revealed a gap between how recipients judge a message's quality and how they judge its sender.
AI Boosts Professionalism But Hurts Perception
“We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., co-author and researcher at the University of Florida’s Warrington College of Business. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance.”
Survey participants evaluated congratulatory emails with varying levels of AI help. While AI-polished messages were often described as efficient, effective, and professional, the sender’s perceived sincerity and care dropped sharply when AI played a bigger role—especially if that sender was a manager.
The Numbers Behind the Trust Gap
- Only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages.
- 95% found low-AI supervisor messages professional, but this dropped to 69% to 73% for high-AI messages.
- Perceptions of integrity and ability—key components of cognition-based trust—were lower when high AI use was detected.
The researchers also found that people judge their own AI use more leniently than others’. When employees assessed a supervisor’s AI use, higher levels of assistance triggered more skepticism about authorship, competence, and effort.
When AI Works and When It Doesn’t
Coman and co-author Peter Cardon, Ph.D., of USC, say AI can be a time-saver for factual announcements, meeting reminders, or other routine communications. But for relationship-oriented messages—like praise, congratulations, motivation, or personal feedback—managers should limit AI involvement to light editing or grammar checks. Employees can often detect AI-generated content and may interpret heavy use as a sign of laziness or detachment.
“In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, citing impacts on perceived ability and integrity.
Implications for the AI-Era Workplace
The findings suggest a delicate balance for leaders navigating AI adoption in everyday communication. While polished language matters, it should not come at the expense of perceived authenticity. In an era where over 75% of professionals already use AI tools at work, the question may not be whether to use them, but how—and when—to keep the human voice front and center.
Journal: International Journal of Business Communication
DOI: 10.1177/23294884251350599