
In a Duke Health-led survey, patients who were shown messages written either by artificial intelligence (AI) or by human clinicians preferred the responses drafted by AI. That preference diminished, though was not erased, when they were told AI was involved.
The study, published March 11 in JAMA Network Open, found high overall satisfaction with communications written by both AI and humans, despite the preference for AI. This suggests that letting patients know AI was used does not greatly reduce confidence in the message.
“Every health system is grappling with this issue of whether we disclose the use of AI and how,” said senior author Anand Chowdhury, M.D., assistant professor in the Department of Medicine at Duke University School of Medicine. “There is a desire to be transparent, and a desire to have satisfied patients. If we disclose AI, what do we lose? That is what our study intended to measure.”
Chowdhury and colleagues sent a series of surveys to members of the Duke University Health System patient advisory committee. This is a group of Duke Health patients and community members who help inform how Duke Health communicates with and cares for patients. More than 1,400 people responded to at least one of the surveys.
The surveys focused on three clinical topics: a routine medication refill request (low seriousness), a question about a medication side effect (moderate seriousness), and a potential cancer finding on imaging (high seriousness).
Human responses were provided by a multidisciplinary team of physicians who were asked to write a realistic response to each survey scenario, based on how they typically draft responses to patients. The generative AI responses were written using ChatGPT and reviewed for accuracy by the study physicians, who made minimal changes.
For each survey, participants reviewed a vignette presenting one of the clinical topics. Each vignette included a response from either AI or a human clinician, accompanied either by a disclosure of the author or by no disclosure. Participants then rated their overall satisfaction with the response, the usefulness of the information, and how cared for they felt during the interaction.
Comparing the two types of authors, patients preferred AI-drafted messages by an average difference of 0.30 points on a 5-point satisfaction scale. The AI communications tended to be longer, included more detail, and likely seemed more empathetic than the human-drafted messages.
“Our study shows us that patients have a slight preference for messages written by AI, even though they are slightly less satisfied when the disclosure informs them that AI was involved,” said first author Joanna S. Cavalier, M.D., assistant professor in the Department of Medicine at Duke University School of Medicine.
When the researchers examined the effect of disclosure, telling participants that AI was involved led to lower satisfaction, though not by much: 0.1 points on the 5-point scale. Regardless of the actual author, patients were more satisfied overall with messages when they were not told AI was involved in drafting the response.
“These findings are particularly important in the context of research showing that patients have higher satisfaction when they can connect electronically with their clinicians,” Chowdhury said.
“At the same time, clinicians express burnout when their in-basket is full, making the use of automated tools highly attractive to ease that burden,” Chowdhury said. “Ultimately, these findings give us confidence to use technologies like this to potentially help our clinicians reduce burnout, while still doing the right thing and telling our patients when we use AI.”
In addition to Chowdhury and Cavalier, study authors include Benjamin A. Goldstein, Vardit Ravitsky, Jean-Christophe Bélisle-Pipon, Armando Bedoya, Jennifer Maddocks, Sam Klotman, Matt Roman, Jessica Sperling, Chun Xu, and Eric G. Poon.
More information:
Joanna S. Cavalier et al, Ethics in Patient Preferences for Artificial Intelligence–Drafted Responses to Electronic Messages, JAMA Network Open (2025). DOI: 10.1001/jamanetworkopen.2025.0449, jamanetwork.com/journals/jaman … /fullarticle/2831219
Citation:
Patients’ affinity for AI messages drops if they know the technology was used, surveys reveal (2025, March 11)
retrieved 11 March 2025
from https://medicalxpress.com/news/2025-03-patients-affinity-ai-messages-technology.html
