
A new commentary published in the Journal of the Royal Society of Medicine warns that current risk-based regulatory approaches to artificial intelligence (AI) in health care fall short in protecting patients, potentially leading to over- and undertreatment as well as discrimination against patient groups.
The authors found that while AI and machine learning systems can enhance clinical accuracy, concerns remain over their inherent inaccuracy, opacity, and potential for bias — concerns that are not adequately addressed by the current regulatory efforts introduced by the European Union's AI Act.
Passed in 2024, the AI Act categorizes medical AI as "high risk" and introduces strict controls on providers and deployers. But the authors argue this risk-based framework overlooks three critical issues: individual patient preferences, the systemic and long-term effects of AI implementation, and the disempowerment of patients in regulatory processes.
“Patients have different values when it comes to accuracy, bias, or the role AI plays in their care,” said lead author Thomas Ploug, professor of data and AI ethics at Aalborg University, Denmark. “Regulation must move beyond system-level safety and account for individual rights and participation.”
The authors call for the introduction of patient rights relating to AI-generated diagnosis or treatment planning, including the right to:
- request an explanation;
- give or withdraw consent;
- seek a second opinion; and
- refuse diagnosis or screening based on publicly available data without consent.
They warn that without urgent engagement from health care stakeholders—including clinicians, regulators, and patient groups—these rights risk being left behind in the rapid evolution of AI in health care.
“AI is transforming health care, but it must not do so at the expense of patient autonomy and trust,” said Professor Ploug. “It is time to define the rights that will protect and empower patients in an AI-driven health system.”
More information:
The need for patient rights in AI-driven healthcare – risk-based regulation is not enough, Journal of the Royal Society of Medicine (2025). DOI: 10.1177/01410768251344707
Citation:
AI in health care needs patient-centered regulation to avoid discrimination, say experts (2025, June 25)
retrieved 25 June 2025
from https://medicalxpress.com/news/2025-06-ai-health-patient-centered-discrimination.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.