![Credit: Unsplash/CC0 Public Domain medical decision](https://i0.wp.com/scx1.b-cdn.net/csz/news/800a/2025/medical-decision.jpg?resize=800%2C530&ssl=1)
The medical use of artificial intelligence (AI) threatens to undermine patients’ ability to make self-determined decisions. New research by Dr. Christian Günther, a scientist at the Max Planck Institute for Social Law and Social Policy, uses case studies from the UK and California to analyze whether and how the law can counter this threat to patient autonomy.
The legal scholar concludes that the law has a proactive dynamic that allows it to respond to innovations very well, indeed better than extra-legal regulatory approaches.
“Contrary to widespread assumptions, the law is not an obstacle that only hinders the development and use of innovative technology. On the contrary, it actively shapes this development and plays a central role in the governance of new technologies,” explains Günther.
A multitude of clinical AI systems are currently being approved for use in health care systems worldwide. AI is defined as a technology capable of accomplishing the kinds of tasks that human experts have previously solved through their knowledge, skills and intuition. In particular, the machine learning approach has been a key driver in the development of clinical AI with such capabilities.
However, despite their advantages, AI systems can pose a threat to patients' legally required informed consent. This requirement obliges the medical professional to disclose information in order to redress the imbalance of expertise between the two sides.
In his research, Christian Günther identifies four specific problems that can occur in this context:
- The use of clinical AI creates a degree of uncertainty based on the nature of AI-generated knowledge and the difficulties in scientifically verifying that knowledge.
- Some ethically significant decisions may be made relatively independently, i.e., without meaningful patient involvement.
- Patients’ ability to make rational decisions in the medical decision-making process can be significantly undermined.
- Patients may not be able to respond appropriately to non-obvious substitutions of human expertise by AI.
To address these issues, Günther examined the norms underlying the principle of informed consent in the UK and California and, using a specific regulatory proposal, demonstrates how legal regulations can be developed in a targeted manner to both promote technological progress and protect patient rights.
More information: Christian Günther, Artificial Intelligence, Patient Autonomy and Informed Consent.

Provided by Max-Planck-Institut für Sozialrecht und Sozialpolitik

Citation: AI in medicine—a threat to patient autonomy? (2025, February 13), retrieved 13 February 2025 from https://medicalxpress.com/news/2025-02-ai-medicine-threat-patient-autonomy.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.