
A new set of guidelines has been launched to create trustworthy AI systems in health care. The first of its kind, the FUTURE-AI guideline provides recommendations covering the entire lifecycle of medical AI, from design, development, and validation to regulation, deployment, and monitoring.
In recent years, artificial intelligence (AI) has made significant strides in health care, helping with tasks like disease diagnosis and predicting treatment outcomes. However, despite these advances, many health care professionals and patients are still hesitant to fully embrace AI technologies. This hesitation largely stems from concerns about trust, safety, and ethics.
In particular, existing research has shown that AI tools in health care can be prone to errors and patient harm, biases that widen health inequalities, a lack of transparency and accountability, and data privacy and security breaches.
To overcome these challenges, the FUTURE-AI Consortium has developed a comprehensive set of guidelines, published in the BMJ. Developed by an international consortium of 117 experts from 50 countries, the new FUTURE-AI guidelines provide a roadmap for creating trustworthy and responsible AI tools for health care.
The FUTURE-AI guidelines are built around six guiding principles:
- Fairness: AI should treat all patients equitably and without bias.
- Universality: AI solutions should be applicable across different health care contexts and populations.
- Traceability: It should be possible to track how AI systems make decisions.
- Usability: AI tools must be user-friendly for health care professionals and patients alike.
- Robustness: AI systems should perform reliably under various conditions.
- Explainability: Patients and clinicians need clear explanations of how AI arrives at its conclusions.
Gary Collins, Professor of Medical Statistics at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, and an author of FUTURE-AI, said, “These guidelines fill an important gap in the field of health care AI to give clinicians, patients, and health authorities the confidence to adopt AI tools knowing they are technically sound, clinically safe, and ethically aligned. The FUTURE-AI framework is designed to evolve over time, adapting to new technologies, challenges, and stakeholder feedback. This dynamic approach ensures the guidelines remain relevant and useful as the field of health care AI continues to rapidly advance.”
More information:
Karim Lekadir et al, FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare, BMJ (2025). DOI: 10.1136/bmj-2024-081554
Citation:
New guidelines establish framework for trustworthy AI in health care (2025, March 17), retrieved 17 March 2025 from https://medicalxpress.com/news/2025-03-guidelines-framework-trustworthy-ai-health.html
