
It’s inoperable cancer. Should AI make call about what happens next?

February 11, 2025


Rebecca Weintraub Brendel, director of Harvard Medical School’s Center for Bioethics. Credit: Veasey Conway/Harvard Staff Photographer

AI is already being used in clinics to help analyze imaging data, such as X-rays and scans. But the recent arrival of sophisticated large language models is forcing consideration of whether the technology's use should broaden into other areas of patient care.

In this edited conversation with the Gazette, Rebecca Weintraub Brendel, director of Harvard Medical School’s Center for Bioethics, looks at end-of-life options and the importance of remembering that just because we can doesn’t always mean we should.

When we talk about artificial intelligence and end-of-life decision-making, what are the important questions at play?

End-of-life decision-making is the same as other decision-making because ultimately, we do what patients want us to do, provided they are competent to make those decisions and what they want is medically indicated—or at least not medically contraindicated.

One challenge arises when a patient is so ill that they can’t tell us what they want. A second is understanding, in both a cognitive and an emotional way, what the decision means.

People sometimes say, “I would never want to live that way,” but they wouldn’t make the same decision in all circumstances. Patients who’ve lived with progressive neurologic conditions like ALS for a long time often have a sense of when they’ve reached their limit. They’re not depressed or frightened and are ready to make their decision.

On the other hand, depression is quite prevalent in some cancers and people tend to change their minds about wanting to end their lives once symptoms are treated.

So if someone is young and says, “If I lose my legs, I wouldn’t want to live,” should we allow for shifting perspectives as we get to the end of life?

When we’re faced with something that alters our sense of bodily integrity, our sense of ourselves as fully functional human beings, it’s natural, even expected, that our capacity to cope can be overwhelmed.

But there are pretty devastating injuries where, a year later, people report having a better quality of life than before, even for severe spinal cord injuries and quadriplegia. So, we can overcome a lot, and our capacity for change, for hope, has to be taken into account.

So how do we, as healers of mind and body, help patients make decisions about their end of life?

For someone with a chronic illness, the standard of care has those decisions happening along the way, and AI could be helpful there. But at the point of diagnosis—do I want treatment or to opt for palliation from the beginning—AI might give us a sense of what one might anticipate, how impaired we might be, whether pain can be palliated, or what the tipping point will be for an individual person.

So, the ability to have AI gather and process orders of magnitude more information than what the human mind can process—without being colored by fear, anxiety, responsibility, relational commitments—might give us a picture that could be helpful.

What about the patient who is incapacitated, with no family, no advance directives, so the decision falls to the care team?

We have to have an attitude of humility toward these decisions. Having information can be really helpful. With somebody who’s never going to regain capacity, we’re stuck with a few different options. If we really don’t know what they would like, because they’re somebody who avoided treatment and really didn’t want to be in the hospital, or didn’t have a lot of relationships, we assume that they wouldn’t have sought treatment for something that was life-ending.

But we have to be aware that we’re making a lot of assumptions, even if we’re not necessarily doing the wrong thing. Having a better prognostic sense of what might happen is really important to that decision, which, again, is where AI can help.

I’m less optimistic about the use of large-language models for making capacity decisions or figuring out what somebody would have wanted. To me it’s about respect. We respect our patients and try to make our best guesses, and realize that we are all complicated, sometimes tortured, sometimes lovable, and, ideally, loved.

Are there things that AI should not be allowed to do? I’m sure it could make end-of-life recommendations rather than simply gather information.

We have to be careful where we use “is” to make an “ought” decision.

If AI told you that there is less than a 5% chance of survival, that alone is not enough to tell us what we ought to do. If there’s been a terrible tragedy or a violent assault on someone, we would look at that 5% differently than for someone who’s been battling a chronic illness over time and says, “I don’t want to go through this again, and I don’t want to put others through this. I’ve had a wonderful life.”

In diagnostic and prognostic assessments, AI has already started to outperform physicians, but that doesn’t answer the critical question of how we interpret that, in terms of what our default rules should be about human behavior.

It can help us be more transparent and accountable and respectful of each other by making it explicit that, as a society, if these things happen, unless you tell us otherwise, we’re not going to resuscitate. Or we are when we think there’s a good chance of recovery.

I don’t want to underestimate AI’s potential impact, but we can’t abdicate our responsibility to center human meaning in our decisions, even when based on data.

So these decisions should always be made by humans?

“Always” is a really strong word, but I’d be hard-pressed to say that we’d ever want to give away our humanity in making decisions of high consequence.

Are there areas of medicine where people should always be involved? Should a baby’s first contact with the world always be human hands? Or should we just focus on quality of care?

I would want people around even if a robot does the surgery because the outcome is better. We would want to maintain the human meaning of important life events.

Another question that comes up is, what will it mean to be a physician, a healer, a health care professional? We hold a lot of information, and that information asymmetry is one of the things that has led medical and other health care professionals to be held in high esteem.

But it’s also about what we do with the information, being a great diagnostician, having an exemplary bedside manner, and ministering to patients at a time when they’re suffering. How do we redefine the profession when the things we thought we were best at, we may not be the best at anymore?

At some point, we may have to question human interaction in the system. Does it introduce bias, and to what extent is processing by human minds important? Is an LLM going to create new information, come up with a new diagnostic category or a new disease entity? What ought the responsibilities of patients and doctors be to each other in a hyper-technological age? Those are important questions that we need to look at.

Are those conversations happening?

Yes. In our Center for Bioethics, one of the things we’re looking at is how artificial intelligence bears on some of our timeless challenges in health. Technology tends to go where there’s capital and resources, but LLMs and AI advances could allow us to care for swaths of the population where there’s no doctor within a day’s travel. Holding ourselves accountable on questions of equity, justice, and advancing global health is really important.

There are questions about moral leadership in medicine. How do we make sure that output from LLMs and future iterations of AIs comport with the people we think we are and the people we ought to be? How should we educate to make sure that the values of the healing professions continue to be front and center in delivering care? How do we balance the public’s health and individual health, and how does that play out in other countries?

So when we talk about patients in under-resourced settings and about AI’s capabilities versus what it means to be human, we need to be mindful that in some parts of the world to be human is to suffer and not have access to care?

Yes, because, increasingly, we can do something about it. As we’re developing tools that can allow us to make huge differences in practical and affordable ways, we have to ask, “How do we do that and follow our values of justice, care, respect for persons? How do we make sure that we don’t abandon them when we actually have the capacity to help?”

Provided by Harvard University


This story is published courtesy of the Harvard Gazette, Harvard University’s official newspaper. For additional university news, visit Harvard.edu.

Citation:
It’s inoperable cancer. Should AI make call about what happens next? (2025, February 11)
retrieved 11 February 2025
from https://medicalxpress.com/news/2025-02-inoperable-cancer-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


