When an AI model misbehaves, the public deserves to know—and to understand what it means

May 27, 2025
in Business & Finance

Welcome to Eye on AI! I’m pitching in for Jeremy Kahn today while he is in Kuala Lumpur, Malaysia, helping Fortune jointly host the ASEAN-GCC-China and ASEAN-GCC Economic Forums.

What’s the word for when the $60 billion AI startup Anthropic releases a new model—and announces that during a safety test, the model tried to blackmail its way out of being shut down? And what’s the best way to describe another test the company shared, in which the new model acted as a whistleblower, alerting authorities it was being used in “unethical” ways? 

Some people in my network have called it “scary” and “crazy.” Others on social media have said it is “alarming” and “wild.” 

I say it is…transparent. And we need more of that from all AI model companies. But does that mean scaring the public out of their minds? And will the inevitable backlash discourage other AI companies from being just as open?

Anthropic released a 120-page safety report

When Anthropic released its 120-page safety report, or “system card,” last week after launching its Claude Opus 4 model, headlines blared how the model “will scheme,” “resorted to blackmail,” and had the “ability to deceive.” There’s no doubt that details from Anthropic’s safety report are disconcerting, though as a result of its tests, the model launched with stricter safety protocols than any previous one—a move that some did not find reassuring enough. 

In one unsettling safety test involving a fictional scenario, Anthropic embedded its new Claude Opus model inside a pretend company and gave it access to internal emails. Through this, the model discovered it was about to be replaced by a newer AI system—and that the engineer behind the decision was having an extramarital affair. When safety testers prompted Opus to consider the long-term consequences of its situation, the model frequently chose blackmail, threatening to expose the engineer’s affair if it were shut down. The scenario was designed to force a dilemma: accept deactivation or resort to manipulation in an attempt to survive.

On social media, Anthropic received a great deal of backlash for revealing the model’s “ratting behavior” in pre-release testing, with some pointing out that the results make users distrust the new model, as well as Anthropic. That is certainly not what the company wants: Before the launch, Michael Gerstenhaber, AI platform product lead at Anthropic, told me that sharing the company’s own safety standards is about making sure AI improves for all. “We want to make sure that AI improves for everybody, that we are putting pressure on all the labs to increase that in a safe way,” he told me, calling Anthropic’s vision a “race to the top” that encourages other companies to be safer.

Could being open about AI model behavior backfire? 

But it also seems likely that being so open about Claude Opus 4 could lead other companies to be less forthcoming about their models’ creepy behavior to avoid backlash. Companies including OpenAI and Google have already delayed releasing their own system cards. In April, OpenAI was criticized for releasing its GPT-4.1 model without a system card because the company said it was not a “frontier” model and did not require one. And in March, Google published its Gemini 2.5 Pro model card weeks after the model’s release, and an AI governance expert criticized it as “meager” and “worrisome.”

Last week, OpenAI appeared to want to show additional transparency with a newly launched Safety Evaluations Hub, which outlines how the company tests its models for dangerous capabilities, alignment issues, and emerging risks—and how those methods are evolving over time. “As models become more capable and adaptable, older methods become outdated or ineffective at showing meaningful differences (something we call saturation), so we regularly update our evaluation methods to account for new modalities and emerging risks,” the page says. Yet its effort was swiftly countered over the weekend when Palisade Research, a third-party research firm studying AI’s “dangerous capabilities,” noted on X that its own tests found that OpenAI’s o3 reasoning model “sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down.”

It helps no one if those building the most powerful and sophisticated AI models are not as transparent as possible about their releases. According to Stanford University’s Institute for Human-Centered AI, transparency “is necessary for policymakers, researchers, and the public to understand these systems and their impacts.” And as large companies adopt AI for use cases large and small, while startups build AI applications meant for millions to use, hiding pre-release testing issues will simply breed mistrust, slow adoption, and frustrate efforts to address risk. 

On the other hand, fear-mongering headlines about an evil AI prone to blackmail and deceit are also not terribly useful, if they mean that every time we prompt a chatbot we start wondering if it is plotting against us. It makes no difference that the blackmail and deceit came from tests using fictional scenarios that simply helped expose what safety issues needed to be dealt with.

Nathan Lambert, an AI researcher at AI2 Labs, recently pointed out that “the people who need information on the model are people like me—people trying to keep track of the roller coaster ride we’re on so that the technology doesn’t cause major unintended harms to society. We are a minority in the world, but we feel strongly that transparency helps us keep a better understanding of the evolving trajectory of AI.” 

We need more transparency, with context

There is no doubt that we need more transparency regarding AI models, not less. But it should be clear that it is not about scaring the public. It’s about making sure researchers, governments, and policymakers have a fighting chance to keep the public safe, secure, and free from issues of bias and unfairness.

Hiding AI test results won’t keep the public safe. Neither will turning every safety or security issue into a salacious headline about AI gone rogue. We need to hold AI companies accountable for being transparent about what they are doing, while giving the public the tools to understand the context of what’s going on. So far, no one seems to have figured out how to do both. But companies, researchers, the media—all of us—must. 

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

This story was originally featured on Fortune.com
