ECONOMYNEXT – Google’s revised Artificial Intelligence (AI) policy shows the company’s willingness to develop AI for weapons, Human Rights Watch has said, and underscores why voluntary guidelines are not a substitute for regulation and enforceable law.
“Google’s previous Responsible AI Principles stated the company would not develop AI ‘for use in weapons’ or where the primary purpose is surveillance,” Anna Bacciarelli, a senior researcher at Human Rights Watch (HRW), said.
“Google had committed to ‘not design or deploy AI’ that causes ‘overall harm’ or ‘contravenes widely accepted principles of international law and human rights.’ Those red lines are no longer applicable.”
The revised AI policy makes no clear commitment to refrain from developing AI for weapons.
“The company’s revised AI Principles state that Google’s AI products will ‘align with’ human rights without explaining how.”
“This move away from explicitly prohibited uses of AI is deeply concerning. Sometimes, it’s simply too risky to use AI, a suite of complex, fast-developing technologies whose consequences we are discovering in real time,” HRW said.
That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law, HRW said.
Existing international human rights law and standards apply to the use of AI, and regulation can be crucial in translating norms into practice.
Militaries are increasingly using AI in warfare, HRW pointed out, where reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm.
“Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences.” (Colombo/Feb10/2025)