SYDNEY – Google has informed the Australian authorities that it received more than 250 complaints globally over nearly a year that its artificial intelligence (AI) software was used to make deepfake terrorism material.
The Alphabet-owned tech giant also said it had received dozens of user reports warning that its AI program Gemini was being used to create child abuse material, according to the Australian eSafety Commissioner.
Under Australian law, tech firms must periodically supply the eSafety Commissioner with information about their harm-minimisation efforts or risk fines. Google's reporting period covered April 2023 to February 2024.
Since OpenAI’s ChatGPT exploded into public consciousness in late 2022, regulators around the world have called for better guard rails so AI cannot be used to enable terrorism, fraud, deepfake pornography and other abuse.
The Australian eSafety Commissioner called Google’s disclosure “world-first insight” into how users may be exploiting the technology to produce harmful and illegal content.
“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” eSafety Commissioner Julie Inman Grant said in a statement.
In its report, Google said it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini, and another 86 user reports alleging AI-generated child exploitation or abuse material.
It did not say how many of the complaints it verified, according to the regulator.
A Google spokesperson said the company did not allow the generation or distribution of content that facilitates violent extremism or terrorism, child exploitation or abuse, or other illegal activities.
“We are committed to expanding on our efforts to help keep Australians safe online,” the spokesperson said by e-mail.
“The number of Gemini user reports we provided to eSafety represents the total global volume of user reports, not confirmed policy violations.”
Google used hash-matching – a system that automatically compares newly uploaded images against a database of already-known images – to identify and remove child abuse material made with Gemini.
But it did not use the same system to weed out terrorist or violent extremist material generated with Gemini, the regulator added.
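In rough outline, hash-matching works like the minimal Python sketch below. The hash set and the use of SHA-256 are illustrative assumptions only; production systems typically rely on perceptual hashes, such as Microsoft's PhotoDNA, that tolerate resizing and re-encoding, rather than exact cryptographic digests.

```python
import hashlib

# Illustrative placeholder: in a real system this would be a database of
# fingerprints of previously identified illegal images, loaded from a
# trusted source. SHA-256 only catches byte-for-byte exact copies.
KNOWN_ABUSE_HASHES: set[str] = set()


def image_fingerprint(data: bytes) -> str:
    """Return a hex-digest fingerprint of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()


def is_known_abuse_material(data: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad image."""
    return image_fingerprint(data) in KNOWN_ABUSE_HASHES
```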
The regulator has fined Telegram and Twitter, now known as X, for what it called shortcomings in their reports. X has lost one appeal over its fine of A$610,500 (S$515,200) but plans to appeal again. Telegram also plans to challenge its fine. REUTERS