There is still a lot we don’t know about the robocall in New Hampshire that impersonated President Joe Biden, likely with AI voice cloning technology.
It’s not certain who was behind the call (the New Hampshire attorney general’s office is investigating), what software was used in its creation (one analysis blames ElevenLabs, but the startup’s own tool disagreed) or how many voters received one.
But it made one thing clear: the pressure is on for regulators to take action on deepfakes ahead of the election.
The impact of deepfakes on society — and elections particularly — has been an anxiety for years. Easy-to-use generative AI tools have recently moved it from an issue in niche areas to a top security risk across the board. Before the Biden robocall, AI deepfakes were used in attempts to disrupt elections in Slovakia and Taiwan.
Congress has taken note. Sen. Mike Rounds (R-SD) told POLITICO that as the Senate hashes out its priorities on AI legislation, there is growing recognition that tackling the use of AI in campaign ads and communications should top the list. And after explicit deepfakes of Taylor Swift spread on X last week, lawmakers renewed calls for urgent legislation on the issue.
A reminder: no federal laws currently prohibit the sharing or creation of deepfakes, though several bills have been proposed in Congress and some states have passed laws to crack down on manipulated media. The Federal Election Commission, too, has been considering rule changes to regulate the use of AI deepfakes in campaign materials.
“Deepfakes is the first test that generative AI has thrown at us because it fundamentally eliminates all trust,” Vijay Balasubramaniyan, CEO of the phone fraud detection company Pindrop, told Steven Overly on a POLITICO Tech podcast episode that delved into the Biden robocall incident. “If we can’t get together and figure out how to solve that problem, yeah, the killer robots will definitely get us.”
No surprise that’s easier said than done. One especially tricky part will be figuring out how to tackle the full range of manipulated media — from older techniques like splicing in fake audio to the new generative AI-fueled advancements, and all the hybrids in between. The robocall, for one, was not a very advanced audio deepfake, according to Matthew Wright, who chairs Rochester Institute of Technology’s cybersecurity department.
“There are tools available now that can do a better job, and consequently be more dangerous,” he told DFD.
Looking at the proposed federal bills and enacted state laws, it turns out there’s not a whole lot they collectively agree on, starting with even what should be regulated.
California and Washington’s laws target false depictions only of political candidates, while Texas and Minnesota go further to include those created with the intention of harming a political candidate or influencing election outcomes.
Consensus on what constitutes a deepfake is also lacking. Some bills distinctly cover images and video, while others extend to audio.
“This episode does highlight how important it is to have audio be included in these efforts,” said Mekela Panditharatne, counsel for the Brennan Center’s Democracy Program. “It could be kind of separated and done piecemeal. But I do think it makes sense to consider those different forms of gen-AI production together.”
Piecemeal seems to be the way regulation on deepfakes is moving. Wright drew parallels with the landscape for privacy legislation, where a patchwork of laws offer varying levels of protection.
A key question is who should be held accountable: phone service providers, platforms, developers or distributors of the deepfakes? How you answer that ends up defining the focus of proposed solutions.
At the federal level, bills have assigned responsibility to two main groups, said Panditharatne. The first includes the actors that fall under campaign finance disclosure requirements: campaigns, super PACs and donors. Often, the resulting bills address the timing of deepfakes — like one act that bans false endorsements and knowingly misleading voting information 60 days before a federal election — or transparency, as in the case of Rep. Yvette Clarke (D-N.Y.)’s bill which requires that political ads reveal their use of AI-generated material through mandatory labeling, watermarking, or audio disclosures.
The second category targets deepfake disseminators, in some cases only when they meet certain knowledge or intent requirements. Rep. Joe Morelle’s (D-N.Y.) Preventing Deepfakes of Intimate Images Act would make it illegal to share deepfake pornography without consent.
“There is relatively little attention both at the federal and state level in holding other actors to account for deepfakes,” Panditharatne added, giving social media companies and AI developers as examples.
As with past content moderation issues, social media giants enjoy some protection from legal liability under federal law (thanks to the famous Section 230 of the Communications Decency Act), which complicates such efforts. The bipartisan Senate NO FAKES Act is one attempt; it proposes holding liable anyone who makes or publicly shares an unauthorized digital replica — including companies — and allowing for penalties that start at $5,000 per violation.
Still, it’s unclear to Wright whether any regulations under consideration, or industry solutions in development, could have prevented the Biden robocall. Wright said he has built a deepfake detection tool of his own, but he also pointed to one potential solution for which the technology does not yet exist on phones. “Every microphone is going to have to have even live audio being constantly re-certified. That might have to be what’s required.”
The scheme’s design exploited an area where detection efforts focus less: a direct phone line, with none of the real-time feedback of social media and limited ability to replay the audio.
Enforcing the regulations being floated will require some sort of detection mechanism (many have been invented). But for now, bad actors armed with just a voter registration list, a phone and a 30-second clip of a political figure can still fly under the radar. The FTC has sponsored a challenge with a $25,000 top prize for the most effective approach to safeguard against the misuse of AI-enabled voice cloning, covering everything from imposter fraud to using someone’s voice without consent in music creation. Its suggestions include real-time detection and monitoring to alert users to voice cloning or block calls.
Commodity Futures Trading Commission Chair Rostin Behnam is throwing cold water on the idea that regulations will legitimize the crypto industry.
POLITICO’s Declan Harty reported for Pro subscribers on Behnam’s push for greater regulatory authority over crypto, which he sees as especially urgent after the Securities and Exchange Commission allowed the sale of Bitcoin ETFs.
“[The ETFs] have taken a speculative and volatile asset, wrapped it in a thin layer of indirect regulation, and packaged it as a shiny new product,” Behnam said at an event in Florida last week.
Declan writes that Behnam warned there’s still “nothing firmly in place” to protect investors from potential crypto fraud or volatility, something he says could be remedied by Congress giving his agency more regulatory authority after it brought almost 50 crypto-related regulatory actions in 2023. — Derek Robertson
Italy formally warned OpenAI it’s violating the General Data Protection Regulation.
POLITICO’s Clothilde Goujard reported on the regulatory shot across the bow for Pro subscribers this morning, with Italy’s data protection agency saying OpenAI has 30 days to contest alleged violations of data privacy committed by ChatGPT. Penalties can be up to four percent of a company’s global revenue.
The move follows Italy’s temporary ban of ChatGPT in the country in March 2023, and open investigations into the company’s practices in Spain, France, and Germany. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]); Daniella Cheslow ([email protected]); and Christine Mui ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.