Brit lawmaker targeted by AI deepfake fails to get answers from US Big Tech
Appearing before Parliament, Meta, Google and X struggle to explain how a fake political video circulated for so long
A member of the UK Parliament's lower house who was the victim of a deepfake AI campaign this week had a rare chance to confront the Big Tech executives who helped spread it. Their answers disappointed.
Representatives from Meta, Google, and X stumbled, offered platitudes, and explained their respective policies when Conservative MP George Freeman confronted them, but did little to make amends for spreading the potentially ruinous AI fake, or to commit to ensuring it could not happen again.
Last autumn, Freeman was the subject of an AI-created fake that falsely claimed he had defected to a rival party, Reform. The claim was plausible enough, given several genuine Conservative defections in recent months, but entirely fabricated.
Not only was it damaging to his reputation, but allowing political misinformation to continue to spread unchecked could end the democratic process in the UK, he argued. Freeman said platforms spreading the content are failing to respond. "There's no redress. There was no statement or principle that it was a problem," he said in Parliament yesterday, labeling the event a "serious disruption to democratic representation."
Step forward Google, which owns YouTube.
"We have policies about election ads which are aimed at ensuring that people are allowed to participate in free and fair elections just during election time," Zoe Darme, director for trust, knowledge and information products, told the House of Commons Science, Innovation and Technology Committee.
A video deemed "violative" under Google's definition might be picked up by a "classifier" or, failing that, "reported and reviewed against community guidelines and removed."
However, Darme was unable to say whether something so demonstrably false would in itself be "violative."
Next up was Wifredo Fernández, director of global government affairs at X (formerly known as Twitter). "We have our deceptive identities policy so that deals with impersonation, and we have our synthetic and manipulated media policy, which maybe would apply in this case," he said.
He outlined a three-part test under the platform's synthetic media policy, but noted it applied to confusion across X generally, not within Freeman's specific constituency. The possible outcome: a community note. Asked what action X had actually taken, Fernández said he'd "have to check with the teams." Freeman confirmed X had taken none.
Also among the US giants was Meta, owner of social media megaliths Facebook and Instagram.
Rebecca Stimson, UK public policy director, told Freeman, "It was labeled by our fact checkers, and it was down-ranked. And down rank does have a very significant impact. It can mean up to 80 to 90 percent less engagement."
She said Meta didn't always remove misinformation. Instead, it took a tiered approach, considering, for example, whether the content was circulating during an election period. Meta would never be able to find and remove every instance of misinformation across every platform, she said, but people could see it labeled with the correct information.
Addressing these responses, Freeman said: "It feels to me as though the platforms are taking the approach that 'they've got a policy' and not policing actively. It falls to us as Parliamentarians to police it. My instinct is to pass a very simple law that somebody's identity belongs to them and cannot be stolen, used, misappropriated, whatever the purpose… You should go to bed at night not fearing that in the morning, you find a deeply damaging, disruptive and dangerous misrepresentation of you." ®