Meta urged to go further in crackdown on 'nudify' apps

Meta has taken legal action against a company which runs ads on its platforms promoting so-called "nudify" apps, which typically use artificial intelligence (AI) to create fake nude images of people without their consent.
It has sued the firm behind CrushAI apps to stop it posting ads altogether, following a months-long cat-and-mouse battle to remove them.
"This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it," Meta said in a blog post.
Alexios Mantzarlis, who authors the Faked Up blog, said there have been "at least 10,000 ads" promoting nudifying apps on Meta's Facebook and Instagram platforms.
Mr Mantzarlis told the BBC he was glad to see Meta take this step - but warned it needed to do more.
"Even as it was making this announcement, I was able to find a dozen ads by CrushAI live on the platform and a hundred more from other 'nudifiers'," he said.
"This abuse vector requires continued monitoring from researchers and the media to keep platforms accountable and curtail the reach of these noxious tools."
In its blog, Meta said: "We'll continue to take the necessary steps - which could include legal action - against those who abuse our platforms like this."
'Devastating emotional toll'
The growth of generative AI has led to a surge in "nudifying" apps in recent years.
They have become so pervasive that in April the Children's Commissioner for England called on the government to introduce legislation to ban them altogether.
It is illegal to create or possess AI-generated sexual content featuring children.
But Matthew Sowemimo, Associate Head of Policy for Child Safety Online at the NSPCC, said the charity's research had shown predators were "weaponising" the apps to create illegal images of children.
"The emotional toll on children can be absolutely devastating," he said.
"Many are left feeling powerless, violated, and stripped of control over their own identity.
"The Government must act now to ban 'nudify' apps for all UK users and stop them from being advertised and promoted at scale."
Meta said it had also made another change recently in a bid to deal with the wider problem of "nudify" apps online, by sharing information with other tech firms.
"Since we started sharing this information at the end of March, we've provided more than 3,800 unique URLs to participating tech companies," it said.
The firm accepted it had an issue with companies evading its rules to deploy adverts without its knowledge, for example by creating new domain names to replace banned ones.
It said it had developed new technology designed to identify such ads, even if they didn't include nudity.
Nudify apps are just the latest example of AI being used to create problematic content on social media platforms.
Another concern is the use of AI to create deepfakes - highly realistic images or videos of celebrities - to scam or mislead people.
In June Meta's Oversight Board criticised a decision to leave up a Facebook post showing an AI-manipulated video of a person who appeared to be Brazilian football legend Ronaldo Nazário.
Meta has previously attempted to combat scammers who fraudulently use celebrities in adverts by using facial recognition technology.
It also requires political advertisers to declare the use of AI, because of fears around the impact of deepfakes on elections.