Meta Oversight Board to Investigate Celebrity Deepfake Pornography

With the advancement of artificial intelligence (AI), deepfakes have become increasingly common. More recently, the technology has been used to target famous people with fake pornography.

And while much of this content is hosted on dedicated websites, it has also reached social media.

With this in mind, Meta’s Oversight Board, an independent body that can issue decisions and make recommendations to the company, announced that it will accept cases examining how the company handles deepfake pornography.

How Meta’s Oversight Board Will Analyze Deepfake Pornography


In the two cases the Board accepted, one from the US and one from India, the images were removed because they violated Meta’s policies on bullying and harassment, as well as its rules on pornography.

Meta does not tolerate “content that depicts, threatens, or promotes sexual violence, sexual assault, or sexual exploitation,” nor pornography or sexually explicit advertising on any of its platforms (Facebook, Instagram, WhatsApp, and Threads).

In a blog post published alongside the Board’s announcement, Meta said it had removed the posts for violating the section of its bullying and harassment policy dealing with “derogatory sexual photoshopping or drawings,” and determined that they also violated Meta’s policy on adult nudity and sexual activity.

The Board expects to use these cases to examine Meta’s policies and systems for detecting and removing nonconsensual deepfake pornography, says Board member Julie Owono.

“I can already tentatively say that the main problem is probably detection. It is not as perfect, or at least not as efficient, as we would like,” she pointed out.


Criticism outside the US

And it’s not just in the US that Meta faces criticism over its approach to content moderation. Its practices outside the US and Western Europe are also a cause for concern.

In this regard, the Board has already expressed concern that the two celebrity deepfake victims may have received different treatment in response to the fake images appearing on the platform.

We know that Meta is faster and more effective at moderating content in some markets and languages than others. By analyzing a US case and an Indian case, we want to see whether Meta is protecting all women equitably across the globe. It is vital that this issue is addressed, and the Board hopes to evaluate whether Meta’s policies and enforcement practices are effective in doing so.

Helle Thorning-Schmidt, Co-Chair of the Oversight Board

Exponential growth of deepfake pornography

A recent study by Channel 4 found deepfakes of more than four thousand celebrities. In January, nonconsensual deepfakes of singer Taylor Swift spread across social media, particularly on X, which responded by temporarily limiting searches for the artist’s name, a measure that did little to help.

Last month, NBC News reported that ads for a deepfake app running on Facebook and Instagram used images of actress Jenna Ortega, taken when she was a minor, with her clothing digitally removed.

In India, the phenomenon has targeted major Bollywood actresses such as Priyanka Chopra Jonas, Alia Bhatt, and Rashmika Mandanna.

Research indicates that since deepfakes emerged roughly half a decade ago, pornography created with the technology has primarily targeted women. A WIRED report published in 2023 found that more than 244,000 videos had been uploaded to the 35 main sites hosting this type of content, an all-time record.

Deepfake pornography is a growing cause of gender-based online harassment and is increasingly used to target, silence, and intimidate women online and offline. Several studies show that deepfake pornography primarily targets women. This content can be extremely harmful to victims, and the tools used to create it are becoming increasingly sophisticated and accessible.

Helle Thorning-Schmidt, Co-Chair of the Oversight Board


In January, US lawmakers introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), which would allow people whose images are used in deepfake pornography to sue if they can prove the content was created without their consent.

The congresswoman sponsoring the bill, Alexandria Ocasio-Cortez, was herself a target of this misuse of the technology earlier this year.

Victims of nonconsensual pornographic deepfakes have long waited for federal legislation to hold perpetrators accountable. As deepfakes become easier to create and access – 96% of deepfake videos circulating online are nonconsensual pornography – Congress must act to show victims that they will not be left behind.

Alexandria Ocasio-Cortez, US Congresswoman, earlier this year


Source: Olhar Digital
