Facebook announced late Monday that it would ban “deepfakes,” which are AI-manipulated videos that distort reality, often simulating real people in fake situations.
The social media giant announced the changes in a company executive blog post, saying it will remove deepfakes and other types of heavily manipulated media from its platform.
Specifically, the company laid out two main criteria for removing content under the new rules. The first is that the company will remove content posted on Facebook if it has been edited in ways that would “likely mislead someone into thinking a subject of the video said words that they did not actually say,” according to the post written by Monika Bickert, Facebook’s vice president of global policy management. The second is that the platform will ban media if it’s the product of AI or machine learning that “merges, replaces, or superimposes content onto a video, making it appear to be authentic.”
Facebook came under fire last year for allowing a manipulated video of Speaker Nancy Pelosi in which her speech was altered and slowed to make her sound drunk. At the time, Facebook said the video went through its fact-checking process, which does not require content to be true to be allowed on the platform. The company said it displayed a note with additional context about the video, telling users that it was false.
Under its new rules, Facebook told Recode it still would not take down the Pelosi video, saying that it does not meet the standards of the new policy. “Only videos generated by artificial intelligence to depict people saying fictional things will be taken down. Edited or clipped videos will continue to be subject to our fact-checking program. In the case of the Pelosi video, once it was rated false, we reduced its distribution,” a company spokesperson told Recode.
Whether videos are deepfakes or not, they’re all subject to Facebook’s fact-checking system. If content is proven to be false, it can be flagged with a note labeling the content as such, and Facebook will deprioritize it in its News Feed.
In an email, Omer Ben-Ami, the co-founder of Canny AI (the Israeli advertising startup that last year helped artists produce a viral deepfake of Zuckerberg on Instagram, which Facebook opted to keep up), said Facebook’s new policy seemed “reasonable.” However, he cautioned that his company and others “use this technology for legitimate reasons, mainly for personalization and localization of content.”
He said it was unclear why the policy only applies to content manipulated by artificial intelligence.
Overall, there are some exceptions to Facebook’s new rules: They don’t apply to videos that are parody or satire, nor do they ban videos edited “solely to omit or change the order of words” someone is saying.
The change builds on previous efforts Facebook has made in combating deepfakes. Last fall, the company helped launch a “Deepfake Detection Challenge” that’s meant to accelerate global research into technology that can identify misleading AI-manipulated videos. The company also began an initiative with Reuters that’s meant to train journalists to better spot manipulated media, including deepfakes.
“As the tech develops, so do policies, and we hope Facebook’s policymakers will make sure to make the distinction between content that was legitimately manipulated and malicious content,” added Ben-Ami.
Additional reporting by Rebecca Heilweil.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
Shirin Ghaffary