Facebook is testing new tools aimed at preventing users from searching for, viewing and sharing photos and videos that depict child sexual abuse.
“Using our apps to harm children is disgusting and unacceptable,” Antigone Davis, who oversees Facebook’s global security efforts, said in a blog post on Tuesday.
The move comes as the social network faces growing pressure to confront the problem amid its plans to enable default encryption for messages on Facebook Messenger and the Facebook-owned photo service Instagram. End-to-end encryption means that no one except the sender and recipient can read a message, including Facebook and law enforcement officials. Child safety advocates have expressed concern that Facebook's encryption plans will make child predators harder to detect.
The first tool Facebook is testing is a pop-up notice shown to users who search for a term associated with child sexual abuse. The notice asks users whether they want to continue and includes links to offender diversion organizations. It also states that child sexual abuse is illegal and that viewing such images can result in imprisonment.
Last year, Facebook said, it analyzed the child sexual abuse material it reported to the National Center for Missing and Exploited Children. The company found that more than 90% of the material was the same as, or similar to, previously reported content. Copies of just six videos made up more than half of the child abuse material reported in October and November 2020.
“The fact that only a few pieces of content were responsible for many reports suggests that a greater understanding of intent could help us stop this resurgence,” Davis wrote in the blog post. The company also conducted a separate analysis, which found that users were sharing these images for reasons other than harming the child, including “insulting the child or bad humor.”
The second tool Facebook said it is testing is an alert that will notify users when they try to share these harmful images. The safety alert tells users that if they share this type of content again, their account may be disabled. The company said it is using the tool to help identify “behavioral signals” from users who may be at risk of sharing this harmful content. Davis said this helps the company “educate them on why it is harmful and encourage them not to share it.”
Facebook also updated its child safety policies and reporting tools. The social media giant said it will remove Facebook profiles, pages, groups and Instagram accounts that are dedicated to sharing otherwise innocent images of children but whose captions, hashtags or comments contain inappropriate signs of affection or commentary about the children depicted. Facebook users who report content will also see an option to tell the social network that a photo or video “involves a child,” allowing the company to prioritize it for review.
Meanwhile, images of online child sexual abuse have been increasing, according to a January report by Business Insider. From July to September, Facebook detected at least 13 million of these harmful images on its main social network and Instagram.