In the last few years, Facebook's content policy has repeatedly come under criticism. In particular, given the rise in "Facebook crimes", it could be argued that the immense popularity of this social media platform is fundamentally changing the way we interact. Facebook, then, is neither a safe harbor nor the place of love and tolerance that Mark Zuckerberg, its principal founder, would have us believe.

In reality, Facebook is a global "crime scene", where both the platform and the social networks it enables are silent witnesses. Facebook crimes or, more generally, social media crimes consist of uploading content that features criminal conduct to social media platforms. Most incidents involve sharing a photo or a video on Facebook or using an electronic messaging system such as WhatsApp or Facebook Messenger. Moreover, the "Facebook Live" tool means that even violent crimes can be broadcast live. In some cases, this "criminal exhibitionism" has allowed social media users to report offenses and thereby lead quickly to the arrest of the offenders. Lamentably, it is also inciting others to use the platform to share or commit crimes of their own, and it is increasing the number of criminals who share videos of an offense using, precisely, Facebook Live as their tool of choice.

Facebook Community Standards: Brief Observations on the User Content Policy

Facebook outlines on its website the Community Standards that its users are expected to observe. In particular, the Standards set out what is and is not allowed in the community; they apply worldwide and cover all kinds of content, including photos, videos, and text posts. Additionally, Facebook has a content review process through which content removal tools are applied. Facebook's content removal tool has been used, for instance, to deal with posts encouraging suicide and self-injury, images of nude children, depictions of graphic violence, content featuring violent behavior, sexual content, and pornographic images or videos.

It should be noted that Facebook currently uses artificial intelligence and machine learning tools to identify and remove violent content. These algorithms, however, are subject to errors and bugs, which can allow some child sexual abuse content to pass through Facebook's checks. Unfortunately, in the case of child sexual abuse content, Facebook does not use a program specifically designed to detect these kinds of posts, relying instead on its general content algorithm.
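To make the distinction concrete, the sketch below illustrates the general technique used across the industry to catch re-uploads of known abusive images: perceptual hashing, the idea underlying tools such as Microsoft's PhotoDNA. This is a minimal illustration assuming the Pillow imaging library; the hash function, the database, and the distance threshold are simplified stand-ins, not Facebook's actual system.

```python
# Minimal sketch of perceptual-hash matching against a database of
# fingerprints of previously confirmed abusive images. All names and
# thresholds here are hypothetical illustrations.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known abusive images.
KNOWN_ABUSE_HASHES: set[int] = set()

def should_block(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose fingerprint is near a known abusive image."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_ABUSE_HASHES)
```

Because Hamming distance on perceptual hashes tolerates minor edits such as resizing or recompression, hash matching is effective against known material but powerless against newly created abuse content, which is precisely why a dedicated detection program would matter.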

A default content removal tool should therefore be introduced for users who repeatedly post nude images or videos of babies and/or children. Furthermore, considering the spread on the internet of sexually exploitative videos and photos of women, including rape, it is imperative that Facebook adopt specific tools to protect the victims of these abuses. For the time being, Facebook's team of human reviewers participates in the content review process when the platform's technology is still at a developmental stage or when the content in question needs further analysis to determine whether it adheres to the platform's guidelines.
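As a rough illustration, the sketch below shows one way such a default rule could work: posts in designated high-risk categories are removed by default, and repeat offenders face escalating sanctions. The category labels, strike threshold, and function names are hypothetical assumptions, not Facebook's actual policy or API.

```python
# Hypothetical sketch of a "default removal" rule with repeat-offender
# escalation, as proposed in the article. Values are illustrative only.
from collections import Counter

HIGH_RISK_CATEGORIES = {"child_nudity", "sexual_exploitation"}
STRIKE_LIMIT = 3  # hypothetical threshold

strikes: Counter = Counter()

def handle_flagged_post(user_id: str, category: str) -> str:
    """Remove high-risk posts by default and escalate repeat offenders."""
    if category not in HIGH_RISK_CATEGORIES:
        return "queue_for_human_review"
    strikes[user_id] += 1
    if strikes[user_id] >= STRIKE_LIMIT:
        return "remove_and_suspend_account"
    return "remove_by_default"
```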

Another relevant issue concerns the role of the Facebook content moderator. Facebook content moderators are tasked with analyzing content that potentially violates the terms of use, and are split into two groups. The first group evaluates violent content against the Community Standards, erasing a user's post if it contravenes them. The second group performs the same evaluation for content of a sexual or pornographic nature. Facebook content moderators therefore operate to ensure compliance with the Community Standards, eliminating content that escaped the platform's algorithm.

The Most Frequent Facebook User Crimes

A large number of crimes are committed on the Facebook platform and, as remarked previously, this social media platform can be considered a real online crime scene. Violent crimes such as murder, personal injury, child abuse, rape, terrorism, and incitement to suicide, as well as insults and libel, are not the only offenses committed on the platform. Facebook scams and extortion committed by fake users occur more often than one might assume.

However, the sobering reality is that rape is one of the most frequent crimes on Facebook. Indeed, some rapists, after committing their crimes, post videos or photos on Facebook that show the assault in graphic detail. The Facebook Live tool has allowed sexual offenders to live stream videos featuring rape, gang rape, torture, murder, and child abuse. While Facebook employs a team of experts to handle such cases, it equally requires the active cooperation of its users to swiftly detect and report such crimes to the platform and the competent authorities.

It should be noted that in the case of a Facebook scam the Community Standards do not serve as a reporting tool. However, according to the Facebook Help Center, it is possible to report an account for impersonation, or even a fake account, both of which are often used for Facebook scams. The only behavioral advice Facebook provides for reacting to scams is outlined in its help section, which suggests being wary of "People who you don't know in person asking you for money" or "People claiming to be a friend or relative in an emergency". Furthermore, Facebook identifies other kinds of scams, such as the "Romance Scam", the "Lottery Scam", and "Access Token Theft". The last is particularly dangerous, as users can have their identity stolen simply by clicking on a link.

Final Remarks

Social media crimes have been on the rise in the last few years, and it is crucial that users manage their accounts carefully and interact wisely on social media platforms, especially Facebook.

However, the onus of responsibility should not fall solely on the individual user. Rather, Facebook should reconsider its user content policy, especially concerning categories of content that show child nudity, abuse, or bullying. For these, Facebook's policy should be more restrictive, so as to discourage online crimes as much as possible. Content-specific algorithms should be introduced to better detect, for example, instances of child abuse. The purpose of such tools would be at least to reduce the number of these crimes on Facebook.

In this respect, it should be pointed out that Apple has recently announced child safety features to protect its youngest users. Concretely, iOS and iPadOS will use new cryptographic tools to help limit the spread of Child Sexual Abuse Material (CSAM) online while preserving user privacy. CSAM detection will help Apple provide valuable information to law enforcement about collections of CSAM in iCloud Photos. Furthermore, updates to Siri and Search will give parents and children expanded information and help if they encounter unsafe situations, and will intervene when users try to search for CSAM-related topics.
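At the heart of Apple's announced design is a matching threshold: according to Apple's technical summary, private set intersection and threshold secret sharing ensure that Apple learns nothing about a user's photos until a minimum number of them match the known-CSAM hash database. The toy sketch below illustrates only that counting logic; the threshold value and function name are hypothetical, and the real cryptographic machinery is omitted.

```python
# Highly simplified sketch of the threshold idea in Apple's announced
# CSAM detection: an account is surfaced for human review only once
# enough photos match the known-CSAM hash database. The real design
# uses private set intersection and threshold secret sharing; this toy
# version only shows the counting step. All values are hypothetical.

MATCH_THRESHOLD = 30  # illustrative value, not Apple's actual parameter

def review_needed(match_flags: list, threshold: int = MATCH_THRESHOLD) -> bool:
    """Return True only once enough photos match the hash database."""
    return sum(bool(f) for f in match_flags) >= threshold
```

The privacy concern discussed below is precisely that any such pipeline, once deployed, evaluates and retains information derived from users' private photo libraries.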

However, these child safety features could have dangerous consequences, essentially because the same algorithms could breach user privacy agreements and collect sensitive personal data. Under Apple's forthcoming child safety features, user information would be saved, producing a scenario that recalls the movie The Circle. Hence, if Facebook intends to adopt a child abuse detection algorithm, it should be very cautious and consider the potential negative impact of privacy breaches.

Finally, it should be noted that the Community Standards state that "people need to feel safe in order to build community. We are committed to removing content that encourages real-world harm, including (but not limited to) physical, financial and emotional injury". This statement does not correspond to reality. Indeed, Facebook's policy has turned out to be quite ambiguous when one considers, for instance, its attitude vis-à-vis child abuse cases.

In conclusion, Facebook's content removal policy risks being arbitrary: breaches and bugs in the AI algorithm used to enforce it could lead to a situation in which ever more personal data is wrongly flagged as criminal. To this must be added the fact that Facebook often delegates the content review process to moderators who are not properly trained, or who suffer from prolonged exposure to content featuring abuse.
