For two years now, many of us have struggled mightily with the behemoth that is Facebook. We report illegal posts—photographs or videos featuring RD’s children, threats toward the Hampstead community, threats toward anyone who disagrees with the Hoaxtead mobsters—and nine times out of 10, the message we receive back from some anonymous Facebook moderator runs along these lines: “Thank you for letting us know about this. We’ve reviewed the (image/post/video), and although it doesn’t go against any of our specific Community Standards, you did the right thing by letting us know about it. We understand that it may still be offensive or distasteful to you, so we want to help you see less of things such as this in the future”.
We’ve complained to Facebook, via their “How did you find this experience?” comment box, but our complaints vanish into the dark dank void that is the Facebook reporting system.
And yet, every now and then one of us is successful in having a post removed—our intrepid commenter, Sooper Seekrit Facebook Snitch™, is a dab hand at it, for example. So what makes the difference? How does Facebook decide what can stay and what must go?
The Facebook files
On Sunday, The Guardian reported that someone at Facebook had leaked copies of the secret manuals and flowcharts used by moderators. According to The Guardian,
The Guardian has seen more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm.
There are even guidelines on match-fixing and cannibalism.
Facebook currently employs 4,500 content moderators and has announced plans to hire 3,000 more, but even that is a rather puny force when one considers the millions of posts, images, and videos submitted for review each day.
Each moderator has about 10 seconds in which to decide whether to ignore, “escalate”, or delete a reported item; “escalation” means the item is passed to a more senior manager to decide what action to take. Facebook has acknowledged the high rate of turnover amongst its moderators, who often suffer from anxiety and post-traumatic stress as a result of their jobs:
“People can be highly affected and desensitized. It’s not clear that Facebook is even aware of the long term outcomes, never mind tracking the mental health of the workers,” said Sarah T Roberts, an information studies professor from UCLA who studies large-scale moderation of online platforms. …
In January, moderators in similar roles at Microsoft sued the company, alleging that exposure to images of “indescribable sexual assaults” and “horrible brutality” resulted in severe post-traumatic stress disorder (PTSD). Microsoft disputes the claims.
Repeated exposure to extreme content can lead to “secondary trauma”, a condition similar to PTSD, except that it stems from viewing images of traumatic events rather than experiencing them first-hand.
How decisions are made
Hoaxtead researchers are constantly watching out for posts, images, and videos which are relevant to the Hampstead SRA hoax, so we were particularly interested in how Facebook makes decisions about material related to child abuse.
According to The Guardian,
Facebook’s policies on graphic violence, non-sexual child abuse and animal abuse reveal its attempts to remain open while trying to ban horrific images. Moderators remove content ‘upon report only’, meaning graphic content could be seen by millions before it is flagged. Facebook says publishing certain images can help children to be rescued.
Facebook’s policy, per their training manual, reads, “We allow ‘evidence’ of child abuse to be shared on the site to allow for the child to be identified and rescued, but we add protections to shield the audience”.
As for graphic violence against children, the leaked guidelines show that moderators will only remove evidence of physical non-sexual child abuse if it is “shared with sadism and celebration”. Everything else will either be marked as “disturbing” or ignored.
And of course, as we’ve discovered, Facebook’s reporting options contain no descriptors that come anywhere near the type of child abuse exemplified by sharing images of RD’s children. They don’t fit into Facebook’s algorithm, and so their images, videos, etc. are ignored. Oh, excuse us, “not actioned”.
What about threats of violence?
Things get even more puzzling in the murky area of threats of violence. Facebook draws a distinction between “credible” threats and “aspirational” threats, allowing the latter but not the former. And then there is the sticky question of “protected” versus ordinary people. For example,
Remarks such as “someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.
So “kick a person with red hair” or “let’s beat up fat kids” would not go against Facebook’s community standards, while “#stab and become the fear of the Zionist” would be deleted.
In a leaked document on moderating threats of violence, Facebook notes that “people use violent language to express frustration online” and feel safe doing so on Facebook. Sitting behind a keyboard and monitor, people seem to become disinhibited, and will often make threats they wouldn’t think of saying face to face.
“We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration.”
It adds: “People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.”
Facebook conceded that “not all disagreeable or disturbing content violates our community standards”.
As we read through Facebook’s various leaked guidelines, we couldn’t help but be struck by the murkiness of it all. Why is it all right to talk about murdering a “bitch”, but not a president? How does Facebook know that a statement like the one about snapping a woman’s neck is not “credible”? Bringing it back to Hoaxtead, how do they know that a statement like Bronwyn Llewellyn’s infamous threat to murder RD was made in an “aspirational” rather than a “credible” manner?
We believe the time has come for Facebook to seriously reconsider its approach to creating a safe community that still values freedom of speech; in our view, freedom of speech ends where terror and abuse begin.