Ever since the U.S. elections last month, the topic of fake news has been in the headlines almost daily. In particular, following the election of Donald Trump, Facebook came in for some harsh criticism for having allowed false stories to circulate; many critics believe such stories might have influenced the outcome of the election. “Pizzagate” is an obvious example, but other stories have spread across Facebook and other social media platforms—some of them “pants on fire” false, others containing elements of truth.
It’s the stories with some “truthiness” to them (to borrow Stephen Colbert’s coinage) that cause the biggest headaches, of course. For example, shortly after the election, a story started spreading that Ford had started moving production of some of its vehicles from Mexico to Ohio…just as Donald Trump had promised they would! Wahoo!
Oh, except that the story, while somewhat true, was based on this CNN report from August 2015, before Trump was even declared the Republican nominee. So…mostly false, then? Whoops.
This could be where third-party fact-checkers, such as FactCheck.org, come in. Facebook CEO Mark Zuckerberg noted in a post on 19 November that Facebook has relied on reports and reactions from users to point the way:
Historically, we have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others — like people sharing links to myth-busting sites such as Snopes — to understand which stories we can confidently classify as misinformation. Similar to clickbait, spam and scams, we penalize this content in News Feed so it’s much less likely to spread….We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.
Mr Zuckerberg stated that some goals will be to improve detection of fake news stories, make it easier to report them, look into third-party verification, and “explor(e) labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them”.
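Facebook hasn’t said how those signals are actually combined, but the general idea is easy to sketch. Here’s a toy version in Python; the signal names, weights and threshold are our own placeholders for illustration, not anything Facebook has published:

```python
# Hypothetical sketch of signal-based misinformation scoring.
# Signal names, weights and threshold are invented for illustration;
# Facebook has not published its actual model.

def misinformation_score(false_reports: int, debunk_links: int,
                         total_shares: int) -> float:
    """Combine crude crowd signals into a 0-to-1 'likely misinformation' score."""
    if total_shares == 0:
        return 0.0
    report_rate = false_reports / total_shares   # users reporting the link as false
    debunk_rate = debunk_links / total_shares    # replies linking to e.g. Snopes
    # Arbitrary weights; a real system would learn these from labelled data.
    return min(1.0, 0.6 * report_rate + 0.4 * debunk_rate)

def news_feed_action(score: float, threshold: float = 0.5) -> str:
    """Penalize, don't delete: high scorers are just shown to fewer people."""
    return "reduce distribution" if score >= threshold else "no action"
```

Note the design choice embedded in Zuckerberg’s wording: nothing is removed. A high score merely buries the post, the same treatment clickbait and spam get.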
But wait, weren’t they already supposed to be doing this?
In recent weeks, we’ve heard various reports that Facebook is about to add, or has already added, a “fake news” option to its post-reporting tool. But as some readers here have pointed out, this option has actually been available for some time now.
Indeed, a January 2016 post on Facebook’s “Newsroom” page states:
We’ve heard from people that they want to see fewer stories that are hoaxes, or misleading news. Today’s update to News Feed reduces the distribution of posts that people have reported as hoaxes and adds an annotation to posts that have received many of these types of reports to warn others on Facebook.
According to that post from a year ago, stories reported as hoaxes were supposed to carry a visible warning annotation in the News Feed.
Are we alone in never, ever having seen such an annotation on any story on Facebook?
Either Facebook dropped the idea, or it’s just never caught on.
In that same post, Facebook stated:
To reduce the number of these types of posts, News Feed will take into account when many people flag a post as false. News Feed will also take into account when many people choose to delete posts. This means a post with a link to an article that many people have reported as a hoax or chosen to delete will get reduced distribution in News Feed. This update will apply to posts including links, photos, videos and status updates. [Emphasis ours]
Assuming that Facebook wasn’t just having us on (see previous remarks re. non-existent annotations on posts), this could mean something interesting for the future of fake news on the world’s largest social network.
It doesn’t mean that fake news stories will be removed; it means they’ll be downranked by the News Feed algorithm…and even then only if they’re flagged by “many people”. How many is “many”? That’s not specified. On a site with some 1.15 billion daily active users, “many” could be quite a lot.
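If Facebook really works this way, a toy version of the logic might look like the sketch below; the “many” threshold and the penalty multiplier are our guesses, since Facebook specifies neither:

```python
# Hypothetical downranking, per Facebook's description: posts flagged as
# hoaxes (or deleted by "many people") get reduced distribution, not removal.
# The threshold and penalty values are placeholders; Facebook gives no numbers.

def adjusted_feed_score(base_score: float, hoax_flags: int,
                        deletions: int, many: int = 1_000,
                        penalty: float = 0.2) -> float:
    """Scale a post's ranking score down once flags plus deletions pass 'many'."""
    if hoax_flags + deletions >= many:
        return base_score * penalty   # still in the feed, just buried
    return base_score
```

Whatever the real numbers are, the shape of the problem is clear: set “many” too low and legitimate stories get buried by coordinated flagging; set it too high and hoaxes sail through.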
And let’s face it: with the sheer volume of news stories, real and fake, shared on Facebook every day, how many million fact-checkers would it take to stem the flow of fake news into those billion-plus news feeds?
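Some back-of-the-envelope arithmetic makes the point. Every figure below is an assumption we’ve picked purely for illustration, not a Facebook statistic:

```python
# Back-of-the-envelope only: all inputs are assumed placeholder figures.

shares_per_day = 1_000_000_000      # assume ~1 billion link shares a day
shares_per_story = 1_000            # assume each story is shared 1,000 times
checks_per_reviewer = 20            # assume one reviewer vets 20 stories a day

distinct_stories = shares_per_day // shares_per_story        # 1,000,000
reviewers_needed = distinct_stories // checks_per_reviewer   # 50,000

print(f"{distinct_stories:,} distinct stories/day "
      f"-> {reviewers_needed:,} full-time reviewers")
```

Even with those generous assumptions (perfect deduplication, tireless reviewers), you’d need a small city of fact-checkers; check every share instead of every distinct story and the figure balloons to 50 million.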
It’s been suggested that rather than trying to tackle all the fake news, Facebook focus on stories that make it all the way to the “Trending” section…but that leaves an awful lot of rubbish floating around the average user’s News Feed.
So despite the noises Facebook has made about taking a bit more responsibility for the fake news spreading on its platform, it looks like we’ll be stuck with this problem for some time to come.