Nov 14 2017

Fact-checking on Facebook

Last year Facebook announced that it was partnering with several outside news agencies (the Associated Press, Snopes, ABC News, PolitiFact, and FactCheck.org) to fact-check popular news articles and then apply a warning label to those articles on Facebook. How is that effort working out?

According to a recent survey, not so well. Yale researchers Rand and Pennycook found only tiny effects overall, and possibly a net negative effect from the warning labels. Some people simply ignore the labels. Perhaps more significant, however, is that fake news articles the fact-checkers missed were more likely to be believed precisely because they lacked the warning label. The fact-checkers could not possibly keep up with all the fake news, so they were overwhelmed, and most of the dubious content not only made it through the filter but benefited from a false implication of legitimacy.

Further, the Guardian reports that this arrangement between Facebook and these news outlets compromises the outlets' ability to act as proper watchdogs on Facebook itself. If their journalists are being paid by Facebook to fact-check, then they have a conflict of interest when reporting on how Facebook is doing. The conflict is exacerbated by the fact that news organizations are hard up for revenue, and could really use the extra income from Facebook.

So it seems that the fact-checking efforts of Facebook were insufficient to have any real benefit, and may have even backfired. Warning labels on dubious news articles may be the wrong approach, because it is simply too easy to foil this protection by overwhelming the system. You could even deliberately flood Facebook with outrageously fake news stories to serve as flak and provide cover for the propaganda you really want to get through. In the end the propaganda would be even more effective.

The inherent problem seems to stem from the difference between a pre-publication editorial filter and a post-publication filter. Traditional journalism has editors and standards, at least in theory, that require vetting and fact-checking prior to a story being published. Outlets had an incentive to provide quality control in order to protect their reputation.

Of course, tabloids also have a long history. They take a different strategy, abandoning any pretense of journalistic integrity and simply spreading outrageous rumors or fabricated “infotainment.” At least it was relatively easy to tell the difference between a mainstream news outlet and a tabloid rag, although there is more of a spectrum now, with the lines blurred in the middle.

No one would seriously claim that this system was perfect. News organizations have their editorial biases, and they had a lot of control over what became news. Biases tended to average out over many outlets, however. The big concern was over consolidation in the media industry, giving too much power to too few corporations.

Social media has now upended this system. There is now, effectively, no pre-publication editorial filter. The infrastructure necessary to own and operate a news outlet is negligible, and social media creates a fairly level playing field. It is an interesting giant social experiment, and I don’t think we fully know the results.

What this means is that ideas spread through social media mostly according to their appeal, rather than due to any executive decisions made by gatekeepers. There are still power brokers – people who have managed to build a popular site and have the ability to dramatically increase the spread of a particular news item. That, now, is the name of the game – clicks, followers, and likes. This equals power to spread the kind of memes and news items that will generate more clicks, followers, and likes.

The free-market incentive, therefore, favors click-bait, not necessarily vetted quality news. Quality is still a factor, and will earn an article a certain number of clicks. My perception is that there are multiple layers of information on social media. There are subcultures that will promote and spread items that appeal to them. Items may appeal because they are high-quality, or because they are genuinely entertaining. Or they may appeal because they cater to a particular echo chamber or ideology.

So, if you love science, you can find quality outlets for science news and analysis. Within these subcommunities, quality may actually be a benefit and the cream does rise to the top.

But sitting on top of these relatively small subcommunities is the massive general populace, which rewards memes and clickbait. That is the realm of fake news and cat videos – entertaining fluff and outrageous tabloid nonsense. This realm is also easily exploited by those with an agenda wishing to spread propaganda – click-bait with a purpose.

Facebook, as the major medium for this layer of fake news, now faces a dilemma. How can and should they deal with it? The outsourced fact-checking strategy is, if the recent survey is accurate, a relative failure. So now what?

I feel we can do better than to just throw up our hands and let this new system play itself out. Sometimes market forces lead to short-term advantages but long-term doom. Can our democracy function without a well-informed electorate? Can our electorate be well-informed in an age of fake news? The entire situation is made worse by the fact that the very concept of fake news is used to further spread propaganda, to delegitimize actual journalism, and to dismiss any inconvenient facts.

Can we properly function as a society if we don’t at least have a shared understanding of reality, at least to the point that there are some basic facts we can agree on? Recent history does not fill me with confidence.

I don’t have the solution, but I do think that the large social media outlets should take the problem seriously and continue to experiment. Overall I think we need to find the proper balance between democracy of information, transparency, and quality control. Right now the balance has shifted all the way toward democracy, with a massive sacrifice of transparency and quality control. I don’t think this is sustainable.

There are, of course, things we can do as individuals – such as supporting serious journalism, and not spreading click-bait online. Everyone needs to be more skeptical, and to vet news items more carefully, especially before spreading them to others. But this is a band-aid. This is like addressing the obesity crisis by telling everyone to eat less and exercise.

We need systemic change. It’s an interesting problem, but there are certainly ways to at least improve the situation.

8 Responses to “Fact-checking on Facebook”

  1. mumadadd on 14 Nov 2017 at 8:27 am

    FB tried an experiment recently where, for some users, they promoted comments containing the word ‘fake’. This apparently backfired — users would see the first comment under clearly legitimate stories decrying that story as fake.

    http://www.bbc.co.uk/news/technology-41900877

  2. edwardBe on 14 Nov 2017 at 10:44 am

    Here’s the latest idiocy from Facebook: “Facebook is testing a new method to combat revenge porn in Australia, the Australia Broadcasting Corporation reports. The strategy entails uploading your nude photos or videos to Messenger in order to help Facebook tag it as non-consensual explicit media.”

    Gee, there’s absolutely no way that could backfire or be compromised by disgruntled employees, is there? I don’t suppose those photos would ever be circulated among Facebook employees and their friends, would they?

    I personally don’t have any nude photos or videos to upload, (boy, are my friends glad!) but it seems to me that the easiest way to protect personal nude content is not to produce it, and barring that, only share it with people you trust, knowing that the people you trust today may not be the people you trust tomorrow.

    It amazes me how meddlesome Facebook is. It is another example of the “We know what’s best for everybody” mentality that seems all too common among the heavy hitters of alternative media like Facebook, Twitter and of course, Google. They are scrambling to correct the inherent failings of the monsters they have created and they make it worse every time.

  3. wellerpond on 14 Nov 2017 at 11:48 am

    What I don’t understand is why fb and others aren’t scrambling to take the lead on reintroducing integrity to information distribution. I realize they make a bazillion dollars when the fake stuff is flying around. But first they denied the allegations, then they dragged their feet on a solution. What company wouldn’t want to be the one that eliminates foreign meddling in elections?

    Given the enormous resources of google and fb, I would think they’d see it as an opportunity to become the world’s most respected source for information. Even a small dent would make a difference…reintroduce the idea of independent verification. Be more transparent about where your facts are coming from.

    There are millions of people who do want their information based on facts. Why is no company trying to make money off of that?

  4. Art Eternal on 14 Nov 2017 at 12:35 pm

    Yes, cat videos exist as entertainment. There are also raccoon videos. These are mood altering videos compared to god-awful reality news.

  5. kasyx111 on 14 Nov 2017 at 1:26 pm

    Facebook isn’t good at lots of things. They aren’t good at identifying fake profiles. They aren’t good at keeping our “less-than-tech-savvy” safe from scammers. They aren’t good at educating people about their own product. Their algorithms keep me from seeing what my friends post and vice versa. Their reporting system is gutless; after a long delay you receive a “thank you” message for every report. They have become the new home for actual “snuff films,” which were only an urban legend in the 90s. They cannot keep fake business pages from selling fake puppies or parrot eggs. They can’t even keep users from posting porn. If you look up their board of directors, all of them have real and fake profiles on Facebook. They don’t even protect their own board members’ identities or pictures.

    And with all that…. we expect them to understand news stories and provide a nice balanced reporting stream? Let’s not be foolish. At 2 billion profiles, Facebook is under water when it comes to security and integrity of their product.

  6. Nitpicking on 14 Nov 2017 at 9:53 pm

    “I personally don’t have any nude photos or videos to upload, (boy, are my friends glad!) but it seems to me that the easiest way to protect personal nude content is not to produce it, and barring that, only share it with people you trust, knowing that the people you trust today may not be the people you trust tomorrow.”

    There’s also “not being ashamed of them.”

  7. Art Eternal on 15 Nov 2017 at 1:25 pm

    Facebook and Twitter have legal investment ties with Russian financial institutions. Yuri Milner, the Russian billionaire, has a controlling interest in these institutions. Why complain about false advertising on Facebook? Don’t investments influence politics?

  8. MaryM on 21 Nov 2017 at 9:41 am

    Yeah, it’s a tough problem. And I was just hearing on your podcast that the lawsuit–btw, yay!–included a claim of ScienceBasedMedicine being “fake news”. This is a thing I’m seeing from cranks all the time now–stuff they disagree with they call fake news.

    And if you get a curator person who agrees with them on GMOs, for example, legit science could be deemed fake. I mean, jeez–Naomi Oreskes is currently making a fool of herself on GMO issues on twitter and she’ll get cred.

    I guess that makes our job more crucial–acting as a counter-weight to the nonsense, being a source that people can find if they do bother to dig any deeper on some derp that they find.
