Oct 22 2018

AI and Fake News

In testimony before Congress, Mark Zuckerberg predicted that artificially intelligent algorithms able to filter out “fake news” will be available in 5-10 years. In a recent NYT editorial, neuroscientist Gary Marcus and computer scientist Ernest Davis argue that this prediction is overly optimistic. I think both sides have a point, so let’s dissect their arguments a bit.

Fake news is the recently popular term for deliberate and often strategic propaganda masquerading as real news. There is, of course, a continuum from rigorous and fair journalism to 100% fakery, with lots of biased, ideological, and simply lazy reporting along the way. This has always existed, but seems to be on the rise due to the ease of spread through social media. Accusations of fake news have also been on the rise, as a strategy for dismissing any news that is unwanted or inconvenient.

Obviously this is a deep social issue that will not be solved by any computer algorithm. I would argue the only real solution is to foster a well-educated electorate with the critical thinking skills to sort out real from fake news. That is a worthwhile and long term goal, but even if successful there will always be a problem with fake news, regardless of the medium of its spread.

The real context here is the role that social media serves in spreading fake news. The reason that Zuckerberg, the creator of Facebook, is involved is that Facebook and other social media platforms have become a main source of news for many people. Further, they represent a new model for pairing people with the news they want.

The traditional model is to build respected news outlets that cultivate a reputation for quality and have a heavy editorial policy that filters the news. People then go to news outlets they respect, and get whatever news is fed to them, perhaps divided into sections of interest. But even this model has always been corrupted by ideologically biased editorial policy, and by sensationalism. You can attract eyes not only by earning respect through quality reporting, but also through sensational headlines, or by catering to preexisting world-views.

In decades past we decried the plummeting standards of daytime television, the rise of tabloids, and the creep of “infotainment.” What we have now is not fundamentally different – just a continuation of these trends, and the rise of a new platform in social media.

But there is something fundamentally new about social media – they have largely replaced editorial filters with computer algorithms that curate news for users. These algorithms choose what news is fed to you. So the real question is not whether these platforms should filter and curate the news (they already do), but how? They have largely just been giving people what they think they want. This has created automated feedback loops allowing the most sensational news to “go viral” and cocooning people in a virtual echo chamber of their preexisting ideas. Even worse, someone with a mild interest in a fringe topic, like conspiracy theories, can be led down a virtual rabbit hole of progressively radicalized opinions.
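To make the point concrete, here is a deliberately toy sketch (not any platform’s actual algorithm – the items, scores, and weights are all invented) showing how a feed that ranks purely by predicted engagement will surface the sensational item, while blending in a quality signal changes which item rises to the top:

```python
# Toy illustration: engagement-only ranking amplifies sensational content;
# mixing in an accuracy signal changes the ordering. All values are invented.

items = [
    {"title": "Sober policy analysis",        "engagement": 0.2, "accuracy": 0.9},
    {"title": "SHOCKING conspiracy exposed!", "engagement": 0.9, "accuracy": 0.1},
]

def rank(items, accuracy_weight=0.0):
    """Sort items by a score blending engagement with accuracy."""
    def score(item):
        return ((1 - accuracy_weight) * item["engagement"]
                + accuracy_weight * item["accuracy"])
    return sorted(items, key=score, reverse=True)

# With accuracy_weight=0.0 the sensational item ranks first;
# with accuracy_weight=0.7 the accurate item ranks first.
print(rank(items)[0]["title"])
print(rank(items, accuracy_weight=0.7)[0]["title"])
```

The feedback loop comes from the fact that whatever the algorithm promotes gets more clicks, which raises its predicted engagement further – a self-reinforcing cycle this toy example does not model but that follows directly from the ranking rule.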

So what Zuckerberg is really talking about is changing these existing algorithms, so that they filter out obviously fake news rather than promoting it, and value quality more. Ideally, a news item that is true and fair would be spread more by the algorithms than one that is made-up BS specifically for the purpose of propagandizing public opinion (whether by a foreign hostile power to affect our elections or for some other purpose).

Marcus and Davis are essentially arguing that the software technology is not close to the point that it can reliably sort out real news from fake news. They use as an example a rather challenging piece of news that uses real facts to imply a causation where one does not exist (also by leaving out important facts such as the timeline of events).

I think their argument amounts to a nirvana fallacy, however. I have no reason to disagree with their assessment that AI will not be able to sort out subtle deception in news reporting in 5-10 years, but I also have no reason to think that this is what Zuckerberg was specifically claiming. There is a lot of room between what we have now and sophisticated AI able to sniff out the most subtle fake news.

First, there is a lot of blatantly fake news – utter dreck that was spread virally through Facebook and other platforms. Even if Facebook simply kept easily detected lies from spreading rapidly and being monetized, that would be a huge help. As the AI software improves, it will become more and more reliable at detecting less and less blatant lies and deception.

At the same time it is likely that those wishing to spread propaganda will learn how to get around the algorithms for detecting fake news. It will be an arms race. But if history is a guide, the dedicated programmers will be able to keep one step ahead and at least minimize the spread of fake news. (Google has been able to do this by constantly advancing and tweaking its search algorithms to frustrate attempts to game them.)

Further, I don’t think there is going to be any one solution to such a complex problem. AI fake news detection will be one piece of a complex solution. Humans will have to be in the loop at some point as well. I envision a system whereby AI software filters out blatant falsehoods, but this is combined with some kind of crowdsourcing algorithms (like what Google uses) and a final human review process where necessary.
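The layered system described above can be sketched as a simple triage function. This is purely hypothetical – the thresholds, score ranges, and the very idea of a single classifier score are assumptions for illustration, not any real platform’s design:

```python
# Hypothetical triage sketch: an AI classifier handles blatant cases,
# crowd signals flag the middle ground, and uncertain or heavily
# flagged items escalate to human reviewers. All thresholds invented.

def triage(item, classifier_score, crowd_flags):
    """Return a disposition for a news item.

    classifier_score: 0.0 (almost certainly fake) to 1.0 (almost
                      certainly genuine), from a hypothetical AI model.
    crowd_flags:      number of user reports or fact-check flags.
    """
    if classifier_score < 0.1:
        return "remove"          # blatant falsehood: filter automatically
    if classifier_score > 0.9 and crowd_flags == 0:
        return "allow"           # clearly fine: no action needed
    if crowd_flags > 10:
        return "human_review"    # heavily flagged: escalate to a person
    return "reduce_spread"       # ambiguous middle ground: demote, don't delete

print(triage("some article", 0.05, 0))  # blatant case handled by AI alone
```

The design point is that the expensive resource (human reviewers) is reserved for the contested middle, while the cheap automated layers handle the easy extremes.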

The result can also take various forms – removing the item entirely from the platform, reducing its spread to users, not allowing monetization, or simply giving warnings or showing ratings of reliability. Individual serial abusers can also be banned or severely restricted.

At the same time, the big platforms like Facebook need to have their own watchdogs, with a fair degree of transparency, so that they don’t abuse their platforms to promote their own hidden agenda.

None of this will necessarily impinge upon free speech or the free exchange of ideas. First, anyone can create their own website and publish whatever they want. There are also plenty of news outlets catering to every possible perspective.

But more importantly, social media news is already curated. It is not “free” in that algorithms are choosing what news to spread based on some criteria. The real question is – what qualities should the algorithms value more (such as sensationalism or accuracy)? Not helping people to spread objective lies or clear fraud is not the same as censorship. It’s quality control.
