Sep 30 2021
YouTube Bans Anti-Vax Videos
Last year YouTube (owned by Google) banned videos spreading misinformation about the COVID vaccines, a policy that has since resulted in the removal of over 130,000 videos. Now the company has announced that it is extending the ban to misinformation about any approved vaccine. The move is sure to provoke strong reactions and opinions on both sides, which I think is reasonable. The decision reflects a genuine dilemma of modern life with no perfect solution, and amounts to a “pick your poison” situation.
The case against big tech companies who control massive social media outlets essentially censoring certain kinds of content is probably obvious. That puts a lot of power into the hands of a few companies. It also goes against the principle of a free marketplace of ideas. In a free and open society people enjoy freedom of speech and the right to express their ideas, even (and especially) if they are unpopular. There is also a somewhat valid slippery slope argument to make – once the will and mechanisms are put into place to censor clearly bad content, mission creep can slowly encroach on more and more opinions.
There is also, however, a strong case to be made for this kind of censorship. There has never been a time in our past when anyone could essentially claim the right to a massive megaphone capable of reaching most of the planet. Our approach to free speech may need to be tweaked to account for this new reality. Further, no one actually has the right to speech on any social media platform – these are not government sites nor are they owned by the public. They are private companies that have the right to do anything they wish. The public, in turn, has the power to “vote with their dollars” and choose not to use or support any platform whose policies they don’t like. So the free market is still in operation.
Further, free speech has never been absolute (no right or principle ever is). Free speech does not give one the right to commit fraud or slander, to sexually exploit children, or to incite immediate specific violence. So we have always recognized that free speech has limits, set mainly by the rights of other people not to be directly harmed by your speech. The real question here, therefore, is not whether tech companies have the right to censor content on their platforms (they do), or whether it is ethically or legally acceptable to ever censor or punish harmful speech (it is). The real question is, should demonstrable misinformation harmful to public health be added to the list of regulated speech?
As an aside, the tech company question is also not a First Amendment issue. The First Amendment only restrains the government from censoring or punishing speech, not private companies or citizens. So no, YouTube is not violating anyone’s First Amendment rights. But we certainly can extend the question to the legal realm, which would make it a First Amendment issue. Can and should someone be sued over public health misinformation, and should the government play a role in policing such misinformation? With or without the government involved, as a society we need to decide if public health misinformation qualifies as harmful speech on the level of other harmful speech that can be regulated. I think there is a strong case to be made that it should.
People can believe any nonsense they wish, but I don’t think the principle of free speech demands that everyone has a right to spread harmful misinformation. We tend to err on the side of free speech with such questions, which is reasonable, but social media no longer gives us that option. We have to face the question directly and make a decision.
Another principle is relevant here. The sentiment has been variously repeated and attributed, but the first documented version is from Bernard Baruch in 1946: “Every man has the right to an opinion but no man has a right to be wrong in his facts. Nor, above all, to persist in errors as to facts.” While we respect free speech, we also have to respect reality and objective facts. Again, everyone has a right to be wrong, but that does not necessarily have to translate into an unlimited right to spread their error into the public discourse.
Traditionally public misinformation was kept in check by gatekeepers – editors, publications, institutions, and producers. Megaphones were mostly given to experts who had credentials and whose opinions could be vetted by other experts. This was not perfect, and outlets increasingly favored sensational opinions over quality. But at least there was some filter, and reliable facts and analysis had a distinct advantage over poorly informed nonsense. Social media took away all those filters. There are many positives to the democratization of content creation and lowering the bar for mass communication. I exploit those advantages myself as much as I can (such as with this blog). But the net effect, arguably, has been negative.
Part of the problem is that social media gives con artists and psychopaths unfettered access to public platforms, often anonymously. A free market very much depends on everyone playing fair, at least to some extent. With players acting in extreme bad faith, and weaponizing open platforms to cause harm or advance their personal interests, the free market of ideas is essentially broken. We can add to that social media algorithms that exploit human psychology to maximize engagement, leading to automated radicalization. Further, one could argue that social media can lead to a decrease in effective communication, by making it easy (even the default) to be isolated in ideological echo chambers.
All these factors feed off each other. People become radicalized, or simply grossly misinformed, by misinformation and social media algorithms, and then they become a conduit for further misinformation. Without quality control, all facts become mere opinion, and everyone is content to live in their own reality (or, far more likely, one carefully crafted and curated for them to make them into a drone for some marketing campaign, political party, or ideology). QAnon is a great example of this, where all of these factors were leveraged to convince a substantial portion of society that blatantly absurd claims were true. This turned into a credible threat to democracy, one that is not over.
How, then, do we get to some new equilibrium point? How do we live with social media without it destroying democracy, public discourse, and public health? I think banning demonstrable misinformation is a reasonable measure. But we can also have a conversation about how best to do this. Perhaps we need public independent panels, including experts, scholars, and public representatives, to make such decisions, with a relatively high bar for what counts as demonstrable misinformation. Decisions should be transparent and reviewable. There are things that we can collectively agree upon, and in specific cases we may need to limit not the right to disagree with reality, but the right to publicly spread that disagreement with facts that are demonstrably wrong.