Feb 23 2023

Should Tech Companies Be Liable for Content?

The Supreme Court (SCOTUS) is hearing a case that could have profound effects on social media – is Google liable for a terrorist killing? The family of Nohemi Gonzalez is suing Google because she was shot by an Islamic terrorist in 2015, and the family alleges this act was abetted by Google recommending videos encouraging such acts. Google argues it is protected by Section 230 of the Communications Decency Act of 1996.

In cases like this there is always legal complexity, and it’s not my intention to do a legal analysis of the case. I just want to focus on Section 230. There are some misconceptions about what it says and does not say, so here is the actual wording:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The law was passed in 1996, and was meant to protect website owners from liability stemming from what third parties post on their websites. It is not, as some pundits state at times, about protecting “passive” platforms. I have also heard some commenters frame it as similar to protecting phone companies from liability for the content of phone conversations. But Section 230 specifically refers to an “interactive computer service” – so, not passive.

But at the same time, this law is more than a quarter of a century old. It is literally “web 1.0” and was written prior to the social media revolution. Google itself was created in 1998, two years after this law passed. The first blog was created in 1994, but blogs did not become popular until years later. WordPress, the most popular blog platform, was created in 2003. Facebook was created in 2004, and Twitter in 2006.

The law was meant to protect free speech, although it is important to recognize that it is not a First Amendment issue, because government regulation of speech is not involved. The law also has exceptions for copyright violations and violations of federal criminal law – so you can’t use Section 230 to defend a criminal act. A later law (FOSTA-SESTA, passed in 2018) carved out further exceptions for content dealing with sex work and sex trafficking. This law made sense in 1996, to support the emerging internet ecosystem and to protect tech companies, which were providing platforms for third-party content, from endless lawsuits. Tech companies also retain the ability to censor content on their own platforms – to remove content that violates their user agreements.

Arguably, however, the most significant change to social media since this law was passed is the development of algorithms that promote certain types of content over others. Social media platforms are now anything but passive providers of space for third-party content. They play an active role in promoting content. They aren’t exactly editors who decide what to publish or not publish, but the effect of their algorithms can be almost the same. Content can be invisible or viral depending on their algorithms. This is very much not like a phone company or internet service provider. Imagine if, after a phone call, an operator got on the line and said, “Hey, how would you like to call this person?” – trying to jack up your long-distance bill (remember those?) or promoting a phone chat line. What if that chat line, specifically promoted by the phone company, provided a dubious service, or was meant to indoctrinate people into a terrorist organization? Is the phone company liable then?

Social media tech giants are not just passive platforms. They have a tremendous amount of control over not just which content is allowed on their platforms, but how it is promoted and spread. At the very least we need to rethink whether a blanket shield, not only for the content on their platforms but for the behavior of the companies themselves, is appropriate.

The biggest legislative challenge to Section 230 came from Donald Trump. He wanted to weaken the protections, based on the idea that tech companies were favoring liberal content over conservative content, but he made little headway. Then, when he was banned from Twitter, he revived these efforts, trying to bypass Congress by urging regulators to more narrowly define Section 230 protections.

So now here we are at SCOTUS. Lawyers for the Gonzalez family argue that YouTube (owned by Google) played an active role in promoting content that arguably led to the killing of Nohemi. Google argues it is protected under Section 230. Again, there are a lot of legal layers here, and I am no expert. In oral arguments so far, for example, many Supreme Court Justices seemed very worried about the tsunami of lawsuits that a decision in favor of Gonzalez would create. Some also argue that modifying Section 230 should be done by Congress, not SCOTUS. SCOTUS can also decide the case narrowly or broadly – rule on a technicality, or make sweeping statements about internet regulation.

Perhaps the best outcome is a deliberately narrow decision on this specific case by SCOTUS, with Congress then taking up the issue and crafting an update to Section 230 that better reflects current reality. There is a lot of room for nuance, and for balancing various concerns. My overall view is that internet companies should not be responsible for third-party content (beyond the usual basic legal restrictions) but should be responsible for their own algorithms and behavior. I leave it to legislators to craft the optimal balance of these concerns.

But unfortunately, Congress is quite dysfunctional these days. This should be a technocratic endeavor, with different factions coming to a compromise that represents a reasonable balance of competing priorities. I would predict, however, that any legislative move on Section 230 will instead be bogged down in extreme political rhetoric. I hope I’m wrong.
