Recently there has been a lot of talk about “Section 230”. People want it enforced, repealed, or reformed. Which is kind of ironic, since most people talking about it have no clue what it is.
So what is it?
It’s actually a very short and easy-to-understand law, introduced as part of the Communications Decency Act in 1996. You can read it here. In short, it protects websites that include user-generated content from being held liable (in most cases; there are a handful of exceptions, such as copyright infringement and sex trafficking) for content they host that wasn’t generated by them.
That’s it?
Pretty much, yeah.
Don’t they have to choose between being a publisher or a platform?
Nope. It explicitly says that they are not liable as a publisher for things created by their users.
What if they moderate content?
Uh, the bill specifically mentions that. You didn’t read the link I gave you, did you?
Well it was long and boring.
It’s not that long. Just read section (c). The key sentence, (c)(1), is only 26 words long.
Look I’m reading your dumb blog, will you just answer my question?
Fine….
You can read up on its history here…
What did I just tell you?
I know, I know. It was a response to a lawsuit against Prodigy Communications…
What is Prodigy Communications?
…
Facebook in the early ’90s.
Ah ok.
Anyway, they were sued because someone didn’t like what another user posted on one of their message boards. Courts had previously ruled that online services weren’t liable for things posted by their subscribers, just as bookstores weren’t liable for the contents of the books they sold. But when Prodigy was sued by Stratton Oakmont, Inc., the court ruled that because Prodigy moderated its message boards, it was effectively a publisher and thus liable for any content it let through.
So they were a publisher, not a platform?
Exactly. The publisher-vs-platform distinction dates from before Section 230 was enacted. A lot of people had a problem with that ruling, including the authors of the Communications Decency Act. By equating moderation with publishing, it created a huge disincentive for online services to moderate user-generated content. So they added language to the law explicitly stating that moderating content does not make you the publisher.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
I told you, that stuff is boring.
Sorry.
So if we want social networks to stop moderating content all we need to do is repeal Section 230?
Not necessarily. More likely they would just moderate more. If they were liable for anything you posted, they would have to proactively delete all potentially problematic content.
Or they could moderate nothing and not be liable for anything!
No one wants that.
Yes I do!
No, you really don’t.
Go to your public Gmail account and check your spam folder. How much of that is stuff you are really interested in? Your email is only usable because automated software filters out spam and abuse. Without moderation, your social network feed would look the same. The interesting content would be drowned out by ads for male enhancement pills or pleas from Nigerian princes for financial help.
Unmoderated, the Internet is full of junk. The difficult thing about building social networks is not handling the posts and friend connections and all that. I mean, that part isn’t easy, but it’s not the difficult part. The difficult part is helping the user find the interesting needle in the haystack full of fecal matter.
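To make that concrete, here is a deliberately naive sketch of the kind of filtering every usable inbox or feed depends on. The phrases, weights, and threshold are all made up for illustration; real spam filters use trained classifiers, sender reputation, rate limits, and user reports. But the principle is the same: without some filter, junk wins by sheer volume.

```python
# A toy content filter, for illustration only. The signal list and
# threshold are hypothetical; real systems learn these from data.

SPAM_SIGNALS = {
    "male enhancement": 5,
    "nigerian prince": 5,
    "wire transfer": 3,
    "act now": 2,
    "free money": 3,
}

SPAM_THRESHOLD = 4  # hypothetical cutoff


def spam_score(message: str) -> int:
    """Sum the weights of every spam signal found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in SPAM_SIGNALS.items() if phrase in text)


def filter_feed(messages: list[str]) -> list[str]:
    """Keep only messages scoring below the spam threshold."""
    return [m for m in messages if spam_score(m) < SPAM_THRESHOLD]


if __name__ == "__main__":
    feed = [
        "Lunch on Saturday?",
        "ACT NOW: a Nigerian prince needs a wire transfer!",
        "FREE MONEY and male enhancement pills, act now!",
    ]
    print(filter_feed(feed))  # only the lunch invite survives
```

Even this toy version makes the trade-off obvious: set the threshold too strict and you throw away real messages, set it too loose and the junk gets through, and there is no setting that satisfies everyone.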
Ok, but what if we just limited how they can moderate? Like not allowing them to censor opinions based on politics?
That’s not going to help very much either. What is considered “political” can be defined so broadly that it would prevent them from moderating anything. Remember, last year the question of whether or not it was a good idea to wear a mask during a global pandemic of a respiratory disease was one of the hottest political topics out there. There is a huge amount of overlap between “political opinions” and the things we legitimately want to censor.
One big example of “social media censorship” was Twitter’s decision to permanently ban Milo Yiannopoulos. That decision was not made based on Milo’s politics. It was made because he was organizing a harassment campaign against Leslie Jones’s Twitter account. Yet there were political undertones. He was part of a movement complaining that the new Ghostbusters reboot she starred in was an example of political correctness gone too far, and that she was an example of feminists and liberals forcing… I don’t know, I honestly can’t figure out why they bothered to even watch a movie they knew they wouldn’t like.
Most of the complaints about “conservative censorship” aren’t that conservatives are being moderated because of their views. It’s that sites like Twitter and Facebook are inconsistent in their moderation: when a conservative makes an off-color joke he gets banned, or when he says something factually incorrect a correction is added, but when a liberal does the same thing they take no action.
Unfortunately, this is always going to be the case when you deal with subjective standards like these. There is no way to objectively measure how offensive something is, and any attempt to do so will only lead to people tiptoeing as close to the line as they can get and trying to exploit a loophole somewhere.
So you are for big tech censorship?
As a general rule, no. It almost always backfires. See the Milo example above. Getting banned just gave him publicity when he was suddenly made a martyr for conservative censorship.
Publicity he pissed away a few months later, after video came out of him defending pedophiles (that’s the larger bump in the Google Trends graph), but still far more than he would have had otherwise. The Streisand Effect is real.
Another problem is that by dismissing “improper” opinions out of hand, you aren’t really convincing the people who hold them to believe something else. You are just causing them to leave polite society for a realm where their opinions are accepted, whether that be Gab, Parler, or 8kun. There they will live happily ever after in their own little echo chamber.
And finally, there is the fact that someday you will hold an opinion that is not considered “proper.” The Overton window is constantly shifting, and one day it won’t be looking out over your personal convictions. When that happens, you will hope for societal structures and norms that tolerate you.
So what can we do about social network censorship?
Don’t use them. Or at least don’t rely on them. Twitter should not be our “public square”. Facebook should not be our only connection to the outside world. In fact, the notion that they are the primary way most people get their news today is simply wrong. They have their uses, sure. But recognize them for what they are: channels controlled by someone else. They shouldn’t be your only, or even primary, mechanism for making your voice heard.
That’s one reason I restarted this blog. I wanted a way to express my thoughts in a medium I control, and I would encourage others to do the same. No, it won’t get the same level of traffic as Twitter, but that’s hardly the point. It is a channel where I control the content.