In junior high, I discovered that my school library had declined to carry a book. (I don’t remember the title.) Incensed, I went to the Headmaster (yes, I went to that kind of school) to confront him about censorship. To his credit, he didn’t respond, “Get the hell out of my office!” Instead, he said, “Brad, if the KKK or the Neo-Nazi Party sent us their literature to add to our library collections, do you think we should do it?” I didn’t.
What the Headmaster succinctly conveyed was that the school library had a duty to define and police its standards for inclusion. This is not the same thing as censorship. Nor is it a violation of the KKK’s or the Neo-Nazis’ right to free speech. They have that right, but to quote a magnificent speech by Sacha Baron Cohen, freedom of speech is not freedom of reach.
Big Tech companies that don’t want to spend money moderating the content that goes onto their platforms love to invoke the First Amendment and talk about free speech. The problem is that this isn’t what the First Amendment is about:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
The Amendment only prevents Congress from interfering with free speech and freedom of the press. It says nothing about the rights or obligations of other entities—like school libraries or Big Tech companies—to decide what they will and won’t carry.
Facebook (now Meta) was founded in 2004. It has had 18 years to scale to the global Leviathan it is today because it has always avoided preemptively checking what goes onto its platform, which would be time-consuming and expensive. Users and advertisers have always been able to post whatever they wanted to Facebook instantly, with no oversight from the company.
Yes, the company employs legions of traumatized moderators who work with algorithms to scour its platforms for hate speech and sex trafficking (the dictionary definition of a low bar), but the ethos has always been “publish everything now, spot check a little… eventually.”
Section 230, passed as part of the 1996 Telecommunications Act (eight years before Facebook was even a glimmer in Mark Zuckerberg’s unblinking eye), protects the company from being treated as the publisher of third-party content. This means that you can’t sue Facebook for what people put on Facebook, no matter how mendacious, dangerous, or disgusting it is. (Taking a serious second look at 230 is probably the only topic on the planet about which the 45th President and I agree.)
Despite this immunity, Facebook is having a rough quarter.
The stock may be inching back up (it dropped precipitously after the company confessed that the moderate privacy-protecting changes Apple made in iOS would cost Facebook $10 billion in ad revenue this year), but last week came two new blows worth tracking because they may open the company to financial consequences for its refusal to moderate content.
The First Blow: Two California legislators introduced a bill that would make social media companies liable when children become addicted to their platforms. Facebook is the obvious focus, although Snap and TikTok can’t be happy about it.
As usual, Facebook’s response to the bill was an aria of evasion. You’ll see it at the end of this passage from The Wall Street Journal’s coverage:
This week’s California bill was partly inspired by The Wall Street Journal’s 2021 reporting that… Facebook… found that one in eight of its users reported engaging in compulsive use of social media that affected their sleep, work, parenting or relationships, according to internal documents. Company documents obtained through Frances Haugen, a former Facebook employee and whistleblower, also showed the company was aware that its Instagram platform can negatively affect the self-image of teenage girls in particular.
A spokeswoman for Meta said the company disagreed with the characterizations of its research… The company also rolled out new features that aim to encourage thoughtful and age-appropriate Instagram usage, including notifications that remind heavy scrollers to “take a break,” and nudges that encourage teens to explore different types of content if they have been looking at one topic too long, the spokeswoman said.
Telling teens to shift to obsessing about different topics on a highly addictive platform like Instagram is like Altria and R.J. Reynolds saying that a great way to avoid cigarette addiction is to switch to smoking cigars, or like Diageo telling an alcoholic whose favorite drink is gin to switch to Scotch. That’s not how addiction and recovery work.
No matter how damaging the product may be to its users, Facebook will never do anything to reduce time spent with its products. There’s a vast difference between “taking a break”—which translates to “come back later”—and “you need to stop doing this to yourself.”
California has been ahead of the rest of the country on legislation to rein in Big Tech companies with the CCPA and the CPRA (although those were citizen initiatives rather than the work of the legislature). Since the internet doesn’t respect state borders, this new legislation is effectively a national bill that the companies will need to respect outside of California. We can expect Facebook to unleash its army of lobbyists to kill the bill or amend it into impotence.
If the bill does become a law with teeth, then Facebook will find itself instantly embroiled in an endless and very expensive mass of lawsuits by angry parents with damaged kids and big medical bills. From a PR perspective, it’s also not a great look.
By the way, I agree that protecting kids from social media addiction is a good thing, but why stop at kids? What about the rest of us? It’s not like adults are models of self-control when it comes to social media. I certainly am not.
The Second Blow, which got less notice, is from Australia and is potentially a much bigger deal. The country’s competition regulator, the Australian Competition and Consumer Commission, has sued Facebook because the company hasn’t done enough to police the advertising that runs on the platform. Here’s a passage from a separate article in The Wall Street Journal:
“Meta should have been doing more to detect and then remove false or misleading ads on Facebook, to prevent consumers from falling victim to ruthless scammers,” commission Chair Rod Sims said. “The essence of our case is that Meta is responsible for these ads that it publishes on its platform.”
The impetus for the suit is that unscrupulous advertisers have used celebrities’ images without permission to trick users into investing in things like cryptocurrencies. One user lost a whopping $450,000.
The reason this is a big deal is that it opens Facebook to a new form of liability for the ads that run on its platform. The company already struggles with misinformation and disinformation in the content that people post; having to scrutinize advertising as well would come with a hefty price tag that Facebook would rather avoid. It would also put Facebook in conflict with the advertisers that pay its bills.
Like California, Australia is tougher on digital companies than the rest of the world—it recently got Facebook and Google to pay media properties for the content that users see on their platforms—so if this suit prevails in Australia, watch for copycats elsewhere.
Section 230 protects Facebook from the traditional responsibilities of publishers, and Facebook has insisted again and again that it is not a media property. But that does not mean that Facebook has no responsibility for what happens on its platform.
The new California bill and the Australian lawsuit show that the nature of Facebook’s responsibility remains to be determined.
NOTE: If you’d like to get pieces like this delivered right to your inbox (along with other goodies), then please subscribe to my newsletter!