Let’s Not Put the Government in Charge of Moderating Facebook

By Will Oremus

Photo: Alex Wroblewski/Getty

Facebook co-founder Chris Hughes’ New York Times op-ed last week calling for his former company to be broken up sparked responses from ex-Facebook employees (some in favor of the idea, some against it), Democratic presidential candidates (some in favor, some against), and Facebook itself (definitely against). Politico called the question of a Facebook breakup “a new litmus test” for White House aspirants.

Polarizing as it may be, however, dismantling a major American company isn’t the most radical element of Hughes’ proposal — nor the most troubling. Largely overlooked in all the debate has been his call for a U.S. government agency to regulate online speech, not only on Facebook but on all social media. It’s an idea that may hold intuitive appeal for those concerned about the power of social networks to decide what we can and can’t say — until you start to think about how it might actually work, and how to solve the problems created by social media without creating even bigger ones.

Breaking up Facebook might marginally reduce the power that Mark Zuckerberg himself holds, but antitrust action alone won’t solve the problem of companies making the rules for online speech. It merely spreads the problem around. Recognizing that, Hughes suggests an additional remedy: the same government agency in charge of regulating privacy would also “create guidelines for acceptable speech on social media.” While acknowledging that the idea of government censorship “may seem un-American,” he notes that courts have already carved out exceptions to the First Amendment, and suggests that more may be needed to respond to problems such as online harassment and live-streaming violence.

He’s right about one thing: the idea of the government deciding what online speech is acceptable does seem un-American. Yes, there are limits to the types of speech that are protected by the First Amendment (although Hughes’ example of shouting “fire” in a crowded theater isn’t one of them). But as The Verge’s Adi Robertson points out, those same limits already apply equally to online and offline speech. Hughes fails to make the case for why online speech should be subject to extra government scrutiny, let alone be made the province of a special government agency. If the plan is to establish new legal restrictions on speech, the courts seem unlikely to uphold them. If it’s just to set up optional “guidelines,” it’s hard to imagine how those would differ substantially from platforms’ existing policies.

Setting aside the question of whether federal regulation of online speech is constitutional, it’s worth thinking about whether it would even be desirable. There are assuredly flaws in how the major social networks, including Facebook, moderate speech. But the idea that a government agency would necessarily do better is naive.

It’s easy to forget when proposing noble-sounding government interventions that the government is run by people whose view of the problem might run counter to one’s own. While Democrats are urging social media platforms to ban white nationalists, Republicans are pressuring them to stop “censoring” conservative views. It may be obvious to those on the left that hate speech rules on social media should be used to ban the likes of Alex Jones and Milo Yiannopoulos, and they’re frustrated that Facebook and Twitter haven’t acted more decisively. (“Ban the Nazis!” has become a rallying cry for critics of Twitter CEO Jack Dorsey.) But President Trump is more interested in the alleged “hate speech” of Ilhan Omar, the Democratic congresswoman from Minnesota whose statements on Israel have been criticized by some as playing into anti-Semitic tropes.

Anyone who wants a U.S. government agency making new rules for online speech needs to take a moment to think about exactly who would appoint that agency’s leader, and to what political ends they might seek to put it. If the answer is Trump, then leaving content moderation in the hands of private companies might start to look like the lesser of two evils.

It is maddening, of course, to see companies whose chief interests are growth and profit dither and flail when it comes to content moderation. Hughes suggests that, in Facebook’s case, it’s due to a lack of accountability, stemming from the absence of government regulation and Facebook’s dominant market position. (Facebook-owned Instagram, WhatsApp, Messenger, and Facebook itself are all among the world’s most popular social networks, with Facebook the largest of all.)

Realistically, however, decisions about what sorts of speech to tolerate are not as obvious as they seem, and there’s often a tension between consistency and common sense. Rules that seem logical in one context inevitably break down in another, as when Facebook banned the Pulitzer Prize-winning “napalm girl” photo, or developed a patchwork of hate speech rules that ended up protecting white men but not black children.

Before we throw up our hands and tear up the First Amendment, it’s worth considering that the major Internet companies, including Facebook, have actually proven at least somewhat more accountable than Hughes gives them credit for when it comes to regulating online speech. This time last year, Infowars founder and far-right conspiracy theorist Alex Jones was operating unfettered on every major social platform. Now he’s banned by most and under siege on the rest. Yiannopoulos and Laura Loomer have been cast out from both Twitter and Facebook. Those two platforms, along with YouTube, waged a whack-a-mole campaign against videos of the New Zealand massacre in March. The results were far from perfect — but it wasn’t for lack of trying. Harassment and hate speech are still huge problems with serious consequences, but Facebook and Twitter both have large and ever-growing teams devoted to addressing them, and both have genuinely made that work a priority in recent years.

None of this is to defend those platforms from scrutiny, let alone to praise them for their public-spiritedness or foresight. On the contrary, the scrutiny is arguably the only reason they’ve acted. They didn’t want to be in the business of deciding what people can and can’t say, beyond relatively easily identifiable stuff like nudity and spam that visibly pollutes the average user’s experience. It has been pressure from the public and the media, and in some cases their own employees, that has pushed platforms to take these problems more seriously, and to act to defuse the outcry. The outcome is almost always imperfect, and sometimes the platforms break their own rules in the process. But it’s worth noting that this pressure has generally been far more progressive in both intent and effect than the pressure coming from Congress — which, at least until the House changed hands this year, focused on conspiracy theories such as the notion that Twitter is shadowbanning conservatives, or that Facebook has a vendetta against Trump supporters such as Diamond and Silk.

Antitrust and privacy regulation are plausibly bipartisan issues, on which we should be able to trust regulators more than we trust Silicon Valley behemoths. But deciding who can say what online is one realm in which government oversight might be worse than the alternative.