YouTube’s LGBTQ Problem Is a Business-Model Problem

By Will Oremus

YouTube wants to be a platform for self-expression for all kinds of people. It also wants to make piles of money selling and placing ads via automated systems with minimal human review.

It’s having trouble doing both.

This week, a group of LGBTQ video creators sued YouTube and parent company Google in federal court, alleging that the platform has systematically discriminated against their content. Specifically, the lawsuit accuses YouTube of filtering, demonetizing, and otherwise limiting videos that deal with LGBTQ identities, making it hard for their creators to reach a wide audience and make money. The suit alleges violations of free-speech protections, civil rights laws, and other statutes, and seeks class-action status.

The lawsuit cites, for example, videos about gender identity that YouTube demonetized; accounts focused on transgender issues that YouTube suspended; and a news and entertainment show aimed at the LGBTQ community to which YouTube declined to sell ads. At the same time that YouTube was restricting this content, the suit alleges, it was failing to moderate an avalanche of bigoted comments directed at the creators on their own video pages. The suit, which you can read in full here, also cites examples of videos mocking or criticizing LGBTQ people that YouTube appears not to have subjected to the same restrictions.

On the one hand, YouTube and Google bill themselves as LGBTQ-friendly companies. Google is an official sponsor of the San Francisco Pride parade and has campaigned for gay rights around the world. “We’re proud that so many LGBTQ creators have chosen YouTube as a place to share their stories and build community,” YouTube spokesperson Alex Joseph said in a statement.

On the other hand, this is not the first time the LGBTQ community has felt betrayed by the company, and by YouTube in particular. There was an uproar in June when video journalist Carlos Maza, who is gay, accused the platform of siding with right-wing provocateur Steven Crowder, who assailed Maza with homophobic slurs and spurred a harassment campaign against him. (After declining to ban Crowder or remove his videos, YouTube eventually opted to demonetize them.) So what’s going on here?

While the lawsuit portrays the discrimination as intentional on YouTube’s part, you don’t have to assume it’s a conscious effort to see how the platform could be discriminating in systematic ways. YouTube’s content policies and moderation processes are designed in such a way that it is bound to make unfair decisions every day, especially when it comes to topics or identities that are politically contested.

Every social media platform walks a line between permissiveness and moderation. Most aim for a semblance of political neutrality, lest they be accused of bias, while enforcing rules aimed at keeping users safe and their feeds nontoxic. The idea that goals like civility and safety can be entirely disentangled from politics has not fared well, however, in a time of rising white nationalism and social conflict. Companies such as Facebook and Twitter have struggled to strike the right balance, earning condemnation for their failure to rein in hate speech even as they face allegations of anti-conservative bias from Congressional Republicans and the president.

YouTube has an additional problem that stems from the nature of its advertising business. Its ads don’t just crop up at random places in users’ feeds, but are instead tied to specific videos. Because they’re video ads, they’re also costly to produce and to place compared to the display ads that are more common on other platforms. They have more in common with TV commercials than with, say, a promoted tweet or Facebook post. All of which means that the financial success of Google’s YouTube division hinges on making sure advertisers feel comfortable with the user-created videos on which their ads are appearing. While Google doesn’t report YouTube’s earnings separately, the company cited pressure from advertisers to control YouTube’s content as a factor in its disappointing revenue growth earlier this year.

Unlike TV networks, however, YouTube doesn’t handpick all of its content, nor does it match ads to videos by hand. The content comes from users, and the matching is done via algorithm. That creates a high risk of ads being placed on videos that make for awkward or even offensive pairings, even if they don’t violate YouTube’s terms of service. In response to complaints from advertisers, YouTube over the years has significantly restricted the types of videos that are eligible for advertising, enforcing “advertiser-friendly content guidelines” that rule out broad swaths of content. These include violence, adult content, content relating to tobacco or firearms, and perhaps most vaguely, content pertaining to “controversial issues and sensitive events.” YouTube uses a combination of machine learning, user flagging, and an appeal process with human moderators to apply these guidelines.
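In rough sketch form, that kind of gatekeeping pipeline might look like the following: an automated score, a user-flag signal, and a human appeal as the backstop. This is an illustration of the general pattern, not YouTube’s actual implementation; every class name, threshold, and function below is invented for the example.

```python
# Illustrative sketch of an "advertiser-friendly" gatekeeping pipeline:
# an automated classifier score, a user-flag signal, and a human appeal
# as the last resort. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    classifier_risk: float   # 0.0 (safe) to 1.0 (risky), from an ML model
    user_flags: int          # number of "not advertiser-friendly" reports

def is_monetizable(video, flag_threshold=10, risk_threshold=0.7):
    """Automated first pass: either signal alone can pull ads."""
    if video.classifier_risk >= risk_threshold:
        return False
    if video.user_flags >= flag_threshold:
        return False
    return True

def appeal(video, human_review):
    """Human review only happens if the creator notices and appeals --
    and any reversal comes after the crucial first days of viewership."""
    if is_monetizable(video):
        return True
    return human_review(video)

# Example: a borderline video demonetized by flags, restored only on appeal.
clip = Video("covering the protests", classifier_risk=0.4, user_flags=25)
print(is_monetizable(clip))                       # False: ads pulled automatically
print(appeal(clip, human_review=lambda v: True))  # True: but only after an appeal
```

The design choice to bake in the cost structure is the point: the automated steps are cheap and instantaneous, while the human step is slow and creator-initiated, which is why errors at the first stage are so consequential.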

From the standpoint of YouTube’s advertising business, that all makes sense, at least in theory. The company is leaving ad money on the table by ruling out tons of videos that would probably be just fine for many advertisers. After all, TV is full of sex, violence, and controversy. But YouTube has decided that it’s better off erring on what it considers the safe side, because if advertisers come to see it as a risky platform, they’ll just avoid it altogether. Indeed, advertisers boycotted the platform in 2017, and again this year.

From the standpoint of a social media platform, however, these guidelines are extremely constraining, not to mention nearly impossible to consistently enforce, especially with software. Advertising is not the only way to make money on YouTube videos: The company touts merchandising and paid channel memberships as alternatives. But advertising remains the dominant mode of monetization and the one that makes the most sense for most creators. The upshot is that, if you want to make money from ads on YouTube, you either have to steer far clear of content that anyone could consider sensitive, or brace yourself for a never-ending battle with the platform’s algorithms and human reviewers over exactly what content they consider “ad-friendly.”

It doesn’t help that “ad-friendly” is clearly a moving target. I wrote in July about the struggles of YouTube creators covering the Hong Kong protests. It was clear from their experiences with YouTube’s appeal process that even the company’s human moderators had varying interpretations of what constituted violence (police macing protesters?) or sensitive events (dissidents criticizing the Chinese Communist Party?). Several videos that were initially demonetized or hidden from search results were reinstated upon review. Others, whose appeals were initially rejected, were reinstated only after YouTube got press inquiries or Twitter complaints about them and took a second look.

It became clear in the course of my reporting that YouTube wasn’t intentionally doing the Chinese government’s bidding by suppressing coverage sympathetic to the protests. But it was effectively doing so unintentionally, by making such coverage detrimental to creators’ economic interests through demonetization and other forms of filtering. Even when human moderators reinstated content demonetized by the algorithms, the reversal came too late for that content to find a wide audience. And while the role of user flagging was unclear, it’s easy to imagine supporters of the Chinese government effectively gaming YouTube’s systems by reporting videos they didn’t like, thereby triggering the algorithm and forcing creators to go through the appeal process.

YouTube’s treatment of LGBTQ creators feels analogous, in some ways. Again, a marginalized group that some factions of society would like to silence is finding its videos unfairly flagged and punished by some combination of hostile users, ham-handed algorithms, and overmatched human reviewers, who are no doubt bringing their own biases to the job.

The possibility that at least some of those reviewers harbor blatantly homophobic attitudes is one that bears investigating. The lawsuit alleges that at least one human reviewer, identified as the head of a call center in South Asia, explained to the creators of GlitterBombTV.com’s channel GNews! that their channel had been restricted because of “the gay thing,” which he said contravened YouTube’s policies against “shocking” or “sexually explicit” content.

But it’s also worth investigating whether homophobia is embedded in the algorithms, content guidelines, and moderation systems themselves. The lawsuit suggests that videos are being flagged under the content guidelines merely for including words such as “lesbian,” “transgender,” or “queer.” It presents evidence that even videos that are wholly unrelated to sexuality have been demonetized or barred from being viewed in “restricted mode” because their creators identify as LGBTQ, appeal to an LGBTQ audience, or have other videos that deal with sexuality.

YouTube denies this. “Our policies have no notion of sexual orientation or gender identity and our systems do not restrict or demonetize videos based on these factors or the inclusion of terms like ‘gay’ or ‘transgender,’” Joseph said in a statement.

But machine-learning algorithms don’t need to be explicitly trained to be homophobic in order to operate in a discriminatory manner. There are all kinds of ways that YouTube’s software might learn to treat LGBTQ creators and videos as suspect, the simplest being that homophobic users repeatedly flag their content as objectionable. There might also be advertisers who have made it known to YouTube that they’re uncomfortable running ads on content that deals with gender identity.
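To make that failure mode concrete, here is a deliberately simplified sketch, not YouTube’s actual system (whose internals aren’t public), of how a model trained on user flags can absorb the flaggers’ bias. The titles, flag labels, and scoring rule are all invented for illustration.

```python
# Toy illustration: a "demonetization" score learned only from user flags.
# Nothing here references sexual orientation, yet biased flagging alone
# is enough to make identity-related words look "risky."
from collections import defaultdict

# Hypothetical training data: (video title, was it mass-flagged by users?)
history = [
    ("coming out as transgender", True),   # brigaded by hostile users
    ("my transgender journey", True),      # brigaded by hostile users
    ("lesbian couple q&a", True),          # brigaded by hostile users
    ("graphic combat footage", True),      # flagged for genuine policy reasons
    ("cooking pasta at home", False),
    ("daily makeup routine", False),
    ("unboxing a new laptop", False),
]

# "Training": estimate how often each word appears in flagged videos.
flagged_count = defaultdict(int)
total_count = defaultdict(int)
for title, flagged in history:
    for word in title.split():
        total_count[word] += 1
        if flagged:
            flagged_count[word] += 1

def risk_score(title):
    """Average per-word flag rate -- a stand-in for a learned classifier."""
    rates = [flagged_count[w] / total_count[w]
             for w in title.split() if total_count[w]]
    return sum(rates) / len(rates) if rates else 0.0

# A brand-new, wholly benign video is penalized because its vocabulary
# overlaps with videos that hostile users chose to flag.
print(risk_score("transgender artist paints landscapes"))  # 1.0 -- "risky"
print(risk_score("artist paints landscapes"))              # 0.0 -- "safe"
```

The model never sees a field called “sexual orientation,” which is exactly why a statement that the policies “have no notion” of it doesn’t settle the question of whether the outputs are discriminatory.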

The issue is not that YouTube’s guidelines explicitly discriminate against LGBTQ creators. It’s that their vagueness and fundamental conservatism leave plenty of room for discriminatory enforcement, whether by human or machine. Marginalized groups in particular will always be disadvantaged by a system that relies partly on users to report what they find “sensitive” or “controversial.”

If YouTube wants to stop hurting LGBTQ creators, it isn’t enough to claim that its policies are agnostic toward gender identity or sexual orientation. It needs policies that specifically prohibit discriminatory enforcement, just as countries have laws protecting vulnerable groups. It needs to rethink the guidelines that treat controversy or sensitivity as reason in itself to demonetize a video. And it needs to implement strong processes to guard against bias at the level of the algorithm, the content reviewer, and maybe even the bigoted user.

A YouTube spokesperson said the company does test its systems for bias and train its reviewers on the nuances of the content policies, but declined to elaborate other than to say that no bias had been found. The company also noted that it recently updated its ad-friendly guidelines to include more specific examples.

Most of all, YouTube needs to stop pretending that its user platform can be a place for unfettered expression at the same time that its advertising platform is a sterile environment free of controversy. Claiming that demonetization isn’t a form of punishment is disingenuous. It’s fine to apply stricter standards and an extra layer of scrutiny to videos that YouTube monetizes or recommends. But if those standards include avoiding topics that affect marginalized groups, then YouTube itself becomes an engine of oppression.