Your Speech, Their Rules: Meet the People Who Guard the Internet


Illustrations: Priya Mistry

When Facebook started 15 years ago, it didn’t set out to adjudicate the speech rights of 2.2 billion people. Twitter never asked to decide which of the 500 million tweets posted each day are jokes and which are hate speech. YouTube’s early mission wasn’t to determine if a video shot on someone’s phone is harmless speculation, dangerous conspiracy theory, or information warfare by a foreign government. Content platforms set out to get rid of expression’s gatekeepers, not become them.

Yet here we are. Controversial content takedowns are regular news. In August 2017, Cloudflare withdrew its DDoS protection service from the Daily Stormer, an American neo-Nazi online publication. A year later, Apple, Facebook, YouTube, and Twitter removed content by conspiracy theorist Alex Jones. Late in 2018, Apple pulled Tumblr from the iOS App Store, reportedly because of child pornography. Tumblr in turn banned all adult content and is now back in the App Store. Like tariffs on companies that get passed on to consumers, restrictions on platforms flow downstream to silence users — writers, trolls, bigots, activists, shitposters, people who like porn, people who like politics.

We want platforms to provide tools that expand expression while protecting us from the harms caused by that newly enabled and amplified expression. We want to protect speech, protect people, and protect society — and we disagree wildly over how to do all of this at once. Meanwhile, as policymakers, academics, nonprofits, and private companies work on solving this very hard problem, someone has to wake up, go to the office, sit at their desk, and make these decisions every day. What stays up — and what comes down? What conduct is encouraged, what is tolerated, and what will get you banned (and for how long)? Who decides?

This is where the trust and safety team comes in. Most companies operating an online platform have one. It sometimes goes by other names — “content policy” or “moderation” — and comes in other flavors, like “community operations.” Whatever the name, this is the team that encourages social norms. They make platform rules and enforce them. They are at once the judges and janitors of the internet.

This is not the job of a few dozen techie randos, but tens of thousands of workers, both full-time employees and contractors. In December 2017, YouTube said it planned to hire 10,000 people to review and moderate videos. As of early 2018, Facebook reportedly had about 7,500 content reviewers, and the company said it planned to grow the number of safety and security workers to 20,000. Roles and working conditions vary widely: contractors overseas and in the United States making rapid calls in large-scale operations; employees answering user emails and reviewing escalations and edge cases; operations managers tracking the accuracy of frontline calls, tweaking the system, and introducing new recruits to their daily decisions; policy managers tinkering with rule sets to keep up with reality.

The work can take a toll. In late 2018, a Facebook contractor sued the company for “failing to provide a safe workplace” for the thousands who moderate the platform, alleging that the job required them to “witness thousands of acts of extreme and graphic violence.” A Verge investigation in late February was the latest story to detail the human cost of attempting to moderate billions of pieces of content quickly and accurately.

They are at once the judges and janitors of the internet.

As Medium’s head of legal, I’ve led our team for almost four years. In that time, I’ve talked to and learned from people who do some version of this job across a range of tech companies. They scan message boards to figure out whether a new meme is covert racism or nonsense. They assess the credibility of a shooting threat from an anonymous stranger whose profile picture is a candy-colored anime princess. They understand how this work can be at once depressing and surprisingly uplifting, stressful and fascinating, reliably absurd and gravely serious.

Journalists and academics are starting to cover this emerging job. But so far I’ve seen few concrete stories about how this work gets done: the realities of crafting and revising platform policies, managing a team, making moderation decisions, and living with the results. This is that story. I talked to about 15 trust and safety employees who work or have worked full-time at companies including YouTube, Facebook, Twitter, Reddit, Pinterest, Google, Automattic, Slack, Tumblr, Airbnb, Etsy, Quora, Internet Archive, and Medium. Every day, they make decisions that deeply affect our lives—and theirs.

To encourage them to talk freely (or at all), I promised anonymity. Most use pseudonyms to correspond with users because they’ve been threatened or stalked online. I’ve let them do the same here. In some cases, they’ve asked me not to name the companies they work for (or have worked for in the past). Meet them:

  • Olivia has worked in trust and safety at several platforms, including Google and Facebook, and now leads content policy at a tech platform.
  • James has worked on policy teams at companies including Dropbox and Reddit and now leads at a gaming company.
  • Rob has worked at several major tech companies that provide content platforms.
  • Marie is a policy manager at a tech company that provides a platform for real-world transactions.
  • Josh has worked on or with trust teams at several tech platforms.
  • Vanessa has worked for almost a decade in community, policy, and safety roles at companies including Facebook and Pinterest.
  • Adam works on the trust team at a tech platform.
  • Y.X. Yang has worked on content policy at Google, Twitter, and Pinterest.
  • Martin has worked in trust at a major platform for more than five years.
  • Mathilda worked on the trust teams at several platform companies.
  • Nora has worked at several internet marketplace platforms, including Etsy.
  • Remy works on the trust team of a major content platform.
  • Jessica has worked in trust for more than a decade, doing content moderation and policy work at social media companies.
  • Rita has worked in trust and safety for about five years, overseeing content moderation and user policy at two large platforms.

Olivia: I remember deciding what to do about the Google video of Saddam Hussein’s hanging that wound up on the front page of the New York Times the day after it happened. I was like, “What the hell? I’m totally ill-equipped for this.” It was humbling.

Mathilda: Someone once posted, “I’m going to jump off a bridge.” We were up in the middle of the night trying to figure out which bridge, calling police overseas, trying to find family members, deciding if we were overstepping on privacy. Their family was like, “Yeah, this happens sometimes.” And we were like, “Well, I guess we just have to let this go.” You only have so many of those in you. Who knows, it could have just been some troll fucking with people.

Nora: At one point [at Etsy], we decided to ban spells, potions, readings — anything promising some sort of cosmic outcome. Buyers were complaining or trying to return them because they weren’t working, and it was causing all kinds of problems. It was the interplay between so many different trust and safety vortices — fraud, content policy, case management. On the day we announced and started taking things down, we brought in sage and all kinds of talismans to ward off the hexes and incantations the sellers were going to cast on us. A whole community had sprung up and been nurtured in the marketplace — people were spending serious money on this stuff — and we were curtailing a large portion of it. We did get a bunch of responses claiming our hair would fall out or whatever, so I might still be cursed?

Y.X. Yang: During the Great Recession, there was an upswing in ads for payday loans [at Google]. At the time, we didn’t have a precedent. We discussed whether these ads were providing value to users — bear in mind that there were people who needed these cash loans to pay for food or rent for the week… We realized some types of loans were better than others, so for a while we tried to enforce what we thought was a reasonable loan interest rate and made sure that each payday loan company had a brick-and-mortar location (to avoid scammy fronts). At the end of the day, we decided payday loans were pretty much the worst, so we nixed the entire business model. If I could do this all over again, I think I’d be able to make a quicker, clearer decision. But at the time, I didn’t have a model for thinking through how to make the call.

“You can’t just wave your hands in the air anymore and be like, ‘No, actually, we’re just a content platform. Nothing we do affects the real world.’”

Rob: Sometimes I give people an example and say, “Okay, you get to write the rules, right? Do we allow or not allow the N-word?” And their initial reaction is, “Well, no. That’s terrible. Of course not.” Alright. So that means no rap? “Oh, no, no, no. That’s fine, right?” And does it matter who’s saying it? And all of a sudden, you see them go from this very clear misconception about what we do to digging in and being thoughtful in a way we all are on a day-to-day basis.

Mathilda: There’s a young blonde woman sitting on her bed reading white pride speeches in a sexy baby voice, and you’re just like, “What the fuck?” Or close-up videos of vaginoplasty surgery with tons of followers. People are probably drawn to this out of more than medical interest, but in the end, it’s just a video of a procedure, so hey, and also, how do we know if these patients knew where this footage was going to wind up?

Nora: When you have a job where literally you come in on a Monday morning and the urgent email you’re getting is about someone falling off a zebra… you have to have a sense of humor about that.

Rob: People are quick to dismiss our policies and decisions as a product of a bunch of college students sitting around in flip-flops and not really thinking about the impact of this stuff, which could not be further from the truth. I mean, the flip-flops may be true.

Adam: I think normal people who read the news about a big-deal takedown mistake are like, “You people are a bunch of chuckleheads. Why are you the Keystone fucking cops? Why can’t you get anything right?” And you’re just like, human expression is nuanced and complex. Everyone wants you to take down the things they don’t like and leave up the things they do. Millions of posts go up every day, and there are hundreds of millions of people with different opinions about what should stay or go, and every one of them thinks they’re right.

James: People think you spend five seconds looking at someone’s content and you’re like, whatever, that looks good, or that’s banned. Actually, no. I think our median time per case is 10 minutes or something close to that. On complicated cases, it goes even longer, because we’ll investigate and piece it together.

Mathilda: The idea that it’s like a blur of anonymous content that nobody cares about is wrong. It feels very intimate and very personal. I was talking to a reporter who was like, “Do you guys ever just stay late, get drunk, and take down stuff you disagree with for kicks?” What!? No! You’d be fired in a second. Every takedown is a wound. People agonize over the hard calls.

Olivia: People would be blown away by how much attention, experimentation, and thoughtfulness has gone into crafting policies and by how hard we try to enforce them fairly. No one understands that. It’s not just some dude in a hoodie drinking Mountain Dew in a basement.

Jessica: I think the biggest misconception is that the team doesn’t exist. In every company, there’s at least one person on content moderation. The other misconception is that everything’s done by algorithm and nobody’s looking at things. People really are looking at things, and that’s why it takes so long.

Rob: Another take is that [platforms] just don’t care. But people doing this are always campaigning for more. More people, more resources. For people to take us seriously inside our own companies and outside.

Nora: I was a lawyer working in a big firm in New York making rich people richer. I come from an orientation toward justice and fairness and also intellectual inquiry and honesty about what is really happening. This is a healthier application of that instinct, because I actually have an impact on real people in real time.

Marie: Before this, I worked at an immigrant and refugee resource center where we provided resources for people dealing with a lot of the same issues, like sexual assault or domestic violence. I would help them get resources. It was similar work, but in the nonprofit space instead of the corporate space. I think what motivates me is a survivor focus.

“I ended up here by being sort of a troll and a super complainer on the internet. As a result, my karma is now to deal with people like me.”

Remy: I ended up here by being sort of a troll and a super complainer on the internet. As a result, my karma is now to deal with people like me. It’s a weird position to be in, to be this person who’s always saying that blank is wrong, or we should or shouldn’t do that because of some mysterious ethical principle no one cares about. On the one hand, you’re the biggest advocate of the user, the people whose real lives are affected. But on the other, you’re this weird authority figure who comes in and ruins their day or suspends their passion project.

Y.X. Yang: I think when I started out [over a decade ago], I was surprised. I was like, “I can just block this entire domain, and they won’t be able to serve ads on it?” And the answer was, “Yes.” I was like, “But… I’m in my mid-twenties.”

Rob: Fairly early on, some people asserted that we [at Facebook] had more power than the Supreme Court because our user base then was twice the U.S. population. And at first, we were dismissive, but we realized there was some truth to it. I mean, there are obviously alternative platforms, but if Facebook became what we hoped and thought it would, it may end up being the primary way people communicate. That responsibility weighed heavily on all of us.

Josh: The idea of an admin mode… the first time I saw one of those, I was like, this is really behind the curtain. You can take stuff down or put stuff up. Greater knowledge or greater — power’s the wrong word. I wasn’t going to do anything with the power. It’s just getting to feel plugged in.

Vanessa: I don’t think I’ve ever seen anyone power hungry. I’ve seen a shit-ton of self-righteousness.

Martin: For the most part, the people who are lifers are caring people who just want to make their sites work. And they’re not getting off on stifling people. They don’t really have very strong political agendas. They’ve just taken a hard line against abuse, and it’s apolitical.

“I used to go home at night physically ill, because I felt like, oh my god, we’re fucking this up. Sometimes that responsibility weighed on me a lot.”

Mathilda: I am constantly struck by how few of these decisions are easy and how much our decisions affect the people involved. I feel a lot of responsibility to run each case into the ground—sometimes to absurd ends, spending days sleuthing out individual tickets—and a broader responsibility to create, evolve, and adhere to a coherent system of logic. I still think having logic and due process is important, because it’s the best we can do. But as I’ve spent more time doing the job, I think it’s in some ways a fallacy intended to make everyone feel better about the messiness of being human.

Rob: [Early on at Facebook] I felt like this is a lot of power, but I also felt like we had really smart people on our team. We’re well-intentioned, and we really care. So, if anyone could get this right, we felt we had a shot. I will also say… I used to go home at night physically ill, because I felt like, oh my god, we’re fucking this up. Sometimes that responsibility weighed on me a lot. We didn’t have any precedent to know whether we were making the right decisions.

Remy: I know there’s a level of power we have that others don’t, that they don’t have this ability to say if someone stays or goes. I guess I try to deal with it in a really practical way… to apply the rules consistently and make good decisions in the moment. I mean, how else are you going to get through it on a daily basis? Of course you should be dealing with and understanding the impact and the human side. But I’m not setting out to wield power when I open up my computer for the day.

Marie: When I tell people about my work on hate groups who try to use our service, their number one question is, “Who are you to make those decisions? Tech companies should not be getting involved with that. That’s not their place.” Of course, I’m not going to give them the entire presentation and tell them about the research that went into it and the risks we’re trying to mitigate against. I don’t want to tell them that if we don’t do our jobs, there can be real-world consequences, like more members of the KKK using our platforms to commit violent hate crimes. I obviously can’t talk about things like that publicly. But without bringing it up, nobody will understand why it totally is my place to do the things that I do. Because if I didn’t, real lives would be in danger.

Vanessa: One narrative that’s not always helpful, in my opinion, is that social media is the problem. It’s not just social media. Technology gives all the human dynamics that have always been present a new flavor. What we’re doing is figuring out how we can apply our knowledge to help technology companies understand and solve for these things. Technology is here to stay. Social media is here to stay. We can talk about how it’s shitty, but above all, let’s come together and make it better.

Y.X. Yang: My sense is that a lot of people who work in trust and safety are usually not part of the dominant group, which also makes for a very interesting and… kind of sad dynamic when you have people reading things like, “Oh, this company just doesn’t care about women,” or, “This company just doesn’t care about gay people”… like, half this team is underrepresented, and they do care.

Martin: Nothing comes down from the top. I think the top is so segregated from the bad. No tech leaders want to think about the bad. They just want someone to take care of it. And yeah, we’ll do that. But I don’t think anything’s politically motivated. The person who’s deciding whether your post gets suspended is not some authoritative 1984 deep-state person. This is just a guy sitting there who’s like, “Well, that was a shitty thing to say. Bye.”

Rita: The mascot for every trust and safety team should be Lisa Simpson.

Y.X. Yang: The emotional labor of it all gets very heavy, especially when the topics we’re grappling with become more prominent in the press. The team starts to feel like the rest of the company is sort of looking at them like, “What the fuck? What are you doing? Why can’t you make the right decision?” They don’t really understand how complicated the decision is.

Adam: Swimming against the tide is the job. You’re the company’s conscience. You’re the person advocating for values that are not pure business values. Your job is to be the unfun spouse who’s like, “Yeah, I get that you want a sports car. We need to get the minivan.”

Y.X. Yang: Sometimes other people in the company ask, “Did you know this was going to happen?” Yes, and we told the product team a year ago that if they did X, Y would happen, and because of the priorities at the time, the product manager went and did the thing anyway. It’s like, “But that’s okay, we’ll help you through this mess.”

Adam: “Six months ago we told you, ‘Don’t pave the city with banana peels.’ You decided, ‘Let’s see what happens if we pave the city with banana peels.’ We are now here to clean up the injuries.”

Vanessa: As with many roles focused on the larger good, it’s important to catch yourself when you’re walking around the company self-righteously and being like, “You’re welcome for being your conscience and heart and soul and working hard on the things that you’re not paying attention to.”

Adam: Creators and product people want to live in optimism, in an idealized vision of how people will use the product, not the ways that people will predictably break it. They just don’t want to live there. The separation of product people and trust people worries me, because in a world where product managers and engineers and visionaries cared about this stuff, it would be baked into how things get built. If things stay this way—that product and engineering are Mozart and everyone else is Alfred the butler—the big stuff is not going to change.

Nora: We have executives and product managers shadow trust and safety agents during calls with users. Sitting with an agent talking to a sexual assault victim helps build some empathy, so when they go back to their teams, it’s running in the back of their brain when they’re thinking about building things. Then, when we go into a room and say, “If you do that, you’re going to increase human trafficking,” they don’t think we’re just making it up.

Jessica: Early on at YouTube, a psychologist or therapist or whatever came in. They were really inexperienced with this sort of stuff. There was this moment where everyone was telling their stories about all of the most horrific things they saw. Their goal was, “Can we gross this person out?” That therapist never came back. It was like, “Great. Not helpful, everyone.”

Adam: I feel like I carry around the knowledge of how easily a person’s life can be destroyed. You might make a tasteless joke, or someone might accuse you of something or want to discredit you, and then a mob of millions can converge within hours to hate you and stalk you and lie about you and encourage unstable lunatics to kill you. It can happen to any of us. You will be torn to shreds. You’re one of the eggs broken to make the free speech omelet. Your life will never be the same. The internet will move on to the next thing. And the free speech absolutists and the disingenuous trolls will shrug together and say, “Gee, I guess some snowflakes just can’t handle hearing views they disagree with.”

Y.X. Yang: You kind of have to quarantine the team in a separate part of the building, because they have a very particular sense of humor that nobody else will understand. That kind of humor is very necessary, because everyone needs a safety valve. We say really weird things and look at really weird things. You think you have the stomach to see the really weird things, but I guarantee you, you do not. Your weird is maybe some kind of strange fetish that you saw on Reddit, but our weird is three orders of magnitude beyond.

Remy: If you look at a weight lifter, their body shape becomes a certain way because they’re putting themself under this strain and stress again and again. What’s the equivalent sort of psychological or mental adaptation that a trust and safety person ends up developing? You’re kind of constantly sharpening the knife of your judgment. All the time you’re like, “Okay, we’re going to just do the right decision in the right moment in the right situation in the right context.” And at the same time, you’re also using that sharpened tool to, like, open cans and do stuff you shouldn’t. You’re constantly sharpening and blunting that blade at the same time.

Mathilda: It’s actually made me more sympathetic to people and conscious of my own actions, which is the opposite of what I thought sifting through hatred and abuse all day would do. The internet divvies up power in strange ways that have many democratic benefits but that also allow suffering and shame to be amplified. It’s really made me change my tone when I send emails to other support teams, because I know what will make someone go above and beyond to help me and what will be passed around the team as a joke.

Remy: You become hypersensitized. Either supersensitized or desensitized, but for me it’s more the first. I had a period when some topics would come up in social settings outside of work, like misinformation. I had to kind of like plug my ears and just be like, “La, la, la, la, la, la.” Like I can’t hear another sound related to this topic right now.

Remy: It only makes sense that whatever negative and toxic effects ordinary people get from heavy internet use will only be multiplied in people whose job is to deal with the worst of it. It’s a no-brainer. I don’t know what the solution should be. The only option I really see people propose is “magic A.I.,” but we still have to train and check automated systems (never mind build them!). So, how would that solve it? It just pushes the burden to someone else.

Mathilda: This happens in lots of industries. A company makes something and externalizes costs that others have to pay. Partly because that’s what current corporate principles incentivize, partly because when you make a new thing, no one has any idea what the real long-term costs are. People post millions of things a day on dozens of platforms. And they’re demanding more moderation. How? It can be done by people or robots or a combo. The robots are not good at it yet and may never be, especially if you like free speech. And people are expensive, even when you don’t pay them well. I think if you really seriously internalized the long-run, environmental costs of platforms as they exist today, we’d be blown away by the cost. But it’s impossible to ignore. So, I think the companies themselves should be more responsible for internalizing that cost or modifying their business models. And if the money’s already been taken out of the company by a relative few who received capital windfalls, then they (and their investors and bankers) should give it back so we can pay moderators a fair wage. This ecological disaster made them billions. Let them pay for the cleanup.

Martin: As a joke, I say I’m an internet janitor. I just clean up the shit. My real answer is, “I work for this website. And most people use it for good, but the people who don’t use it for good, I kick them off the website.” And it’s that simple. The people who do bad things, I kick them off. If I’m feeling really randy, I’ll be like, “So, you know, I like to take away people’s right to speech.” But it’s like, “No, I’m not. You have plenty of other outlets to go to. We’re not stifling anyone. If you’re being irresponsible, then that’s it. You know you’re doing it. You know you’re a bad person.”

“I say I’m an internet janitor. I just clean up the shit.”

Marie: I love the work, but it does mean digging into stuff people don’t enjoy talking about, like child pornography and sexual assault. The hardest part is not being able to share that with a lot of people. My partner has been like, “Hey, when we’re in public, could you say SA instead of sexual assault, because it’s really not cool when we’re in public and you’re talking openly about sexual assault and people overhear it. It’s just not fun for me.” I was like, I don’t even think about it, because it’s all I do all day. Some people are like, “What’s the worst thing that’s ever happened?” And I have learned the hard way, you think you want to know the worst thing that’s ever happened, but you don’t.

Olivia: I think when we all started, it was sort of why wouldn’t we defend free speech? Why wouldn’t we defend the exchange of ideas and open dialogue? The best ideas should win out. But no one thought it would turn into what it is.

Rob: I think we were somewhat naive about stuff [at the start], so it felt like, well, we’ve got this opportunity to do this great thing. Now the trustworthiness of the content itself has been called into question in a way we weren’t expecting. All of a sudden, we’re not just talking about what the content is on its face, but the motivations behind it. When we were trying to identify the IRA [Internet Research Agency] content, we started looking at examples of it. We realized that, were it not for the source being a Russian misinformation campaign, the content on its face was totally fine. We were seeing stuff like “Repost if being a real Texan means you wear flip-flops to a barbecue.” Stuff that doesn’t come close to being a violation on its face. But it’s part of this campaign of us versus them and creating the other, so you take it down because of who created it.

James: You can’t just wave your hands in the air anymore and be like, “No, actually, we’re just a content platform. Nothing we do affects the real world.”

Adam: Remember “We’re the free speech wing of the free speech party”? How vain and oblivious does that sound now? Well, it’s the morning after the free speech party, and the place is trashed. And the party planners, where are they now? Pontificating between Art Basel auctions about how bad they feel.

Martin: Confrontational, unpopular thoughts—that’s been the hardest. The idealist in me from five years ago would say, “Yeah, let him talk it out. We’ll talk it out.” But now I think we’re past that.

Remy: Pretty much anything you say can and will be used against you in the court of public opinion. People are always going to take screenshots. They’re always going to post your good or badly written email onto Twitter. It could end up on BuzzFeed. Anytime you do anything, it could become an international incident, depending on the way the wind blows or how the cultural constellations are aligned that week.

Y.X. Yang: I think companies are going to take a lot more stuff down. They’re just going to be forced to. I think that kind of pressure is going to move companies to be a lot more restrictive — which is disappointing, personally, but I get it.

James: One depressing part is that China did a frighteningly good job of their version of trust and safety. Turns out if you throw enough resources at it, you can build a firewall that filters out just about whatever you want.

James: I know someone who resigned to take a less stressful job. I think their exact quote was, “I had to decide whether a threat of a school shooting in the town where my brother goes to high school was realistic or not, and I just can’t do that.”

Marie: It would be smart if there were a time-out period, the same way there is for people who clean up radioactive material. You just hit your date and that’s the end of your career in that field.

Remy: It’s weird thinking about doing other kinds of work now, because how would I ever be able to explain to people what I’ve learned or what I’ve become as a result of this? In a way, it does seem like once you’re in this field, you’re kind of stuck in it for life, in good and bad ways. I don’t know how you ever let that go.

Mathilda: It’s the most interesting job in the world. You see the underbelly of humanity every day and just have to dive in.

Rob: For me, the interesting thing is not, “Wow, that’s a really terrible thing, and I want to see that,” or, “I can’t believe someone wanted to save it.” It’s how we draw the line around that thing and the impact and precedent we set by defining it this way.

Olivia: I liken it to being a doctor. You have to be very clinical. The work is intellectual and abstract. Yes, there’s a decision at the end of the day, but the approach is very academic… It’s a responsibility that someone just has to innately feel.

Marie: My manager always says, “We have the freedom to take the most socially just approach.” These things are inevitably going to happen, and they’re terrible, and we’re going to create the infrastructure and the rule system so they are responded to in the best possible way every time.

Remy: What I love about it is that it’s so deep. And it can be mysterious. The process of how you arrive at a decision, and then trying to make it not mysterious so you can communicate it to your team, to the rest of the organization.

Martin: This job has satisfied my troubled and curious soul in ways I never could have imagined.

“If I do my job correctly, it will make a difference. I will make a difference. I will save someone today.”

Nora: The number of fetish subcultures that I learned of from this work… The thing that warms my heart is there’s a place for everyone. Every weirdo has some other weirdo they can find on the internet.

Rita: All it takes is something small to make me an optimist again. You can look at violent ISIS videos for days, and then all it takes is a subreddit where some genius has Photoshopped little top hats onto bees. I love that shit. It restores my hope about the internet.

James: Trust and safety is very much a human experience job. It’s like, someone said they were going to kill themselves today, they had a plan to do it, and then we called the cops, and they showed up and saved their life. There’s an impact that many other jobs don’t have. I was an EMT for four years. In a lot of ways, I think that prepared me for this job. You show up at a car accident or something. It’s a gory scene, and bad things are happening, but you shelve your emotional reaction while still understanding the emotional reaction, and it’s like, I have a job to do. If I do my job correctly, it will make a difference. I will make a difference. I will save someone today.