The conundrums of Twitter policy

You can't solve an online platform's support struggles in 280 characters

It took only eleven minutes on an otherwise ordinary Thursday to set off the latest internet firestorm. A Twitter support agent took advantage of his last day at the office to briefly disable the controversial personal account belonging to one Donald J. Trump, who just so happens to be the current president of the U.S.

Those eleven minutes triggered the typical polarized reactions. A large swath of the internet celebrated that an account with a long history of bullying and sharing falsehoods was suspended; another substantial group, on the other hand, felt vindicated in their assertions that the tech companies are inherently against them.

The truth, of course, is far more banal than either group would like to believe: an employee with particular views took advantage of their access, and of the opportunity afforded by their impending departure, to make a political statement they believed in.

Twitter the company did not intend to take any particular action or stance on Trump's Twitter presence that day, and while you can debate whether or not they should, the actual discussion online and in the media seemed to focus on what support agents can and should be able to do.

Much of that discussion betrayed a lot of confusion about what support agents at these companies are actually able to do, and most of the rest prescribed solutions that made this seem like a simple problem.

I spent over five years working at Microsoft on OneDrive, which, as a document and photo storage and sharing service, had to navigate these problems regularly. While I don't pretend to be an expert, I learned that these kinds of decisions, about what a support agent can do, should be able to do, and should be encouraged to do, lead to some of the toughest conversations you'll have in a company whose business revolves around user-generated content. So let's talk about why this is so hard.

A quick note before we dive in, though: I am extrapolating from what I know to policies elsewhere. Abuse on the internet is a moving target, and the policies at all the companies I mention evolve over time, so while this is an informed opinion, please note that I don't have first-hand knowledge about any current policies at any given company.

Before you get to an agent

Let's start with the obvious: a company's employees are never going to see 99.99% of the content people post on a large service. Sites like Twitter have far, far more content than any number of human beings could review, and indeed, it would be fairly creepy if someone at Twitter were reading everybody's tweets. (Also, can you even imagine how miserable those jobs would be?)

This is even more true of services that have both private and public content: most companies only really care about content that is seen by users other than its owner. If you are uploading private files to Dropbox or videos to Vimeo, there are probably very few scenarios that would get your stuff viewed by anyone.1

If no one is looking at most content, though, how does an agent ever get involved in looking at a particular account, tweet, video or file? From one of a few sources:

  • Reports from other users: Most sites have a mechanism for a user to report a particular piece of content or account as abusive or otherwise in violation of their terms; Facebook, for instance, surfaces this as a report option on every post.
  • Automated detection systems: For instance, Twitter could have heuristics to detect death threats in some common languages based on wording in tweets, and flag those for review.

Depending on the confidence of an automated system and the service's bias towards false positives or false negatives, a user's content or even their account may get hidden or suspended. But in many cases, nothing happens immediately: instead, the content gets flagged and sent to an agent for further, manual review.
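
To make that concrete, here is a minimal sketch, in Python with entirely hypothetical thresholds and names, of how a service might route automated detections based on confidence; none of this reflects Twitter's actual systems:

```python
from dataclasses import dataclass

# Hypothetical thresholds: a real service tunes these per abuse category,
# depending on its tolerance for false positives vs. false negatives.
AUTO_HIDE_THRESHOLD = 0.95   # confident enough to hide or suspend immediately
REVIEW_THRESHOLD = 0.60      # uncertain enough to want a human to look

@dataclass
class Classification:
    abuse_score: float  # 0.0 (benign) to 1.0 (almost certainly abusive)
    category: str       # e.g. "threat", "spam"

def triage(content_id: str, result: Classification) -> str:
    """Decide what happens to a piece of automatically flagged content."""
    if result.abuse_score >= AUTO_HIDE_THRESHOLD:
        # High confidence: act immediately, but still queue for human review.
        return "hide_and_queue_for_review"
    if result.abuse_score >= REVIEW_THRESHOLD:
        # Medium confidence: nothing happens yet; an agent looks first.
        return "queue_for_review"
    return "no_action"

print(triage("tweet:123", Classification(abuse_score=0.72, category="threat")))
# -> queue_for_review
```

The key design lever is where you set those thresholds: lower them and your agents drown in false positives; raise them and more abuse slips through untouched.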

The not-so-great powers of support agents

We've all heard horror stories from early startups where every employee had essentially unrestricted access to all of their users' data. Thankfully, this is not the case at the vast, vast majority of companies, and it is certainly not considered acceptable at any of the large, publicly-traded companies you're probably dealing with. Assuming a non-bozo company, their support agents actually have an incredibly limited set of tools at their disposal.

Here's what agents can usually do:

  • Review the content that was flagged, and potentially any other shared content on that account. (This is critical to establish the context of a post: Over the years I've told many of my closest friends to go die in a fire, usually in response to them joking with me, but I am not genuinely threatening to kill them. Seriously, y'all. I only tell you to die out of love. <3)
  • Remove specific offensive content, or otherwise mark it as inappropriate.
  • Modify the internal reputation of the account, which may affect what that account has the ability to do.
  • Contact the owner of the account with questions or instructions to address the issue.
  • Suspend, or in the most egregious cases, close the account.

Most every service also has some capabilities that are specific to that service. For instance, many photo sharing services can make an entire album of photos temporarily private if inappropriate content is found in any one photo, until the user resolves the issue.

But seriously: that's essentially a comprehensive list of capabilities, and even all those presume the company in question has a really well-developed set of internal tools. Agents at many companies may only be able to do a small subset of these.
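
If you squint, an agent's toolbox looks less like god mode and more like a short, closed menu of operations. Here's a rough sketch of what that menu might look like internally; the names are mine, not any company's:

```python
from enum import Enum, auto

# A deliberately short, hypothetical list of agent operations, mirroring the
# capabilities described above. Note what's absent: nothing here edits a
# user's content or acts on the user's behalf.
class AgentAction(Enum):
    VIEW_FLAGGED_CONTENT = auto()
    REMOVE_CONTENT = auto()
    MARK_INAPPROPRIATE = auto()
    ADJUST_INTERNAL_REPUTATION = auto()
    CONTACT_OWNER = auto()
    SUSPEND_ACCOUNT = auto()
    CLOSE_ACCOUNT = auto()

def perform(action: AgentAction, account_id: str) -> None:
    # In a real tool, each action would call a narrowly scoped internal API.
    print(f"agent performed {action.name} on {account_id}")

perform(AgentAction.SUSPEND_ACCOUNT, "account:42")
```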

A sad agent, hard at work

What support agents can't do, and why

Importantly, most of the things that panicked Hot Takes cited as potential concerns when the Twitter incident happened are simply not capabilities agents have. In particular, agents at most companies cannot modify a user's data or posts2. Neither can they impersonate that user to take actions on that user's behalf: actions like adding friends, sending new tweets, or liking other people's posts. Sorry, Ted Cruz; that was all you, buddy.

Above and beyond the nonexistence of tools that would allow agents to perform some of the most concerning theoretical operations, well-run companies also implement appropriate controls to ensure that their employees aren't secretly abusing their systems. Here are just a few of the safeguards that most companies have in place.

Minimal privileges

At most well-run companies, support agents have far greater restrictions on their capabilities than engineers, in accordance with the principle of least privilege. Engineers occasionally need broader access to user data in order to debug issues, and so they may have the theoretical ability to manipulate user records in the ways that concerned folks fear. But those folks are also highly trained, well-paid, and thoroughly informed about the firing and lawsuits that would inevitably follow should they abuse the trust of their company.

Support agents, on the other hand, have much greater churn, and are often outsourced. With less loyalty and responsibility comes less power: they are given the lowest amount of access needed to do their job and perform the operations described above, and that's it.
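
In code, least privilege tends to boil down to something as unglamorous as an allowlist per role. A minimal sketch, with made-up role and operation names:

```python
# Each role gets an explicit allowlist of operations; anything not listed is
# denied by default. The role and operation names here are illustrative only.
ROLE_PERMISSIONS = {
    "support_agent": {"view_flagged_content", "remove_content",
                      "contact_owner", "suspend_account"},
    "engineer": {"view_flagged_content", "read_user_record",
                 "escalate_privileges"},  # escalations themselves get logged
}

def is_allowed(role: str, operation: str) -> bool:
    return operation in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("support_agent", "suspend_account")
assert not is_allowed("support_agent", "read_user_record")
```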

Logging

In addition to not providing the privileges to do anything too nefarious in the first place, well-run companies log any operations that support agents take, so they can tell if something untoward has occurred. This is why Twitter was able to identify exactly what happened with the Trump account so quickly.

For engineers, too, a proper logging system will take note of any escalations of privilege that they perform, and in many cases the systems will even prompt the engineer to justify why they need the additional access they're seeking.
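
As a rough sketch (the field names are invented), each privileged operation might produce a structured record like the one below, which is what lets a company reconstruct an incident in minutes rather than days:

```python
import json
import time
from typing import Optional

def log_agent_action(actor_id: str, action: str, target: str,
                     justification: Optional[str] = None) -> None:
    """Record every privileged operation as a structured, append-only entry."""
    record = {
        "timestamp": time.time(),
        "actor_id": actor_id,
        "action": action,
        "target": target,
        "justification": justification,  # e.g. required for privilege escalations
    }
    # A real system would write to an append-only audit store, not stdout.
    print(json.dumps(record))

log_agent_action("agent:contractor-0042", "suspend_account", "account:12345")
```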

Auditing

Once you have logging in place, you also need to pay attention to what your logs are telling you, and this means reviewing and analyzing the operations that are taking place. If you have an agent who is accessing data inappropriately, you can pick up on it. And if you're really smart, you'll also build tools to analyze which issues are most frequently causing your support agents to get involved and automate those, so they can focus on the trickier issues.
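
Here's one deliberately simple way to turn those logs into an alarm, assuming the record format from the previous sketch; a real audit pipeline would be far more sophisticated:

```python
from collections import Counter

# Flag any agent whose volume of sensitive actions is far above the average
# for their peers. "Sensitive" and the threshold factor are both judgment calls.
def flag_outliers(records, sensitive=("suspend_account", "close_account"),
                  factor=2.0):
    per_agent = Counter(r["actor_id"] for r in records
                        if r["action"] in sensitive)
    if not per_agent:
        return []
    average = sum(per_agent.values()) / len(per_agent)
    return [agent for agent, count in per_agent.items()
            if count > factor * average]

records = (
    [{"actor_id": "agent:a", "action": "suspend_account"},
     {"actor_id": "agent:b", "action": "suspend_account"},
     {"actor_id": "agent:b", "action": "remove_content"}] * 5
    + [{"actor_id": "agent:c", "action": "suspend_account"}] * 40
)
print(flag_outliers(records))  # -> ['agent:c']
```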

While this is less common at consumer-oriented services, companies that target businesses and assert they meet certain compliance requirements, especially, must have all of these safeguards in place. Most common industry certifications require a company to document their processes and demonstrate they are being thoughtful about securing their customers' data.

The 11-minute incident

In the curious case of Donald Trump's Twitter account, a single agent, who apparently worked for Twitter via a third-party contractor, was able to suspend Trump's account. Just a few minutes later, someone else at Twitter noticed and removed the suspension, but there was a lot of hand-wringing: how, folks asked, could this have happened?

Well, let's review the operations an agent can take, listed above: the agent viewed Trump's tweets, determined (albeit probably in a premeditated way) that they violated the terms of service, and suspended the account. All of these are completely consistent with the normal operations that agent would perform to do their job every single day, so it makes sense that they were able to do this.

Still, many people believe that no single agent should be able to disable a Twitter account belonging to such a high-profile individual as the President of the U.S. It's a reasonable assertion, but what policies would you put in place to make it happen?

No unilateral account suspensions allowed!

Should Twitter agents not be able to unilaterally suspend accounts at all? Perhaps any account suspension should be reviewed by an actual full-time Twitter employee, or a manager, or at least a second agent?

That's one option, but then how do you deal with the vast armies of trolls, bots and spammers that seem to make up such a large proportion of Twitter's user base? Twitter isn't doing a terribly great job of keeping up with those abusers in the first place, and forcing agents to go through additional hoops before they can deal with one of these accounts would only make that issue worse.

The real problem, of course, is that @realDonaldTrump is one of the highest-profile accounts on the service. Whenever something happens on or to that account, it draws outsized attention. It's not unreasonable to suggest that you should handle those accounts differently from a support and abuse perspective. But how?

Let's track "special" accounts!

One approach is to manually create a list of super-high-profile, potentially controversial accounts and handle any customer service or abuse issues on those through an alternate means. But it's clear that Twitter would struggle to curate any such manual list, given the controversies that have popped up over other suspensions and deletions.

Once you go beyond the narrow realm of the American internet, Twitter's employees will be even more hard-pressed to maintain such a "special people" list. Do they really know the controversial high-profile accounts in Italy? What about Bangladesh? Moreover, any such manually created list will instantly lead to accusations of bias based on who is or isn't on it.

Or, you know, double down on the blue checkmark

Another option might be to use the blue checkmark as a signal of account reputation, and treat anyone with that magical indicator in a special way. This is perhaps getting us onto the right track: by the time an account gets that stamp of approval, Twitter has validated that it's tied to a real person or organization, so ostensibly they deserve more benefit of the doubt.

Unfortunately, by now there are tens of thousands of blue checkmark accounts, and there have been several cases where people with those accounts are abusive towards others. The checkmark is not a get-out-of-jail-free card for abusing Twitter's terms of service, and agents still need to be able to intervene quickly when an issue arises. Also, the checkmark itself has caused plenty of controversy, with Twitter struggling to determine who should and shouldn't be eligible for one. After all, white supremacist Richard Spencer's Twitter account has had that fancy blue sigil for months.

Twitter itself says it's added some safeguards (and if I had to guess, for now that safeguard is simply "no one gets to touch the Trump account without the approval of someone high up in legal and PR"). But there's no easy solution here.

What can be done?

I don't point out that this is a hard problem in order to shrug my shoulders and pretend there's no way to do better. Twitter has a long and sordid history of not taking these policy questions seriously enough, and this has manifested itself in a variety of ways over the past several years. It's most visible and most awful in the ongoing and overwhelming amounts of rage and abuse that folks, especially women and people of color, encounter just as soon as any of their opinions rise to any prominence.

That may finally be starting to change, as Twitter is rolling out a number of new policies (although the jury is still very much out). But there's still no simple solution to the questions highlighted by the suspension of Trump's account, and "this should just not be allowed" is a great example of why great product managers ask their users about their problems, not suggested solutions.

It's always dangerous to propose solutions without being directly involved in the problems3, but I will venture to suggest the outline of one: Twitter should have an internal reputation score for each account that is completely opaque to the outside observer.

That reputation score should control what a user is able to do, from messaging others, to appearing in other people's notifications, to having their tweets surface more or less prominently in lists of replies to a given tweet. Furthermore, that reputation score should determine which support group the account is bucketed into, and folks with a high score should be handled by more experienced, better-trained support agents. Then you can prevent your lowest tier of agents from touching those accounts.
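
To make the idea concrete, here's a sketch of how such a score might gate both product behavior and support routing; the tiers and cutoffs are invented purely for illustration, not a claim about how Twitter works:

```python
# The internal score decides which support queue handles reports about an
# account, and what the account itself is allowed to do.
def support_tier(reputation: float) -> str:
    if reputation >= 0.9:
        return "senior_agents_plus_policy_review"  # heads of state, major orgs
    if reputation >= 0.5:
        return "experienced_agents"
    return "standard_queue"

def can_appear_in_notifications(reputation: float) -> bool:
    # Low-reputation accounts get quietly de-prioritized rather than banned.
    return reputation >= 0.2

print(support_tier(0.97))                 # -> senior_agents_plus_policy_review
print(can_appear_in_notifications(0.05))  # -> False
```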

The score must be opaque for three important reasons:

  • To prevent users, especially the abusive ones, from gaming it. A public score is likely to be (once again) perceived as an endorsement by Twitter, and that will set off an arms race for people to become "recognized."
  • An opaque score allows Twitter to change how the score is defined without public scrutiny. No such scoring system can be perfect from day one, and Twitter will need to iterate and evolve it in response to new data.
  • Importantly, an opaque score can allow Twitter to use internal metrics that it might not want to reveal, even via a proxy measurement. Those internal metrics can be used to create a richer view of those users.

How do you generate that internal reputation score? The blue checkmark could be one factor, if it were less of a status symbol and more of a confirmation of an account's authenticity. An account's follower and following counts, and its engagement with others, is another. The number of times an account's posts are reported is a third, and can be used to identify accounts that are more controversial. Every interaction with your product can help paint a comprehensive picture of what kind of user this person is, and whether or not you want them on your service.
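
Here's one hedged sketch of combining those signals into a single number; the signal names and weights below are purely illustrative, and a real system would tune them continuously:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    verified: bool        # authenticity confirmation, not a status symbol
    followers: int
    following: int
    engagements_90d: int  # interactions given and received over 90 days
    reports_90d: int      # how often the account's posts were reported

def reputation(s: AccountSignals) -> float:
    # Made-up weights, kept private in practice precisely so they can't be gamed.
    score = 0.0
    score += 0.25 if s.verified else 0.0
    score += min(s.followers, 1_000_000) / 1_000_000 * 0.35
    score += min(s.engagements_90d, 10_000) / 10_000 * 0.25
    score += 0.15 if s.following > 50 else 0.0     # not a pure broadcast bot
    score -= min(s.reports_90d, 500) / 500 * 0.40  # reports drag the score down
    return max(0.0, min(1.0, score))

print(round(reputation(AccountSignals(True, 2_000_000, 400, 9_000, 12)), 2))
# -> 0.97
```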

With that internal reputation scoring in place, you can do more to advantage users who have invested time and years into the platform and to disadvantage new accounts and bots in subtle ways that won't be immediately detectable and exploitable. You can ensure that voices aren't being inadvertently silenced on your platform. And you can confirm your best users are getting support from your best-trained agents.

That doesn't mean that you can't suspend Donald Trump's account if you conclude your policies mean you should: just that the people making that decision should be the ones best equipped to do so.

Fixing the root issue

There's an even more important solution than the technological one, though: Twitter, and other companies in the user-generated content business, have to understand that a platform that is safe, inclusive and thriving requires more hands-on management and curation than they've had to date.

That's inevitably going to require hiring more highly-skilled, well-trained employees, ideally directly, and empowering them to create positive feedback cycles with the product team. Too many content companies have gotten caught up in the rhetoric of being a platform to consider that you have to craft the platform you want to become.

Kevin Systrom, CEO of Instagram, talks quite persuasively in this Recode Decode interview about the need to prune the trolls and about how he thinks of Instagram's role in making the internet a better place; we need more CEOs to embrace this viewpoint.

It's true that we've only scratched the surface of the technical solutions to all of this, and those will inevitably improve over time as machine learning enables automated systems to better categorize content. But machine learning will not replace the need for actual humans making thoughtful decisions in order to make their platform a better place every single day. Moreover, you need those humans in each major market, because only locals or very, very close observers can help you navigate the unique culture and problems of netizens in each country.

The alternative is continuing to make awkward mistake after awkward mistake, until the good reputations that have benefited tech companies so much thus far are irretrievably lost. If that happens, we won't be able to address problems thoughtfully and in ways that make sense for our platforms: the solutions will be imposed on the tech industry, in ways almost guaranteed to be short-sighted and overbroad.

Photo by Tim Gouw on Unsplash.

  1. One rare counterexample: many companies automatically screen all content for known child pornography images and report those to the National Center for Missing and Exploited Children. 

  2. In rare cases, such as on forums, moderators do have the ability to edit users' posts, generally to remove personally-identifying information or attacks, but every forum software I've ever encountered makes it clear the post was edited and by whom. 

  3. Not that I haven't been guilty of this before, but here's why it's risky: if you at least give the folks at these companies a little bit of the benefit of the doubt, you have to consider that there are smart people who can come to the same conclusions you can. Which means, most likely, that there are good reasons why they haven't done the seemingly-obvious things. Or there are less than great reasons, but ones that are hard internally: a culture that is still holding on to outdated assumptions, or a massive backlog of more urgent work, for instance. 

Distinguished reader! If you liked this article, why not follow me on Twitter?
