I’ll say it: I love Twitter.
I use Twitter to follow breaking news stories, to promote my work and the work of colleagues and peers I admire, and to consume and laugh at jokes and memes. I like spending time on the platform to stay informed and connect with people.
But it goes without saying that I would like Twitter a lot less if I were being bullied and harassed every day.
Harassment has been a growing problem on Twitter over the past few years. Incidents like Gamergate, the abuse aimed at Zelda Williams after her father Robin Williams’ death, and the backlash against actress Leslie Jones over her role in the all-female Ghostbusters remake shed light on the ugly side of Twitter — the side where individuals hide behind egg profile photos and false names and use hateful, discriminatory language. In this post, we’ll dive into the history of the issue and what Twitter recently announced it’s doing to fight it.
Twitter Fights Harassment: A Long Time Coming
There have been reports of harassment on Twitter for almost as long as the site has existed. Back in 2008, blogger Ariel Waldman was one of the first users to chronicle just how difficult — and sometimes impossible — it was to get Twitter to intervene in cases of repeated, pervasive harassment. A stalker published her personal and contact information on the platform, prompting a string of threats, stalking, and abusive tweets. Waldman reached out to Twitter and then-CEO Jack Dorsey for help — only to be told that the terms of service were “up to interpretation” and that the company wouldn’t intervene on her behalf.
Since then, prominent users have demanded that Twitter take a harder line and shut down accounts that exist only to spew hate. Celebrities and public figures have sometimes succeeded in getting bullies’ accounts suspended, but ordinary users demanded a better system for reporting, censoring, and silencing abusive language on the platform.
To make sure we’re all on the same page: the Twitter Rules specifically prohibit the kind of abuse we’re talking about here — threats, hate speech, impersonation, and harassment on the basis of race, ethnicity, gender, religion, sexual orientation, age, ability, disease, or nationality. Until changes as recent as March 1, 2017, however, targeted users had few options for reporting and stopping the abuse.
In December 2016, Dorsey asked for general user feedback — where else, but on Twitter:
Following in the footsteps of Brian Chesky: what’s the most important thing you want to see Twitter improve or create in 2017? #Twitter2017
— jack (@jack) December 29, 2016
A lot of people asked for the ability to edit tweets (I want that capability myself), but a huge portion of the responses centered on harassment: more and better tools for users to stop and report it, more transparency into how Twitter handles abuse, and swifter punishment and suspension of repeat offenders.
Twitter started rolling out its responses to user demands in early 2017. Most of these features are operational, but some haven’t been fully implemented, so keep an eye out for these new measures if you ever have to report a tweet.
7 Ways Twitter Is Fighting Cyberbullying and Harassment
1) Expanded notification filtering
Twitter users can use this tool to filter which types of accounts they receive notifications from. For example, if you don’t want to receive notifications from a user without a profile photo, you could specify that. This tool is meant to filter out abuse from unverified accounts or specific people users have identified as unwanted.
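In spirit, this kind of filter is just a set of per-user preferences checked against attributes of the would-be notifier. Here is a minimal sketch in Python — the `Account` fields and preference names are hypothetical illustrations, not Twitter’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    has_profile_photo: bool   # e.g. an "egg" account would be False
    verified_email: bool

# Hypothetical per-user preferences: which account types may notify me
FILTER_NO_PHOTO = True
FILTER_UNVERIFIED_EMAIL = True

def allows_notification(sender: Account) -> bool:
    """Return True if a notification from this sender passes the filter."""
    if FILTER_NO_PHOTO and not sender.has_profile_photo:
        return False
    if FILTER_UNVERIFIED_EMAIL and not sender.verified_email:
        return False
    return True
```

The key design point is that filtering happens on the recipient’s side: the abusive account isn’t suspended by this feature, its notifications simply never reach you.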
2) More ways to mute content
Twitter expanded on the mute button’s capabilities so users can mute keywords or entire phrases from their notifications sections. Users can also decide how long they want to mute those words — whether it be for a day, a month, or indefinitely. In this way, you can customize which content you see in your notifications and when you see it.
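Conceptually, a mute list with durations is a mapping from phrase to expiry time, where expired entries are ignored. Here’s a small sketch of that idea in Python — the class and its behavior are my own illustration, not Twitter’s implementation:

```python
import time

class MuteList:
    """Keyword/phrase mutes with optional expiry (None = indefinite)."""

    def __init__(self):
        self._mutes = {}  # phrase -> expiry timestamp, or None for indefinite

    def mute(self, phrase, duration_seconds=None):
        """Mute a phrase for duration_seconds, or indefinitely if None."""
        expires = None if duration_seconds is None else time.time() + duration_seconds
        self._mutes[phrase.lower()] = expires

    def is_muted(self, text):
        """Return True if the text contains any currently muted phrase."""
        now = time.time()
        # Lazily drop mutes whose duration has elapsed
        self._mutes = {p: e for p, e in self._mutes.items()
                       if e is None or e > now}
        lowered = text.lower()
        return any(phrase in lowered for phrase in self._mutes)
```

Usage would look like `mutes.mute("spoiler", duration_seconds=24 * 3600)` for a one-day mute, after which the phrase reappears in your notifications automatically.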
3) Greater transparency around reporting
Previously, users had a hard time knowing whether their reports of abuse were even being processed. Now, Twitter notifies users when and if it decides to take action, so they can keep track of what they’ve reported.
4) Twitter “time-out”
In a recent article (warning: explicit/offensive language), BuzzFeed reported that some Twitter users were seeing another new feature, similar to the time-outs we all experienced as children (unless you were better behaved than I was). If a user’s tweets are flagged as abusive or otherwise in violation of the Twitter Rules, those tweets are temporarily hidden from users who don’t follow the account. Hopefully neither you nor your brand’s Twitter will ever see this notification, but the company is hoping it sends abusers a message: stop what you’re tweeting or risk further punishment.
5) Safer search results
Machine-learning algorithms will filter search results so users aren’t served content from accounts that have been reported, muted, or otherwise marked as abusive. The content will still be on Twitter if users are really looking for it, but if it could potentially be abusive, it won’t be served up as a primary search result.
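The underlying mechanism here is demotion rather than deletion: flagged content is pushed below everything else instead of being removed. A minimal sketch of that ranking idea, using hypothetical field names of my own invention:

```python
def rank_search_results(tweets):
    """Demote (rather than delete) tweets from accounts flagged as abusive.

    `tweets` is a list of dicts with hypothetical keys:
    'text', 'relevance' (higher = more relevant), and 'author_flagged'.
    Unflagged tweets sort first by relevance; flagged ones drop below
    all of them but remain in the results for those who dig.
    """
    return sorted(tweets, key=lambda t: (t["author_flagged"], -t["relevance"]))
```

This mirrors the trade-off described above: the content is still on Twitter and still findable, it just no longer surfaces as a primary result.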
6) Collapsing abusive tweets
Twitter will start identifying and hiding tweets that are deemed “low quality” or from potentially abusive accounts so users see the most relevant conversations first. Like the safe search feature, those tweets will still be on Twitter — but users have to search for them specifically.
7) Stopping creation of new abusive accounts
Using another algorithm, Twitter will prevent abusive and flagged users from creating multiple new accounts they can use to spam and harass other users. The algorithm will scan for multiple accounts from the same email addresses and phone numbers, for example, as a way to spot potential bullies.
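At its simplest, this kind of detection groups new signups by a shared contact identifier and flags identifiers reused suspiciously often. Here’s a toy sketch — the tuple format and threshold are my own simplifying assumptions; a real system would also weigh signals like IP addresses and device IDs:

```python
from collections import defaultdict

def find_linked_signups(signups, threshold=3):
    """Flag contact identifiers (email or phone) reused across many accounts.

    `signups` is a list of (handle, identifier) tuples — a hypothetical
    simplification of real signup records. Returns a dict mapping each
    identifier used by `threshold` or more accounts to those handles.
    """
    by_identifier = defaultdict(list)
    for handle, identifier in signups:
        by_identifier[identifier].append(handle)
    return {ident: handles
            for ident, handles in by_identifier.items()
            if len(handles) >= threshold}
```

The flagged groups would then feed into human review or automated limits, rather than triggering outright bans on their own.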
Machine Learning to Prevent Cyberbullying
If your personal account or your brand’s account is targeted by abuse and harassment on the platform, you now have a host of new tools at your disposal to stop it and protect your reputation.
I’m curious to see how effectively the new algorithms block both one-off and repeat offenses, and it’s gratifying to see how seriously Twitter is taking this problem. Much like Facebook’s prompt response after learning about the impact of pervasive fake news on its platform, it’s heartening to see social media platforms listening to what users ask for — and working to make social networks a safe place to be.