Twitter Introduces New Tools to Reduce the Impact of Trolls and Abuse

Twitter’s VP of Engineering Ed Ho announced that the platform would be ramping up its efforts to combat trolls and abuse, noting that they needed to do more on this front:

We heard you, we did not move fast enough last year; now we are thinking about progress in days & hours not weeks and months.

— Ed Ho (@mrdonut) January 31, 2017

Ho’s statements sounded strong, and they were backed up by CEO Jack Dorsey. Perhaps Twitter was finally going to take the lead on this and work to meaningfully reduce the impact of on-platform harassment.

That hope was dampened, however, when a few days later Twitter announced this:

We heard your feedback. You can report Tweets that mention you, even if the author has blocked you. Learn more:

— Twitter Safety (@TwitterSafety) February 1, 2017

An underwhelming start – basically, this means that you can now report people who are harassing you even if you are not necessarily seeing their tweets, addressing a long-time flaw in the system.

But while this was not the ground-breaking progress many had hoped for, it was a start, and a positive step to see Twitter actively working towards stamping out anti-social activity.

Furthering this new push, today (Safer Internet Day), Twitter has announced three new measures to combat trolls and abuse. And while they are still not necessarily game-changers, they do underline that Twitter is taking action and working to solve one of its core problems.

Here is what’s been announced.

1. “Stopping the creation of new abusive accounts”

The first measure goes to the heart of one of Twitter’s biggest difficulties – namely, that abusers, once reported, can simply open a new account and start harassing you again.

To combat this, Twitter has announced that they are “taking steps to identify people who’ve been permanently suspended and stop them from creating new accounts”.

How Twitter might actually go about doing this is not explained – Twitter is not providing any details, as doing so would help those trying to circumvent the system. Possible methods could include detecting a user’s IP address and blocking new accounts created from it (though this may not be possible in all cases), or identifying similar accounts made shortly after a suspension has been implemented.
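Since Twitter hasn't shared its approach, here's a purely illustrative sketch of the IP-matching idea described above – flagging a new signup whose IP address was recently tied to a suspended account. Every name, record, and threshold here is an assumption for illustration, not Twitter's actual system:

```python
# Illustrative only: flag signups sharing an IP with a recent suspension.
# All data, names, and the 30-day window are hypothetical assumptions.

from datetime import datetime, timedelta

# Hypothetical record of suspensions: IP address -> time of suspension
suspended_ips = {
    "203.0.113.7": datetime(2017, 2, 1, 12, 0),
}

def is_suspicious_signup(ip: str, signup_time: datetime,
                         window: timedelta = timedelta(days=30)) -> bool:
    """Flag a signup if its IP was tied to a suspension within `window`."""
    suspended_at = suspended_ips.get(ip)
    if suspended_at is None:
        return False
    return signup_time - suspended_at <= window

# A signup from a flagged IP shortly after a suspension would be held
# for review rather than created automatically.
print(is_suspicious_signup("203.0.113.7", datetime(2017, 2, 10)))   # True
print(is_suspicious_signup("198.51.100.4", datetime(2017, 2, 10)))  # False
```

In practice, as the article notes, IP matching alone wouldn't be enough – shared and rotating IPs would force a real system to combine many signals.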

As noted, there are ways to circumvent such processes, which is why Twitter is not sharing the details, but if effective, the new process could help Twitter eliminate repeat offenders – and may even give the platform a way to get rid of fake accounts used by bot traders.

Bot traffic has come under increased scrutiny in the wake of the recent US Presidential Election, with a study identifying huge networks of fake accounts being used to send spam and boost interest in trending topics. If Twitter can improve its methods of identifying the sources of such traffic, it may be able to tackle this problem too – though given the market emphasis on active users, there’s a question of whether it’s in Twitter’s interest to eliminate such fakes, beyond abusive profiles.

2. “Introducing safer search results”

Twitter’s introducing a new ‘safe search’ option which “removes Tweets that contain potentially sensitive content and Tweets from blocked and muted accounts from search results.”

You would think the removal of accounts you have blocked and/or muted from search would be a given, but evidently not.

The new option will give users the ability to remove such results from their search experience – you’ll still be able to find this content if you want to, but, according to Twitter, “it will not clutter search results any longer”.

Users will be able to control these search filters once they are made available.

3. “Collapsing potentially abusive or low-quality Tweets”

And the last addition Twitter’s announced is an algorithm-defined way to order tweet responses, hiding “potentially abusive and low-quality replies”.

Twitter is using machine learning to identify lower quality tweets, drawing on signals like account age, follower-to-following ratio, and other spam-detection measures to classify the originating author and filter their replies accordingly.
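To make the idea concrete, here's a minimal sketch of how signals like these could be combined into a reply-quality score. The signal weights, threshold, and function names are all assumptions for illustration – Twitter's actual model is a trained machine-learning classifier, not a hand-tuned formula like this:

```python
# Illustrative only: combine simple account signals into a quality score.
# Weights and the 0.2 threshold are made-up assumptions, not Twitter's model.

from datetime import date

def reply_quality_score(account_created: date, followers: int,
                        following: int, today: date) -> float:
    """Blend account age and follower ratio into a rough 0..1 score."""
    account_age_days = (today - account_created).days
    age_signal = min(account_age_days / 365.0, 1.0)    # older account = better
    ratio_signal = min(followers / max(following, 1), 1.0)  # follower/following
    return 0.5 * age_signal + 0.5 * ratio_signal

def should_collapse(score: float, threshold: float = 0.2) -> bool:
    """Low-scoring replies get hidden behind the 'show more' prompt."""
    return score < threshold

# A day-old account following far more people than follow it back
# scores low, so its replies would be collapsed.
score = reply_quality_score(date(2017, 2, 6), followers=3,
                            following=2000, today=date(2017, 2, 7))
print(should_collapse(score))  # True
```

Note that collapsing, rather than deleting, is what makes this approach tolerable to false positives – a wrongly flagged reply is still one tap away.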

These ‘low quality’ responses won’t be removed entirely – they’ll be hidden behind a ‘Show less relevant replies’ prompt, similar to a junk e-mail folder. Given this, Twitter is not actually eliminating abusive content, just hiding it from view, but as these filters will be applied to all accounts, the results could have a meaningful effect. Giving these accounts less visibility will decrease their presence, and hopefully, their motivation to tweet such comments.

These are Twitter’s latest efforts in their ongoing push to eliminate trolls and abuse, a problem that has plagued the platform for years. Indeed, reports last year suggested that one of the key reasons potential suitors opted against making serious bids to buy Twitter was the platform’s abuse problems and the potential damage they could cause.

On this, it’s interesting to note that Twitter shares have risen to their highest levels in recent months on the back of today’s announcement.

Twitter introduced the ability to mute specific words from your timeline back in November, as well as an AI-powered quality filter which can detect and remove questionable tweets from your timeline.

Those measures show that Twitter is working to address the problem, and that they’re actively seeking new solutions to one of social media’s biggest pain points. Those solutions aren’t easy – there’s no simple way to filter a user’s timeline, especially given the real-time engagement that makes Twitter what it is.

But Twitter is trying to find answers and ways to improve community safety.

And they are not done yet – as noted by Ho:

We will be rolling out a number of product changes in the days ahead. Some changes will be visible & some will be less so.

— Ed Ho (@mrdonut) January 31, 2017

These new measures aren’t a magic bullet to eliminate trolls and abuse, but such an option simply doesn’t exist. Hopefully, through the accumulation of tools and options – and increased transparency on their efforts – we’ll see Twitter take positive steps towards building a safer environment.


