Hate speech is still too easy to find on major social media sites

Shortly after the Pittsburgh synagogue shooting, I noticed that the word “Jews” was trending on Twitter. As a social media researcher and educator, I was concerned that violence would spread online, as it has in the past.

The alleged synagogue shooter's activity on the social media site Gab has drawn attention to that site's role as a hateful alternative to more mainstream options like Facebook and Twitter. Those platforms are among the social media companies that have vowed to combat hate speech and abuse on their sites.

However, as I explored online activity after the shooting, I quickly realized that the problems are not only found on sites like Gab. Rather, hate speech is still easy to find on major social media sites, including Twitter. I also identified some additional steps the company could take.

Incomplete responses to new hate terms

I expected new threats to emerge online around the Pittsburgh shooting, and there were signs that it was happening. In a recent anti-Semitic remark, the leader of the Nation of Islam, Louis Farrakhan, had used the word “termite” to describe Jewish people. I searched for this term, knowing that racists would likely adopt the new slur as a keyword to avoid detection when expressing anti-Semitism.

Twitter had not suspended Farrakhan’s account in the wake of yet another of his anti-Semitic remarks, and Twitter’s search function automatically suggested that I might be looking for the phrase “termites devour bullets.” That turns the Twitter search box into a billboard for hate.

However, the company had apparently adjusted some of its internal algorithms, because my search results did not show tweets with anti-Semitic uses of the word “termite.”

Posts that went unnoticed for years

As I continued my search for hate speech and calls for violence against Jews, I found even more disturbing evidence of shortcomings in Twitter’s content moderation system.

In the wake of the 2016 U.S. election and the discovery that Twitter had been used to influence it, the company said it was investing in machine learning to “detect and mitigate the effect on users of fake, coordinated and automated account activity.”

Based on what I found, these systems have not identified even very simple, clear and direct violent threats and hate speech that have been on the site for years.

When I reported a tweet posted in 2014 that advocated killing Jewish people “for fun,” Twitter took it down the same day, but its standard automated notice gave no explanation of why the tweet had gone untouched for more than four years.

Gaming the system

When I reviewed the hateful tweets that had gone uncaught after all those years, I noticed that many contained no text: the tweet was just an image.

Without text, tweets are harder to find, both for users and for Twitter’s own hate-detection algorithms. But users who deliberately search for hate speech on Twitter can then scroll through the activity of the accounts they find, seeing even more hateful messages.

Twitter appears to be aware of this problem: Users who report a tweet are prompted to review a few other tweets from the same account and submit them at the same time. This ends up submitting a bit more content for review, but still allows some of it to go undetected.

Help for the struggling tech giants

When I found tweets that I believed violated Twitter’s policies, I reported them. Most of them were removed quickly, some within an hour. But other obviously hateful posts took several days to come down.

There are still some text-based tweets that haven’t been removed, despite clearly violating Twitter’s policies. That shows that the company’s content review process is not consistent.

It may appear that Twitter is getting better at removing harmful content, since it does take down a great deal of content and memes and suspends many accounts, but much of that activity is not related to hate speech.

Rather, much of Twitter’s attention has focused on what the company calls “coordinated manipulation,” such as bots and rogue profile networks run by government propaganda units.

In my view, the company could take a significant step by enlisting the help of members of the public, as well as researchers and experts like me and my colleagues, to identify hateful content.

It’s common for tech companies, including Twitter, to offer payments to people who report security vulnerabilities in their software.

However, all the company does for users who report problematic content is send an auto-generated message saying “thank you.” The disparity in the way Twitter handles code issues and content reporting sends a message that the company prioritizes its technology over its community.

Instead, Twitter could pay people to report content that violates its community guidelines, offering financial rewards for rooting out the social vulnerabilities in its system, as if those users were helping it identify software or hardware problems.

A Facebook executive expressed concern that this potential solution could backfire and lead to more hate online, but I think the bounty program could be structured and designed in a way that avoids that problem.

Much more to do

There are other problems with Twitter that go beyond what is posted directly on its own site. People who post hate speech often take advantage of a key feature of Twitter: the ability of tweets to include links to other content on the Internet.

That feature is critical to the way people use Twitter, sharing content of mutual interest across the web. But it is also a method of distributing hate speech.

For example, a tweet may appear totally innocent, saying “This is fun” and providing a link. But the link, to content not hosted on Twitter’s servers, leads to a hate-filled message.

A surprising number of profiles have account names and Twitter handles that carry hateful messages.

Additionally, Twitter’s content moderation system only allows users to report hateful and threatening tweets, but not accounts whose profiles contain similar messages.

Some of these accounts, including ones with profile photos of Adolf Hitler and names and Twitter handles that advocate burning Jews, don’t even tweet or follow other Twitter users.

Sometimes they may simply be found when people search for the words in their profiles, once again turning Twitter’s search box into a delivery system for hate.

These accounts may also be used, though it is impossible to know, to communicate with others on Twitter via direct message, using the platform as a covert communication channel.

Without tweets or other public activity, it is impossible for users to report these accounts through the standard content reporting system. But they are just as offensive and harmful, and should be evaluated and moderated like any other content on the site.

As people seeking to spread hate become increasingly sophisticated, Twitter’s community guidelines, and more importantly its enforcement efforts, need to catch up and keep up.

If social media sites want to avoid becoming, or remaining, vectors of information warfare and plagues of hateful ideas and memes, they must step up much more actively and, at the very least, have their thousands of full-time content moderation employees search the way a professor did over the course of a weekend.

This article was published in The Conversation by Jennifer Grygiel, assistant professor of communications at Syracuse University, under a Creative Commons license. Read the original article.