Twitter cracking down on hate speech, but will it go overboard?

When you put a huge group of diverse people together, whether on the internet or in the real world, there will always be bullies and people promoting hate speech. Sadly, it seems to be the American way, so when Twitter recently professed to become the traffic cop that stops this, it makes one wonder whether things can go overboard.

Most of us already realize we live in a society that is extremely politically correct, one in which people will quickly fire off a missive to silence someone while simultaneously screaming that another person is infringing on their free speech.

The only way to really police this on Twitter would be for real humans to monitor it, which we all know is impossible, so the job will be left to bots. The other thing the company will more than likely do is let users flag hate speech when they 'report' a tweet. This always backfires: sophisticated users know that it takes just a load of bots reporting tweets they don't like, tweets that are far from 'hate speech,' and suddenly that person is silenced.

This happened during the election here in America, where many Hillary Clinton supporters suddenly found their most inane posts disappearing. One was even just a picture of a tranquil lake, but this is how it works when you deal with cyberbullies.

It's a nice idea for Twitter to attempt this, but in today's world bullies and purveyors of hate speech know how to work around whatever obstacles are placed in their path.


Twitter is vowing to crack down further on hate speech and sexual harassment, days after CEO Jack Dorsey said in a tweetstorm that the company is not doing enough to protect its users.

The company has spent the last two years trying to clamp down on hate and abuse on its generally free-wheeling service.

Dorsey echoed the concerns of many users and critics who say Twitter hasn't done enough to curb the abuse. But others worry that it's muzzling free speech in the process.

In an email Twitter shared with media outlets Tuesday, the company’s head of safety policy outlined the proposed new guidelines that tighten existing rules and impose some new ones. They aim to close loopholes that allowed people to glorify violence, for example.

The email was sent to the company’s Trust and Safety Council, a group of outside organizations that advises Twitter on its policies against abuse.

“It’s good that Twitter is thinking these things through and being fairly transparent about what they are doing,” said Emma Llanso, director of the free expression project at the Center for Democracy and Technology, a nonprofit that’s a member of the Trust and Safety Council. But, she added, it will be very important to have a clear appeals process and ways to review whether the policies are effective.

Twitter sent it to the group for input, and the changes are not yet final. News of the changes was first reported by Wired.


Some of the changes are aimed at protecting women who unknowingly or unwillingly had nude pictures of themselves distributed online or were subject to unwanted sexual advances. They would also try to shield groups subject to hateful imagery, symbols and threats of violence.

Among the proposed changes, Twitter said it would immediately and permanently suspend any account it identifies as being the original poster of “non-consensual nudity,” including so-called “creep shots” of a sexual nature taken surreptitiously. Previously, the company treated the original poster of the content the same as those who re-tweeted it, and it resulted only in a temporary suspension.

It said it would also develop a system allowing bystanders on Twitter to report unwanted exchanges of sexually charged content, whereas in the past it relied on one of the parties involved in the conversation to come forward before taking action. Twitter already allows bystanders to report other violations on behalf of someone else.

The San Francisco-based company also said it would take new action on hate symbols and imagery and “take enforcement action against organizations that use/have historically used violence as a means to advance their cause,” though it said more details were to come.

It didn’t say what the hate symbols might be and enforcing this could prove difficult. While some hate symbols, like the swastika, are widely recognized, groups have also adopted lesser-known, seemingly innocuous symbols to show hate. Those include “Pepe the Frog,” a cartoon frog that has become a symbol for the “alt-right” movement known for racist and misogynist views. The milk emoji has also been used by white supremacists as an online symbol.

Twitter already takes action against direct threats of violence; the company said it would also act against tweets that glorify or condone violence.


On Friday, Dorsey foreshadowed the coming policy changes in a series of tweets, saying the company’s efforts over the last two years were inadequate.

“Today we saw voices silencing themselves and voices speaking out because we’re *still* not doing enough,” Dorsey tweeted.

At the same time, Llanso said, Twitter also must tread carefully not to sweep up legitimate discourse along with hate speech and abuse. Unlike Facebook, Twitter permits anonymity. While this can be used as a tool for abuse and harassment, it also allows for people and groups to speak out when they otherwise couldn’t.

“Any kind of policy that is about taking down speech online will be used for its intended purpose, but also by others who are looking to get things censored online,” she said. “People out there looking to silence voices they disagree with are very savvy.”

In the end, whether the policies work will have to be tested out in the field, by Twitter’s users.

“There will definitely be mistakes,” Llanso said.


The moves also come amid intense scrutiny from congressional investigators into how Russian agents used Twitter, Facebook, and Google to influence last year’s U.S. election. Twitter has said it would appear at a public congressional hearing on Nov. 1 after already briefing a Senate committee.

The company has handed over the handles, or profile names, of 201 accounts it believes were linked to Russia. It has also said at least $274,000 in U.S. ads were bought by Russia Today, a Russian-government-linked media outlet, last year.