Twitter is implementing a feature that will prompt users to “rethink” their tweet before they send it and then ask them to reveal their ethnicity, Reclaim the Net reported.
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” said the Twitter support team when they announced testing for the feature in May.
After Twitter users type a tweet and hit send, an automated system may respond with a warning. These warnings are based on words and phrases that the company and users have deemed “harmful.”
If any allegedly harmful words or phrases are in the tweet, the “rethink” message will appear, and users will have to amend the tweet, confirm it, or abandon it altogether.
If that isn’t dystopian enough, Twitter will ask users for their ethnicity after they try to send a tweet that contains badThink.
It’s not yet clear whether the “rethink” feature will be biased against American, Christian, and conservative beliefs and ideas, as most social media platforms are with their other features and standards.
“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter’s global head of site policy for trust and safety, Reclaim the Net reported.
The prompt will then present users with five “agree or disagree” statements about the system that flagged the tweet as harmful.
Users are asked to agree or disagree with the following statements:
- I was speaking out against hate speech.
- It’s important for me to use this type of language to defend myself or others.
- The language in my Tweet is not offensive or disrespectful.
- The person I’m replying to would not consider this Tweet offensive.
- Twitter is unfairly targeting me for the type of language that I use.
The feature comes as Twitter has been on a mission to crack down on hate speech, which the company has defined according to its radical leftist interpretation of the world.
As part of this effort, Twitter punished more than 584,000 accounts for “hate” messages and another 396,000 for “abusive” messages during the first six months of 2019, the company revealed in its transparency report.
Instagram is pursuing a similar method to impose technological and social controls on the way humans interact with each other online.
As part of its effort to stop “harassment before it even starts,” Instagram will notify users that they are “about to leave a negative comment,” Reclaim the Net reported.