Friday, April 26, 2024

Twitter Asks Users for Their Ethnicity after Warning Them to ‘Rethink’ Tweets

'We’re trying to encourage people to rethink their behavior and rethink their language before posting...'

Twitter is implementing a feature that will prompt users to “rethink” their tweet before they send it, and then ask them to reveal their ethnicity, Reclaim the Net reported.

“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” said the Twitter support team when they announced testing for the feature in May.

After Twitter users type a tweet and hit send, an automated system may respond with a warning based on words and phrases that the company and its users have deemed “harmful.”

If any allegedly harmful words or phrases are in the tweet, then the “rethink” message will appear, and users must amend the tweet, confirm it, or abandon it altogether.

If that isn’t dystopian enough, Twitter will ask users for their ethnicity after they try to send a tweet that contains badThink.

It’s not yet clear whether the “rethink” feature will be biased against American, Christian, and conservative beliefs and ideas, as most social media platforms are with regard to other features and standards.

“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter’s global head of site policy for trust and safety, Reclaim the Net reported.

The prompt then presents users with five “agree or disagree” statements about the Twitter feature that flagged their tweet as harmful.

Users are asked to agree or disagree with the following statements:

  1. I was speaking out against hate speech.
  2. It’s important for me to use this type of language to defend myself or others.
  3. The language in my Tweet is not offensive or disrespectful.
  4. The person I’m replying to would not consider this Tweet offensive.
  5. Twitter is unfairly targeting me for the type of language that I use.

The feature comes as Twitter has been on a mission to crack down on hate speech, which the company has defined according to its radical leftist interpretation of the world.

As part of this effort, Twitter punished more than 584,000 accounts for “hate” messages and another 396,000 for “abusive” messages during the first six months of 2019, Twitter revealed in its transparency report.

Instagram is pursuing a similar method to impose technological and social controls on the way humans interact with each other online.

As part of its effort to stop “harassment before it even starts,” Instagram will notify users that they are “about to leave a negative comment,” Reclaim the Net reported.

Copyright 2024. No part of this site may be reproduced in whole or in part in any manner other than RSS without the permission of the copyright owner. Distribution via RSS is subject to our RSS Terms of Service and is strictly enforced. To inquire about licensing our content, use the contact form at https://headlineusa.com/advertising.