Twitter will test notifying users when they reply to a tweet using offensive or hurtful language, in an attempt to clean up conversations on the social media platform, the company announced in a tweet Tuesday.
When users click send on their reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked if they would like to revise the reply before posting it.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed by users flagging rule-breaking tweets and by technology.
Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.
The company took action against nearly 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of 2019, according to its transparency report.
Asked whether the experiment could instead give users a playbook for finding loopholes in Twitter’s rules on offensive language, Saligram said it was focused on the majority of rule-breakers who are not repeat offenders.
Twitter said the experiment, the first of its kind for the company, will begin Tuesday and last at least a few weeks. It will run worldwide, but only for English-language tweets.