(Reuters) – Twitter Inc will test sending users a prompt when they reply to a tweet using “offensive or hurtful language,” in an effort to clean up conversations on the social media platform, the company said in a tweet on Tuesday. When users hit “send” on their reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise the reply before posting it.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed by users flagging rule-breaking tweets and by technology. Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content. The company took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.
Source: Reuters.com