Twitter’s new safety policies

Read about the new safety policies being rolled out by Twitter in the coming weeks.

Written by Rebekah Connolly


Twitter has announced that they will be rolling out new safety policies over the next few months. The new policies will change some of the rules around what is allowed on Twitter and aim to improve the experience of reporting a user for inappropriate behaviour.

Here’s an outline of some of these changes and what they’ll mean for users on Twitter.

Non-consensual nudity

Twitter’s current policy

Currently, anyone who posts non-consensual nudity will be asked to delete the tweet and will be temporarily blocked from their account. This happens whether they tweeted the material deliberately or unknowingly shared it.

If they post non-consensual nudity again, their account will be permanently suspended.

Twitter’s new policy

  • Twitter will redefine the term ‘non-consensual nudity’ to include content like upskirt imagery, “creep shots,” and hidden camera content. Anyone can report this content, since the target may not know the tweets actually exist.
  • If an account is identified as the original source of non-consensual nudity and/or it is clear that the user is posting this content to intentionally harass someone, their account will be immediately and permanently suspended.
  • When a Tweet-level report is received about an account posting non-consensual nudity, a full account review will be carried out. If this review finds the account is dedicated to posting non-consensual nudity, Twitter will suspend the entire account immediately.

Unwanted sexual advances

Twitter’s current policy

Pornography and sexually charged conversation are generally allowed on Twitter. The only time Twitter will take enforcement action is when a participant in the conversation reports it. This is because it can be hard for a bystander to tell whether sexual content is wanted or consensual, so it is up to the participant to report it.

A person who is not involved in the conversation cannot report this as an unwanted sexual advance.

Twitter’s new policy

  • The Twitter Rules will be updated to make it clear that unwanted sexual advances are unacceptable. Twitter will continue to take enforcement action only if a participant in the conversation reports it.
  • Twitter are developing improvements to bystander reporting, and they will begin to look at past interaction signals (such as blocking or muting an account) to determine whether something may be unwanted.

Hate symbols and imagery

Twitter considers hateful imagery to be logos, symbols or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation or ethnicity/national origin. 

From Monday 18th December 2017, hateful imagery will not be allowed in an account’s profile image or profile header. It will be permitted in Tweets, but only when marked as sensitive media. 

Sensitive media

  • Sensitive media on Twitter includes media such as adult content, graphic violence or hateful imagery. When it appears in a Tweet, Twitter may place it behind a warning advising viewers that they will see sensitive media if they click through.

Violent groups

Twitter’s rules around violent groups specify that users cannot make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.

  • Twitter will suspend accounts that are affiliated with violent extremist groups, such as groups which use or promote violence against civilians as a means to advance their cause.
  • Both first-person and bystander reports of violent groups will be reviewed by Twitter and those who believe their account was suspended in error can submit an appeal.

Abusive profile information

Twitter now prohibits using a profile username, display name or bio to engage in abusive behaviour, such as targeted harassment or expressing hate towards a person, group or protected category.

  • Both first-person and bystander reports of abusive profile information will be reviewed by Twitter.
  • If an account is found to include any abusive behaviour, it will be permanently suspended on the first violation.
  • If someone believes their account was suspended in error, they can submit an appeal.

Glorifying violence and violent threats 

Twitter prohibits content that glorifies acts of violence in a way that could inspire others to replicate it and cause real offline danger, or where people were targeted because of their membership in a protected group.

  • Twitter have a zero tolerance policy towards violent threats due to the serious potential for offline harm. Accounts found to be posting violent threats will be permanently suspended. 
  • The consequences for accounts which violate Twitter’s glorification of violence policy depend on the severity of the violation and the account’s previous record. 

Click here to find out more about Twitter’s range of enforcement options for policy violators. 

Explaining these new rules

In order to make sure Twitter users fully understand the new rules and policies on what is and isn’t allowed on the platform, Twitter will be taking these steps:

  • Updating the Twitter Rules.
  • Updating the Twitter Media Policy to clarify what is meant by adult content, graphic violence, and hate symbols.
  • Launching a Help Centre to explain what happens when a Tweet is reported, and the different enforcement options Twitter can choose from when dealing with a report.
  • Launching policy-specific help pages to provide more details on what the policy is, examples of what kind of content crosses the line, and what to expect when a tweet is reported.
  • Updating the messages users receive when their account is locked or suspended, when an appeal is made, or when another action is taken against it.

Read our fact sheet about staying safe on Twitter.
