Twitter have introduced a series of measures aimed at improving the experience of users on their site. The social media company have said that their efforts are focused on improving the quality of conversations on the platform by addressing dehumanising language and abusive user behaviour, and by making live video a safer experience for everyone.
Below is an outline of some of the new measures introduced on Twitter recently:
Dehumanising language on Twitter
Twitter are expanding their Hateful Conduct Policy to include content that dehumanises others because they are members of a certain group.
This means language that makes someone feel less than human, normalises violence against a group of people, or could have consequences for the person outside of Twitter will be considered abusive. Even if a tweet does not strictly break the Twitter Rules, it will still be treated as abusive if its language is dehumanising in any way.
Share your thoughts on the dehumanising language policy
Twitter are looking for public feedback on the dehumanising language policy to help them develop it further. They would like people to share the ways this new policy could affect their communities and culture, and are looking for a global perspective on the issue. If you would like to share your thoughts, you can fill out the form at the bottom of this page before Tuesday, October 9 at 2pm Irish time.
User behaviour
Twitter have changed how certain tweets are shown in search and in conversations based on the behaviour of individual accounts. The aim is to identify tweets and accounts that demonstrate abusive behaviour before other users have to report them, meaning Twitter won't have to wait for a report before taking action.
Some of the behaviours they look out for include:
- If an account has not confirmed their email address
- If the same person signs up for multiple accounts at the same time
- Accounts that repeatedly Tweet and mention accounts that don’t follow them
- Behaviour that might indicate a coordinated attack
Since this policy was introduced, abuse reports from search have dropped by 4% and abuse reports from conversations by 8%.
Live video
Twitter are introducing stricter enforcement of their guidelines for messages sent by users during live broadcasts. They hope this will make live broadcasts a safer and more enjoyable experience both for viewers and for the people running the broadcast.
Users can report and vote on chat messages they consider abusive, and group moderation determines whether someone can continue chatting. Twitter will also review, and may suspend, accounts that repeatedly violate the guidelines.
If you are watching a live broadcast and see something abusive, report it.
A new default filter has been launched to hide direct message requests that appear to be low quality. This filters spam messages and advertisements out of your message requests, allowing you to focus on responding to more genuine interactions.
Suicide and self-harm
Twitter have expanded the number of countries covered by their suicide and self-harm service. When a user appears to be tweeting about, or searching for, topics or keywords that could indicate they are in crisis, the service shows them a supportive message directing them to an organisation that can help.
Other changes include introducing security keys for login verification, and working with the Universities of Oxford and Leiden on studies of public conversation, to better understand and keep improving the Twitter experience.
Learn more about your safety on Twitter with our Twitter Safety Factsheet.