Instagram is finally making it easier to address people by their chosen pronouns! On 11 May 2021, the platform revealed on Twitter that it had launched a new feature that lets users add up to four pronouns to their profile. “Now you can add pronouns to your profile with a new field. It’s another way to express yourself on Instagram, and we’ve seen a lot of people adding pronouns already, so hopefully, this makes it even easier. Available in a few countries today with plans for more,” wrote the social media giant.
Add pronouns to your profile ✨
The new field is available in a few countries, with plans for more. pic.twitter.com/02HNSqc04R
— Instagram (@instagram) May 11, 2021
Users will be able to make their pronouns public or visible only to their followers. For now, only a few countries have access to the feature, and the company has not yet shared when it plans to expand it to all markets.
As for users under 18, their pronouns will be visible only to their followers by default. Instagram says people can fill out a form to have a pronoun added if it’s not already available, or simply add it to their bio instead. A couple of members of the Screen Shot team already have access to the pronouns setting, suggesting it’s live in the UK.
Instagram is certainly not the first platform to allow users to add pronouns to their profiles. “Dating apps, like OkCupid, have already introduced the feature, as have other apps like Lyft,” explained The Verge. Interestingly, Facebook has allowed users to define their pronouns since 2014, although the feature limited people to “he/him, she/her, and they/them.” That still appears to be the case, whereas Instagram will offer more options.
A little over six months have passed since Instagram and other social media platforms implemented the Sensitivity Screens feature in an attempt to censor images and videos representing and promoting self-harm. While the initiative initially covered only photographic and video material, it has since been extended to drawings, illustrations, cartoons, and memes depicting or describing methods of self-harm and suicide.
The announcement followed an ongoing call for action by Ian Russell, father of 14-year-old Molly Russell, the British teenager who took her own life in 2017 and whose Instagram and Pinterest accounts contained graphic material featuring self-harm. Throughout his campaign, Ian Russell has openly linked his daughter’s tragic death to Instagram’s lack of regulation when it comes to the spread of harmful content. He has stated on several occasions that Instagram played a clear role in Molly’s deteriorating mental health, letting her enter an echo chamber of hashtags, images, and groups that encouraged self-harm and linked it to suicide.
By now, we all know how the algorithm works: one post about depression might be only a few hashtags away from others about suicide. As we naively scroll down our feeds, it can be easier than expected to spiral into a hole of graphic imagery of cutting and bruising, and this normalisation of self-harm could easily trigger a particularly vulnerable person. According to Russell, who researched his daughter’s digital presence in search of answers, Pinterest was sending Molly automated emails featuring disturbing material, further evidence that social media platforms were not doing enough to block the spread and sharing of such content.
In an attempt to stem the proliferation of toxic content and the unforeseen consequences that images of self-harm are having on users, Adam Mosseri, head of Instagram, is stepping up the company’s efforts to tackle the issue by extending the existing ban to drawings and illustrations as well. Earlier this year, Instagram changed its policies and invested in new technology that allowed the platform to remove twice as much content, 77 per cent of it before it was even reported. “To help us stay aware of new trends or cultural nuances, we meet every month with academics and experts on suicide and self-harm. (…) In the UK, we are working with the Samaritans on an industry-wide effort to shape new guidelines to help people in distress. Outside of Europe, we also have additional technology that helps us proactively find people who might be in need. We want to bring this to Europe but there are important legal considerations under EU law, so we’re working with our European regulator,” Mosseri said in an official statement.
But the ban comes with side effects. The Sensitivity Screens feature has also affected accounts promoting mental health awareness and fighting the stigma surrounding the issue. Many profiles that use illustrations to educate and share information on mental health have seen their posts removed under the new rules. As necessary as it is to shield people from harmful content, it is equally important to let users share their experiences and have a medium through which to reach out for help.
Once again, the factors behind mental health issues and suicide among teenagers are countless, and blaming Instagram alone would be a feeble attempt at simplifying an incredibly complex issue. Having said that, social media platforms have a responsibility to self-regulate, and the work of people such as Molly’s father shows how crucial it is to push past social networks’ own agendas and force them to regulate themselves more carefully. But Adam Mosseri and his team are not the only ones obliged to make Instagram a safe place: users must scrutinise the platforms they use as well, and report harmful content as soon as they come across it.