Meta, Snapchat and TikTok are finally coming together to do something about the harmful effects of some of the content hosted on their platforms, and it's about time.
All three brands, in association with the Mental Health Coalition, are using a program called Thrive, which is designed to safely flag and share information about harmful content, focusing on content related to suicide and self-harm.
A Meta blog post says: “Like many other types of potentially problematic content, content about suicide and self-harm is not limited to a single platform… That’s why we’ve worked with the Mental Health Coalition to establish Thrive, the first signal-sharing program to share signals about violating content about suicide and self-harm.
“Thrive will allow participating technology companies to share signals about content that violates rules on suicide or self-harm so that other companies can investigate and take action if the same or similar content is shared on their platforms. Meta provides the technical infrastructure that underpins Thrive… enabling safe sharing of signals.”
When a participating company like Meta discovers harmful content on its app, it shares hashes (unique digital fingerprints derived from pieces of content related to self-harm or suicide, which identify the content without exposing it) with other tech companies, so they can scan their own databases for the same material, since it tends to spread across platforms.
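To make the mechanics concrete, here is a minimal sketch of how hash-based signal sharing can work, assuming a simple SHA-256 exact-match scheme. The function names and sample data are illustrative, not Thrive's actual interface; programs like this typically rely on purpose-built perceptual hashing (Meta has open-sourced hashers such as PDQ) so that near-duplicate copies also match.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a hex digest that identifies the content without revealing it."""
    return hashlib.sha256(data).hexdigest()

# Platform A flags a piece of violating content and shares only its hash.
shared_hashes = {content_hash(b"<flagged media bytes>")}

def scan_uploads(uploads: list[bytes], shared_hashes: set[str]) -> list[int]:
    """Return indices of uploads whose hash matches a shared signal,
    so they can be queued for review on the receiving platform."""
    return [i for i, data in enumerate(uploads)
            if content_hash(data) in shared_hashes]

# Platform B checks its own uploads against the shared hash set.
uploads = [b"harmless post", b"<flagged media bytes>"]
print(scan_uploads(uploads, shared_hashes))  # -> [1]
```

Because only the digest crosses company lines, the flagged content itself never has to be re-shared between platforms.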
Analysis: A good start
As long as platforms rely on users uploading their own content, there will be people who break the rules and spread harmful material online, whether that's scammers selling fake courses, inappropriate content on channels aimed at children, or posts about suicide and self-harm. Accounts that post this kind of content are usually adept at skirting the rules and staying under the radar long enough to reach their target audience; the content is often removed too late.
It's good to see social media platforms (which use sophisticated algorithms and casino-like architecture to keep their users engaged and automatically serve them content to interact with) actually taking some responsibility and working together. This kind of ethical cooperation between the most popular social media apps is much needed, but it should be only the first step.
The problem with user-generated content is that it needs to be constantly policed. AI can help flag harmful content automatically, but some of it will slip through the cracks regardless: much of this material is nuanced and carries subtext that a human somewhere in the chain will have to spot and flag as harmful. I'll certainly be keeping an eye on Meta, TikTok, and other companies as their policies on harmful content evolve.