In today's digital environment, children are routinely exposed to harmful, age-restricted and even illegal online content. The continued proliferation of smart devices and easy, anywhere access to user-generated content (UGC) through social media and chat platforms only exacerbates the problem.
With the touch of a button, people can now be exposed to more adult, extreme and illegal content than ever before. In the UK, it has been revealed that children themselves are now the biggest perpetrators of sexual abuse against other children, accounting for 52% of reported cases.
The UK's National Police Chiefs' Council (NPCC) has directly linked this trend to the ease of access young people have to the internet via smartphones. Meanwhile, the Internet Watch Foundation (IWF) has warned that new technologies such as artificial intelligence (AI) will only further exacerbate the creation, distribution and accessibility of illegal content such as child sexual abuse material (CSAM).
Safer Internet Day brings together thousands of young people, parents and organizations each year to raise awareness about how children can stay safe online. It provides an opportunity to bring the conversation, and the application of safer practices, into the mainstream, helping to protect children from inappropriate or illegal content and from straying onto the dark web.
This year's event should also serve as a springboard for companies to improve their age assurance and content moderation, ensuring they are doing everything they can to protect the young and vulnerable.
Ease of access in an online world
While easy access to the Internet has brought numerous benefits, the truth is that as a result of its rapid and often unregulated development, children now have unprecedented access to illegal and age-restricted online content.
Naturally, this is a top concern for parents – Ofcom research reveals that 75% are concerned about their children viewing inappropriate content online, with 73% specifying adult or sexual content.
Not only is viewing this content horrifying in itself, it also has a detrimental impact on children's long-term wellbeing and mental health, as well as distorting young people's views on sex and appropriate behaviour.
Many popular social media sites allow users to create accounts from as young as 13 years of age, even though illegal and age-restricted content can readily be accessed and shared on them. This, along with the camera and video capabilities of modern smart devices, makes it very easy for users to quickly produce, upload and view inappropriate content.
The digital world therefore needs to catch up with the offline world, where checking an individual's identity documents and physical appearance is the norm, making it far easier and more practical to restrict access to age-restricted products and content.
Age assurance and content moderation
To overcome these challenges, organizations must quickly implement and improve their content moderation and age assurance infrastructure. The advent of technologies like AI has made it easier and more practical than ever to identify and eradicate illegal online content accurately, at scale and at low cost.
With regulatory guidance on online safety measures still being issued, organizations have historically had few commercial incentives to implement these technologies. But the wellbeing of our young people is at stake. Instead of pointing fingers, now is the time to implement pragmatic solutions to the question of how best to protect children online. Companies should engage and partner with subject-matter experts in this area, including regulators and safety technology providers. This way, we can build an ecosystem of effective solutions that, when implemented, truly protect young people when they go online.
With a host of age assurance solutions available, such as email address age estimation, businesses have the tools at their disposal to verify the age of customers with minimal friction. At the same time, content moderation tools now enable uploaded or live-streamed content to be analysed in real time, before it is published, instantly flagging or removing illegal material.
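At its simplest, the pre-publication flow described above is a gate that classifies each item before it goes live and routes it to publication, human review, or removal. The sketch below is purely illustrative: the `BLOCKLIST` keyword check is a hypothetical stand-in for a real AI classifier or vendor moderation API, and the thresholds are arbitrary.

```python
# Illustrative pre-publication moderation gate (not a real moderation API).
# A production system would call an ML classifier or vendor service here;
# the keyword screen below is only a placeholder for that decision step.

BLOCKLIST = {"banned_term_a", "banned_term_b"}  # hypothetical flagged terms

def moderate(upload: str) -> str:
    """Classify an uploaded text item as 'publish', 'flag', or 'remove'."""
    words = set(upload.lower().split())
    hits = words & BLOCKLIST
    if len(hits) >= 2:
        return "remove"   # clear violation: blocked before publication
    if hits:
        return "flag"     # borderline: held back for human review
    return "publish"      # nothing detected: goes live
```

The key design point the article makes is the ordering: the check runs *before* publication, so illegal material is intercepted rather than taken down after the fact.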
Additionally, implementing proactive solutions, such as user and participant verification, facilitates consent and reduces the risk of revenge porn, intimate image abuse, exploitation, slavery, and sex trafficking.
Security and privacy
Despite the tools available, there is still work to be done to protect children online and remove illegal content from sites. This challenge is often exacerbated by the ongoing security-versus-privacy debate, in which technology and social media companies assert the importance of encryption for keeping user data safe. The problem, however, is that bad actors can abuse encryption to distribute and circulate illegal or age-restricted content online.
This is a debate that will only grow in importance in the coming months, especially now that the UK Online Safety Act has become law. Fortunately, as new privacy-preserving age assurance tools, such as email address age estimation, emerge, organizations will be able to mitigate this concern. This is something that was specifically mentioned in recent guidance from the Information Commissioner's Office (ICO), the UK's data protection regulator, which enforces the UK GDPR.
It therefore stands to reason that, as regulators enforce these strong new laws and technology and social media companies adapt to the changes, companies must take responsibility for their own platforms. That means taking the lead in implementing the content moderation and age assurance tools that can drive meaningful change in their industries. Safer Internet Day offers the perfect opportunity to start doing just that.
We've featured the best parental control app.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here.