A risk to Australia's cybersecurity landscape


A recent study by Western Sydney University, Adult Media Literacy in 2024, revealed worryingly low levels of media literacy among Australians, particularly given the deepfake capabilities unleashed by new AI technologies.

This shortcoming poses a risk to cybersecurity, given that human error remains the leading cause of security breaches. As disinformation and deepfakes become increasingly sophisticated, the need for a cohesive national response is more urgent than ever, the report noted.

Because AI can produce highly convincing disinformation, the risk of human error is magnified. People who are not media-savvy are more likely to fall for such traps, potentially compromising sensitive information or systems.

The growing threat of disinformation and deepfakes

While AI offers undeniable benefits in generating and distributing information, it also presents new challenges, including disinformation and deepfakes, that can only be mitigated by high levels of media literacy across the country.

Tanya Notley, an associate professor at Western Sydney University who was involved in the report on adult media literacy, explained that AI introduces some particular complexities to media literacy.

“It’s becoming increasingly difficult to identify where AI has been used,” she told TechRepublic.

To overcome these challenges, people need to understand how to verify the information they see and how to differentiate between a quality source and one that is likely publishing deepfakes.

Sadly, around 1 in 3 Australians (34%) say they have “little confidence” in their media literacy. Education is a factor: only 1 in 4 Australians (25%) with a low level of education say they are confident in verifying information they find online.

Why media literacy is important for cybersecurity

The connection between media literacy and cybersecurity may not be obvious at first glance, but it is critical. A recent study by Proofpoint found that 74% of CISOs consider human error to be the “most important” vulnerability for organizations.

Low media literacy exacerbates this problem. When people cannot effectively assess the credibility of information, they become more susceptible to common cybersecurity threats, including phishing scams, social engineering, and other forms of manipulation that lead directly to security breaches.

A now-infamous example came to light in May 2024, when it was confirmed that cybercriminals had used a deepfake to impersonate the CFO of the engineering firm Arup, convincing an employee to transfer $25 million to a series of Hong Kong bank accounts.

The role of media literacy in national security

As Notley noted, improving media literacy is not just a matter of education. It is a national security imperative, particularly in Australia, a country where there is already a shortage of cybersecurity personnel.

“Focusing on just one thing, as many people have, like regulation, is insufficient,” she said. “We actually have to have a multifaceted approach, and media literacy does a number of different things. One of them is increasing people’s knowledge of how generative AI is being used and how to think critically and ask questions about it.”

According to Notley, this multi-pronged approach should include:

  • Media literacy education: Educational institutions and community organizations should implement robust media literacy programs that provide people with the skills necessary to critically evaluate digital content. This education should cover not only traditional media, but also the nuances of AI-generated content.
  • Regulation and policy: Governments must develop and enforce rules that hold digital platforms accountable for the content they host. This includes requiring transparency about AI-generated content and ensuring that platforms take proactive measures to prevent the spread of disinformation.
  • Public awareness campaigns: National campaigns are needed to raise awareness about the risks associated with low media literacy and the importance of being critical consumers of information. These campaigns should be designed to reach all demographic groups, including those who are less likely to be digitally literate.
  • Industry collaboration: The information technology industry plays a crucial role in improving media literacy. By partnering with organisations such as the Australian Media Literacy Alliance, technology companies can contribute to the development of tools and resources that help users identify and resist disinformation.
  • Training and education: Just as first aid and workplace safety drills are considered essential and regularly refreshed across an organization, media literacy should become a mandatory part of employee training, updated regularly as the threat landscape changes.

How the IT industry can support media literacy

The IT industry has a unique responsibility to leverage media literacy as a core component of cybersecurity. By developing tools that can detect and flag AI-generated content, tech companies can help users navigate the digital landscape more safely.

As Proofpoint’s research notes, CISOs who are concerned about the risk of human error are nonetheless optimistic about the ability of AI-powered solutions and other technologies to mitigate human-centric risks, a sign that technology can help solve the problems technology creates.

However, it is also important to create a blame-free culture. One of the main reasons human error is such a significant risk is that people are often afraid to speak up for fear of being punished or even losing their jobs.

Ultimately, one of the strongest defenses against misinformation is the free and trustworthy exchange of information. CISOs and IT teams should actively encourage people to speak up, flag content that concerns them, and report immediately if they fear they have fallen for a deepfake.
