Australia proposes mandatory AI safeguards


Requiring AI models to be tested, keeping humans informed and giving people the right to challenge automated decisions made by AI are just some of the 10 mandatory guardrails proposed by the Australian government to minimise AI risk and build public trust in the technology.

The safeguards, which Industry and Science Minister Ed Husic put out for public consultation in September 2024, could soon apply to AI used in high-risk environments. They are complemented by a new voluntary AI Safety Standard designed to encourage companies to adopt AI best practices immediately.

What are the proposed mandatory AI guardrails?

Australia’s 10 proposed mandatory guardrails set clear expectations for how to develop and deploy AI safely and responsibly in high-risk environments. They are intended to address the risks and harms AI can create, build public trust and give businesses greater regulatory certainty.

Guardrail 1: Accountability

Similar to the requirements of Canadian and EU AI legislation, organisations will need to establish, implement and publish an accountability process for regulatory compliance. This would include aspects such as policies for data and risk management, and clear internal roles and responsibilities.

Guardrail 2: Risk management

A risk management process will need to be established and implemented to identify and mitigate AI risks. This process must go beyond a technical risk assessment and should consider potential impacts on individuals, community groups and society before a high-risk AI system can be put into use.

SEE: 9 innovative AI use cases for Australian businesses in 2024

Guardrail 3: Data protection

Organisations will need to protect AI systems with cyber security measures to safeguard privacy, as well as develop robust data governance measures to manage data quality and provenance. The government noted that data quality directly impacts the performance and reliability of an AI model.
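The proposal does not prescribe how data governance should be implemented. As a purely illustrative sketch, a team might record basic provenance metadata for each training dataset, such as its source, a checksum and when it was collected, so quality and lineage can be audited later; the structure and field names below are assumptions, not requirements from the guardrail.

```python
# Illustrative sketch only: recording dataset provenance metadata so data
# quality and lineage can be audited later. Field names are assumptions,
# not anything prescribed by the proposed guardrails.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class DatasetRecord:
    name: str          # human-readable dataset name
    source: str        # where the data was obtained
    sha256: str        # checksum of the file at ingestion time
    collected_at: str  # ISO 8601 timestamp


def register_dataset(path: Path, name: str, source: str,
                     registry: Path = Path("data_registry.jsonl")) -> DatasetRecord:
    """Hash the dataset file and append a provenance record to a registry."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    record = DatasetRecord(
        name=name,
        source=source,
        sha256=digest,
        collected_at=datetime.now(timezone.utc).isoformat(),
    )
    with registry.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    return record
```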

Guardrail 4: Testing

High-risk AI systems will need to be tested and evaluated before being brought to market. They will also need to be continuously monitored once deployed to ensure they are performing as expected. This is to ensure they meet specific, objective and measurable performance metrics and that risk is minimised.
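The guardrail does not specify which metrics or thresholds apply. As a hedged sketch, a pre-deployment check might compare a model's measured accuracy on held-out data against an agreed minimum before release; the 0.90 threshold and the choice of accuracy as the metric are assumptions for illustration only.

```python
# Minimal sketch of a pre-deployment evaluation gate. The metric (accuracy)
# and the 0.90 threshold are illustrative assumptions; a real high-risk
# system would use metrics agreed for its specific use case.
from typing import Callable, Sequence


def accuracy(predict: Callable[[object], object],
             inputs: Sequence[object],
             expected: Sequence[object]) -> float:
    """Fraction of held-out examples the model predicts correctly."""
    correct = sum(predict(x) == y for x, y in zip(inputs, expected))
    return correct / len(expected)


def release_gate(predict: Callable[[object], object],
                 inputs: Sequence[object],
                 expected: Sequence[object],
                 threshold: float = 0.90) -> bool:
    """Return True only if measured performance meets the agreed threshold."""
    score = accuracy(predict, inputs, expected)
    print(f"Held-out accuracy: {score:.3f} (threshold {threshold})")
    return score >= threshold
```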


Guardrail 5: Human control

High-risk AI systems will require significant human oversight. This means organisations must ensure humans can effectively understand the AI system, monitor its operation, and intervene when necessary across the AI supply chain and throughout the AI lifecycle.
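How oversight is achieved is left to organisations. One common pattern, assumed here purely for illustration rather than taken from the proposal, is to route low-confidence decisions to a human reviewer instead of acting on them automatically.

```python
# Illustrative human-in-the-loop pattern: an AI decision is applied
# automatically only when the model is confident; otherwise the case is
# queued for a human reviewer. The 0.8 cut-off is an assumption.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # the action or outcome the model recommends
    confidence: float  # model's confidence in that recommendation


human_review_queue: list[dict] = []


def apply_with_oversight(case_id: str, decision: Decision,
                         confidence_floor: float = 0.8) -> str:
    """Apply the AI decision automatically or escalate it to a human."""
    if decision.confidence < confidence_floor:
        human_review_queue.append({"case": case_id, "suggested": decision.label})
        return "escalated to human reviewer"
    return f"auto-applied: {decision.label}"
```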

Guardrail 6: User information

Organisations will need to inform end users if they are subject to AI-based decisions, interact with AI or consume AI-generated content, so that they are aware of how AI is being used and where it impacts them. This should be communicated in a clear, accessible and relevant manner.
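The form of that disclosure is not prescribed. As a simple, hypothetical illustration, a system could attach a machine-readable flag and a plain-language notice to generated content so downstream interfaces can tell users it was produced by AI; the field names and notice text below are assumptions.

```python
# Sketch of attaching an AI-use disclosure to generated content so end users
# can be told they are interacting with AI. Field names and the notice text
# are assumptions for illustration only.
import json
from datetime import datetime, timezone


def with_disclosure(generated_text: str, model_name: str) -> str:
    """Wrap model output in a payload that flags it as AI-generated."""
    payload = {
        "content": generated_text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This response was generated by an AI system.",
    }
    return json.dumps(payload)
```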

Guardrail 7: Challenging AI

People who are adversely affected by AI systems will have the right to challenge their use or outcomes. Organisations should establish processes for people affected by high-risk AI systems to challenge decisions made using AI or to lodge complaints about their experience or the treatment they received.

Guardrail 8: Transparency

Organisations must be transparent with the AI supply chain about data, models and systems so risks can be effectively addressed. This is because some actors may lack critical information about how a system works, resulting in limited explainability, similar to the issues with today’s advanced AI models.
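The proposal does not name a mechanism for this, but a lightweight "model card" shared with downstream deployers is one way, assumed here for illustration, to pass on information about training data, intended use and known limitations; the schema and values below are hypothetical.

```python
# Sketch of a minimal model card shared along the AI supply chain so
# deployers are not missing critical information about how a system works.
# The schema and every value here are illustrative assumptions.
import json

model_card = {
    "model": "example-classifier",  # hypothetical model name
    "version": "1.2.0",
    "intended_use": "Triage of customer support tickets.",
    "out_of_scope_use": "Decisions with legal or safety effects.",
    "training_data_summary": "De-identified internal support tickets, 2022-2023.",
    "known_limitations": [
        "Lower accuracy on non-English tickets.",
        "Not evaluated on accessibility-related requests.",
    ],
    "evaluation": {"metric": "accuracy", "held_out_score": 0.91},
    "contact": "ai-governance@example.com",
}

print(json.dumps(model_card, indent=2))
```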

Guardrail 9: AI records

A range of records on AI systems, including technical documentation, will need to be kept and maintained throughout their lifecycle. Organisations should be prepared to provide these records to relevant authorities on request, so that compliance with the guardrails can be assessed.
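The proposal lists record-keeping obligations but not a format. A hedged example of the kind of per-decision audit record an organisation might retain, covering the model version, a reference to the input, the output and a timestamp, is sketched below; all field names and the JSON-lines format are assumptions.

```python
# Illustrative audit-logging sketch: one JSON line per AI decision so the
# records can be produced for a regulator on request. Field names and the
# on-disk format are assumptions, not requirements from the proposal.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_decisions.jsonl")


def log_decision(model_version: str, input_ref: str, output: str,
                 operator: str) -> None:
    """Append an audit record for a single AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,  # pointer to the stored input, not raw data
        "output": output,
        "operator": operator,    # system or person responsible for the call
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```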

SEE: Why generative AI projects risk failure if they don't understand the business

Guardrail 10: AI assessments

Organisations will be subject to compliance assessments, described as an accountability and quality assurance mechanism, to demonstrate they have adhered to the guardrails for high-risk AI systems. These assessments could be carried out by AI system developers, third parties, or government entities or regulators.

When and how will the 10 mandatory guardrails come into force?

The mandatory guardrails are subject to a public consultation process until October 4, 2024.

Following this, the government will look to finalise the guardrails and bring them into force, according to Husic, who added that this could include the creation of a new Australian AI Act.

Other options include:

  • Adapting existing regulatory frameworks to include the new guardrails.
  • Introducing framework legislation with associated amendments to existing legislation.

Husic said the government will do so “as soon as possible.” The guardrails are the result of a long consultation process on AI regulation that has been underway since June 2023.

Why is the government taking this approach to regulation?

The Australian government is following in the footsteps of the EU by adopting a risk-based approach to AI regulation. This approach seeks to balance the benefits AI promises to bring against the risks that arise from its deployment in high-risk environments.

Focus on high-risk environments

The proposed guardrails are intended to “prevent catastrophic damage before it occurs,” the government said in its proposals paper on safe and responsible AI in Australia.

The government will define what constitutes high-risk AI as part of the consultation, but suggests it will take into account scenarios such as adverse impacts on people's human rights, adverse impacts on physical or mental health or safety, and legal effects such as defamatory material, among other potential risks.

Businesses need guidance on AI

The government says businesses need clear safeguards to deploy AI safely and responsibly.

A newly released Responsible AI Index 2024, commissioned by the National AI Centre, shows that Australian businesses consistently overestimate their ability to employ responsible AI practices.

The index results showed:

  • 78% of Australian businesses believed they were implementing AI safely and responsibly, but this was the case for only 29% of them.
  • Australian organisations are adopting, on average, just 12 of 38 responsible AI practices.

What should businesses and IT teams do now?

The mandatory guardrails will create new obligations for organisations using AI in high-risk environments.

IT and security teams will likely be involved in meeting some of these requirements, including data security and quality obligations, and ensuring model transparency throughout the supply chain.

The Voluntary AI Safety Standard

The government has published a Voluntary AI Safety Standard that is now available to businesses.

IT teams looking to be prepared can use the AI Safety Standard to help their organisations meet obligations under any future legislation, which may include the new mandatory guardrails.

The AI Safety Standard includes advice on how companies can apply and adopt the standard through specific case study examples, including the common use case of a general-purpose AI chatbot.
