Australian IT professionals must prepare for AI regulation


The most recent (and probably final) text of the European Union's impending AI Act was recently leaked. It is the world's first comprehensive law designed to regulate the development and use of artificial intelligence, and history shows that when the EU regulates something, the rest of the world tends to follow suit.

For example, companies doing business in Australia often comply with the GDPR simply because European law requires it of any organisation handling EU residents' data. The same is likely to happen when the EU AI Act comes into force.

While the Australian Government recommends complying with the EU AI Act, there are no mandatory regulations governing the use of AI in Australia. Even so, Australian companies will need to start implementing the rules set out in the EU legislation if they want to continue to scale and do business with EU companies, or even with local companies that have EU partnerships.

Australian government advice is to comply with the EU

As with the GDPR, the advice from Australian officials is to comply. For example, the Australian-British Chamber of Commerce offers the following guidance for Australian organisations:

“The EU AI Act applies to all companies that deploy AI systems on the EU market or make them available within the EU, regardless of their location. Consequently, Australian companies carrying out any of the following activities must comply with the legislation:

  • developing and commercialising AI systems;
  • implementing AI systems; and
  • using AI systems in their products or services.”

With this in mind, Australian IT professionals should take a close look at the EU's AI regulation. It is likely to become the de facto best-practice standard even in the absence of local rules.

The benefit for Australian businesses that meet the requirements of the EU AI Act is, as with the GDPR, that once they have done so, they will essentially already be prepared and compliant when the Australian government introduces local regulations.

Five things Australians should know about the EU AI Act

Australia currently has legislation that touches on aspects of AI, such as data protection, privacy and copyright law. There is also an Australian AI Ethics Framework, which is voluntary but covers much of what the EU legislation aims to regulate.

PREMIUM: Companies should consider writing an AI ethics policy.

The eight core principles of the AI Ethics Framework give Australian organisations a “best practice” way of thinking about how AI should be built and used, particularly with regard to safety, benefit and fairness to humans.

The EU's approach is essentially to take these philosophical ideas and turn them into specific regulations that organisations must follow. The five key areas the EU AI Act will regulate are:

  • Risk classification of AI systems: Risk categories will range from “minimal” to “unacceptable”, and the higher an AI application's assessed risk, the stricter the regulation it is subject to.
  • Obligations for high-risk AI systems: Most of the regulation focuses on obligations related to ensuring data quality, transparency, human oversight and accountability.
  • Prohibition of certain uses of AI: Uses of AI that pose an unacceptable risk to human dignity, security or fundamental rights will be prohibited, including social scoring, subliminal manipulation or indiscriminate surveillance.
  • AI system labeling and notification system: For the sake of transparency and accountability, there will also be a notification and labeling system for artificial intelligence systems that interact with humans, generate content or categorize biometric data.
  • Governance structure: This will involve national authorities, a European AI Board and a European AI Office to oversee implementation of, and compliance with, the AI Act.

Outside of those “high-risk” AI models (a small percentage, concentrated in specific verticals such as defence and law enforcement), most AI models used by consumer-facing companies will face light regulatory requirements. Furthermore, Australian organisations that have adopted the full scope of the Australian government's voluntary ethics guidelines should, for the most part, have little difficulty meeting the EU requirements.

Lack of mandatory regulation may leave Australian data and AI professionals unprepared for compliance

However, because the obligations are voluntary and there is no coherent regulatory agenda, not all Australian AI work has been conducted with the full scope of the ethics guidelines in mind.

SEE: Australia is not the only country to develop a voluntary AI code of conduct.

This could cause challenges later if an organisation decides it wants to scale but has already built AI processes that violate EU regulation into its business. For this reason, forward-thinking organisations would be wise to follow the strictest set of guidelines.

Non-compliance will limit Australian businesses locally and globally

Once the EU AI Act comes into force in June, anyone who has incorporated AI into their products and processes will need to act quickly to achieve compliance. According to Boston Consulting Group, compliance deadlines will be tiered, with the highest-risk applications having the shortest timelines; most organisations, however, will need to comply within six to 12 months.

Those that do not comply will not be able to bring their AI models to Europe. Not only will this have a direct impact on their ability to do business there, but it also means that AI partnerships with other organisations doing business in Europe will become complicated, if not impossible.

This is why it will be particularly important for Australian organisations to ensure that the AI models they use comply with the EU AI Act, so they do not exclude themselves from business opportunities both locally in Australia and abroad.
