
20 February 2024 | 5 minute read

Extra-territorial application of the AI Act: how will it impact Australian organisations?

On 2 February 2024, ambassadors of the member states of the European Union (EU) approved the final version of the Artificial Intelligence Act (AI Act), after political agreement on the AI Act was reached in December 2023. The AI Act is the first piece of legislation of its kind, regulating the development, deployment and use of AI in the EU (see here for more general information about the AI Act), with its various obligations and requirements taking effect in stages over the coming years.

One of the core features of the AI Act relevant to organisations outside the EU is that it will have extra-territorial scope, meaning that organisations outside the EU may be subject to the AI Act under certain circumstances. Given the fairly prescriptive requirements of the AI Act, as well as the potentially significant prescribed penalties, it is critical that all Australian organisations that deploy, develop or offer AI-driven solutions in the EU consider how, and the extent to which, the AI Act may apply to them.


Who is caught by the extraterritorial reach of the AI Act?

The AI Act applies generally to providers, deployers, importers, distributors and manufacturers of AI systems in the EU. However, Article 2 of the AI Act extends its application to organisations outside the EU, where:

  • such organisations place AI products on the market or put such products into service in the EU; or
  • the outputs produced by such AI products are used by persons in the EU,

  in each case subject to the various exceptions in Article 2, which set out the circumstances in which the AI Act will generally not apply, including in relation to:

    • any research, testing and development activities undertaken in respect of AI systems prior to those systems being placed on the market or put into service;
    • AI systems developed and used for the sole purpose of scientific research and discovery; and
    • free and open-source AI systems (except where the system would be deemed a prohibited or high-risk AI system, or would be subject to transparency obligations).

In other words, if your organisation is:

  • “placing” (first making available), “making available” (supplying for distribution or use in the course of a commercial activity, whether for a fee or free of charge) or “putting into service” (supplying to a user) an AI system in the EU market; or
  • making the outputs produced by an AI product available in the EU, then your organisation will be caught.

However, “output” is not defined in the AI Act, nor is it clear how the term will be interpreted. In our view, it could include any content, predictions, recommendations or decisions of an AI system (consistent with the OECD definition of outputs). We therefore recommend that organisations err on the side of caution and adopt a broad interpretation of the term when considering whether the outputs of their AI systems will be caught.


What does this mean for your organisation?

It is likely that many Australian organisations will be caught by the AI Act, which establishes significant penalties for breaches, including the following fines (a worked example follows the list):

  • for contraventions of provisions relating to AI products classified as prohibited or carrying unacceptable risk, the higher of 7% of the relevant organisation’s global annual turnover or EUR 35 million;
  • for contraventions of certain other obligations under the AI Act, the higher of 3% of the relevant organisation’s global annual turnover or EUR 15 million; and
  • for supplying incorrect information in response to requests from relevant national competent authorities, the higher of 1% of the relevant organisation’s global annual turnover or EUR 7.5 million.
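
To make the “higher of” mechanics concrete, the short Python sketch below computes the fine cap at each tier for a hypothetical organisation with EUR 1 billion in global annual turnover. It is purely illustrative; the turnover figure is our assumption, not drawn from the AI Act.

    # Illustrative only: AI Act fine caps are the higher of a percentage of
    # global annual turnover or a fixed amount (per the tiers listed above).
    def max_fine_eur(turnover_eur: float, pct: float, floor_eur: float) -> float:
        return max(turnover_eur * pct, floor_eur)

    turnover = 1_000_000_000  # hypothetical EUR 1 billion global annual turnover
    print(max_fine_eur(turnover, 0.07, 35_000_000))  # prohibited-AI tier: EUR 70 million
    print(max_fine_eur(turnover, 0.03, 15_000_000))  # other obligations: EUR 30 million
    print(max_fine_eur(turnover, 0.01, 7_500_000))   # incorrect information: EUR 10 million

For smaller organisations the fixed amounts become the binding cap: at EUR 100 million turnover, for instance, 7% is only EUR 7 million, so the prohibited-AI tier cap is EUR 35 million.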

So how can an organisation best ensure its compliance? First and foremost, Australian organisations should ensure that they:

  • understand the territorial reach of their use or offering of AI products and, relevantly, whether such use or offering will extend to the EU;
  • are well-advised as to the requirements of the AI Act that apply to them; and
  • understand how to minimise the extent to which the AI Act may apply to them; for example, organisations may employ technical measures such as geo-blocking to ensure that their AI products (or their outputs) cannot be used in the EU (a sketch of such a check follows this list).
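
By way of illustration only, the Python sketch below shows the shape such a geo-blocking check might take. It is a minimal example under stated assumptions, not a compliance measure in itself: the resolve_country lookup is a hypothetical stub standing in for a GeoIP database or similar service.

    # Minimal geo-blocking sketch (illustrative only). Assumes the client IP
    # can be resolved to an ISO 3166-1 alpha-2 country code, e.g. via a GeoIP
    # database; that lookup is stubbed out below.
    EU_COUNTRIES = {
        "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
        "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SE",
        "SI", "SK", "ES",
    }

    def resolve_country(ip_address: str) -> str:
        # Hypothetical stub: in practice, query a GeoIP database or service.
        raise NotImplementedError("wire up a GeoIP lookup here")

    def is_request_allowed(ip_address: str) -> bool:
        # Deny access to the AI product (and its outputs) from the EU.
        return resolve_country(ip_address) not in EU_COUNTRIES

Note that IP-based geo-blocking is imperfect (VPNs and proxies can defeat it), so it is best treated as one element of a broader compliance posture rather than a complete answer.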

Further, organisations should ensure that their internal processes, procedures and broader compliance environments allow for compliance with the AI Act requirements to which they are subject. To that end, organisations may wish to consider the recommendations in our other articles.


How can DLA Piper help?

DLA Piper is well-placed to assist any organisation that wishes to understand its obligations under the AI Act, or the preparatory measures it may take to ensure compliance with those obligations. We bring together a global depth of capability on AI with an innovative product offering in a way no other firm can. Our global, cross-functional team of lawyers, data scientists, programmers, coders and policymakers delivers technical solutions on AI adoption, procurement, deployment, risk mitigation and monitoring.

Further, because AI is not always industry-agnostic, our team includes sector-focused lawyers with extensive experience in highly regulated industries. We’re helping industry leaders and global brand names across the technology, life sciences, healthcare, insurance and transportation sectors stay ahead of the curve on AI.