On 1 August 2024, the European Union’s Artificial Intelligence Act (AI Act) became law, with its obligations to be implemented progressively over the following two to three years. The AI Act has extraterritorial effect in certain circumstances, so Australian businesses should prepare for its possible impact on their use of AI technologies.
The AI Act (formally, Regulation (EU) 2024/1689 on artificial intelligence) entered into force on 1 August 2024. The obligations under the AI Act will take effect in stages over a two-to-three-year period.
To whom does the AI Act apply?
The AI Act applies to businesses operating within the European Union (EU) and differentiates between four roles:
- “providers” – businesses that develop AI systems;
- “deployers” – businesses that use AI systems in the course of commercial or professional activities;
- “importers” – persons located or established in the EU that place on the EU market AI systems sourced from non-EU entities; and
- “distributors” – persons, other than providers or importers, that make AI systems available on the EU market, regardless of whether they are based in the EU.
How does the AI Act apply to entities located outside the EU?
The AI Act will apply to providers based outside the EU that develop an AI system and offer it for sale, or put it into service, in the EU. It will also apply to providers and deployers based outside the EU where the output produced by the AI system is used within the EU.
Accordingly, where an Australian company offers an AI system for use in the EU, sells or offers to sell an AI system into the EU or uses an AI system to generate outputs used by businesses or individuals located in the EU, that company may be subject to the AI Act, even if it does not have a physical presence in the EU.
Australian manufacturers that export products into the EU may need to conform with the AI Act if the product incorporates or integrates an AI system.
What is regulated by the AI Act?
The AI Act regulates “Artificial intelligence systems” (AI Systems) and “General-purpose AI models” (AI Models).
An “AI System” is defined as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The AI Act regulates AI Systems depending on the level of risk associated with the AI System. There are four levels of risk – “unacceptable risk”, “high risk”, “limited risk” and “minimal risk”.
Unacceptable risk AI Systems
AI Systems with “unacceptable risk” are outright prohibited (subject to certain exceptions, which are not discussed in this Insight).
AI Systems with unacceptable risk include, but are not limited to:
- subliminal, manipulative or deceptive AI Systems that distort the behaviour of persons, causing them to take decisions they would not otherwise have taken, in a way that is likely to cause significant harm;
- AI Systems that classify persons based on their social behaviour;
- AI Systems that create or expand facial recognition databases through untargeted scraping of facial images;
- AI Systems that infer the emotions of people in workplaces and education institutions; and
- AI Systems that make risk assessments of a person’s criminal propensity based solely on profiling.
High risk AI Systems
The “high risk” category contains most of the AI Act’s complex regulation of AI Systems. An AI System will be high risk where it is intended to be used as a “safety component” of a product subject to EU laws. Certain AI Systems are also deemed to be “high risk”, including AI Systems for remote biometric identification, biometric categorisation and emotion recognition, AI Systems used in education and vocational training contexts, AI Systems used to determine recruitment, promotion and termination of employment, and AI Systems used to determine eligibility for public assistance, to evaluate creditworthiness or health insurance risk, and to classify emergency calls. However, an AI System will not be deemed “high risk” if it does not pose a significant risk of harm to the health, safety or rights of individuals.
If an AI System is classified as “high risk”, the provider of that AI System must comply with a set of obligations relating to risk management systems, data governance rules, technical documentation, logging, transparency, design, accuracy, robustness and security. Providers must also undertake a conformity assessment (in some cases by an independent third party) before the AI System is placed on the EU market, to demonstrate that the system complies with the mandatory requirements for trustworthy artificial intelligence; the assessment must be repeated if the provider substantially modifies the system or its purpose.
To ensure compliance, the AI Act will permit market surveillance authorities to conduct regular audits and facilitate post-market monitoring.
Importers and distributors of high risk AI Systems will also be subject to (more limited) governance, compliance, transparency and record-keeping obligations under the AI Act.
Limited risk AI Systems
Under the AI Act, “limited risk” refers to the risks associated with a lack of transparency in the use of AI. For limited risk AI Systems, the AI Act will impose specific transparency obligations to ensure that, where necessary, humans are informed that they are interacting with an AI System or viewing its output. Limited risk AI Systems include chatbots, AI Systems that generate audio, image, video or text content (including “deepfakes”), and emotion recognition and biometric categorisation systems.
Minimal risk AI Systems
“Minimal risk” AI Systems, which include applications such as AI-enabled video games or spam filters, are not regulated.
Regulation of General Purpose AI Models
In addition to regulating specific AI Systems based on a risk assessment, the AI Act also regulates “General Purpose AI Models”, which include generative AI models such as OpenAI’s GPT-4 or Google’s Gemini.
The AI Act requires providers of these AI Models to disclose information about the model to downstream providers through technical documentation, instructions and policies. This includes being transparent about the content used to train the model.
Where an AI Model poses a “systemic risk”, the provider will be subject to additional regulation. An AI Model trained using a cumulative amount of computing power exceeding 10^25 floating point operations (FLOPs) is presumed to pose a systemic risk.
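To give a sense of scale, that threshold can be compared against a rough estimate of a model’s training compute. The Python sketch below uses the widely cited heuristic that training compute is approximately 6 × parameters × training tokens; both the heuristic and the example figures are assumptions for illustration and do not come from the AI Act itself.

```python
# Illustrative comparison against the AI Act's 10^25 FLOPs
# systemic-risk threshold, using the common scaling heuristic
# FLOPs ~= 6 * parameters * training tokens. The heuristic and
# the example figures are assumptions, not taken from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D heuristic."""
    return 6 * n_parameters * n_tokens

# Hypothetical model: 70 billion parameters, 15 trillion training tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                  # ~6.3e24
print("Presumed systemic risk?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

On these hypothetical figures, even a very large model can fall below the threshold, which is why classification turns on the specific compute actually used in training.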
The AI Act imposes a number of transparency obligations concerning content produced by the use of generative AI to minimise the risk of manipulation, deception and misinformation. Providers must mark the AI outputs in a machine-readable format and ensure that they are detectable as artificially generated or manipulated. Deployers will be required to disclose that the content has been artificially generated or manipulated, unless humans have reviewed the content or a natural or legal person holds editorial responsibility for the content’s publication.
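The AI Act does not prescribe a particular marking technique; approaches in industry include watermarking and provenance metadata standards such as C2PA. As a minimal illustration only, the Python sketch below embeds an “AI generated” disclosure as metadata in a PNG image using the Pillow library; a production implementation would need to follow whatever technical standards and guidance ultimately apply.

```python
# Minimal sketch of machine-readable marking: embed an "AI generated"
# disclosure as PNG text-chunk metadata using Pillow. This is an
# illustrative approach, not a format prescribed by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")  # stand-in for a model's output

marking = PngInfo()
marking.add_text("ai_generated", "true")
marking.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("output.png", pnginfo=marking)

# A downstream tool can then detect the marking:
with Image.open("output.png") as marked:
    print(marked.text.get("ai_generated"))  # -> "true"
```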
The AI Act applies to providers of General Purpose AI Models placed on the EU market, or where deployers use such models intending the output to be used in the EU.
Penalties for non-compliance
Entities covered by the AI Act that fail to comply with their obligations can be subject to substantial fines. If an entity breaches the prohibitions applying to “unacceptable risk” AI Systems, fines of up to €35 million or 7% of global annual turnover (whichever is higher) may apply. Other breaches of the AI Act’s obligations may result in penalties of up to €15 million or 3% of global annual turnover.
Staggered implementation of the AI Act
Although the AI Act entered into force on 1 August 2024, most of its obligations will not apply until 2 August 2026. During that two-year period, the AI Act will be implemented in stages, as follows:
- From 2 February 2025, the prohibitions on “unacceptable risk” AI Systems apply;
- From 2 August 2025, the obligations applying to General Purpose AI Models take effect; and
- From 2 August 2026, most of the remaining obligations take effect, including those for standalone high-risk AI System categories and for limited risk AI Systems. Some high risk AI Systems categorised pursuant to Annex I will not be regulated until 2 August 2027.
Conclusion
Australian companies that currently offer, or intend to offer, an AI System for use in the EU, sell or offer to sell an AI System into the EU, use an AI System to generate outputs used by businesses or individuals located in the EU, or export products to the EU that incorporate or integrate an AI System, should begin preparing to comply with the AI Act as soon as possible.
For those businesses, the steps to prepare for the application of the AI Act should broadly include:
- Complete an audit to identify existing and potential uses of AI Systems and classify those systems against the AI Act’s risk classification (an illustrative sketch follows this list);
- Where applicable, identify the potential application of the AI Act to existing and future design and development processes for systems and products, and then assess where changes may be required to ensure compliance with the AI Act;
- Review supply chains to assess whether changes may be required both upstream and downstream as a result of the implementation of the AI Act, particularly with respect to transparency and risk management obligations;
- Monitor communications from the European Commission, including with respect to guidelines that are to be developed and released; and
- Obtain legal advice in relation to compliance with the AI Act.
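As a starting point for the audit step above, the Python sketch below shows one way an internal inventory of AI Systems might be structured against the AI Act’s four risk tiers. The tiers come from the Act; the record fields, classifications and example systems are hypothetical, and actual classification requires a proper legal assessment.

```python
# Illustrative internal inventory of AI systems, recorded against the
# AI Act's four risk tiers. All example entries are hypothetical and
# no substitute for legal classification advice.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright
    HIGH = "high risk"                  # conformity assessment and related duties
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # not regulated

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    role: str            # provider / deployer / importer / distributor
    tier: RiskTier
    eu_exposure: bool    # offered in, or outputs used in, the EU?

inventory = [
    AISystemRecord("cv-screening-tool", "shortlist job applicants",
                   "deployer", RiskTier.HIGH, eu_exposure=True),
    AISystemRecord("support-chatbot", "answer customer queries",
                   "provider", RiskTier.LIMITED, eu_exposure=True),
    AISystemRecord("spam-filter", "filter inbound email",
                   "deployer", RiskTier.MINIMAL, eu_exposure=False),
]

# Flag systems that warrant priority compliance work.
for record in inventory:
    if record.eu_exposure and record.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"Priority review: {record.name} ({record.tier.value})")
```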
Key Takeaways
- The EU’s AI Act applies to entities that develop, deploy, import or distribute AI systems in the EU, regardless of where the entity is located. It will also apply to providers of General Purpose AI Models placed on the EU market, or where deployers use such models intending the output to be used in the EU.
- Accordingly, Australian companies to which the AI Act applies should plan for compliance with the EU AI Act.