The AI Act – Europe leads the charge on AI regulation

15 March 2024

On Wednesday, the European Parliament approved the final form of the world’s first major legislative framework regulating the use of Artificial Intelligence (AI). The endorsement, which follows extensive debate within the institutions of the EU, is a significant milestone in the regulation of AI and has implications for any business seeking to capitalise on the opportunities offered by fast-evolving AI systems.

As with comparable reforms in the past (such as the GDPR), the EU framework is likely to influence the development of a regulatory response to AI in New Zealand and in other jurisdictions.  In addition, the AI Act could be of direct relevance to certain New Zealand businesses, given the possibility of extra-territorial application where AI systems are used in the EU.

This article summarises some of the key features of the new regime.

Overview of the AI Act

As in the original draft legislation (summarised in our article here), the AI Act adopts a risk-based approach to regulating AI systems with various tiered obligations.

The risk categories are detailed, but we set out a brief summary of the key features below:  

Unacceptable risk

AI systems with an “unacceptable” level of risk to safety or human rights are strictly prohibited. The list of prohibited systems was extensively negotiated between the institutions of the EU last year. In the finalised text, it includes:

  • “real-time remote biometric identification systems in publicly accessible spaces” – which would capture live facial recognition systems. Interestingly, in a significant amendment to the draft approved in May 2023, the scope of that ban is now confined to use “for the purposes of law enforcement” (subject to various exceptions, such as searching for victims of abduction or preventing terrorist attacks). Other uses of such systems are not prohibited outright but are instead classified as high-risk, as discussed further below;

  • systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

  • systems that “infer emotions” in the workplace or educational institutions (other than for “medical or safety reasons”); and

  • “biometric categorisation systems” used to categorise individuals and infer sensitive characteristics, such as race, political opinions or sexual orientation (subject to a specific exception for law enforcement).

Some of the other categories reflect the fears of dystopian misuse that underpinned the original draft legislation in 2021 – including social scoring systems, systems that predict the likelihood of a person committing a criminal offence, or systems that deploy “subliminal techniques” to distort behaviour (an example given is “machine-brain interfaces”).

High risk

Systems that create a “high risk” to health and safety or human rights are not banned, but are subject to various quality management obligations including strict “conformity assessments” (to assess conformity with existing laws where applicable, and to confirm compliance with various other safeguards set out in the Act). The majority of these obligations apply to the “providers” (i.e. the developers) of high-risk AI systems. 

High-risk systems also include systems designed for certain designated uses. The list is lengthy but includes, by way of example, systems used for:

  • determining access to educational and vocational training or for the recruitment, evaluation or promotion of employees;

  • credit applications or credit scoring (subject to a helpful new exception for systems used to detect financial fraud);

  • risk assessment and pricing in relation to life and health insurance;

  • certain law enforcement purposes, for example to evaluate the reliability of evidence; and

  • biometric identification (other than real-time use for law enforcement purposes, which is generally prohibited as noted above). This is subject to certain exceptions, including where systems are used to verify the identity of a “specific natural person”.

Low / minimal risk

For lower risk activities, the AI Act imposes more general requirements, such as transparency obligations. These require that AI systems make clear to users that they are interacting with an AI system (unless that is obvious from the context), and that any limitations on the system’s capabilities are disclosed.

Providers of all AI systems are encouraged under the AI Act to adhere voluntarily to codes of conduct. These are expected to set out requirements related to a range of factors including sustainability, accessibility for persons with a disability, and diversity of development teams.


Various exceptions will apply. In particular, the AI Act will not apply to systems used for the sole purpose of scientific research and development, or to use by individuals for purely personal, non-professional activities. In addition, it will not apply to systems used exclusively for military or defence purposes.

Next steps and implications

The AI Act is expected to enter into force in May 2024, after passing final linguistic and proofing checks and receiving formal endorsement by the Council of the EU. Implementation will then be staggered: the bans on unacceptable AI systems will apply within six months; codes of practice will take effect within nine months; and obligations for high-risk systems will take effect within three years.

We expect that many New Zealand businesses will have a close interest in the reforms. The framework captures any provider whose system’s “output” is used in the EU, regardless of where the provider is based. The AI Act could therefore apply directly to New Zealand developers of AI systems if those systems are used in the EU, and it will be important for such businesses to ensure they comply with the new regime. Breach of the prohibition on unacceptable uses can result in fines of up to €35 million or, in the case of companies, up to 7% of total worldwide annual turnover, whichever is higher. Breach of the obligations for “high-risk” systems can result in fines of up to €15 million or up to 3% of total worldwide annual turnover, whichever is higher.

In addition, the reforms will be of more general relevance to other businesses, given the likely influence of the AI Act on regulatory developments in New Zealand. As with the GDPR, which triggered a convergence in international privacy laws towards similar standards, the detailed and prescriptive requirements of the AI Act could serve as a “best practice” framework which drives other AI regulation here and overseas. At the same time, other jurisdictions are considering more flexible approaches (see our article here) which may be better suited for keeping pace with the accelerating development of AI technology.  It will be interesting to see which approach the New Zealand government favours, should it look to implement its own reforms in due course. 


Bell Gully’s Consumer, Regulatory and Compliance (CRC) team have been closely monitoring international developments in the regulation of AI. If you have any questions, please get in touch with the contacts listed, or your usual Bell Gully adviser.


Disclaimer: This publication is necessarily brief and general in nature. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.