AI Act: a step closer to the first rules for Artificial Intelligence

18 May 2023

As the development of Artificial Intelligence (AI) continues to accelerate, increasingly vocal concerns are being raised about the ethical, legal and societal implications of powerful new AI systems. Governments are exploring the need for new legislation, and even the creators of AI systems are calling for greater regulation.

Against that backdrop, the European Parliament is expediting the EU’s draft Artificial Intelligence Act (AI Act) – the first major AI-specific legislation, which attempts to regulate the use of AI across the 27 EU member states.

Last week, in a significant milestone, a joint committee of the European Parliament voted to approve a strengthened version of the draft AI Act. The new version will now progress to a vote by the whole European Parliament later this year.

This update provides a summary of the AI Act and the recent amendments, and highlights some relevant implications for New Zealand businesses.

The EU’s AI Act and recent amendments

The AI Act contemplates a risk-based approach to regulating AI systems, with tiered obligations. The risk categories, together with the relevant changes proposed by the joint committee, are summarised below. The recent changes are focused on ensuring that AI systems are “appropriately controlled and overseen by humans” and are safe, transparent and non-discriminatory.

Unacceptable risk

AI systems with an “unacceptable” level of risk to people’s safety would be strictly prohibited. This would include, for example, systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring. 

In last week’s announcement, the joint committee substantially expanded the list of prohibited activities to include further uses of AI considered intrusive and discriminatory, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces (this ban was previously limited to law enforcement).
  • “Post” remote biometric identification systems (i.e. where analysis of footage occurs after it is captured, rather than in real-time), with the only exception being law enforcement for the prosecution of serious crimes (and only after judicial authorisation).
  • Biometric categorisation systems using sensitive characteristics (for example, gender, race or ethnicity).
  • Predictive systems which assess the risk of criminal behaviour based on profiling.
  • Scraping of biometric data from the internet or CCTV.
  • Systems which seek to “infer emotions” from human users, for use in law enforcement, border management, the workplace, or educational institutions.

High risk

Separate rules apply to AI systems that create a “high” risk to health and safety, or to the fundamental rights of natural persons. These systems are permitted in principle, subject to compliance with specified mandatory requirements. The AI Act deals with high-risk systems in two broad categories: physical products (such as machinery, toys and medical devices) and other systems (for example, critical infrastructure where AI could create a risk to health and safety, educational and vocational training systems, recruitment or HR systems, or systems used to evaluate credit scores). These systems must undergo strict “conformity assessments” to demonstrate compliance with the requirements of the AI Act (e.g. via the establishment and maintenance of risk management systems).

The recent proposed amendments made various additions to the designated high-risk areas, including AI systems which influence voters in political campaigns, and social media platforms’ systems for recommending content.

Limited/low risk

For lower-risk activities (such as chatbots), the AI Act imposes a general requirement for transparency. That is, users must be made aware that they are interacting with an AI system, and any limitations on the system’s capabilities should be disclosed. The draft AI Act also promotes “explainability” – requiring that individuals are able to obtain meaningful explanations of decisions made by AI systems that affect them.

As part of the committee's recent changes, new requirements would apply to generative foundation models (such as ChatGPT), including design requirements to ensure that a model does not generate illegal content.

Minimal risk

For other AI systems, no additional formal obligations apply. However, providers of such systems are encouraged under the AI Act to adhere voluntarily to codes of conduct. These are expected to set out requirements related to a range of factors including environmental sustainability, accessibility for persons with a disability, stakeholder participation in the design and development of the AI systems, and diversity of development teams.

More general legal obligations and regulatory requirements will also continue to apply to AI, such as data protection, human rights, intellectual property, fair trading and safety laws.

In other changes, the joint committee proposed that EU member states introduce “regulatory sandboxes” to facilitate the development and testing of innovative AI systems under strict regulatory oversight, before those systems are placed on the market or otherwise put into service. In addition, the joint committee included new exemptions for research activities and for AI components provided under open-source licences.

Consequences for breach are material, including potential penalties of up to 7% of annual global revenue or, if greater, €40 million (following increases recommended by the joint committee). For example, a business with annual global revenue of €1 billion could face a penalty of up to €70 million.

Next steps for the AI Act

Following last week’s committee vote, the updated draft AI Act will now require endorsement in a plenary session of the European Parliament (due to occur on 12-15 June), after which it will be negotiated and finalised with the Council as part of the EU legislative process. The AI Act is expected to be finalised and passed into law by the end of 2023. It will then have a two-year implementation period. Therefore, on current projections, it is likely to take effect in the first half of 2026 (although certain provisions relating to high-risk AI may take effect sooner).

Implications for New Zealand

1. The AI Act is likely to influence how many New Zealand businesses create and deploy AI technologies. Based on the current draft, the AI Act has extra-territorial application and will apply to New Zealand businesses if they offer AI systems or services within the EU. This is familiar territory for New Zealand businesses that have found themselves within the long reach of the GDPR. The AI Act is also likely to set a ‘high-water mark’, particularly for global companies seeking to apply a consistent approach to regulatory compliance across multiple jurisdictions.

2. The AI Act is set to provide a reference point for the international development of AI regulation, including in New Zealand. The aims of the AI Act are commendable, including protecting individuals, fostering trust in AI systems and providing a clear regulatory environment to support investment and entrepreneurship. However, the AI Act continues to stimulate debate on the many challenges of regulating AI and providing an appropriate framework for AI innovation, in the context of a rapidly evolving range of AI systems.

Bell Gully’s Consumer, Regulatory and Compliance (CRC) team will be closely monitoring international developments in the regulation of AI. If you have any questions, please get in touch with the contacts listed, or your usual Bell Gully adviser.

Disclaimer: This publication is necessarily brief and general in nature. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.