This article provides a brief snapshot of recent legislative developments from overseas and summarises how these jurisdictions are seeking to impose appropriate guardrails as the growth of AI continues to accelerate.
In early June, Australia took its first steps towards regulating AI, releasing a discussion paper titled Safe and Responsible AI in Australia (here). The consultation, which closes on 26 July 2023, observes that various risks associated with AI are already addressed by existing laws, including privacy, competition, copyright and consumer protection laws, but seeks feedback on the extent of regulatory “gaps”. In addition, the paper seeks submissions on whether any “high-risk” AI applications should be banned completely, and on the elements that should form part of a risk-based regulatory approach.
In the EU, the proposed AI Act establishes differing regulatory requirements for varying categories of risk: minimal, limited, high and unacceptable. Minimal risk AI is permitted with no restrictions, while unacceptable risk AI is banned (see our recent article here for further details). On 14 June 2023, the European Parliament approved the form of the proposed AI Act and will now debate the draft legislation with the European Commission and the Council (a “trilogue”). The trilogue is expected to focus on various contentious issues, including how to categorise particular technologies such as biometrics, and how the AI Act will be supervised and enforced. Once passed, the new law will then have a two-year implementation period (and is likely to take effect in the first half of 2026).
The UK has actively sought to position itself as a leader in AI innovation and regulation. At a tech conference on 12 June 2023, Prime Minister Rishi Sunak described the UK as the “home of global AI safety regulation”. That follows a recent UK Government white paper, AI Regulation: A Pro-Innovation Approach (here), which focuses on the potential benefits of AI and proposes a light-touch “principles-based” approach rather than “heavy-handed” legislation. The application of the various principles (which include safety and security, transparency, explainability, fairness, accountability, and contestability) will be at the discretion of existing regulators, allowing prioritisation according to the needs of their relevant sectors. Separately, the UK Government has established a specialist AI taskforce, and earlier this month announced that it will host a global summit on AI regulation later this year.
In the United States, Senate Majority Leader Chuck Schumer outlined high-level principles for a broad new legislative framework for regulating AI in a speech on 22 June, describing the technology as “world-altering”. Specific details of the legislation remain unclear, but it will be based on five “pillars”, developed following consultation with AI developers and industry experts: Security; Accountability; Foundations; Explain (ensuring that companies share how an AI system arrived at a particular solution); and Innovation. Schumer described the initiative as “an all-hands-on-deck effort” involving various committees. The initiative follows a “Blueprint for an AI Bill of Rights” issued by the White House earlier this year, which establishes key principles to help guide the design and use of AI. President Biden, who has described the issue as a “top priority”, met with a panel of industry experts earlier this week to discuss the development of AI regulation. Separately, a recent public consultation by the National Telecommunications and Information Administration has sought feedback on the development of auditing systems to assess the risks of AI systems.
At present, the New Zealand Government has not issued any specific proposals to regulate AI technology. For the time being, the use of AI will be regulated under existing frameworks, including in particular:
- Privacy: As the Privacy Commissioner emphasised earlier this month, the Privacy Act 2020 is “technology neutral” and already applies to AI systems which use personal information. The Office of the Privacy Commissioner (OPC) has also recently issued guidelines on the responsible use of AI (here). The recommendations include carrying out privacy impact assessments before using AI systems, and ensuring that “senior leadership has given full consideration of the risks and mitigations of adopting a generative AI tool and explicitly approved its use.”
- Consumer law: The Fair Trading Act 1986 includes various prohibitions against misleading or deceptive conduct, and a new prohibition against “unconscionable conduct” (see our separate update here). While the use of AI in many cases is unlikely to engage these prohibitions, certain applications could be challenged as misleading or unconscionable in particular circumstances (e.g. chatbots which provide false information, the use of deepfakes, or AI-generated customer reviews).
It remains to be seen to what extent these existing frameworks are sufficient to regulate AI in New Zealand, or whether the rapidly evolving technology requires the introduction of a new statutory framework. In the meantime, the increasing range of overseas proposals serves as a reminder of the complexity of the challenge ahead – and the difficult balance required to ensure the risks of AI are minimised without unduly stifling innovation.
Bell Gully’s Consumer, Regulatory and Compliance (CRC) team will be closely monitoring international developments on the regulation of AI. If you have any questions, please get in touch with the contacts listed, or your usual Bell Gully adviser.