A balancing act: achieving safer online services and media platforms

28 June 2023

The Department of Internal Affairs (DIA) has been tasked with developing a new framework for the regulation of online and media content in New Zealand. The aim is to better manage the risks of harm from content. The Safer Online Services and Media Platforms discussion document for public consultation has been released and the DIA is seeking feedback on the proposal by 31 July 2023. The proposed new regime would see both online and traditional media content brought under one new independent regulator.

Our article provides an overview of the discussion document, including the key proposed changes. It outlines some of the main concerns raised by commentators, and sets out the next steps in the public consultation process.

Background

The rise of new forms of digital media and online platforms has offered numerous benefits, but it has also led to concerns about the proliferation of harmful and illegal content online. Earlier this month, the video of the Christchurch mosque attacks resurfaced on Twitter and was viewed by over 1,000 accounts. Despite the video being illegal in New Zealand and Twitter being a supporter of the Christchurch Call, Twitter initially refused to take it down, stating that the account had not broken Twitter’s safety policies. Following an intervention by the DIA and inquiries from the New Zealand Herald, Twitter has now removed the content.[1] This event has provoked further discussion about the need for regulation of social media platforms.

The main pieces of New Zealand legislation that regulate content (the Broadcasting Act 1989 and the Films, Videos, and Publications Classification Act 1993) are over 30 years old. In response, and in line with developments in other jurisdictions, the New Zealand government has initiated a public consultation to strengthen and further develop the existing regulatory framework.

Currently, there are gaps in the system and regulatory responsibility is split across a range of organisations including the DIA, the Police, the Classification Office, the Film & Video Labelling Body, the Broadcasting Standards Authority, the New Zealand Media Council, Netsafe and the Advertising Standards Authority. The new regime would bring all content under one regulatory umbrella, including content on social media platforms which often falls outside the scope of existing bodies.

The objectives of the proposed changes are:[2]

  • to achieve better consumer protection for all New Zealanders and their communities by setting safety-based outcomes and expectations for platforms,
  • to provide better protection to children, young people and other vulnerable New Zealanders,
  • to reduce risk and improve safety without detracting from essential rights like freedom of expression and freedom of the press, and
  • to promote a safe and inclusive content environment while remaining consistent with the principles of a free, open, and secure internet.

What’s being proposed

The discussion document seeks feedback on an overall approach to a new regime with some high-level detail about how this might be delivered in practice. The proposal includes the following key aspects:

  1. New Legislation: Parliament would pass legislation setting expectations for the safety-based outcomes platforms must achieve, as well as the mandate and scope of the new independent regulator. A draft bill is not expected until at least 2024. The new legislative framework would repeal the Classification Act, but it would carry over existing provisions on illegal “objectionable” material. A code-based regime would replace the current classification regime for legal content. Amendments would also likely be made to the Broadcasting Act. Existing bodies such as the Broadcasting Standards Authority would likely form part of the new system; however, they would sit under the one independent regulator.
  2. Codes of Practice: Codes of practice would contain more detailed expectations for harm minimisation, user protection and transparency across services. Platforms over a specified size or risk level would be required to comply with codes of practice to manage content and address complaints about particular harmful content. Codes of practice would be developed by industry groups with support from the regulator. Platforms would be expected to align their terms of service and operating procedures to the relevant codes. The DIA has indicated the codes could cover:[3]
    • processes for platforms to remove content and reduce the distribution of unsafe content,
    • accessible processes for consumer complaints for particular content,
    • support for consumers to make informed choices about potentially harmful content,
    • how platforms would report on these measures, including transparency reporting, and
    • how they are reducing the impact of harm from content, and their performance against codes.
  3. Independent Regulator: A new independent regulator would be responsible for approving the codes of practice, overseeing compliance with the codes, and education and outreach. The DIA has indicated the regulator would focus on the areas of highest risk (e.g. harm to children, the promotion of terrorism or violent extremism) and would not have any powers over editorial decision-making or individual users who share content. The government would only intervene with individual pieces of content if they are, or could be, illegal – a power that already exists. The DIA has emphasised that the regulator will not have the power to require platforms to take down content that is not illegal.
  4. Expansion of regulated platforms: The scope of regulated entities will extend beyond traditional media services like TV and radio broadcasters to include digital media platforms, social media platforms, and other online service providers. For example, there could be rules requiring the responsible and transparent design of “ranking algorithms” such as social media newsfeeds, metrics for reporting on harm, and limits on the ability of users who post harmful content to reach wide audiences.

    Most of the obligations in the proposed framework would apply to “regulated platforms” whose primary purpose is to make content available. Current indications are that a platform or service would be captured if it has either an expected annual audience of 100,000 or more, or 25,000 or more annual account holders, in New Zealand. Alternatively, the regulator could designate a platform as regulated where it is unclear whether the threshold has been met, or where the risk of harm from that platform is significant.

  5. Illegal Content: The DIA is not proposing to change the definitions of what is currently considered illegal in New Zealand. The new regime would retain powers of censorship for the most extreme kinds of content (called ‘objectionable’ material, defined in section 3 of the Classification Act). The new regulator would have powers to require illegal material to be removed quickly from public availability. Criminal and civil penalties would still apply, and prosecutions could continue to be undertaken by government agencies.

    The DIA is proposing that the regulator should also have powers to deal with material that is illegal for other reasons (for example, harassment or threats to kill), and is seeking feedback on what other kinds of illegal material the regulator should have powers to deal with. The proposed enforcement powers include directing platforms to take remedial action, issuing formal warnings, seeking civil penalties for significant regulatory non-compliance, and requiring platforms to take down illegal material quickly when directed, with liability for failing to meet specified timeframes.

Concerns about greater regulation of content

While the objectives of the proposed new regime are important and well-intentioned, commentators have raised concerns about the potential impact on freedom of speech and freedom of the press. It is crucial that any regulatory change strikes the right balance between safeguarding public safety and protecting these fundamental rights. There are also questions about how, if at all, the regime could be enforced against overseas-based platforms. Some key concerns raised to date include:

  1. Overreaching censorship: Some commentators are concerned that the new regime could result in over-censorship, impinging on legitimate speech and inhibiting the free flow of ideas. For example, algorithms filtering content may go further than is necessary. Striking the right balance between protecting users from harmful content and upholding the right to freedom of expression is paramount. As noted above, the DIA has indicated that the regulator will not be editorialising or regulating “harmful content” directly; instead, the code regime is intended to apply to the systems and processes that give users information and options to filter and restrict their own content.
  2. Ambiguity and subjective judgment: Defining and identifying harmful content is difficult. Commentators have expressed concerns about the potential for subjective interpretation, which may result in inconsistent content moderation decisions across platforms.
  3. Enforceability: Many of the largest platforms that New Zealanders engage with, such as Facebook, Instagram, Twitter, Netflix and Amazon Prime, are headquartered overseas. The DIA has indicated that the new regime will be designed to align with those in other jurisdictions (such as Australia and the European Union), so that a breach of New Zealand’s codes would likely also amount to a breach of other countries’ codes. Regulators would then work together across jurisdictions to address harmful content.

Where to next?

Consultation on the Safer Online Services and Media Platforms discussion document is now open and submissions close on Monday 31 July 2023. Feedback received by the DIA will inform further decisions about the new framework, the eventual Bill, and the development of the new regulator.

Bell Gully welcomes feedback from clients and other interested parties on the issues covered.

If you have any questions on the Safer Online Services and Media Platforms consultation, or would like assistance in making submissions, please get in touch with the contacts listed or your usual Bell Gully adviser.

[1] Christchurch mosque attack: Terrorist’s livestreamed video resurfaces on Twitter, NZ Herald.
[2] Safer Online Services and Media Platforms Factsheet, Department of Internal Affairs, June 2023.
[3] Ibid.

Disclaimer: This publication is necessarily brief and general in nature. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.