Always check the source – could ChatGPT be defaming you?

27 June 2023

A radio host in the United States alleges he was defamed by ChatGPT when it completely fabricated his involvement in legal proceedings. We analyse how New Zealand’s defamation laws might respond to ChatGPT.

Mark Walters, a radio host from Georgia, USA, has filed the first defamation lawsuit against OpenAI, L.L.C. (OAI) alleging ChatGPT fabricated “false and malicious” accusations against him.

Walters alleges a journalist provided ChatGPT with a link to the proceeding Second Amendment Foundation (SAF) v Robert Ferguson, and requested a summary of the SAF’s accusations against the defendants in that proceeding. The SAF case involves a civil rights complaint against the Washington Attorney-General, arising from the Attorney-General’s investigation of individuals and groups who broadly oppose restrictions on the right to bear arms.

ChatGPT suggested the proceeding was brought by the SAF’s founder against Walters, who was accused of (amongst other things) defrauding and embezzling funds from the SAF during the time he served as treasurer and CFO. A phenomenon OAI describes as “hallucinations” meant ChatGPT was able to provide the journalist with a completely fabricated record of SAF’s complaint in that proceeding. 

In reality, Walters was not a defendant in the SAF proceeding (he isn’t even named in it), had never held a position as the SAF’s treasurer or CFO, and had never been accused of defrauding or embezzling funds from the SAF. In Walters’ words, every statement pertaining to him in ChatGPT’s “fact” summary was false.

Walters claims OAI defamed him when ChatGPT published those allegations to the journalist.[1]

What are artificial intelligence “hallucinations”?

An artificial intelligence (AI) hallucination is the phenomenon of a machine, such as ChatGPT, generating seemingly realistic information which does not correspond to any real-world input. AI hallucinations can range from the entertaining to the concerning, particularly when dealing with convincing misinformation.

Defamation law in New Zealand

What is defamatory?

There is no single definition for what is defamatory. Generally, defamatory statements include statements which:

  • impact negatively on a person’s or a business’s reputation,
  • could cause other people to avoid or shun a person or business, or
  • are false and discredit a person or business.

There are no hard and fast rules for what can be considered defamatory in any given context. Some statements can be blatantly defamatory. However, even something which seems relatively innocuous or innocent could be considered defamatory in the right context.

The statement must be “published”

Publication is an essential element of the tort of defamation. Publication is the process by which the defamatory material was conveyed or disseminated and captures each step in the chain of publication. Generally, liability extends to any person (or entity) who participated in, secured, or authorised the publication. In practice, this means an author or journalist who composes allegedly defamatory material (the originator) is as responsible in law as the entity which disseminates that material to the wider public (e.g. media entities).

Does ChatGPT “publish” content in its own right?

Whether ChatGPT publishes allegedly defamatory content in its own right will be a key (and novel) issue for the courts.

The broad approach to cases involving defamation and the internet suggests defamation laws will continue to be applied, without amendment, to the online environment. Broadly, a person who participates in or contributes to the publication of another person’s defamatory statements is prima facie liable as a publisher. However, in the case of publication occurring via the internet, liability might be restricted if the involvement was entirely passive.

Some guidance may be drawn from other cases where the courts were faced with novel issues of online publication:

  • Search engine operators: The issue of whether a search engine operator is liable as the publisher of search result “snippets” containing allegedly defamatory material has come before the courts on several occasions. In A v Google New Zealand,[2] the High Court of New Zealand deemed the issue too novel to deal with on a summary judgment application, but suggested there may be a need to consider whether there was a “stamp of human intervention” in the way the search engine programme was written. In the UK, the court has held that an operator did not publish words in the “snippets” generated by the search engine, as it had no role in formulating search terms and did not authorise or cause the snippet to appear on a user’s screen in any meaningful sense.[3] The Australian High Court recently held Google was not a “publisher” when its search engine returned results that included hyperlinks to defamatory webpages, finding it merely facilitated access to those webpages and had no active participation in communicating the webpage’s content.[4] However, there is some authority for the conflicting view that Google Inc is not a “passive” participant in the dissemination of material to users of the Google search engine.[5]
  • Publication of third-party comments online: In Murray v Wishart the Court of Appeal considered the distinction between a “mere host” and a publisher in the context of liability for comments published by third parties on a Facebook page operated by the defendant.[6] The Court held that a host could be liable as publisher if they had actual knowledge of defamatory material and failed to remove it within a reasonable time, allowing responsibility for the statement to be inferred.

Courts will apply these established principles to determine whether an AI platform provider is liable as a publisher of allegedly defamatory material originating on its platform. In doing so, AI platform providers will likely be at risk of being treated as the primary publisher of allegedly defamatory content originating on their platforms, particularly where the content was the result of an AI hallucination. In such a case, the active participation of the AI makes it unlikely the provider could establish it was effectively a “mere facilitator”.

The innocent dissemination defence

A defence of innocent dissemination exists if a person published material that is the subject of defamation proceedings solely in the capacity of a processor[7] or distributor, and is able to prove:

  • they did not know the matter contained defamatory material,
  • they did not know the matter was of a character likely to contain defamatory material, and
  • that lack of knowledge was not due to negligence.

Whether this defence has the flexibility to adapt to emerging technologies is questionable, given the narrow definition of processor in the Defamation Act 1992 (Act).

It is unclear whether OAI or similar AI platform providers could establish this defence, as it is usually lost when actual knowledge of possible defamation exists. Critically, OAI is aware ChatGPT can fabricate and publish defamatory material and essentially admits as much with its bottom-of-page disclaimer that “ChatGPT may produce inaccurate information about people, places or facts”.

Intention to defame is not necessary

If you put the question directly to ChatGPT – has it ever defamed someone? – it will confirm it does not have personal experiences, opinions or intentions of its own, or the ability to defame.

However, from a legal standpoint, that isn’t quite right. Intention is not a necessary element of defamation in New Zealand. Any person or entity involved in publishing a defamatory statement can be liable even if it did not intend to defame anyone.

What if you repeat something defamatory which ChatGPT published?

A person can be liable in defamation even if all they do is repeat defamatory words which came from someone else, including from ChatGPT. There is no defence available simply because the defamatory words originated with ChatGPT rather than with you.

Impact of the nature and extent of publication

The extent of publication is relevant both at the interlocutory and damages stages.

It is presumed harm to reputation will occur automatically when a defamatory statement is published, and a plaintiff does not need to prove actual harm in order to obtain damages for defamation. However, that presumption is rebuttable. A defendant has a complete defence if it can show the harm caused was less than minor. That defence may form the basis of a strike-out application, on the basis the cause of action is clearly untenable. This principle could have a chilling effect on claims similar to Walters’ claim in New Zealand. In an identical fact scenario, OAI could seek strike-out, claiming the plaintiff’s reputation was not harmed as a result of ChatGPT publishing the defamatory statement to a journalist, who shared that statement with only one other person who was in a position to immediately confirm it was false.

In a successful action where the extent of publication was limited, damages will typically be lower. Damages usually increase commensurate to the extent of publication and are typically higher in cases involving nationwide publication. However, this is not always the case. The nature of the group which received the publication is relevant. Damages could increase notwithstanding limited distribution if the relevant group was (for example) everyone in the plaintiff’s community.

OAI’s options for defending a claim like Walters’ in New Zealand would change if the journalist had gone on to publish an article featuring ChatGPT’s defamatory statements. If that occurred, OAI (as the operator of ChatGPT), the journalist, the publishing entity and any other person or entity concerned with the article could be jointly or severally liable. As a joint tortfeasor, OAI, as the originator of the allegedly defamatory material in the article, could be held jointly responsible for the whole damage suffered by the plaintiff, rather than being liable only in relation to the separate act of its publication to the journalist.

Essentially, if ChatGPT publishes defamatory content to a journalist, it risks exposure to the full damages claim if the defamatory content is ultimately published to a wider audience.

The principle of “intended republication”

The extent to which ChatGPT “knew” it was responding to questions posed by a journalist for the purpose of publication is another issue the courts may need to grapple with.

This engages the principle in Slipper v BBC,[8] where the test for a defendant’s liability for a republication by another is whether the republication should have been within the reasonable contemplation of the defendant. Another line of authority coming from UK decisions suggests a defendant should only be liable as a publisher of a republication if it can be shown the defendant intended or authorised the republication.[9] As a general principle, a source may attract liability for its contribution to a publication if it knew and intended that its contribution would be republished.

There is the potential for either test to be met where an AI platform like ChatGPT passes information to a journalist, particularly if it was on notice the information would be republished by the media.

Could an AI program correct a defamatory statement?

The Act provides a mechanism for pursuing a correction in lieu of damages. In circumstances where there is no realistic prospect of successfully defending a claim (i.e. because the statement was an AI hallucination which is demonstrably untrue), the correction mechanism could provide an avenue for speedy resolution of what could otherwise be a drawn-out and expensive defamation proceeding.

Section 26 of the Act recognises that in any proceedings for defamation, a plaintiff may seek a recommendation from the court that the defendant publish a correction, and the court may make that recommendation. The recommendation may cover the content, timing and placement of the correction.

A correction published by a defendant pursuant to a court recommendation will bring the proceedings to an end. In that case, the plaintiff will be entitled to no relief other than solicitor-client costs against the defendant. In contrast, failure to publish a correction despite being recommended to do so by the court will be taken into account at the damages stage, in the event the plaintiff is successful at trial. The plaintiff in that case would also be awarded solicitor-client costs against the defendant, unless the court orders otherwise.

It remains to be seen whether AI programs like ChatGPT could publish suitable corrections (or whether developers like OAI could offer to do so as part of a settlement offer). If a correction was recommended and OAI could not comply, it would be exposed to higher damages. However, as courts have a discretion whether to recommend a correction, if compliance by AI programs is not possible we are unlikely to see the section 26 resolution process used in defamation proceedings involving AI programs.

If you have any questions about the matters raised in this article, please get in touch with the contacts listed or your usual Bell Gully adviser.

[1] This is not an isolated incident, with Reuters also reporting that a regional Australian mayor may sue OpenAI if it does not correct false claims being made by ChatGPT regarding his involvement in a bribery scandal.
[2] A v Google New Zealand [2012] NZHC 2352.
[3] Metropolitan International Schools Ltd v Designtechnica Corp [2009] EWHC 1765 (QB).
[4] Google LLC v Defteros [2022] HCA 27 (17 August 2022).
[5] E.g. see Trkulja v Google Inc [2012] VSC 533; Yeung v Google Inc [2014] HKCFI 1404; (2014) 4 H.K.L.R.D. 493.
[6] Murray v Wishart [2014] 3 NZLR 722; Wishart v Murray [2013] NZHC 540.
[7] Defined as a person who prints or reproduces, or plays a role in printing or reproducing, any matter.
[8] Slipper v BBC [1991] Q.B. 283 (CA).
[9] Berezovsky v Terluk [2011] EWCA Civ 1534; Starr v Ward [2015] EWHC 1987 (QB).


Disclaimer: This publication is necessarily brief and general in nature. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.