In Walters v. OpenAI, a Georgia state trial court rejected a defamation lawsuit brought against OpenAI by radio talk show host Mark Walters. Walters had sought monetary compensation from the company for damage to his reputation allegedly caused when, in response to prompting from a journalist, OpenAI’s ChatGPT tool produced an output that wrongly described Walters as a defendant who had been accused of fraud in a lawsuit.
The court initially denied OpenAI’s motion to dismiss in January 2024, after which the parties engaged in several months of discovery. OpenAI then filed a motion for summary judgment. To aid the court’s consideration of that motion, NYU’s Technology Law & Policy Clinic filed a friend-of-the-court brief urging the court to consider the core purposes of defamation law and to conduct a fact-specific inquiry adapting the law to this new and increasingly widespread technology.
In its opinion, the court granted summary judgment in favor of OpenAI on three independent bases. The court first concluded that the ChatGPT output at issue could not be “reasonably understood as describing actual facts” about Walters, in part because ChatGPT specifically warned the journalist that it possessed insufficient information to answer his prompts and because OpenAI generally and repeatedly warns all of its users that “ChatGPT can and does sometimes provide factually inaccurate information.” The court additionally found it significant that the journalist admitted that, within a short time after prompting ChatGPT, he “had confirmed that the output did not contain actual facts.” As the court noted, “[i]f the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory as a matter of law.”
The court also likened OpenAI to a “publisher” and explained that Walters had failed to show how OpenAI was at fault, under either a negligence or actual malice standard, in allowing ChatGPT to produce the output. Walters had merely alleged that “because ChatGPT is capable of producing mistaken output, OpenAI was at fault simply by operating ChatGPT at all.” The court rejected this proposed rule, which “would impose a standard of strict liability” on OpenAI, making the company liable for any injury caused by ChatGPT regardless of what steps OpenAI might take to prevent such injury.
Finally, the court held that, in any event, Walters had not suffered, and was not entitled to recover, any damages arising out of the inaccurate output.