
Walters v. OpenAI

In Walters v. OpenAI, the radio talk show host Mark Walters has sued OpenAI for defamation. According to the complaint, when a journalist prompted ChatGPT for information about an unrelated lawsuit, the tool falsely described Walters as a defendant in that case and claimed he had been accused of fraud. Walters is seeking monetary compensation from the company for the reputational damage allegedly caused by these false statements.

OpenAI initially moved to dismiss the case, but the Georgia Superior Court denied the company’s motion in January 2024 and allowed the case to proceed. After several months of discovery, OpenAI filed a motion for summary judgment in November 2024. This new motion, which the court has not yet decided, raises two pressing questions that bear on whether and how individuals can sue GenAI companies for defamation. First, should we think of outputs from GenAI tools as factual assertions? OpenAI argues that we should not, pointing to its disclaimers and warnings about ChatGPT’s lack of reliability. Second, how should the traditional fault requirements of defamation law, which demand that a plaintiff show a statement was made negligently or, in some cases, with actual malice, apply to GenAI companies? OpenAI contends that it cannot be held responsible for ChatGPT’s outputs because no human employee is directly in the loop, while Walters responds that OpenAI should be liable because the company was generally aware that ChatGPT is capable of hallucinations. NYU’s Technology Law & Policy Clinic has filed a friend-of-the-court brief emphasizing that, in answering these questions, the court should consider the core purposes of defamation law and conduct a fact-specific inquiry to adapt the law to this new and increasingly widespread technology.

LAST UPDATED 03/13/2025