Can I sue a GenAI company for defamation if its tool generates false information about me?

SHORT ANSWER

Probably yes.

CONFIDENCE LEVEL

High

LONG ANSWER

Defamation law allows you to sue someone who says or writes something about you that’s false and harmful to your reputation. You can use a lawsuit to stop them from repeating the falsehood or force them to compensate you for the harm they caused.

In the United States, defamation law differs a bit from state to state, but the basic elements of a defamation claim are substantially the same. You typically have to prove: 1) a false statement purporting to be fact about you; 2) publication of the statement to a third party (i.e., anyone but you); 3) some level of fault on the part of the individual or entity that published the statement; and 4) reputational harm.

But suing a GenAI company for defamation would not be a standard case. The most likely scenario would involve a suit against the company for a “hallucination”—something made up—that the GenAI tool returned to a user or users. Its success would hinge on a number of currently unresolved legal questions. Fortunately, there are at least two live court cases that might start to provide some answers.

If you tried to sue a GenAI company, the first hurdle that you’d have to overcome is convincing a court that the harmful hallucination constituted a statement of “fact.” In one of the current cases, OpenAI is going out of its way to avoid that conclusion, pointing to its disclaimers about ChatGPT’s lack of reliability and suggesting that since its outputs are “probabilistic,” they can never really be considered factual statements. To be sure, given that GenAI tools are simply “advanced pattern recognizers” trained to “predict the next word in a sentence,” it’s hard to see how their outputs could ever be thought of as supplying “facts” or “truth” in any meaningful sense.

But how a GenAI tool is trained and actually operates under the hood will matter less for this step of the defamation analysis than how the GenAI output at issue is presented in the context of a specific use case. Would an average user read the output and reasonably assume it’s making a statement of fact? For example, Microsoft has integrated GenAI into Bing, a search engine that it maintains “offers you reliable, up-to-date results – and complete answers to your questions.” Microsoft won’t—and shouldn’t—get to evade liability for producing defamatory search results when it widely advertises its new GenAI tool as a reliable source of information. A contrary disclaimer in such a situation will only get the company so far.

A closely related issue is that of “publication.” Here, you’d have to show that the harmful hallucination was somehow shared with or “published” to someone else. This is a pretty intuitive requirement—after all, if your worst enemy typed a fib-filled diatribe about you into a Word document that no one ever read, where’s the harm to you or your reputation?

OpenAI is currently defending itself against a defamation case by basically arguing that ChatGPT is the Word document. The company points to its Terms of Use—which assign users rights to their ChatGPT outputs and state that users are “responsible” for them—to describe ChatGPT as nothing more than “a private drafting tool” that “helps [a user] write or create content owned by the user.”

This is a novel argument that the courts will likely reject, particularly when it comes to publicly available GenAI tools like ChatGPT and Bing. Defamation law has traditionally recognized that publication “encompasses any communication of the idea to a third party other than the plaintiff [and defendant] themselves,” no matter who “owns” the communication. As long as a company operating a GenAI tool has shared the harmful hallucination with someone other than itself—even if it’s just the individual user who prompted the tool—you’re likely to clear this hurdle.

Then there’s the issue of “fault.” Generally speaking, you can’t hold a person or company liable for defamation unless you can show that at the very least they acted negligently in sharing the falsehood. (And if you’re a public official or public figure hoping to sue, you’d have to show that they acted with an even greater level of fault, what’s known as actual malice—that is, with knowledge that the statement was false or with reckless disregard as to whether it was false or not.)

What evidence would you need to prove that a GenAI company was negligent (or worse) in allowing its GenAI tool to output a harmful hallucination? Is it enough to point out that the GenAI company generally knows, or should know, that its tool can sometimes produce false information? Or must you somehow show that the GenAI company was specifically aware, or should have been specifically aware, that its tool produces the particular hallucination that harms you?

Courts have yet to sort this all out. For now, though, it’s worth noting that you’re likely to clear this hurdle—that is, a GenAI company will likely be found at fault—if the GenAI company has been put on notice that its tool is producing a specific harmful hallucination about you, but then does nothing to try to rectify it. This may have major implications for the design of GenAI tools going forward. Will GenAI companies need to implement notice-and-takedown procedures for people to report harmful hallucinations? Will they need to fine-tune their filters or erect greater content-moderation guardrails against defamation?

Whether you can successfully sue a GenAI company for defamation will ultimately come down to the facts of your particular situation. Did the GenAI tool produce the harmful hallucination in a context, like a search engine, where users reasonably expect factual responses? After discovering the GenAI tool’s capacity to produce the harmful hallucination, did you or someone else notify the GenAI company? If the answers to these questions are both yes, then you just might have a defamation claim against the GenAI company.

LAST UPDATED 10/14/2023