OpenAI has been hit with its first defamation lawsuit after ChatGPT fabricated legal accusations against a radio host.

Mark Walters, a radio host in Georgia, is suing OpenAI in what appears to be the first defamation lawsuit against the company over false information generated by ChatGPT. 

The chat AI stated that Walters had been accused of defrauding and embezzling funds from a non-profit organization. ChatGPT generated the information in response to a request from a third party, a journalist named Fred Riehl. 

According to the lawsuit, Riehl asked ChatGPT to summarize a real federal court case by providing a link to an online PDF. In response, ChatGPT generated a false summary of the case, detailed and convincing but wrong in many respects, including false allegations against Walters. 

Riehl never published the false information ChatGPT generated, but he did check the details with another party. The complaint does not explain how Walters came to learn of the fabricated information. Walters filed his lawsuit on June 5 in Georgia’s Superior Court of Gwinnett County, seeking unspecified monetary damages from OpenAI.

The Verge highlights that “despite complying with Riehl’s request to summarize a PDF, ChatGPT is not actually able to access such external data without the use of additional plug-ins. The system’s inability to alert Riehl to this fact is an example of its capacity to mislead users.” Since then, ChatGPT has been updated to alert users that “as an AI text-based model, I don’t have the ability to access or open specific PDF files or other external documents.” 

Chatbots like ChatGPT have no reliable way to distinguish fact from fiction, and when asked to confirm a claim the asker presents as true, they will frequently invent supporting details, including dates and figures. That has led to widespread complaints about false information generated by chatbots.

Because many people who use ChatGPT don’t realize that it is not a “super search engine” and can and will fabricate outright falsehoods, cases of these errors causing harm keep emerging. Recent examples include a professor who threatened to fail his class after ChatGPT falsely claimed his students had used AI to write their essays, and a lawyer facing legal repercussions after citing fake legal cases that ChatGPT invented.

Though OpenAI includes a small disclaimer on ChatGPT’s homepage stating that the system “may occasionally generate incorrect information,” the company has also presented ChatGPT as a source of reliable data in its ad copy. OpenAI CEO Sam Altman has even gone so far as to state that he “prefers learning new information from ChatGPT than from books.”

So is there a legal precedent to hold a company accountable for false or defamatory information generated by its AI systems? It’s hard to say.

In the U.S., Section 230 safeguards internet firms from legal liability for information produced by a third party and hosted on their platforms. But it’s unclear whether these protections apply to AI systems — particularly when those systems were created, trained, and hosted by the company in question.

Law professor Eugene Volokh notes that Walters never notified OpenAI about the false statements to give the company a chance to remove them, and that he has alleged no actual damages from ChatGPT’s output. Volokh concludes that while “such libel claims (against AI companies) are in principle legally viable,” Walters’ lawsuit “should be hard to maintain.”

“In any event, though, it will be interesting to see what happens here,” says Volokh.
