Man sues after ChatGPT falsely accuses him

June 9 — ChatGPT is notorious for generating false and misleading information, an issue that OpenAI and the conversational AI industry have neatly glossed over by calling fabricated narratives “hallucinations.” In essence, ChatGPT has no particular commitment to the truth.

After a ChatGPT “hallucination” accused radio host Mark Walters of defrauding and embezzling funds from a non-profit organization, Walters has decided to sue OpenAI for defamation. Specifically, ChatGPT wrote that Walters had misappropriated more than $5 million from a gun rights group, illegally used the ill-gotten funds for personal expenses, and manipulated financial records. None of this is true.

ChatGPT generated the false information in response to a query from journalist Fred Riehl, who encountered the fabricated claims firsthand while using the chatbot. Mark Walters’ defamation case was filed this week in the Superior Court of Gwinnett County, Georgia. Walters is seeking unspecified monetary damages from OpenAI.

False information

In light of widespread complaints about misinformation generated by ChatGPT (and similar chatbots), this defamation case has turned heads. Conversational AI systems cannot discern reality from fiction, and when asked for information, chatbots commonly invent dates, facts, and figures.

To illustrate, earlier this month, Manhattan lawyer Steven Schwartz unexpectedly found himself having to defend his own conduct in court after using ChatGPT to help prepare a legal filing. As it turned out, the information within the document had been made up by ChatGPT. The “facts” were not real.

After his college-age children recommended ChatGPT, Schwartz erroneously believed that it had greater reach than Google and that it operated like a search engine. “I heard about this new site, which I falsely assumed was, like, a super search engine,” he told a judge.

Although OpenAI includes a small disclaimer on the ChatGPT homepage saying that the system “may occasionally generate incorrect information,” the company also presents ChatGPT as a reliable source of information. In ad copy, ChatGPT has been touted as a way to “get answers” or to “learn something new.”

Dangers and work-arounds

These instances show that large language model outputs cannot be trusted at face value, despite our inclination to trust them. While ChatGPT and similar tools can deliver real business value, it is risky for employees to rely on them blindly. Where data is limited, ChatGPT simply fills in the blanks with its own fabrications.

Says OpenAI CEO Sam Altman, “ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness.”

Last month, OpenAI noted that it is developing a new method of AI chatbot training that is intended to address the issue of “hallucinations.”

In the interim, one way to limit the effect of ChatGPT hallucinations is to use frameworks such as LangChain to connect the model to existing knowledge sources that contain proprietary information. For serious professional research inquiries, for example, this kind of grounding can improve the quality and reliability of large language model outputs, although there’s still room for error.
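
By way of illustration, the snippet below sketches what that kind of grounding can look like in practice, using LangChain’s retrieval tooling to feed a proprietary document to the model before it answers. The file name, question, and model choice are placeholders, and the exact API details may differ across LangChain versions, so treat it as a minimal sketch rather than a drop-in implementation.

```python
# Minimal sketch of retrieval-augmented question answering with LangChain.
# Assumptions: the langchain, openai, and faiss-cpu packages are installed,
# an OPENAI_API_KEY environment variable is set, and the document path,
# question, and model name are hypothetical placeholders.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Load a proprietary document and split it into chunks small enough to embed.
documents = TextLoader("internal_research_notes.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(documents)

# Index the chunks in a local vector store so relevant passages can be retrieved.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Answer questions by passing retrieved passages to the model, rather than
# letting it answer from memory alone, which is where fabrication creeps in.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=index.as_retriever(),
)

print(qa_chain.run("Summarize our internal findings on vendor risk."))
```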

For more insights into managing ChatGPT risks, please see CyberTalk.org’s eBook. Lastly, subscribe to the CyberTalk.org newsletter for executive-level interviews, analyses, reports and more each week. Subscribe here.