William C. Little

A Cautionary Tale: AI Sometimes Acts Like a Mischievous Child-Like Machine

Updated: Jun 22, 2023

A New York attorney recently learned the hard way about the limitations of using Artificial Intelligence (AI) as a substitute for legal research. Referred to sarcastically in a New York Times headline as the "ChatGPT Lawyer," Mr. Steven A. Schwartz found himself on the hot seat before a U.S. District Judge, having to explain why he should not be sanctioned for citing bogus case law in a brief he had submitted to the court. Mr. Schwartz's explanation? The cases were provided by ChatGPT, and he thought they were legitimate. Sadly, he was mistaken. In a subsequent court filing, Mr. Schwartz said that he "greatly regrets" his reliance on the AI chatbot and that he was "unaware of the possibility that its contents could be false." Mr. Schwartz will likely (hopefully) survive this snafu, but his unfortunate tale has been reported far and wide by major news outlets such as the New York Post, Bloomberg Law, the BBC, and Reuters, to name just a few.



A Cautionary Tale for Using Legal AI


Mr. Schwartz will certainly not be the last legal professional to make the mistake of naively trusting AI-generated content. I have also witnessed it firsthand in one of my own cases.


In a brief filed in a Texas appellate court, an opposing party appears to have used AI to help draft an argument. Two cases cited in the brief are Miller v. Wilson, 456 S.W.3d 654 (Tex. 2000), and Brown v. Davis, 123 S.W.2d 321 (Tex. 2002). The problem is that neither of those cases actually exists. In situations like this, you can count on any opposing counsel worth their salt to bring such issues to the court's attention. Just ask Mr. Schwartz.


Did you happen to notice anything unusual about those two case citations? In each, the volume and page numbers are reversible pairs, one the mirror image of the other: "456" and "654" in the first citation, "123" and "321" in the second.


There is one other problem with the Brown v. Davis citation. That case was supposedly decided in 2002 and published in the South Western Reporter, Second Series, but that series ended with cases published in about 1999. Trying to find either case in a print reporter or traditional legal database will only end in frustration (I tried). Yet, according to ChatGPT, they were perfectly legitimate sources.
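Both red flags, the mirrored volume and page numbers and a decision year that falls outside a reporter series, can be spotted mechanically. Here is a minimal sketch in Python of such a sanity check. To be clear, the helper, the regular expression, and the reporter coverage windows are my own illustration (and the dates are only approximate), not a tool that was actually used in the case:

```python
import re

# Approximate coverage windows for the South Western Reporter series.
# These dates are rough; the Second Series ended with cases published
# around 1999, when the Third Series began.
REPORTER_RANGES = {
    "S.W.": (1886, 1928),
    "S.W.2d": (1928, 1999),
    "S.W.3d": (1999, 9999),  # still publishing
}

# Matches citations like "456 S.W.3d 654 (Tex. 2000)".
CITE_RE = re.compile(r"(\d+)\s+(S\.W\.(?:2d|3d)?)\s+(\d+)\s+\([A-Za-z.]+\s+(\d{4})\)")

def sanity_check(citation):
    """Return a list of red flags found in a single case citation."""
    match = CITE_RE.search(citation)
    if not match:
        return ["citation did not parse"]
    volume, reporter, page, year = match.groups()
    flags = []
    # Red flag 1: the page number is the volume number reversed
    # (456/654, 123/321), the pattern seen in the fabricated cites above.
    if volume == page[::-1]:
        flags.append(f"volume {volume} and page {page} are mirror images")
    # Red flag 2: the decision year falls outside the years the
    # reporter series actually covers.
    start, end = REPORTER_RANGES.get(reporter, (None, None))
    if start is not None and not (start <= int(year) <= end):
        flags.append(f"{reporter} does not cover cases decided in {year}")
    return flags

print(sanity_check("Miller v. Wilson, 456 S.W.3d 654 (Tex. 2000)"))
# ['volume 456 and page 654 are mirror images']
print(sanity_check("Brown v. Davis, 123 S.W.2d 321 (Tex. 2002)"))
# ['volume 123 and page 321 are mirror images',
#  'S.W.2d does not cover cases decided in 2002']
```

A script like this can only flag the most obvious fabrications, of course; nothing substitutes for actually pulling and reading the cited case. Copied below are the results of my query about the Miller v. Wilson case, which will serve as an eye-opener and a cautionary tale: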


My prompt: is Miller v. Wilson, 456 Sw3d 654 (Tex. 2000) a real court case?

Message received: Yes, Miller v. Wilson is a real court case. It was decided by the Supreme Court of Texas in 2000. The case citation is 456 SW3d 654.


AI Is Known to Hallucinate

The answer provided to the above query was false, yet the AI chatbot responded positively and confidently, and even provided purported references. AI insiders refer to this phenomenon as "hallucinations," and even seasoned attorneys can be fooled if they are not careful.


By the way, three weeks later I ran the same query again, using the same AI chatbot, and got a completely different result. The second time around, it provided the following response, which was correct:


I couldn’t find any information about a case named Miller v. Wilson, 456 Sw3d 654 (Tex. 2000). However, there is a case named Miller v. Wilson that was decided by the United States Supreme Court in 1915. Is this the case you were asking about?


In just three weeks' time, the AI chatbot generated a different, more accurate result. This shows that AI has the ability to "learn" and evolve over time. In that sense, AI is more like a child-like machine than a machine-like person.




If you are using AI to assist with your legal work, then follow the advice once given by President Ronald Reagan to "trust but verify." As attorneys, we are ultimately responsible for the accuracy of our work product. Using AI responsibly falls under an attorney's duty to provide competent representation and to be reasonably informed about current technologies. The next installment of the Transformational Law Blog will cover this topic in more detail and discuss ways in which courts are attempting to avoid situations like the one Mr. Schwartz experienced. Until then, keep moving forward!
