A prior Transformational Law post shared a cautionary tale about relying too heavily on artificial intelligence (AI) language models such as ChatGPT for high-end functions such as legal research and writing. Doing so could land you not only in hot water but, as was the case for a certain New York attorney, in the hot seat, having to answer to a United States District Judge. This post goes a step further and examines the duty of attorneys to use AI technology responsibly and competently. We will first examine the approach taken by the American Bar Association (ABA) on this subject, and then consider local rules already adopted by various federal courts requiring attorneys to certify that they are using AI responsibly. Is this level of oversight necessary, or are “helicopter” judges needlessly micromanaging the legal profession? You decide.
The American Bar Association Joins the AI Debate
It is well known that AI language models can generate inaccurate responses, sometimes referred to as “hallucinations.” (See this post to learn more about AI hallucinations.) For example, ChatGPT posts a prominent notice about the limitations of using its chatbot.
While engaging with ChatGPT’s bot can be fun, interesting, and even beneficial to your law practice, take notice of this additional disclaimer discreetly posted at the bottom of your screen: “ChatGPT may produce inaccurate information about people, places, or facts.” Consider yourself warned (again)!
In May 2023, the ABA adopted Resolution 604, which states in relevant part:
Responsible individuals and organizations should be accountable for the consequences caused by their use of AI products, services, systems, and capabilities, including any legally cognizable injury or harm caused by their actions or use of AI systems or capabilities, unless they have taken reasonable measures to mitigate against that harm or injury.
Although AI technology is relatively new, the ABA’s position is derived from long-standing principles that have guided the legal profession. Resolution 604 is a modern expression of the canons of professional conduct that hold attorneys accountable for how they deploy technology in the practice of law.
Does a 1989 Document – The Texas Lawyer’s Creed – Apply to Modern AI?
In 1989 the Texas Supreme Court established The Texas Lawyer’s Creed to promote professionalism and higher confidence in the legal system. Can that document withstand the passage of time and does it still apply in the face of modern AI technologies? There are at least a couple of provisions that arguably require attorneys who utilize Legal AI to do so in a responsible manner. For example, the section entitled “Lawyer and Judge” imposes these standards on attorneys:
6. I will not knowingly misrepresent, mischaracterize, misquote or miscite facts or authorities to gain an advantage; and
8. I will give the issues in controversy deliberate, impartial and studied analysis and consideration.
In discussing the attorney’s duty to clients, the creed states: “A lawyer owes to a client allegiance, learning, skill, and industry. A lawyer shall employ all appropriate means to protect and advance the client’s legitimate rights, claims, and objectives.” This language suggests that, if you choose to use Legal AI, then you should do so with learning and skill. However, the creed raises an even bigger question. By stating that attorneys “shall employ all appropriate means” to represent their clients’ interests, does that impose an affirmative responsibility on you to utilize AI in your legal practice? The ABA seems to think so.
Are Attorneys Obligated to Utilize AI in the Practice of Law?
The ABA has encouraged attorneys to stay ahead of the curve when it comes to Legal AI. In August 2019, the ABA adopted Resolution 112 that states in part:
[I]t is essential for lawyers to be aware of how AI can be used in their practices to the extent they have not done so yet. AI allows lawyers to provide better, faster, and more efficient legal services to companies and organizations. The end result is that lawyers using AI are better counselors for their clients. In the next few years, the use of AI by lawyers will be no different than the use of email by lawyers—an indispensable part of the practice of law.
Not surprisingly, given its benefits, more business leaders are embracing AI, and they naturally will expect both in-house lawyers and retained counsel to embrace it as well. Attorneys who acquire competence in using AI technology will have an advantage and be considered more valuable to their organizations and clients. Thus, from a business development standpoint, it is worth the investment of time to stay ahead of the AI learning curve.
The Role of Courts to Ensure that Attorneys Use AI Responsibly
First and foremost, it is the personal responsibility of attorneys to use AI technology responsibly. Attorneys stand as guardians of their clients’ legal interests and have a duty to provide competent legal representation. The duty of competence extends not only to substantive knowledge of the law but also to the competent use of technology.
Courts are taking a more proactive approach to ensure that attorneys live up to these high ideals of technological competence. For example, U.S. District Judge Brantley Starr in the Northern District of Texas requires attorneys to abide by the following “Mandatory Certification Regarding Generative Artificial Intelligence”:
All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.
Personally, I would prefer that courts not impose an affirmative obligation to file this type of certification in every case. A more conditional approach, such as the one used by the U.S. Bankruptcy Court for the Northern District of Texas, seems preferable. Under that court’s General Order 2023-03, signed on June 21, 2023, a certification is required only if any portion of a pleading or other document filed with the court was AI-generated. In that situation, the filing attorney must “verify that any language that was generated was checked for accuracy, using print reporters, traditional legal databases, or other reliable means.”
These types of requirements are not confined to the Lone Star State. In the U.S. District Court for the Northern District of Illinois, the Standing Orders for Civil Cases Before Magistrate Judge Gabriel A. Fuentes impose this requirement: “Any party using any generative AI tool to conduct legal research or to draft documents for filing with the Court must disclose in the filing that AI was used, with the disclosure including the specific AI tool and the manner in which it was used.” It is inevitable that more courts will adopt similar requirements, so be on the lookout for them in your jurisdiction.
If you are still unsure about whether or how to incorporate AI in your legal practice, I hope this post helps you to take the next step on your journey to prepare for the technological changes that are surely coming. Doing so will not only make you better, faster, and more efficient, but it could even be considered necessary to comply with your professional duty of competence.
Until the next post, keep moving forward!