The Risks of AI in Legal Practices: A Growing Concern
Introduction
In recent weeks, numerous reports have highlighted a troubling trend: attorneys facing repercussions for submitting legal documents laden with what one judge aptly described as “bogus AI-generated research.” While the specifics vary from case to case, the common thread is clear: legal professionals are increasingly turning to large language models (LLMs) such as ChatGPT for assistance with legal research and drafting, often without fully understanding the limitations of these technologies.
The Dangers of AI Hallucination in Legal Research
One substantial issue arising from this reliance is the phenomenon known as “AI hallucination,” in which models fabricate non-existent legal cases. Lawyers who do not fully grasp the nuances of LLMs have, as a result, incorporated inaccurate information into their filings. A 2023 aviation-related lawsuit offers a stark example: the attorneys involved were fined for including fictitious citations generated by AI. Such incidents prompt the question: why do attorneys continue to use AI tools despite these risks?
Time Constraints and AI Adoption
The primary factor driving attorneys to adopt AI tools is the time-sensitive nature of legal work. Legal research databases such as LexisNexis and Westlaw have integrated AI capabilities, making them attractive to lawyers managing heavy caseloads. While many legal practitioners do not use ChatGPT directly for drafting, they frequently leverage LLMs for research. Nevertheless, a significant portion of lawyers, much like the general public, lacks a deep understanding of how these technologies function. For instance, one sanctioned attorney mistakenly described ChatGPT as a “super search engine,” only to discover that it can produce plausible-sounding but misleading information.
Professional Insights on AI Utilization
Andrew Perlman, the dean of Suffolk University Law School, asserts that many attorneys use AI tools successfully without incident, and that the cases drawing attention for erroneous citations are outliers. Perlman emphasizes that while challenges like hallucination persist, the potential benefits of generative AI for legal service delivery are significant. A 2024 Thomson Reuters survey found that 63% of lawyers had used AI for tasks such as summarizing case law and researching statutes, and 12% reported using it regularly.
Real-World Examples of AI Misapplication
Recent high-profile cases have illuminated the consequences of misapplying AI in legal settings. In one, a motion submitted on behalf of journalist Tim Burke was found to contain significant misrepresentations of case law, leading Judge Kathryn Kimball Mizelle to strike it from the record. Nine instances of hallucination were identified in the document, underscoring the risks of relying on unverified AI-generated content.
In another case, lawyers in a copyright infringement suit were found to have submitted documentation containing inaccurate citations produced by Claude, leading to further complications. Similar issues arose with expert witness declarations prepared with the help of ChatGPT, which also contained factual errors.
The Importance of Verification
The ramifications of such inaccuracies are serious, because judges base their decisions on the integrity of the documents presented to them. A California judge, initially persuaded by a well-argued brief, later discovered that the cited case law was entirely fabricated. Perlman notes, however, that lawyers have long filed documents with incorrect citations, particularly when working under pressing time constraints.
To mitigate these risks, Perlman encourages lawyers to utilize AI tools in ways that enhance rather than replace their judgment and expertise. Effective AI applications in legal contexts include sifting through extensive discovery documents, analyzing briefs, and brainstorming potential arguments.
Industry Response and Future Implications
As the use of AI tools has escalated, the American Bar Association issued its first guidance in 2024 regarding attorneys’ use of generative AI. The guidance stresses the importance of maintaining a robust understanding of these evolving technologies and the necessity of verifying their outputs. Lawyers must weigh the confidentiality risks associated with using LLMs and inform clients of their AI usage when relevant.
Perlman believes that generative AI holds the potential to revolutionize the legal profession and that in the near future, the focus may shift from assessing the competence of attorneys who use AI to evaluating those who do not. Conversely, some legal professionals maintain a skeptical stance, asserting that no diligent attorney should depend solely on AI for research and writing without thorough verification of the material produced.
Conclusion
As the legal profession grapples with the challenges and opportunities presented by generative AI, it remains imperative for attorneys to exercise caution. By remaining vigilant and ensuring the accuracy of their work, legal professionals can leverage the advantages of AI while minimizing the associated risks. The balanced integration of technology in law will define the future landscape of legal practice.
