Using ChatGPT for Legal Research? Not so fast!

By now, you’d have to be a hermit not to have heard about OpenAI’s ChatGPT and its potential to change the practice of law.  Many legal articles have also discussed the need to exercise caution in applying ChatGPT, in its current form, to legal issues.  A lawyer working on a case filed in the U.S. District Court for the Southern District of New York recently incorporated case law produced by ChatGPT into a legal pleading. Unfortunately for the lawyer and his client, ChatGPT produced imaginary cases.  The judge in the case, U.S. District Judge Kevin Castel, was not amused and issued an Order to Show Cause why sanctions should not be imposed against the lawyers.  See Mata v. Avianca, No. 22-cv-1461 (PKC) (S.D.N.Y. May 4, 2023).

In its order, the court stated that plaintiff’s counsel filed a pleading “replete with citations to non-existent cases.”  Specifically, plaintiff’s counsel cited a case out of the Eleventh Circuit and provided an excerpt from it. The Clerk of the Eleventh Circuit Court of Appeals certified that no such case existed and that the docket number of the cited opinion belonged to a different case. In addition, neither Westlaw nor Lexis has the case in its database. The Federal Reporter citation given by counsel actually corresponded to a different case, one out of the District Court for the District of Columbia, not the Eleventh Circuit. The opinion cited by plaintiff’s counsel also contained citations to other bogus cases and quotes. The lawyer cited five additional nonexistent cases in the same pleading, all generated by using ChatGPT as a legal research tool.  A hearing was held on June 8, 2023, but the judge has not yet ruled on sanctions.

This appears to be a matter of first impression and a stark reminder that lawyers should be very, very cautious if they decide to use ChatGPT in their legal practice.

Texas Judge Addresses the Use of AI in Legal Practice

It appears a Texas judge is attempting to head off the specific problem that arose in the New York matter. U.S. District Judge Brantley Starr of the Northern District of Texas issued an order on May 30, 2023, describing legal research produced by the current generation of AI as prone to “hallucinations and bias.” He reminded lawyers that they, not AI, swore an oath to “uphold the law and represent their client.”  His order now requires all attorneys appearing before the court to “file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”

Other federal courts, state courts, administrative agencies, and other entities with judicial or quasi-judicial authority may soon follow with similar requirements.

ABA Comment on Competence Involving Technology

In 2012, the ABA amended the comments to Model Rule 1.1, “Competence,” to make clear that lawyers must understand the ramifications of the technology they use in their practice. Comment 8 provides that in order “[t]o maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. . . .”  (emphasis added)

Technological advancements impacting the practice of law are moving at lightning speed.  Beyond understanding the “benefits and risks associated with relevant technology,” lawyers need to educate themselves on the limits of the technology they use.