“Many harms flow from the submission of fake opinions.”

                                                                          Judge Castel in Mata v. Avianca

In a well-reasoned opinion, a Massachusetts court sanctioned a lawyer $2,000 for citing, in court pleadings, fictitious cases that were produced by an AI tool. The case highlights some of the real risks of using AI in the legal profession, emphasizing that attorneys must exercise due diligence and be transparent when relying on AI-generated content. There is nothing wrong with using reliable AI technology for assistance in preparing legal documents. However, the ethical and professional rules that govern all attorneys require them to ensure the accuracy of their filings. This particular case underscores those obligations.

The Honorable Brian A. Davis, an Associate Justice of the Superior Court of Massachusetts, issued the opinion on February 12, 2024. In the opening paragraph of the order, the Court stated:

This decision and order addresses two disturbing developments that are adversely affecting the practice of law. . . The first is the emerging tendency of increasingly popular generative artificial intelligence (“AI”) systems, such as ChatGPT and Google Bard, to fabricate and supply false or misleading information. The second is the tendency of some attorneys and law firms to utilize AI in the preparation of motions, pleadings, memoranda, and other court papers, then blindly file their resulting work product in court without first checking to see if it incorporates false or misleading information. (emphasis added)

Smith v. Farwell, et al., Lawyers Weekly No. 12-007-24, Suffolk Superior Court, Civil Action No. 2282CV01197, J. Davis (February 12, 2024).

Counsel in Smith filed three separate pleadings (motions and legal memoranda) containing fictitious or nonexistent cases. The Court inquired as to how the fictitious cases came to be included in the pleadings. Counsel’s response was that he was “unfamiliar” with the fictitious cases and their origin. After further inquiry, counsel disclosed that the pleadings had been prepared by “interns,” whereupon the Court instructed counsel to file a written explanation of the origin of the cases. The written explanation filed by counsel disclosed that the cases had been created by an AI system and acknowledged his lack of diligence in failing to thoroughly review the pleadings before they were filed with the Court.

A hearing was conducted on December 7, 2023, for the Court to determine whether any sanctions were warranted. At the hearing, counsel apologized to the Court and indicated that the fictitious case citations “were not submitted with the intention to mislead the Court.” He explained that the pleadings had been prepared by two new law graduates who had not yet passed the bar and by an associate attorney. The associate attorney had admitted to him that she used an AI system for assistance in drafting the pleadings. Counsel further indicated that he had no idea the associate attorney had employed AI to draft the pleadings until after the Court discovered the fictitious cases. He admitted that he had reviewed the pleadings for style and grammar, but not for the accuracy of the case citations.

The Court recognized counsel’s sincerity and honesty. However, it found that counsel’s candor did not exonerate him from responsibility. The Court ruled that sanctions were warranted because he had failed to take precautions to ensure that fictitious cases were not included in the pleadings. In the order, the Court cited on multiple occasions the New York case in which lawyers and their law firm were sanctioned in June 2023 for citing fictitious cases. See Mata v. Avianca, Inc., 2023 WL 4114965, at *1 (S.D.N.Y. June 22, 2023); see also NY Lawyers and Law Firm Sanctioned for Citing Fake Cases Derived from AI, MSBA Blog, June 28, 2023.

Maryland attorneys studying the Smith case may want to consider Rule 1-311, Md. Gen. Prov. Moreover, several of the Maryland Attorneys’ Rules of Professional Conduct may be applicable, including Rules 19-303.1 (meritorious claims and contentions), 19-303.3 (candor toward the tribunal), 19-305.1 (responsibilities of partners, managers, and supervisory attorneys), 19-305.3 (responsibilities regarding non-attorney assistants), and 19-305.5 (unauthorized practice of law).

Perhaps pontificating, but more likely issuing a soft warning to all attorneys, Judge Davis ended his opinion in Smith by stating:  

It is imperative that all attorneys practicing in the courts . . . understand that they are obligated . . . [under the rules] . . . to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases and, if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted.

To quote Judge Davis, “there is a powerful lesson here.” There is nothing inherently amiss about lawyers embracing AI as a powerful tool that complements their expertise. However, lawyers must remain lawyers and must read, check, and verify the facts asserted in their pleadings, their legal arguments, and the case law cited to substantiate their positions. Legal standards require nothing less. Relying solely on AI without proper review can lead to detrimental consequences for lawyers.