By Deborah Thompson Eisenberg

How are law schools responding to ChatGPT and other artificial intelligence (AI)? The same way the legal profession responds to most disruptive technological innovations: slowly . . . and with little agreement about whether we should be alarmed and ban the use of AI or incorporate it into our teaching.

Should we be concerned? Generative AI processes massive amounts of information with mind-blowing speed and spits out concise, fairly well-written answers. Ominous headlines portend doom for the legal profession.1 Indeed, GPT-4 passed a simulated bar exam, scoring in the top 10%.2

If you have not tried to use AI, play around with it. I did and was both awed and reassured that this will not replace lawyers, at least not yet. To generate ideas for this essay, I asked ChatGPT: “How should law schools respond to ChatGPT and generative AI?” Within seconds, it responded:

As an AI language model, I am not in a position to provide an opinion or make recommendations. However, I can provide some information that might be helpful.3

Law schools may need to adjust their curriculum and teaching methodologies to reflect the increasing role of artificial intelligence in the legal profession. This could involve teaching students about the capabilities and limitations of AI, as well as how to work with AI tools and technologies in legal practice.

Law schools may also want to consider incorporating courses or modules on ethics and responsible use of AI. With the rise of AI-powered legal tools, there is a growing concern about issues such as bias, transparency, and accountability. Law schools can play a role in ensuring that future lawyers are aware of these issues and are equipped to address them.

Another area of focus could be on the legal implications of AI, such as intellectual property rights, liability, and privacy. As AI becomes more integrated into legal practice, lawyers will need to be well-versed in these areas to effectively represent their clients.

Finally, law schools may want to explore partnerships and collaborations with tech companies and AI developers to stay up-to-date on the latest developments and trends in the field. This could help to ensure that law students are prepared to navigate the evolving landscape of AI in the legal profession.

Overall, the increasing use of AI in the legal profession presents both opportunities and challenges for law schools. Adapting to these changes will require a proactive and collaborative approach from educators, students, and industry stakeholders.

Impressive. The response includes valid points, especially about the need for lawyers to address legal and ethical questions regarding AI. Of course, a smart lawyer would be skeptical of the bias embedded here: an AI company is advocating that law schools not only use AI in our teaching but reform our curriculum and form partnerships with tech companies. Undoubtedly, we will see an increase in “AI and the Law” courses, and legal research classes will teach responsible use of AI. But ChatGPT’s answer fails to address the glaring question at the top of every academic’s mind: how do we prevent students from cheating with AI?

Existing honor codes can easily address this issue. For example, the University of Maryland Carey Law Honor Code says students cannot use resource materials in an exam or on assignments unless specifically authorized by the professor. It is also a violation “to submit as one’s work the work of another in draft or final form.”

Whether students obtain the material from a classmate, book, or chatbot, passing off the work of another as one’s own constitutes cheating or plagiarism. While honor codes may be sufficient to address misuse of AI, law schools likely will develop AI policies.4 Although the specifics may vary, such policies likely will prohibit the use of AI on exams and encourage professors to clarify in their syllabi the acceptable and prohibited uses in various contexts. Some professors may rethink how they assess students, perhaps using closed-book exams or skills-based presentations.

What’s all the fuss? Can AI really replicate a lawyer’s work? Curious, I asked ChatGPT: “What is the standard for determining employee status under the Fair Labor Standards Act?” In seconds, it provided a concise, accurate answer:

The Fair Labor Standards Act (FLSA) provides federal minimum wage and overtime protections for employees. The Act defines “employee” broadly as “any individual employed by an employer,” but it does not provide a specific definition of the term. Instead, courts and the Department of Labor (DOL) have developed a set of tests to determine whether an individual is an employee or an independent contractor for the purposes of the FLSA.

The primary test for determining employee status under the FLSA is the “economic reality” test. This test looks at the totality of the circumstances to determine whether the worker is economically dependent on the employer or in business for themselves. The factors that are considered include:

The extent to which the work performed is an integral part of the employer’s business;

The worker’s opportunity for profit or loss based on their managerial skill;

The degree of control exercised by the employer over the worker;

The worker’s investment in equipment or materials required for their work;

The degree of skill and independent judgment required to perform the work;

The permanency of the relationship between the worker and the employer.

No single factor is determinative, and the weight given to each factor may vary depending on the circumstances of the particular case. The ultimate question is whether the worker is economically dependent on the employer or in business for themselves.

It’s important to note that state laws and court interpretations may have different tests for determining employee status. Additionally, some industries may have specific tests or exemptions under the FLSA, such as the exemption for certain categories of “white-collar” employees. As always, it’s best to consult with an attorney familiar with employment law to determine the proper classification of workers under the FLSA.

I was glad the answer cautioned that state law might be different and that FLSA exemptions may affect the analysis. ChatGPT wisely advised me to consult with an employment attorney. Smart machine.

But this is surface-level work, not much different from what one could find with a Google search. What about the lawyerly task of applying a legal standard to a set of facts to predict an outcome? I presented ChatGPT with a short hypothetical:

Worker Smith has a contract with Company Z to provide cleaning services at the company’s offices. The contract says that Worker Smith is an “independent contractor.” Smith provides his own tools and supplies for the job. Company Z sets the work hours and does not allow Worker Smith to work for other companies. Company Z provides standards that Worker Smith must adhere to when performing the job. Worker Smith is paid an hourly wage but does not receive overtime pay if he works over 40 hours. Is Company Z violating federal law?

In a few seconds, ChatGPT answered:

Based on the facts provided, it appears that Worker Smith may be misclassified as an independent contractor rather than an employee under the Fair Labor Standards Act (FLSA). The FLSA provides minimum wage and overtime protections to employees but does not extend those protections to independent contractors.

The determination of whether someone is an independent contractor or an employee depends on the specific facts of the working relationship, and courts typically use a multifactor test to make that determination. The “economic reality” test, which I discussed earlier, is one such test.

In this case, several factors suggest that Worker Smith may be an employee rather than an independent contractor. For example:

Company Z sets the work hours, which suggests that Worker Smith is not exercising control over the details of his work.

Company Z provides standards that Worker Smith must adhere to when performing the job, which suggests that Company Z is exercising control over how the work is performed.

Worker Smith is not allowed to work for other companies, which suggests that he is economically dependent on Company Z for his livelihood.

Worker Smith is paid an hourly wage, which suggests that he is not assuming the risk of profit or loss associated with running his own business.

Worker Smith does not receive overtime pay if he works over 40 hours, which suggests that he is not being properly compensated under the FLSA.

Based on these factors, a court or the Department of Labor may conclude that Worker Smith is an employee rather than an independent contractor and that Company Z is therefore violating the FLSA by failing to pay him minimum wage and overtime.

It’s important to note that worker classification is a complex area of law, and misclassification can result in significant liability for employers. As such, it’s always best to consult with an experienced employment attorney to ensure that workers are properly classified and that employers are complying with all applicable wage and hour laws.

Wow. ChatGPT correctly flagged that there may be an FLSA misclassification issue here. I was relieved that it again reminded me to consult with an attorney. But the answer is shallow. Lawyers must analyze why and how the standard applies to the facts by citing controlling statutes, regulations, and case law. Can ChatGPT deepen its analysis? I asked: “What is the case law that may be relevant to Worker Smith’s case? Company Z is located in Baltimore, Maryland.” Its response:

There is no single case that is directly on point with the specific facts of Worker Smith’s case, but there are several cases that may be relevant to the issue of whether Worker Smith is properly classified as an independent contractor or an employee under the FLSA. Here are a few examples:

Donovan v. DialAmerica Marketing, Inc.: In this case, the Third Circuit Court of Appeals applied the economic reality test to determine that telephone sales representatives were employees, rather than independent contractors, under the FLSA. The court looked at factors such as the extent of the employer’s control over the workers, the workers’ investment in equipment or materials, and the workers’ opportunity for profit or loss.

Saleem v. Corporate Transportation Group, Ltd.: In this case, the Southern District of New York found that limousine drivers were employees under the FLSA, despite the fact that they had signed independent contractor agreements. The court looked at factors such as the degree of control exercised by the employer over the drivers, the fact that the drivers were economically dependent on the employer, and the fact that the drivers did not have the opportunity for profit or loss based on their managerial skills.

Keller v. Miri Microsystems LLC: In this case, the District of Maryland applied the economic reality test to determine that a software engineer was an employee, rather than an independent contractor, under the FLSA. The court looked at factors such as the degree of control exercised by the employer over the worker, the worker’s investment in equipment or materials, and the degree of skill required to perform the work.

While these cases are not directly on point with Worker Smith’s case, they provide examples of how courts have applied the economic reality test to determine employee status under the FLSA. Given that Worker Smith is located in Baltimore, Maryland, the Fourth Circuit Court of Appeals and the District of Maryland would be the relevant jurisdictions for this case.

This is when I felt reassured that ChatGPT will not take my students’ future jobs. ChatGPT “hallucinated,” a fancy word in the AI field that means it got things wrong and made them up.5 The answer recognized that the Fourth Circuit would be the relevant jurisdiction, yet it cited no cases from the Fourth Circuit or the U.S. Supreme Court. The three cases listed are problematic. Donovan is a 1985 case from a different jurisdiction, unlikely to persuade a Maryland judge. Likewise, the New York trial court in Saleem would not be controlling authority. More concerning, ChatGPT got the Saleem holding wrong: the court held the opposite, finding that the plaintiffs were independent contractors, not employees, and the Second Circuit affirmed. Finally, Keller came from the Eastern District of Michigan, not the District of Maryland. ChatGPT got that holding wrong, too: the district court found the plaintiff was an independent contractor and granted the defendant’s motion for summary judgment. The Sixth Circuit reversed on civil procedure grounds, finding that the plaintiff had presented a genuine issue of fact precluding summary judgment. An attorney who relied on such results would risk malpractice.

Given this experiment, I return to the question of what law schools should do in response to AI. We should not panic. When ChatGPT begins to answer legal queries with lawyerly responses such as “it depends,” replete with a nuanced analysis of controlling authority, we can worry more. Until then, given the myriad legal and ethical questions presented by AI, reports of the legal profession’s death-by-chatbot are greatly exaggerated.6 While AI will improve and unanticipated changes surely will come our way, the world will need more lawyers to sort out the fallout from AI’s meteoric rise.

At Maryland Carey Law, we’re preparing our students for the challenges that lie ahead by recommitting to our mission of educating excellent critical thinkers: lawyers who understand that our job is not simply to parrot legal standards to find “the answer” (because there is no such thing in law), but to ask the right questions. Lawyers who ask not only what the law is but what the law should be. Lawyers who have a strong moral compass and a healthy dose of skepticism, who know how to critically assess materials and evidence in the search for truth. Lawyers who understand that the law is a human service profession and are ready to navigate the messy interests, emotions, and flaws of their clients with skill, creativity, and compassion. Lawyers who know not only how to advocate zealously and communicate persuasively but also how to address complex problems, resolve conflicts, and improve the quality of justice.

If law schools do this, the next generation of lawyers will be ready for whatever comes their way.


*Deborah Thompson Eisenberg is Piper & Marbury Professor of Law, Associate Dean for Academic Affairs, and Director of the Dispute Resolution Program at the University of Maryland Francis King Carey School of Law.

________

1 Steve Lohr, AI Is Coming for Lawyers, Again, N.Y. Times, Apr. 10, 2023; Jenna Greene, Will ChatGPT Make Lawyers Obsolete? (Hint: Be Afraid), Reuters, Dec. 9, 2022.

2 Stephanie Wilkins, How GPT-4 Mastered the Entire Bar Exam, and Why That Matters, Legaltech News, Mar. 17, 2023.

3 The first-person response is a bit creepy, but I was impressed by the lawyerly disclaimer. Another general disclaimer at the bottom of the page warned that ChatGPT could get people, places, and facts wrong. OpenAI apparently has good lawyers.

4 See Karen Sloan, University of California Berkeley Rolls Out AI Policy on Eve of Exams, Reuters, Apr. 20, 2023.

5 Cade Metz, What Exactly Are the Dangers Posed by A.I.?, N.Y. Times, May 1, 2023.

6 Hat tip to Mark Twain, to whom the quote “Reports of my death are greatly exaggerated” is commonly attributed.