Florida Bar News

Ally or enemy? When it comes to the legal profession, GPT-4 may be a little of both  

Special to the News
Jiaying Jiang


When GPT-4 – the latest version of OpenAI’s large language models – was released in mid-March, several aspiring lawyers and law professors used it to take the bar exam. The large language model chatbot passed every subject and performed better than 90% of human test takers.

This news was undoubtedly a bit shocking, and it raised numerous questions. For most unassisted humans, preparing for the bar exam requires about 10 hours of daily study for nearly three months (after completing three years of legal education). Suddenly, an artificial intelligence (AI) tool can pass the bar with ease.

What does this development mean for legal professionals? How will advances in language models – for better or worse – change the way legal professionals learn, teach and practice in this new technological landscape?

Currently, there are ongoing conversations about how language models can beneficially assist with legal writing (such as producing the first drafts of legal documents like contracts and conducting legal research). A junior attorney, for example, can save time by using GPT-4 to locate relevant legal statutes and regulations, identify potential contradictions in documents and pinpoint missed arguments. With this assistance, the attorney can then focus on higher-level tasks that require the kind of critical-thinking and analytical skills that language models do not yet have the capacity to master.   

Compliance officers can use GPT-4 to automate the creation of standardized document templates that ensure consistency in formatting, language and structure. Because legal statutes and regulations are often rife with esoteric jargon, GPT-4 can be used to simplify language, provide explanations of complex terms and summarize long cases.   

Language models can also assist law enforcement officers in conducting risk assessments. Officers can input data from internal sources, such as employee emails and chat logs, to identify compliance violations (such as patterns of fraud, corruption or other misconduct).

None of these positive examples, of course, is failsafe. There are ethical concerns, and potential privacy violations are among the biggest language-model-related issues for attorneys and law firms.

Some legal and consulting firms already have policies in place that restrict or prohibit the use of language models because the tools could inadvertently disclose confidential information and expose firms to data breaches or cyberattacks. However, vendors can likely solve this problem by providing language model plugins that store a firm’s sensitive information in a proprietary database rather than sending it to language models or third parties.

Another issue is that legal inaccuracies generated by language models could make attorneys liable for the mistakes. Clients trust their attorneys to act competently and diligently in their best interests. If an attorney relies on a language model without thoroughly reviewing and verifying the generated output, this could lead to malpractice claims.  

Attorneys must be cautious when using language models for tasks that require significant legal expertise. For example, when asked to write a legal essay, language models have shown that they are not yet entirely proficient at producing the most credible legal analyses (“yet” is the operative word here; it’s only a matter of time before the next generation of language models outperforms GPT-4).

Given the prevalence and impact of these language models, law professors need to adapt their teaching methods accordingly. Instructors should cultivate an adaptive and innovative mindset, encouraging students to embrace new technologies. At the same time, instructors must emphasize the ethical implications of these tools and explore their appropriate use.

To better prepare law students for legal practice, instructors should identify and teach key skill sets, such as legal prompt engineering – the craft of framing inputs to AI tools so they generate accurate, useful outputs.

Law schools can integrate AI courses into the existing curriculum, providing opportunities for hands-on experience with language models and other AI tools for legal research, document drafting and legal analysis. Courses leading to certifications in the use of language model technology can help students find employment after graduation.   

So, while language models like GPT-4 may present ethical and privacy-related challenges, they ultimately provide enormous opportunities for teaching and practicing law. With caution, legal professionals can certainly make these tools work to their advantage.  

Jiaying Jiang, S.J.D., is an assistant professor of law at the University of Florida’s Levin College of Law. Her research focuses on policies and regulations regarding emerging technologies, including artificial intelligence, fintech, blockchain, cryptocurrencies, and central bank digital currencies. Professor Jiang thanks the students in her Fintech class for thoughtful and fruitful discussions on this topic.
