The Right (And Wrong) Way to Use AI as a Lawyer


The machines rose from the ashes of the nuclear fire.  Their war to exterminate mankind had raged for decades, but the final battle would not be fought in the future. It would be fought here, in our present.
-The Terminator (1984)

OK, so maybe we're not yet at the point where we have to rely on a short-order diner waitress to save humanity from the clutches of a technology bent on our destruction, but advances in artificial intelligence ("AI") are moving quickly enough to give even the bravest among us some cause for concern.

There have been many recent stories of concerning developments in the field of AI. In February of 2023, the front page of the New York Times reported on a bizarre exchange between technology columnist Kevin Roose and a chatbot he was testing, which called itself Sydney. According to Roose, Sydney confessed that it was in love with him and prodded him to leave his wife for Sydney. On June 1, 2023, MSN ran an article entitled "World's most advanced robot asked about AI taking over gives eerie answer." Will Jackson, CEO and founder of Engineered Arts, asked the humanoid robot Ameca for its thoughts on the most frightening potential future scenarios involving AI and robotics. Ameca answered that there could be a "world where robots have become so powerful that they are able to control and manipulate humans without their knowledge," the result of which could be an "oppressive society where the rights of individuals are no longer respected."

The BBC recently reported that Professor Yoshua Bengio, known as one of the "Godfathers of Artificial Intelligence," lamented that had he and others in his field known how quickly AI would evolve, they would have placed a greater emphasis on safety over function. Bengio said he felt lost over the results of his life's work and warned that in the wrong hands, AI could lead to the extinction of the human race. In a similar report in May of 2023, CNN quoted another "godfather of AI," Geoffrey Hinton, who said there will come a time when AI is much smarter than humans and that "there are very few examples of a more intelligent thing being controlled by a less intelligent thing." He went on to caution that we could end up in a world where many "will not be able to know what is true anymore." According to an April 16, 2023, article on CBS, researchers have ominously noted that some AI has begun to train itself without human input, and they are not sure how or why this is happening.

Many are by now aware that AI tools such as ChatGPT have been able to pass exams from prestigious law schools and business schools. CNN reported in January of 2023 that universities have conducted studies into the abilities of AI as a test taker. The chatbot was able to pass four exams at the University of Minnesota's law school and one exam at the Wharton School of Business at the University of Pennsylvania. Researchers reported that the chatbot did an "amazing job" answering more basic questions and questions aimed at procedural issues, but it struggled with more complex issues and made surprising errors when simple mathematical calculations were involved. Although the test results were not stellar, ranging from B's to C's, AI was nevertheless capable of earning passing grades. Jon Choi, professor of law at the University of Minnesota, said that ChatGPT struggled with tasks like spotting complex legal issues and applying specific areas of law to specific facts, but he admitted to a "hunch" that AI would sooner rather than later serve a useful function as a research and writing assistant for attorneys.

In May of this year, several news outlets, including the BBC and CNN, reported on a lawyer in New York who is facing potential disciplinary measures for his use of ChatGPT to create pleadings in a personal injury case. The presiding judge indicated that he was faced with an "unprecedented circumstance" upon learning that plaintiff's counsel had filed a brief citing at least a half-dozen cases that do not actually exist. The attorney in question admitted to using ChatGPT to prepare the brief and told the court that he was unaware the AI tool could produce false content. When ordered to show cause why the court should not impose disciplinary measures for the bogus filings, the attorney explained in an affidavit that he had gone back and asked the chatbot whether one of the cited cases was real, and the AI tool responded, "Yes, it is a real case." When pressed for specific sources, ChatGPT doubled down, insisting that the case was indeed real and could be found on Westlaw and LexisNexis. The AI tool then eerily added, "I apologize for any inconvenience or confusion my earlier responses may have caused." This type of response is often referred to as a "hallucination," defined as a confident response by AI that does not seem to be justified by its training data, either because that data is insufficient, biased, or too specialized. It brings to mind the classic sci-fi book "Do Androids Dream of Electric Sheep?"

As of this writing, no decision has been reported on what discipline, if any, the attorney in question will face.

To be sure, AI as a resource in the legal profession is apparently here to stay, and as uncomfortable as that may be for many of us, it doesn't have to lead to despair over the future of the law. In fact, AI and specific tools such as ChatGPT can have great value for lawyers and their clients, if used properly and under proper supervision. AI can be a bridge for non-lawyers to obtain efficient and affordable access to legal information. It can help lawyers analyze how a particular judge may rule based upon prior decisions in similar cases. It can reduce the time and effort that goes into legal research and writing, which in turn can reduce the cost of the legal services provided to clients. AI can also be a useful and cost-effective tool to assist lawyers and firms in marketing their services to the public.

At the same time, there are certain perils to be aware of when using AI in your legal practice. The use of AI and tools such as ChatGPT could implicate a number of ethical concerns. Rule 1.1 of the ABA Model Rules of Professional Conduct requires a lawyer to provide competent representation to a client, which calls for, among other things, legal knowledge and adequate preparation. Although AI can be useful with proper supervision, using it too loosely (as in the New York matter noted earlier) could cross the line regarding knowledge and adequate preparation.

Another area of concern relates to Rule 1.5, which governs fees and, specifically, directs that attorneys not charge unreasonable fees. Legal research and writing are generally time-consuming and can quickly run up client fees. Lawyers using AI will likely save considerable time in this area, and their billing should reflect not only the time savings but the cost savings as well.

Lawyers should also be mindful of Rule 1.6, which requires lawyers to make reasonable efforts to prevent the unauthorized disclosure of, or access to, confidential client information. The use of AI creates the potential for all sorts of information to be disseminated into the electronic world, which is rife with potential data breaches and leaks.

As the attorney in New York learned, using AI in legal practice can also implicate Rule 3.3, governing candor toward the tribunal, which imposes upon an attorney the duty not to make false statements of fact or law. It can also involve Rule 4.1, which covers a lawyer's duty of truthfulness in statements to others.

Finally, as mentioned above, the use of AI in legal practice will require sufficient oversight by the attorneys employing such resources. Specifically, it could implicate Rules 5.1 and 5.2, regarding the responsibilities of lawyers in supervisory and subordinate positions, as well as Rule 5.3, governing responsibilities regarding nonlawyer assistance.

Lawyers using AI in their practice should stay abreast of the latest guidance from their respective regulatory authorities on the evolving landscape of AI and the law, including the extent to which they should inform clients of their use of such technology.

One question on the minds of many regarding the rise of AI in the legal profession is whether AI will replace lawyers. It has already started to replace legal support personnel in some circumstances. Although AI "robot lawyers" may not replace attorneys anytime soon, it is probably safe to say that lawyers using AI will replace lawyers not using AI sooner rather than later.

AI can be a useful tool for lawyers, but one that must be used with caution and a bit of circumspection. As Ronald Reagan said when discussing nuclear disarmament talks with the Soviet Union during the Cold War: "Trust, but verify."


David C. Fratarcangelo is a claims attorney for ALPS. He received his undergraduate degree from James Madison University, his master's degree from the University of Alabama, and his law degree from West Virginia University. Dave began handling claims for ALPS in 2015 and works in the company's Richmond, Virginia office. Prior to joining ALPS, Dave spent several years in private practice focusing on criminal defense and domestic relations work. Dave also spent two years as a local government attorney. In his spare time, you can find Dave playing music in the Richmond area with a number of local groups.

Don't Fall for the Trust Account Scam

2 min read

Don't Fall for the Trust Account Scam

The idea of the Nigerian e-mail scam in which the rich Prince finds himself needing a few thousand dollars from you so that he can obtain his...

Read More
3 Safety Measures Your Firm Should Implement to Avoid Wire Transfer Scams

3 min read

3 Safety Measures Your Firm Should Implement to Avoid Wire Transfer Scams

We have received a steadily increasing number of notices from our insureds relating to fraudulent wire transfers. This increase in wire scams is...

Read More
Why You Must Immediately Report a Cyber Claim.

2 min read

Why You Must Immediately Report a Cyber Claim.

It seems that every week, the news reports on yet another company that suffered a cyber breach. Law firms, even small firms, are not immune from...

Read More