Generative AI In Legal Work — What’s Fact And What’s Fiction?

Zach Warren from the Thomson Reuters Institute discusses the potential and the pitfalls.

Ed. Note: This is part of a series detailing Gen AI’s impact on the legal profession from our friends at Thomson Reuters. For a further deep dive on Gen AI, download the Future of Professionals Report here.

Generative AI is one of the biggest talking points today. It promises to transform how work is done across multiple sectors, and legal work is a particularly rich area of opportunity.

Many firms and legal departments are already switched on to Generative AI, and there is no avoiding it. We spoke to Zach Warren from the Thomson Reuters Institute about the potential and pitfalls of the technology, and to separate fact from fiction — including a question about notorious generative AI “hallucinations.”

How has the legal profession responded to Generative AI so far?

Warren: With surprising enthusiasm. Legal has something of a reputation for slow uptake of tech, which we’ve seen recently with the cautious adoption of cloud and some types of AI. That may be because lawyers and firms have preferred to do things the way they’ve always been done.

But with Generative AI it’s been a different story. Thomson Reuters conducted a survey only a few months after ChatGPT was launched, and it found that firms were already playing around with it. Interestingly, 82% of survey respondents said Generative AI could be used for legal tasks, and 51% said it should be used. However, that’s on a hypothetical level: at the time of the survey in April, only 3% had actually adopted Generative AI. And even now, the vast majority of legal professionals are in “wait and see” mode.

Who should be overseeing the adoption of Generative AI?

Warren: At the moment it’s primarily CIOs, IT directors, and CTOs who have been tasked with being at the forefront of technology adoption. But Generative AI will be the most transformative tech in legal, so partners and managing partners should be learning more about it now.

In fact, everyone should have a stake in it to some extent, because Generative AI is intended to be democratized — so we will see it being used in everything from onboarding new employees to day-to-day transactional work. 

What are the main benefits Generative AI promises for legal work?

Warren: Speed, efficiency, and consistency in research, document review, and many repetitive tasks are some of the many promised benefits.

However, there is uncertainty around who realizes the value of Generative AI. One of the biggest questions for partners will be, “What is chargeable now?” Firms bill in six-minute increments, which requires great transparency — will clients insist firms cut billable hours now that work can be done much faster with Generative AI, or even done by the clients themselves? 

Going back to the survey I mentioned, 80% of corporate clients actually want their firms to use Generative AI — but they also want firms to add value and skills above and beyond the tech. So, firms will really need to prove the value that they’re providing in this new era, which puts the emphasis on more experienced lawyers. 

How does the cost of implementing generative AI in law firms compare to other technology investments commonly made in the legal industry?

Warren: It’s a bit higher right now, but that’s for generative AI across the board. Generative AI requires a lot of data to work, which means a lot of processing power to run the searches. There just aren’t enough servers to make generative AI as widespread as the technology deserves right now, but there are a lot of smart people working on that problem. 

Will Generative AI change the work of junior associates?

Warren: Absolutely. Manual work like research and document or contract drafting — obvious use cases for Generative AI — is typically the domain of first- and second-year associates.

I heard a good quote recently: “All the writing you learn in law school will become editing.”

That’s because Generative AI is so good at producing first drafts. We’re seeing lots of interest in firms and departments hiring prompt engineers — and this is a skill which could soon become part of every new lawyer’s training.  

Can you explain what Generative AI hallucinations are?

Warren: This is one of the biggest concerns surrounding the technology currently. Hallucinations are essentially errors that appear in the output of Generative AI, presented as fact, which the system itself cannot recognize as wrong. The reason for this is that Generative AI is not actually “intelligent”: it simply predicts the most likely next word in a sentence, based on the patterns in the text it was trained on.
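To make the “most likely next word” idea concrete, here is a deliberately tiny, hypothetical Python sketch (it is not how any commercial legal AI product is built). It completes text purely from word-frequency statistics over a toy corpus, so the output reads fluently even though nothing in it has been checked for truth.

```python
# Toy illustration only: "generate" text by always picking the word that most
# often follows the previous one in a tiny corpus. There is no notion of truth,
# only likelihood -- which is why fluent output can still be factually wrong.
from collections import Counter, defaultdict

corpus = (
    "the court held that the contract was valid . "
    "the court held that the claim was barred . "
    "the court found that the contract was void ."
).split()

# For each word, count which words follow it in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def complete(start: str, length: int = 10) -> str:
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Always take the statistically most likely continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # fluent-looking, but not checked against any authority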

Even OpenAI’s GPT-4 is only around 85-90% factually accurate on multiple-choice questions, and that level of accuracy is obviously not where you want to be in legal. However, retrieval-augmented generation (RAG) grounds a model’s answers in passages retrieved from trusted sources, and feedback about the accuracy of answers can be folded back in, so things could improve quickly.
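As a rough sketch of why retrieval-augmented approaches help, consider the hypothetical Python example below. The case names, passages, and keyword-overlap scoring are illustrative stand-ins (a real system would use vector search and a language model); the point is simply that the answer is grounded in, and citable to, a controlled set of sources rather than whatever the model happens to predict.

```python
# Hypothetical sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve passages from a trusted corpus first, then answer only from them.
# Case names, passages, and the scoring method are illustrative, not real data.

trusted_sources = {
    "Smith v. Jones (2019)": "The court held that electronic signatures satisfy the statute of frauds.",
    "Doe v. Acme Corp. (2021)": "The court found the non-compete clause unenforceable because it was overly broad.",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank sources by crude keyword overlap with the question -- a stand-in
    for the vector search a production system would use."""
    question_words = set(question.lower().split())
    ranked = sorted(
        trusted_sources.items(),
        key=lambda item: len(question_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_citations(question: str) -> str:
    passages = retrieve(question)
    # A real system would hand these passages to the language model as context
    # and instruct it to answer only from them; here we just show the grounding.
    cited = "\n".join(f"[{name}] {text}" for name, text in passages)
    return f"Question: {question}\nGrounded in:\n{cited}"

print(answer_with_citations("Are electronic signatures valid under the statute of frauds?"))
```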

How should firms and legal departments be using Generative AI right now?

Warren: At the moment, Generative AI — particularly public, open-source tools — is not a good fit beyond general question-and-answer tasks. Firms and departments need to create guidelines around proper usage. And the applications of Generative AI should for now be restricted to internal work, or work where you can afford to be wrong — and someone always needs to check the output.

For the immediate future, most usage of Generative AI will be through new features baked into pre-existing tools from trusted technology providers. However, some of the very biggest firms are hiring data scientists and developing their own knowledge models. We’re going to see a hybrid of these approaches, with legal technology vendors keeping in-house data behind a wall but checking it against publicly available datasets. This means the results will be both accurate and up to date.  


Zach Warren leads technology and innovation content for the Thomson Reuters Institute. Zach has been writing and speaking on tech and innovation for more than a decade, and with Thomson Reuters, charts the future of professional services industries, including legal, tax, and risk & fraud, through writing, podcasts, speaking engagements, and more. Zach was also the lead author of TRI’s Generative AI in the Law Firm Report, among other technology-centric reports. Before coming to Thomson Reuters, Zach was the editor-in-chief of ALM’s Legaltech News and was featured on Law.com, in The American Lawyer, and at ALM events such as Legalweek.