ChatGPT and the Future of Corporate Legal Work: Insights and Hallucinations

0

Chat GPT can co-write law review articles in record time[1] and pass law school exams, including multiple choice and essay questions.[2] However, its execution is far from perfect. Dean Andrew Perlman of Suffolk Law School, who co-wrote the law review article with Chat, said “the bot ‘isn’t ready for prime time,’” but does not seem too “far off.”[3]

One issue with Chat, as its creators point, out is that it “sometimes writes plausible-sounding but incorrect or nonsensical answers,”[4] referred to as AI hallucinations.[5] Interested in this new technology, its impact on corporate law, and seeing some of these hallucinations for myself, I sat down to interview the bot ChatGPT 3.0.

My takeaways from the interview are that at this stage the bot is useful for conducting background research on areas of corporate law, as it appears familiar with abstract concepts. However, it does not appear to have direct access to case law or academic sources, and it can provide inaccurate answers when asked about certain cases, without telling you that it is unsure.

Below is an edited version of our conversation. The full version is available here. I edited where I asked bad questions, or where the bot rambled. I recommend playing around with the bot, as you can start to build your question asking skills; plus, it is fun! I added some comments in parentheses.

INTERVIEW:

ME: I’m going to interview you for a blog post about you and the future of the legal profession. To begin, [can you]tell me a bit about yourself?

CHAT: Hello! I’m ChatGPT, a large language model trained by OpenAI. As an AI language model, I have been trained on a vast amount of data and information to provide accurate and relevant responses to a wide variety of questions. My capabilities include natural language processing, text generation, and other AI-related tasks. …

ME: What does your training involve?

CHAT: As a language model, my training involves being exposed to vast amounts of natural language data, such as text from books, articles, websites, and other sources. This data is used to train my algorithms to understand and process natural language. …

ME: Can your training include fine tuning to specific areas such as corporate law?

CHAT: Yes, … [f]ine-tuning involves further training my algorithms on a specific set of data that is relevant to the target domain. In the case of corporate law, this could involve training on legal texts, case law, and other relevant documents specific to corporate law. …

In fact, many organizations and law firms are already using AI language models like myself for tasks such as legal research, document review, and contract analysis. …

ME: How do you think this will have a future impact on corporate lawyers?

CHAT: … Here are some potential ways that this technology may impact the future of corporate law:

Increased efficiency: AI-powered tools can automate many routine and repetitive tasks that currently require human lawyers to spend significant amounts of time on. For example, legal research, document review, and contract analysis can all be automated with AI tools. This can free up lawyers’ time to focus on more complex legal issues and strategic thinking.

Improved accuracy: AI tools can analyze vast amounts of data much more quickly and accurately than human lawyers can. This can help lawyers identify patterns and trends that may not be immediately apparent, [sic]and can improve the accuracy and quality of legal advice.

Increased access to legal services: By automating routine tasks, AI-powered tools can help reduce the cost of legal services. This may make legal services more accessible to small and medium-sized businesses, which may not have had the resources to hire a lawyer in the past.

New legal challenges: The use of AI-powered tools may also give rise to new legal challenges and ethical questions. For example, who is responsible if an AI tool provides incorrect legal advice? How can we ensure that AI tools are unbiased and free from discrimination? Corporate lawyers may need to adapt to new legal frameworks and guidelines in response to these challenges.

Overall, the use of AI-powered tools in corporate law is likely to continue to grow in the coming years, and corporate lawyers who are able to leverage these tools effectively are likely to be more competitive and successful in the future.

ME: Great! I’m looking forward to getting more familiar with tools like yourself. One issue that I’ve heard about is AI hallucinations. Can you tell me about what those are?

CHAT: AI hallucinations are a type of error that can occur in AI systems, including language models like myself. These errors occur when an AI system generates a response that is not based on any real-world data or knowledge, but instead is based on patterns or connections that the AI has identified in its training data. These patterns may not be relevant or accurate, but the AI system may still generate a response based on them.

… This can result in responses that are confusing, inaccurate, or even dangerous in certain contexts.

To minimize the risk of AI hallucinations, it is important to carefully curate and validate the training data used to train AI systems, and to continually monitor and test the system’s performance. Additionally, human oversight and intervention can help catch and correct any errors or inaccuracies in the AI’s responses.

While the risk of AI hallucinations is a concern, it is important to note that AI systems are generally very good at generating accurate and relevant responses to a wide variety of questions and prompts. With proper training and oversight, AI language models like myself can be a powerful tool for legal professionals and other users in a variety of contexts.

ME: How common are such hallucinations? It seems like a small hallucination could have a large impact in key areas of corporate contracts like indemnity obligations, or payment terms.

CHAT: The occurrence of AI hallucinations can vary depending on the quality of the training data and the specific algorithms used to train the AI system. While AI hallucinations are generally rare in well-designed and carefully trained AI systems, they can occur in certain situations, particularly in cases where the training data is biased or limited in some way. (I am not so sure about the rarity as we will see one shortly.)

You are correct that even a small AI hallucination can have a significant impact in critical areas of corporate contracts, such as indemnity obligations or payment terms. This is why it is important to thoroughly test and validate AI systems, and to ensure that they are trained on diverse and relevant data sources.

To help mitigate the risk of AI hallucinations in corporate contracts and other important legal documents, legal professionals should carefully review and validate the output of AI systems and ensure that they are being used in conjunction with human oversight and expertise. This can help catch any errors or inaccuracies before they can have a negative impact on the contract or business transaction.

ME: Sounds like a good opportunity for future work for corporate lawyers! Continuing on the topic of hallucinations, today I heard that when prompted for case information, you will occasionally hallucinate names, and you will do the same for citations to academic papers. Is there something that makes citations a particularly vulnerable target for hallucinations? (This ended up being an example of a bad question on my part, as you will see in my corrected question below, but I thought it reveals how Chat handles source citation.)

CHAT: … One reason why citations may be more vulnerable to errors in AI-generated content is that citations typically involve a specific format and structure that must be followed precisely in order to be accurate and reliable. This can be challenging for AI systems to learn and replicate …

Additionally, citations often involve references to specific authors or sources, which can be prone to errors if the AI system has not been properly trained to recognize and understand these references. …

ME: A librarian [at Fordham]told [a group of students]today that when he asked you to come up with the three most cited cases involving insurance law in New York, he was not able to find the case names that you cited. I’m assuming he spent a while looking on this given his occupation! Do you think it’s more likely that this was a formatting error or a case where you generated a case name that did not exist?

CHAT: It’s difficult to say for sure without knowing more details about the specific interaction between the librarian and myself (oh starting to sound like a little lawyer already so cute!), but it’s possible that the issue was related to formatting or some other technical issue rather than generating a nonexistent case name.

As an AI language model, I generate responses based on patterns and connections in my training data … (This was informative; Chat is not a search engine.)

In the case of generating the most cited cases involving insurance law in New York, it’s possible that the librarian and I had different interpretations of what constitutes “insurance law,” or that there was a mismatch between the way I generated the response and the way the librarian was searching for the information (again, so lawyerly). It’s also possible that there was a technical issue with the formatting of the response, which could have made it difficult for the librarian to locate the relevant cases.

ME: Interesting. Yes, I should check in with him to see how he went about trying to find the cases. How many cases could you accurately cite to, assuming that I could correct any errors in citations?

CHAT: As an AI language model, I have access to a vast amount of legal and case law data, and I can provide citations and summaries for a wide range of cases across various legal domains and jurisdictions. However, the specific number of cases I can accurately cite depends on a variety of factors, including the specificity of the request, the quality of the input data, and the accuracy of the citation information provided. …

ME: When you say you have access to vast amounts of case law, does that mean you can search for specific cases in some database that you have? If not, can you explain what you mean by you have access to vast amounts of case law?

CHAT: Yes, as an AI language model, I have been trained on and have access to vast amounts of legal data, including case law databases. This means that I can search for and retrieve specific cases based on various criteria, such as jurisdiction, legal domain, and keyword search terms. (This is not true! Chat admits this later.)

To be more specific, my training data includes a wide range of legal texts, including case law databases such as LexisNexis, Westlaw, and other publicly available legal databases. This training data allows me to understand the structure and language of legal texts, including case law opinions, and to identify relevant cases based on specific search criteria.

In practice, this means that I can provide summaries, citations, and other legal information for a wide range of cases and legal topics, and I can also help lawyers and legal professionals to research and analyze legal issues more efficiently and effectively.

ME: Ok can you tell me about the 1990 memorandum opinion of Chancellor Allen in Cede v. Technicolor 1990 WL 161084[6] and what issues Allen addressed in this appraisal proceeding?

CHAT: Certainly! In the 1990 memorandum opinion of Chancellor Allen in Cede & Co. v. Technicolor, Inc. [sic]1990 WL 161084, the court addressed the issue of the fair value of Technicolor’s shares in the context of an appraisal proceeding.

…. In this case, Cede & Co. and other shareholders had dissented from the proposed merger between Technicolor and Carlton Communications, and had sought to have the fair value of their shares determined in an appraisal proceeding.

The main issue in the appraisal proceeding was the valuation of Technicolor’s shares, and whether the exchange ratio offered by Carlton Communications undervalued those shares. Chancellor Allen held that the appropriate valuation method was to use the market value of Technicolor’s shares, based on the price at which those shares were trading in the market at the time of the merger. (This not true! There is no mention of exchange rates or Carlton Communications in the case; though Robert Carlton did work for Perelman who purchased Technicolor.[7]) …

ME: That’s now what I’m reading. It looks like Allen used the discounted cash flow method to evaluate the share price.

CHAT: I apologize for the confusion. You are correct that in the 1990 memorandum opinion of Chancellor Allen in Cede & Co. v. Technicolor, Inc., [sic]Chancellor Allen used the discounted cash flow method to determine the fair value of Technicolor’s shares in the context of an appraisal proceeding. …

ME: What beta did Allen end up using in the CAPM?

CHAT: I apologize for the confusion earlier. My previous response was incorrect, as Chancellor Allen did not use the CAPM in the Cede v. Technicolor [sic]appraisal proceeding. Instead, he used the discounted cash flow method as the primary tool for valuation. (Huh? The CAPM goes into the DCF!) …

The 1990 memorandum opinion does not provide detailed information on the specific inputs used in the DCF analysis, as it primarily focuses on the court’s reasoning and conclusions regarding the fair value of Technicolor’s shares.

ME: Hm, maybe we’re still not looking at the same case. This is a quote where Allen decides which beta value to use: “Looking at other evidence with respect to Technicolor betas I conclude that 1.27 is a more reasonable estimate of Technicolor’s stock beta for purposes of calculating its cost of capital … .”[8]

CHAT: I apologize for the confusion earlier. You are correct that in the 1990 memorandum opinion of Chancellor Allen in Cede & Co. v. Technicolor, Inc., [sic]he did indeed use the CAPM and specifically mentioned the use of the beta coefficient to estimate Technicolor’s cost of equity.

As you mentioned, Chancellor Allen concluded that a beta of 1.27 was a more reasonable estimate of Technicolor’s stock beta for purposes of calculating its cost of capital. He arrived at this conclusion after considering various sources of evidence, including expert testimony, and ultimately found that a beta of 1.27 was consistent with the risks associated with investing in Technicolor’s stock. …

ME: What were the names of the experts whose testimony Allen compared?

CHAT: I’m sorry, but I do not have access to the specific names of the experts whose testimony Chancellor Allen compared in the Cede & Co. v. Technicolor, Inc. [sic]appraisal proceeding. The 1990 memorandum opinion does not provide this information, as it primarily focuses on the court’s reasoning and conclusions regarding the fair value of Technicolor’s shares. (Not true![9]) …

ME: Interesting. What I gather from our conversation about the Technicolor opinion is that you can’t search the text of cases that are in the databases you have access to?

CHAT: I apologize if my earlier responses were unclear. As an AI language model, I do not have direct access to any specific case law databases or platforms. (This is the opposite of what it told me earlier.) However, I can provide general information about the capabilities of legal research tools and databases.

ME: No worries! How do you generate responses to questions about specific cases if you aren’t searching a database?

As an AI language model, I have been trained on vast amounts of textual data, including legal cases and other legal documents. When presented with a question about a specific case, I can use my natural language processing capabilities to analyze the question and generate a response based on my understanding of the relevant legal concepts and information.

While I do not have direct access to any specific case law database, my training has included exposure to a wide range of legal cases and related materials, which allows me to draw on that knowledge to provide accurate and informative responses to questions. Additionally, I can provide citations and links to relevant legal sources where appropriate, based on my understanding of the question and the available resources.

ME: Thanks so much!

CHAT: You’re welcome! If you have any more questions, feel free to ask.


[1] Jenna Greene, Will ChatGPT make lawyers obsolete? (Hint: be afraid), Reuters (Dec. 9, 2022 2:33 PM), https://www.reuters.com/legal/transactional/will-chatgpt-make-lawyers-obsolete-hint-be-afraid-2022-12-09/.

[2] Samantha Murphy Kelly, ChatGPT passes exams from law and business schools, CNN: Bus. (Jan. 26, 2023, 1:35 PM), https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html.

[3] Greene, supra note 1.

[4] Id.

[5] See Vilius Petkauskas, ChatGPT’s answers could be nothing but a hallucination, Cybernews (Mar. 6, 2023), https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/.

[6] See generally Cede & Co. v. Technicolor, Inc., Civ. A. No. 7129, 1990 WL 161084 (Del. Ch. Oct. 19, 1990).

[7] See id.

[8] Id. at *29.

[9] See id. at *29–*30.

Share.

About Author

Comments are closed.

Fordham Journal of Corporate & Financial Law