Demystifying Black Box AI: 7 Tips For Lawyers To Promote Explainable AI

Even the AI experts who create and train black box AI models don’t always fully understand their internal processes and decision-making mechanisms.

I stumbled on a real eye-opener when exploring the generative AI tool, ChatGPT. My query for the “top three greatest artists of all time” returned a list of white European men: Leonardo da Vinci, Michelangelo Buonarroti, and Vincent van Gogh. 

Ahem! Georgia O’Keeffe would like a word, as would Frida Kahlo, Kara Walker, and many other great international artists of all races and genders.

ChatGPT’s response sheds light on a few biases lurking within AI systems. Preventing biases is a major reason regulators emphasize the need for “explainable” AI. 

But how can lawyers describe the inner mysteries of AI models when even technology specialists refer to specific functions as “black box AI”? Below are seven tips to help you navigate black box AI and promote the use of explainable AI for a more inclusive future.

Embrace The Challenge Of Explainable AI

Legal practitioners play a vital role in ensuring AI solutions adhere to the principles of fairness, neutrality, and unbiased decision-making. Regulations like those in the EU AI Act aim to hold AI technologies accountable for their decision-making processes. 

However, even the AI experts who create and train black box AI models don’t always fully understand their internal processes and decision-making mechanisms. As a result, potential biases go unexamined and unchallenged. This lack of accountability often makes black box AI inappropriate in areas such as healthcare, criminal justice, credit scoring, and hiring. 


But not all AI systems are impenetrable. Think of the “black box” metaphor as a call to action, not an insurmountable obstacle, as we strive toward enhancing AI’s explainability. In embracing this challenge, you can lead the charge for greater transparency and accountability in AI. 

Learn The Basics Of AI

Learn basic AI concepts like machine learning, deep learning, and natural language processing. Familiarize yourself with ethical concerns such as data privacy and algorithmic biases. Education demystifies AI, empowering you to better interpret and explain AI to clients, colleagues, regulators, and the public. (Check out these resources and strategies to learn more about AI now.)

Collaborate With Technology And AI Specialists

Work closely with data scientists, AI developers, and tech experts to learn about the key drivers behind AI decisions. They can help you translate technical jargon into understandable language, which is a considerable benefit when explaining AI to clients and regulators. They can also help you set clear and realistic expectations around transparency, reliability, and fairness.


Prove AI’s Validity And Fairness

Maybe you can’t see or explain how an AI model works, but you can still evaluate its outcomes. Ensure tech specialists perform rigorous testing and validation of AI algorithms to identify and mitigate potential biases. These efforts generate evidence of AI’s fairness and efficacy to present to stakeholders.
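Outcome testing of this kind can be concrete even when the model itself is opaque. Below is a minimal sketch of one widely used screening check, the "disparate impact" ratio between two groups' selection rates, often compared against the four-fifths rule of thumb from U.S. employment guidelines. The sample decisions and group labels are illustrative assumptions, and a real audit would involve far more than this single metric.

```python
# Sketch of a disparate impact check: compare the favorable-outcome
# rates of two groups. The data below is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'hire' or 'approve') outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = favorable outcome) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths rule of thumb -- flag for review")
```

A check like this treats the model as a black box on purpose: it evaluates only inputs and outcomes, which is exactly the evidence of fairness and efficacy that can be presented to stakeholders.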

Standardize AI Reporting

Develop standardized formats for reporting AI outputs and explaining decision-making processes. Standards enhance clarity and facilitate cross-communication between legal professionals, clients, and regulators. Tailor these and other audit reports to emphasize essential information and justifications.
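To make the idea concrete, here is a sketch of what one machine-readable decision report might contain. Every field name, value, and date below is an illustrative assumption; an actual format should be designed with counsel and tailored to applicable regulations such as the EU AI Act.

```python
# Sketch of a standardized AI decision report, serialized as JSON so
# legal teams, clients, and regulators can all consume the same record.
import json

report = {
    "model_name": "loan-screening-v2",   # hypothetical model identifier
    "decision": "declined",
    "decision_date": "2024-01-15",
    "top_factors": [                     # ranked drivers of the outcome
        {"feature": "prior_defaults", "direction": "negative"},
        {"feature": "income", "direction": "positive"},
    ],
    "human_review_available": True,
    "validation": {"last_bias_audit": "2023-12-01", "disparate_impact": 0.91},
}

print(json.dumps(report, indent=2))
```

Fixing the fields in advance is what makes reports comparable across matters and vendors, and it forces each deployment to answer the same questions: what decided, why, and how recently it was audited.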

Prioritize Transparent AI Models

As often as possible, opt to use transparent AI models that provide the clearest explanations of their decision-making process. Some applications may require complex AI models, but the number of interpretable and explainable AI techniques is growing. 
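A minimal sketch illustrates why simple models are easier to explain: in a linear scoring model, each input's contribution to the decision is a single visible number. The features and weights below are invented for illustration, not a real scoring model.

```python
# Sketch of an interpretable linear scoring model: the decision is a
# weighted sum, so every factor's influence can be read off directly.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "prior_defaults": -0.8}

def score(applicant):
    """Return the total score and a per-feature breakdown of contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = score(
    {"income": 4.0, "years_employed": 6.0, "prior_defaults": 1.0}
)
print(f"score = {total:.1f}")
for feature, value in breakdown.items():
    print(f"  {feature}: {value:+.1f}")
```

Contrast this with a deep neural network, where the same question, "which factor drove this outcome, and by how much?", may have no direct answer. That traceability is what the transparent-model preference buys.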

Insist On Transparency From Vendors

Question vendors about the mechanics behind their tools. Request explicit details about how systems make decisions and process data. You may need to acquire detailed technical documentation and assurances of compliance with relevant regulations.

Implement Ethically Sound And Interpretable AI Solutions

Striving for interpretability, advocating for regulations that promote transparency, and engaging in interdisciplinary collaborations with data scientists and technologists are all part of the solution. 

Demystifying black box AI helps you ensure accountability, protect against biases, comply with regulations, and advocate for ethical practices in an increasingly AI-driven world. This is one way lawyers can help pave the way to a future where AI operates as a force for good while adhering to the principles of transparency, fairness, and accountability. 


Olga V. Mack is the VP at LexisNexis and CEO of Parley Pro, a next-generation contract management company that has pioneered online negotiation technology. Olga embraces legal innovation and has dedicated her career to improving and shaping the future of law. She is convinced that the legal profession will emerge even stronger, more resilient, and more inclusive than before by embracing technology. Olga is also an award-winning general counsel, operations professional, startup advisor, public speaker, adjunct professor, and entrepreneur. She founded the Women Serve on Boards movement that advocates for women to participate on corporate boards of Fortune 500 companies. She authored Get on Board: Earning Your Ticket to a Corporate Board Seat, Fundamentals of Smart Contract Security, and Blockchain Value: Transforming Business Models, Society, and Communities. She is working on Visual IQ for Lawyers, her next book (ABA 2023). You can follow Olga on Twitter @olgavmack.