AI in Banking and Finance, November 30, 2023

This semi-monthly column by Sabrina I. Pacifici highlights news, government reports, industry white papers, academic papers and speeches on the subject of AI’s fast paced impact on the banking and finance sectors. The chronological links provided are to the primary sources, and as available, indicate links to alternate free versions. Each entry includes the publication name, date published, article title and abstract.

NEWS:

International Monetary Fund, November 30, 2023. AI’s Reverberations across Finance. Financial institutions are forecast to double their spending on AI by 2027. Artificial intelligence tools and the people to use them are the new must-haves for the world’s financial institutions and central banks. In June 2023, JPMorgan Chase & Co. had 3,600 AI help-wanted postings, according to Evident Insights Ltd., a London-based start-up tracking AI capabilities across financial services companies. “There’s a war for talent,” said Alexandra Mousavizadeh, the founder of Evident Insights. “Making sure you are ahead of it now is really life and death.” Like other technological breakthroughs, AI offers fresh potential—accompanied by novel risks. The financial services industry may be among the biggest beneficiaries of the technology, which may enable them to better protect assets and predict markets. Or the sector may have the most to lose if AI spurs theft, fraud, cybercrime, or even a financial crisis that investors can’t conceive of today.

Bloomberg, November 29, 2023. The Robots Will Insider Trade. Also OpenAI’s board, kangaroo grazing and bank box-checking. This article references the following in the first paragraph. Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure. Jérémy Scheurer, Mikita Balesni, Marius Hobbhahn. arXiv:2311.07590v2 [cs.CL] for this version) https://doi.org/10.48550/arXiv.2311.07590. Mon, 27 Nov 2023. We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

Bloomberg, November 29, 2023. JPMorgan Says AI Technology Is Starting to Generate Revenue. Bank has hundreds of projects ‘close to coming to fruition’; Headcount won’t necessarily be cut by AI, Heitsenrether says.

Bloomberg Law, November 28, 2023. Artificial Intelligence Drives New Approach to M&A Due Diligence. Merger and acquisitions where the target’s value derives from its software or intellectual property commonly have a separate diligence workstream that focuses on open-source compliance, security, or IP. But with the legal and regulatory challenges that come from using generative AI, acquirers should consider a new diligence workstream that focuses on generative AI issues. Acquirers in all industries and sectors also should review their approaches to diligence given the broad potential use cases.

The Hill, November 29, 2023. AI could endanger humanity in 5 years: Former Google CEO. Former Google CEO Eric Schmidt said he thinks artificial intelligence (AI) capabilities could endanger humanity within five to 10 years and companies aren’t doing enough to prevent harm, Axios reported Tuesday. In an interview at Axios’s AI+ Summit, Schmidt compared the development of AI to nuclear weapons at the end of World War II. He said after Nagasaki and Hiroshima, it took 18 years to get a treaty over test bans but “we don’t have that kind of time today.” AI dangers begin when “the computer can start to make its own decision to do things,” like discovering weapons. The technology is accelerating at a quick pace. Two years ago, experts warned that AI could endanger humanity in 20 years. But now, Schmidt said experts think it could be anywhere from two to four years away.

Brookings, November 21, 2023. AI can strengthen U.S. democracy—and weaken it. “Following Senate Majority Leader Chuck Schumer’s (D-NY) latest AI Insight Forum, which focused on elections and good governance, all eyes are on the developing intersection of artificial intelligence (AI) and democracy. Complementing the bipartisan Senate forum series are the Biden administration’s recent executive order on AI and Vice President Kamala Harris’ trip to the United Kingdom to attend the AI Safety Summit. This increased focus on AI comes at a time of heightened attention to the state of democracy in both the United States and globally. In this first part of a new series on the risks and possibilities of the confluence between AI and democracy, we provide an overview of three principal areas where AI may transform democratic governance and its execution. Subsequent installments of the series will offer deeper dives into these topics and policy recommendations for lawmakers.

American Banker, November 20, 2023. [read free] AI copilots: How are banks using them, and do you need one? Microsoft, Google, Salesforce, ServiceNow, Blend, Q2 and Intuit are among the software companies that offer copilots. Here’s why banks are taking notice.

American Banker, November 17, 2023. [read free] BankThink Gen AI is the key to making Gen Z love banks. Gen Z is stepping into the workforce, and as they start generating income, they will need bank accounts. So far, less than half of Gen Z individuals old enough to hold bank accounts have one, so financial institutions are eager to grow their youthful clientele. And the tool banks hope will foster this connection? Generative AI, of course.  Banks have been fast adopters of generative AI, but most are still only using it in ways that don’t excite young people: fraud protection, coding efficiency, customer support and back-office process improvements. When it comes to using AI to create the future of banking for Gen Z, banks need to think a bit more creatively.  Gen Z wants a new kind of bank: mobile-first, no phone calls, no branch visits and hyper-personalization. They want unique investment advice, since two-thirds are already saving for retirement. And they want all of their financial services to be accessible in one place, with tight integration between all their accounts. That’s far from today’s reality, even after a decade of digital banking innovation. Consumers have different accounts for mutual funds, mortgages, checking accounts and student loans, and they get investment advice from disparate, and sometimes untrustworthy, sources.

American Banker, October 26, 2023. UK financial watchdog eyes using AI to monitor bank transactions

American Banker [read free], October 6, 2023. Why Visa is investing in artificial intelligence startups. Visa has launched a $100 million initiative that will invest in companies focused on generative artificial intelligence — a technology that is attracting a lot of interest throughout the financial services industry. Generative AI refers to the use of large language models to build AI that can create original content such as text or images. Investment has been pouring into generative AI for months, much of it geared toward improving how the payment experience can be personalized for consumers and merchants. Eventually, generative AI can be used to develop new forms of financial services, shopping, marketing, risk management and other uses. But to reach that stage, AI companies will need a partner that can provide immediate scale, experts say.

PAPERS:

Bloomberg [fee only], November 29, 2023. The Robots Will Insider Trade. Also OpenAI’s board, kangaroo grazing and bank box-checking. This article references the following in the first paragraph. Free to read – Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure. Jérémy Scheurer, Mikita Balesni, Marius Hobbhahn. arXiv:2311.07590v2 [cs.CL] for this version) https://doi.org/10.48550/arXiv.2311.07590. Mon, 27 Nov 2023. We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

Posted in: AI in Banking and Finance