Here’s how banks are using and experimenting with generative AI
Jennifer Fuller, US financial services lead at PA Consulting, discusses generative AI in banking with American Banker’s Carter Pape.
Click here to read the full American Banker article
The article notes that large language models could change how banks interact with customers and their own knowledge bases, and how they protect themselves and their customers from fraud and financial crimes, but few banks have released products that actually deploy the nascent technology.
That has left smaller banks, still in the learning and experimentation stages, to take cues from technology leaders on where large language models – the kind of technology that powers OpenAI's ChatGPT – will prove most useful in banking.
Two examples of banks using large language models experimentally, or otherwise keeping their use strictly internal, are Goldman Sachs, which is using generative AI to help developers write code, and JPMorgan Chase, which is using it to analyze emails for signs of fraud.
Additionally, JPMorgan Chase filed a trademark in May for a product called IndexGPT that could select investments for wealth management clients. The product is apparently part of a larger effort by the bank to lean into technology investments, specifically artificial intelligence. Unlike the other examples, the trademark specifies that customers, not just bank employees, would interact with the model.
As banks grow more interested in adopting AI for various use cases, they need to be careful about their strategy for doing so, according to Fuller. “One of the big risks about AI for organizations at the moment is it turning into a Frankenstein’s monster of pet projects,” Fuller said. “Everybody’s doing their own little thing with AI, but to really get the organizational value at a strategic level, you need to build a framework where AI is part and parcel of the way that your organization does business.”
One way banks are making AI part and parcel of their business is by organizing their knowledge bases: training language models on internal documentation and letting employees ask a model questions that would otherwise require searching that documentation themselves.
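To make that pattern concrete, here is a minimal, illustrative sketch of such an internal knowledge-base assistant. It is not drawn from any bank's implementation: the documents, the keyword-overlap retrieval step, and the ask_llm() stub standing in for a call to an approved language model are all assumptions made for the example.

```python
# Illustrative sketch only: a toy "ask the internal docs" flow.
# Retrieval is a simple keyword-overlap ranking; ask_llm() is a
# hypothetical stand-in for whichever internal model a bank would use.

from typing import List, Tuple

INTERNAL_DOCS = {
    "expenses-policy": "Employees must submit expense reports within 30 days.",
    "wire-limits": "Domestic wire transfers above 50,000 dollars require dual approval.",
    "kyc-refresh": "Customer KYC records are refreshed every 24 months.",
}


def retrieve(question: str, k: int = 2) -> List[Tuple[str, str]]:
    """Rank internal documents by keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), name, text)
        for name, text in INTERNAL_DOCS.items()
    ]
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an internal language model."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"


def answer(question: str) -> str:
    """Build a prompt that grounds the model in retrieved documentation."""
    context = "\n".join(f"- {name}: {text}" for name, text in retrieve(question))
    prompt = (
        "Answer using only the internal documentation below.\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)


if __name__ == "__main__":
    print(answer("What is the approval rule for large wire transfers?"))
```

The design point the sketch illustrates is that the model is not asked to memorize policy; it is handed the relevant excerpts at question time, which keeps answers tied to the documentation employees would otherwise have to search by hand.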