As a longtime user of digital technology, the financial services industry is poised for an AI-enabled transformation, an artificial intelligence expert says.
Avi Goldfarb, a professor at the Rotman School of Management at the University of Toronto, made that prediction during The Disruptive Economics of AI, a webinar he presented on June 5.
“A lot of the industries that are already experiencing rapid transformation are those that were already digitized and had already experienced rapid productivity growth since the 1970s, so the tech industry, first and foremost,” said Goldfarb, who is the Rotman Chair in Artificial Intelligence and Healthcare, and professor of marketing at the Rotman School.
Of course, the financial services industry has been so heavily invested in digital that it has its very own tech industry — fintech.
“AI is already having an impact in many aspects of financial services, including lending, underwriting, fraud detection, documentation, and personalized marketing,” Goldfarb told Rethinking65 during follow-up correspondence. But these incremental changes — which he refers to as point solutions — are only a prelude to a much bigger transformation, he predicted.
During the webinar, Goldfarb said businesses that adopt artificial intelligence use it either for point solutions — the most common method — or system solutions, which can produce the biggest benefits. Point solutions are easy, he said.
“It’s relatively straightforward to look at your existing workflow, find some part of that workflow that the tool can help, like a prediction problem within an individual’s workflow, take out the human in that task, drop in the machine, and keep the workflow the same,” Goldfarb said during the webinar. “These point solutions are useful, and they incrementally improve productivity as they diffuse.”
A recent New York Times article reported that many companies are replacing entry-level jobs with AI, causing problems for recent college graduates. When Rethinking65 asked him about this “job apocalypse,” as the Times refers to it, Goldfarb responded that educators like him need to improve. “If AI can complete the tasks that are currently given to entry-level employees, we in higher education will need to adapt and ensure our students have skills that are valued in the workplace upon graduation,” he said.
While point solutions can result in greater efficiencies, far more transformative changes can be achieved through system solutions, where a firm changes its workflow, Goldfarb said. “When you look at the history of economics and technology, massive productivity gains tend to happen by changing the workflow, by figuring out how the technology enables you to do what was impossible before,” Goldfarb explained. “But those system solutions are hard, and they take time.”
In his comments to Rethinking65, Goldfarb added, “I anticipate many financial services firms will discover new workflows that take advantage of what AI has to offer, though I am not sure exactly what that will look like.”
Small firms are often the ones that develop industry-transforming system changes and then become large firms themselves, Goldfarb said during the webinar, citing as examples Ford Motor Company, Google and Netflix.
“There’s a lot more smart people outside your organization than in it, no matter how big your organization is and how smart your people,” Goldfarb explained. “And so, there is often a really good chance that somebody outside will figure out a way to disrupt what you do in times of technological change.”
What’s Under the Hood
Goldfarb devoted much of his presentation to concerns about AI’s disruptive effects on business, noting that there are divergent opinions.
“There’s an optimistic view where we are on the verge of machines that can do just about everything we can do and listen to us and make our lives much better, as in older science fiction,” he said. “Or we’re on the verge of machines that can do everything we can do, and they don’t listen to us, and that’s where we get dystopian science fiction like The Terminator or The Matrix.” Goldfarb said worst-case scenarios predicted by some are unlikely.
But to understand the potential benefits and pitfalls of AI, you have to understand what’s “under the hood” of AI systems, Goldfarb said. In essence, they’re all the latest versions of a long line of prediction machines.
“Any time you’re filling in missing information, that’s prediction,” he said. “So yes, it could be good old-fashioned statistics problems. But it could also be a large number of other problems that are really about filling in missing information.”
Lenders were some of the first businesses to use machine prediction, he said. What’s the probability that a borrower will repay a loan? “That’s arguably the oldest prediction problem in business,” he said. “Increasingly, lenders, banks and others, have been using machine learning tools to predict whether someone’s going to pay back a loan.”
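The loan-repayment problem Goldfarb describes can be sketched in a few lines of code. The example below is purely illustrative and not from the article: it fits a tiny logistic-regression model, by plain gradient descent, to made-up data relating a borrower’s debt-to-income ratio to whether the loan was repaid, then outputs a repayment probability for new borrowers.

```python
# Toy sketch (not from the article): loan repayment as a prediction
# problem. A one-feature logistic-regression model is fit to
# hypothetical data by gradient descent.
import math

# Hypothetical training data: (debt-to-income ratio, repaid? 1/0)
data = [(0.1, 1), (0.2, 1), (0.3, 1), (0.4, 1),
        (0.5, 0), (0.6, 0), (0.7, 0), (0.8, 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b with plain batch gradient descent
w, b = 0.0, 0.0
for _ in range(5000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)   # current predicted repayment probability
        gw += (p - y) * x        # gradient of log loss w.r.t. w
        gb += (p - y)            # gradient of log loss w.r.t. b
    w -= 0.5 * gw
    b -= 0.5 * gb

def predict_repayment(ratio):
    """Predicted probability that a borrower with this debt-to-income
    ratio repays the loan."""
    return sigmoid(w * ratio + b)

# Lower debt load -> higher predicted repayment probability
print(round(predict_repayment(0.2), 2))
print(round(predict_repayment(0.7), 2))
```

Real lending models use many more features and far richer data, but the structure is the same: the machine outputs a probability, and a human (or a human-designed policy) decides what to do with it.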
Similarly, the insurance industry, which is also in the business of pricing risk — and so making predictions — increasingly has been using machine learning tools for underwriting, he said.
“What’s changed in the past few years is we’ve started to recognize that a number of things we didn’t use to think of as prediction, like medical diagnosis, can be solved with machine prediction,” Goldfarb explained. So diverse systems like OpenAI’s ChatGPT and image-generation tools are all machine prediction “under the hood,” he said. “What ChatGPT is doing is predicting the set of words that’s most helpful, honest and harmless in response to your query.”
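The “predicting the next words” idea can be made concrete with a toy model. The sketch below is an assumption-laden simplification, not how ChatGPT actually works: it counts, in a tiny made-up corpus, which word most often follows each word, then “generates” by predicting the most likely continuation, which is the same fill-in-the-missing-information task at a vastly smaller scale.

```python
# Toy sketch (illustrative only): next-word prediction from bigram
# counts, the simplest possible version of the "predict the likely
# continuation" idea behind large language models.
from collections import Counter, defaultdict

# Hypothetical mini-corpus
corpus = ("the borrower will repay the loan . "
          "the borrower will repay on time . "
          "the lender will price the risk .").split()

# Count how often each word follows each preceding word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("borrower"))  # "will" in this toy corpus
print(predict_next("will"))      # "repay" (seen twice vs. "price" once)
```

Modern language models replace the raw counts with a neural network conditioned on long contexts, but the output is still a prediction over what comes next, not a decision.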
It’s necessary to understand what’s under the hood in order to understand the role of people as the “essential complements” to prediction machines, Goldfarb said. In his view, machines produce output, but humans decide what to do with it. “That’s what we call judgment,” he said.
And the output the machines produce depends on how the AI models were created. “The key point is, it’s not machines making decisions,” Goldfarb said. “Machines are making predictions. Humans are embedding our values into those machines … often in an automated way, by pre-specifying and encoding those values, what matters, into the machine.”
The Dangers
Goldfarb said AI holds both great promise and great risk. “A number of research papers suggest that machines that can innovate will be fantastic for productivity growth, and therefore likely to be fantastic for many, many humans,” he said, adding, “Major technological change often comes with extraordinary risk.”
The potential benefits of AI are greatest in healthcare, Goldfarb asserted. “If AI’s potential is as big as it could be, to massively improve productivity, to give us much more of what we want, cure cancer, improve other aspects of healthcare, et cetera, what kind of a risk are we willing to take?” he asked. His answer: “To the extent that this technology is going to not just improve productivity, but improve outcomes in healthcare, then we should be willing to accept much more risk than we otherwise would.”
Goldfarb acknowledged AI can be used by bad actors, but said that the risks, while real, are not as bad as many fear, because society will adjust. For example, if the internet is flooded with fake images, “what’s likely to emerge is something like a babbling equilibrium,” he said. “So, to the extent that we can no longer trust images and videos that we see online, pretty soon, we just won’t trust them.”
Similarly, if bad actors frequently use AI for blackmail, the tactic will become less effective, “because no one will believe you,” Goldfarb said. “It’s not the doom and gloom of fraud everywhere.”
Another area of concern is the possibility of crooks using AI for large-scale theft, such as getting into bank accounts. “That’s a small-scale worry,” Goldfarb said. “It doesn’t mean the whole financial system is going to collapse; it almost surely won’t.”
Institutions will have to adjust with new security technologies and protocols, he said. “To the extent that AI enables mimicking voices, then verbal phone confirmations are going to go away.”
When Rethinking65 asked what risks AI poses for financial advisors, Goldfarb responded, “Like any tool, AI creates opportunities and risks. The opportunities include more accurate, more efficient services. AI tools, however, are not deterministic. The output can change, and sometimes it can be inaccurate. That creates a risk in both financial advice and financial documentation.”
Rethinking65 asked Goldfarb about another negative scenario: the possibility that some professionals, such as physicians or financial advisors, will shirk their key responsibility to provide human judgment to AI output, instead passing it along to the patient or client without vetting.
“Yes, that is a real risk,” Goldfarb acknowledged. “It means that the humans making the decisions will, in effect, be the designers of the AI rather than the financial advisor working with the client. Even when a process is automated, humans identify opportunities to use AI and humans determine what actions to take. If financial advisors start to defer to the AI, then it suggests extra responsibility for the humans designing the AI system in the first place.”
Ed Prince is a writer for Rethinking65. In a four-decade career in journalism, he has served as an editor with many of New Jersey’s leading newspapers, including the Star-Ledger, Asbury Park Press and Home News Tribune. Read more of his articles here.