At one capital markets firm recently, senior management were surprised to get an email saying the number two to the CEO was leaving. Rumours of his exit proved exaggerated — it was a false alarm.
The practical joker turned out to be a new AI app that took minutes of meetings and circulated them. Somehow, it had got its wires crossed.
The risk of generative artificial intelligence making things up is well known, and has caused a sprinkling of lawsuits and firings across the business world in the three years since ChatGPT burst onto the scene.
“It’s always important to remember that this technology can hallucinate — the quality can be poor,” says Alexis Besse, head of fixed income quantitative strategies at Jefferies in London. “But take it away and people will scream. It already delivers significant value to the business.”
Capital market participants are being pulled in all directions by AI. One debt capital markets banker is bursting with ideas for ways it could help his business and his working life. Yet he acknowledges, too, that “if you listen to Bill Gates, AI is going to take people’s jobs — that’s the reality. Maybe it won’t take our jobs but it will stop your kids or mine getting into a certain career path. Maybe we’ll have more plumbers.”
And asked whether in practice he is using AI day to day, he says: “I’m trying to a bit more.”
AI has its wholehearted believers, who see it as self-evidently “a superior way of researching, of access to information,” as a senior banker in Singapore puts it.
But it also has its haters, who warn of dire risks to humans’ status in the workplace, and hence to social cohesion.
But most people in the financial world share some of both types of feeling — as well as a hefty dollop of scepticism. “How’s AI ever going to get you to agree with a borrower that they should pay 100bp over swaps instead of 97bp?” asks the head of bond syndicate at a leading investment bank. “How’s it going to get you on the phone with Schroders?”
Irresistible force
Like it, loathe it or laugh at it, AI is becoming ubiquitous. The big IT companies like Microsoft are pressing it on users of their software with every kind of prompt and default setting, and financial firms’ leaders are embracing it eagerly.
Some investment banks have handpicked teams of young bankers to “spearhead initiatives and increase the use of AI,” one official says.
Of course, as Besse points out, AI means many things. “AI has been used in trading for years, in the form of predictive analytics,” he says. “Machine learning has long been used by every [quantitative trading] team to predict a number of things. A large part of how we trade markets systematically is based on those models and that hasn’t changed much.”
Trading firms use machine learning to model patterns in liquidity and prices, while venues use it to aggregate and clean price information.
The new wave of generative AI moves on from analysing numbers to reading, writing — even speaking — text.
“There are some important changes compared with the previous generation,” says Besse. “The main thing is the adoption of the technology. I wrote a book on AI five years ago. Then I don’t think people knew what machine learning was. Now no one doesn’t know. The tech is being used all the way from analysts on the desk to senior management.”
As when the internet appeared in the 1990s, the capability has arrived before its uses are understood. Even the deepest experts have little idea how AI is going to change work. There is a sense of bobbing on an ocean of limitless power.
Seeking guidance
But capital markets — as a regulated industry in which a wrong decision can cost millions — cannot tolerate too much freewheeling experimentation.
Firms, team leaders and individual employees all have to work out how to organise this vast new energy, channel it in structured ways and put it to its most productive uses.
In mid-2024 the International Capital Market Association set up a working group to answer its members’ “call for education on the topic of AI,” says Emma Thomas, a member of Icma’s fintech and digitalisation team.
With 180 members, the group, which meets quarterly, is one of Icma’s largest. “It shows the level of keenness to digest this information,” says Thomas. “The general attitude is quite accepting and pro-innovation. A lot of people involved in the group are at the beginning of their journeys.”
Some participants are AI professionals, others ordinary bond practitioners.
One of the group’s core messages has been to distinguish between systems which process information according to programmed rules, and those which teach themselves as they go along.
“Self-learning developments are going to be very impactful,” says Thomas. “A lot are still in development, but we have seen a number of models that go beyond predictable, deterministic outputs.”
A common task for the new large language models (LLMs) is extracting information from written bond documents.
The World Bank Group Treasury has developed a tool called Shastra to read dealers’ term sheets for its own funding deals and asset-liability management transactions and enter them into the World Bank’s core transaction system directly. Staff still check the information but they no longer have to enter it manually.
Treasury officials believe this will make the information more accurate and reliable, as well as saving time. Over the next three years, they will seek ways to share this data with external partners such as fiscal and paying agents, custodians, clearing systems and investors.
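At its simplest, this kind of document reading amounts to pulling structured fields out of free text. The sketch below, in Python, is purely illustrative — the field names, patterns and sample term sheet are invented, and this is not how the World Bank’s Shastra tool actually works:

```python
import re

# Hypothetical sample term sheet text (invented for illustration)
TERM_SHEET = """
Issuer: Example Supranational
Coupon: 3.625% per annum
Maturity Date: 15 June 2030
Nominal Amount: USD 1,000,000,000
"""

# One pattern per field we want to capture from the document
PATTERNS = {
    "issuer": r"Issuer:\s*(.+)",
    "coupon_pct": r"Coupon:\s*([\d.]+)%",
    "maturity": r"Maturity Date:\s*(.+)",
    "nominal": r"Nominal Amount:\s*(.+)",
}

def extract_fields(text: str) -> dict:
    """Pull structured deal terms out of free text, one regex per field."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        out[field] = match.group(1).strip() if match else None
    return out

print(extract_fields(TERM_SHEET))
```

In practice an LLM earns its keep where term sheets vary in wording and layout between dealers, which defeats rigid patterns like these — but the output is the same idea: fields that can flow straight into a transaction system, with staff checking rather than rekeying them.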
By comparing multiple documents on a deal, AI apps can help to prevent errors, says another digital capital markets expert. “You’d be surprised: we are still looking at settlement fails,” he says. “If you look at the number of securities that are issued, in different markets, there is still a level of complexity, which means some of these issues persist and can be solved through AI.”
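The cross-checking the expert describes can be reduced to a simple reconciliation once each document has been parsed into fields. A minimal sketch, with invented deal data:

```python
# Hypothetical sketch: the fields and values below are invented to
# illustrate cross-checking two documents that describe the same bond.
term_sheet = {"isin": "XS0000000001", "coupon": "3.625%", "maturity": "2030-06-15"}
confirmation = {"isin": "XS0000000001", "coupon": "3.625%", "maturity": "2030-06-16"}

def find_mismatches(doc_a: dict, doc_b: dict) -> dict:
    """Return the fields on which two documents about one deal disagree."""
    return {
        field: (doc_a.get(field), doc_b.get(field))
        for field in doc_a.keys() | doc_b.keys()
        if doc_a.get(field) != doc_b.get(field)
    }

# The mismatched maturity date here is exactly the kind of discrepancy
# that can cause a settlement fail if it slips through unnoticed.
print(find_mismatches(term_sheet, confirmation))
```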
While these applications are about streamlining and optimising existing administrative work, others have tried to use AI to create new knowledge.
In 2019, Bastien Winant and Marko Mravlak of the European Stability Mechanism, together with two academics, published a paper on using machine learning to predict investor demand for ESM bonds, based on data from dealers. They believed they had produced useful forecasts and could predict the direction of changes in demand from some individual investors. Some of the authors have produced further research in this area.
The expert says market participants are discussing AI tools “to support a financing decision”, but he has not seen them implemented yet.
Off the desk
Participants are finding that possible uses of AI tend to fall into two categories: clever tasks and dumb ones.
In the primary bond market, participants are not yet using AI much for the clever tasks.
One of the core jobs before a new issue comes to market — for the borrower and investment banks — is to survey the trading prices of comparable bonds and estimate how to structure the deal, what pricing it should offer and what investors are likely to accept.
“The proper large language model AI is not involved in what we do,” says the head of syndicate at a second top investment bank. “Can it help you work out comps for a new issue, or whether it’s a good day in the market? We know if it’s a good day.”
When it comes to suggesting comparables “it’s quite a long way from being better than a human being”, he adds.
The head of treasury at a European power company says there is already a function on Bloomberg that selects comparable bonds and enables users to “price or look at how a bond would trade synthetically”.
His opposite number at a real estate company says “I’m interested in the subject of AI, and to hear who of the corporates is really using it. We use it only in controlling, when it comes to forecasting numbers and budgets. But in pure treasury or financing, capital markets, hedging, we don’t use AI.”
It may be coming. Covestro, the polymer company, won the Treasury of the Year award in November at Germany’s Structured Finance conference for a project called Free Lunch that it says has “revolutionised” its FX risk management using machine learning.
Thomas Böttger, the firm’s global head of finance, said in a social media post that it had turned “hedging from a cost factor into a revenue generator, while simultaneously providing superior risk protection.”
The expert points out that “the models are ultimately there as a support, not to trigger a decision. It remains valid to provide some colour on demand.”
Thomas backs up that point. “AI is not going to give you an accurate prediction of everything that’s going to happen,” she says. “It can provide more accurate insights more quickly. The real differentiator is the speed at which you can get those insights. No one ever claimed to have a 100% accurate model for predictions.”
What counts as an acceptable level of accuracy is a regular topic in the Icma working group. A common view is that 97% is enough for a proof of concept to be deemed viable.
But 97% may not feel good enough for a capital markets banker who stakes their reputation with a client every time they advise them. “[Using AI] to produce data is very dangerous,” says the syndicate head. “If you have to check it five times over you might as well do it yourself.”
Ultimately, a firm is fully responsible for everything it puts out. Even if AI has generated some content, the firm must have absolute confidence in it.
As the expert says, with AI-generated data or analysis, “The real risk is that it can seem perfectly plausible, but the risk is greater, the less you know about the subject.”