Control AI, or it will control you
Capital markets need a strict code of conduct for using GPT bots
The clue is in the name: artificial intelligence. It doesn’t look great for the human race. Our one USP can now be mass produced as easily as Coke cans.
Other professions were hit a long time ago — airline pilots and, increasingly, doctors. When it comes to processing and evaluating a lot of complex data in real time, machines just make better judgments. Now it’s the turn of the capital markets.
The swelling AI river broke its banks in November with the arrival of ChatGPT. Suddenly, computers can not only think, but talk — and they no longer seem to need instructions. What’s more, anyone can access this miracle, free of charge.
Capital markets folk are a plucky, entrepreneurial lot. They like innovation, and many are relaxed about this one.
“As human beings, we have to embrace new technology,” said one senior debt capital markets banker. “The industrial revolution threatened people’s jobs too. I think AI will make us more productive if used in the right manner.”
Everyone must hope the optimists are right — but they could be in for a shock.
Historians still debate the balance between the benefits of the steam-powered Industrial Revolution and the suffering it caused. But no one denies that it turned economies upside down, precipitating a series of social and political revolutions to cope with the economic one.
History is a guide, but not a template. Just because some things turned out well or badly before does not mean they will again.
Some things are playing out as they did in the 18th and 19th centuries. The new technology is being developed freeform by the private sector, with no government control. The rest of society is panting behind in a vain attempt to catch up. The leaders are motivated by profit and will not be constrained by ethics.
Other things are different this time. Early machine-made cloth may have been better and cheaper than handmade material, but you could tell them apart. AI is impersonating human beings in ways that fool even experts.
The implications for markets are grave. How can clients trust advice that might have been generated by a machine? Why should they pay for it? Could they generate it themselves?
If they are both playing with the same diagnostic equipment, how can you choose between BNP Paribas and Morgan Stanley?
Very soon, human participants could be reduced to caretakers, wiping the machines occasionally with an oily rag.
And as other critics have pointed out, AI is inherently conservative: it recycles what is. Gender or racial biases could be baked in, rather than shaken out. In markets, AI might be good at representing the consensus — but how is it to come up with a new idea?
Other activities are battling the same existential challenge. In education, all homework, essays and dissertations became suspect overnight. There is one obvious, if radical, answer: go back to traditional assessment, with handwritten exams in invigilated halls and oral examinations.
Capital markets are another highly organised and regulated sphere where probity is essential. It would not be difficult for the industry and regulators to come together and set ground rules — AI can be used for some purposes, but its use must be clearly disclosed.
The best way for the market to deal with AI would be to place extra value on interactions that were verifiably human — speaking person to person, even if at a distance.
It’s not too late. Humans in the capital markets can still control their destiny — if they work together. To say ‘it’s us against the machines’ is no exaggeration.