
Much has been made of a Microsoft report published on July 22, which found “computer and mathematical” and “business and financial operations” were among the major categories of occupation with “the highest AI applicability scores”.
For the capital markets, it means some jobs have already been automated and more are likely to be. There's bad news for journalists as well, the 16th ranked occupation for AI applicability.
However, if you ask some AI industry experts, lost jobs might be just the beginning.
Some researchers with credible track records are forecasting science fiction-like possibilities, with one widely shared scenario describing the extinction of humanity by the end of 2030.
AI companies — although clearly motivated by a desire to raise money for their activities — are also predicting rapid timelines to artificial general intelligence.
AGI is a term with no precisely agreed definition, but it is arguably best defined as the point where AI can perform any remote work as well as a human. That's a lower bar than being able to replicate all human activities; according to the Microsoft research, at least, the toughest tasks to automate will include embalming, roofing and dredge operation.
It might be hard for some people to take talk of AGI seriously, particularly when social media seems full of screenshots of models making basic blunders like miscounting the number of Rs in strawberry.
It would be a mistake, however, to dismiss the possibility without careful consideration, because humans are predisposed to underestimate the risk of AI.
Many of the worst-case scenarios turn on the arrival of a ‘singularity’, where AI surpasses human capabilities at AI research and becomes self-improving beyond control. This would mean an exponential explosion in its capabilities. Human intuition doesn’t cope well with exponential growth, as the early underestimates of how the Covid pandemic would spread demonstrated.
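To see why intuition struggles, consider a minimal sketch of compounding capability. The monthly doubling period below is an arbitrary assumption chosen purely for illustration, not a forecast of AI progress.

```python
# Illustrative only: repeated doubling quickly outruns linear intuition.
# A monthly doubling period is an assumption for the sake of example.
capability = 1.0
for month in range(1, 13):
    capability *= 2  # doubles each month
    print(f"Month {month:2d}: {capability:,.0f}x the starting level")
# After three months the gain looks modest (8x); after a year it is 4,096x.
```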
It might have already begun. Even with its existing capabilities, there are clear cases of AI boosting productivity, particularly in speeding up coding and in cleaning and gathering data. Both are important to the progress of AI research.
They are also tasks on which investment banks, for example, spend vast amounts of time and capital, recruiting and employing trainees to do them.
Another problem for analysing how the risks of AI might play out is that it is extremely difficult to imagine how a world with AGI would look. The popular analogy is to playing chess against a grandmaster — you can see what your opponent has done, but you are levels behind in terms of strategy and understanding.
There is a temptation to rely on the truth that capital markets is a people business. Sure, for all manner of reasons it will be hard to pick apart tightly interwoven institutional relationships built on decades of experience and trust.
But it is fanciful to think that, once the technology improves, it could not offer all sorts of guidance to clients, or that those clients could not use AI to optimise their own funding strategies.
If AI can live up to this potential and become aligned with human ambitions, then it will bring about vast change on the scale of the industrial revolution or beyond.
The possibility of such an enormous boost to output is likely a key reason why AI companies are currently valued at multiples of revenue far greater than other industries. For example, The Financial Times reported last week that OpenAI is in talks with investors about a secondary sale valuing the firm at $500bn.
Cutting through the hype
The hype around AI is likely to wax and wane as new models come out, either disappointing or impressing. It is easy to look at the circus of AI companies and dismiss it all as pure marketing that will eventually settle down.
It is worth noting, though, that the reported salaries on offer to top researchers suggest the tech companies do seem to believe in what they are selling.
You could also contend that for all their technical excellence, many AI safety researchers are caught up in groupthink, trying to outbid each other for the most dystopian prediction.
But even if the messengers might have ulterior motives, specific critiques are the strongest way to unpick the possibility that AI will rapidly accelerate in the next decade. Getting on top of this debate should be a priority for capital markets practitioners.
It is a credible position to contend that AGI is most likely still decades away, or further, and there are numerous experts who would agree. Recent AI models have made rapid progress, but as anyone who has used them will know, they are prone to hallucinations and inaccuracies. The question is whether those flaws can be ironed out and how long it will take.
Reasoning about the risk and opportunities for those in the capital markets demands probabilistic thinking. If you accept that there is a small probability of AGI in the next decade, then the consequences are so enormous that prudent risk management demands taking action on that view.
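A back-of-the-envelope expected-value calculation shows why. The probabilities and impact figures below are assumptions picked purely for illustration, not estimates:

```python
# Toy expected-value calculation: a small chance of a transformative
# outcome can dominate the decision. All figures are illustrative
# assumptions, not forecasts.
p_agi = 0.10             # assumed probability of AGI within a decade
impact_if_agi = 1_000    # assumed disruption if AGI arrives (arbitrary units)
impact_if_not = 10       # assumed disruption otherwise
expected = p_agi * impact_if_agi + (1 - p_agi) * impact_if_not
print(expected)          # 109.0 -- the tail scenario drives the result
```

Even with a 90% chance that little changes, the tail risk accounts for most of the expected disruption, which is why it cannot be ignored in planning.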
There is more to positioning for this change than just getting some exposure to AI companies, data centres and chip manufacturers. To get it right, there needs to be more focus on how AI is developing, not just discussion about how the AIs of today are changing the workplace.
The risk of an exponential increase in capabilities adds urgency and means that these conversations must be proactive, not reactive.
If you don't think that AI could take your job, you aren't thinking hard enough. And even if you are, you're probably predisposed to underestimate how close the AI revolution is.
Fail to prepare for the AI revolution now, and by the time it arrives it will be too late.