Monsters in the deep? Bank of England warns against AI trading strategies

A member of the Bank of England’s Financial Policy Committee has cautioned against AI trading strategies that could exacerbate market instability, highlighting the danger that AI-driven trading algorithms might inadvertently amplify external shocks and warning that heads of trading will be held responsible if anything goes wrong.

Speaking at the University of Exeter yesterday, Jonathan Hall warned about the emergence of ‘deep trading agents’ – AI systems that operate semi-autonomously and may not be fully understood by human traders. These agents could collude with each other in ways that evade detection or worsen market volatility. He stressed the importance of rigorous testing and regulatory compliance before deploying AI models in financial trading.

Hall believes that it is theoretically possible for “trading nodes” to shift from being human traders to neural networks – in essence, for AI to take over – and that this could have serious implications for market stability. “My concern is that deep value-trading algorithms could make the market more brittle and highly correlated. And that deep flow-analysis trading algorithms could learn behaviour which actively amplifies shocks,” he said, in a speech entitled ‘Monsters in the Deep’. “The incentives of deep trading agents could become misaligned with that of regulators and the public good.”

He also warned that trading desk heads would be held accountable for any non-compliant or harmful behaviour exhibited by AI algorithms. While acknowledging that his concerns were currently speculative, he drew parallels with past trading strategies that contributed to market crises, such as the collapse of the Long-Term Capital Management hedge fund in the late 1990s.

In light of these risks, Hall urged caution in the adoption of neural networks for trading, citing both performance and regulatory concerns. “A manager that implements a trading algorithm must have an understanding that is deeper than just a simplified first-order interpretation of its behaviour,” he stressed. “If a manager implements a highly complex trading engine, then they are making an explicit choice to do so as opposed to a simpler model. If the difference between the two is what causes a problem, then that difference is the manager’s responsibility.

“Trading algorithms must have both internal stop limits and external human oversight, including a kill switch. Just as with a human trading desk, the buck stops with the manager… If trading algorithms engage in non-compliant, harmful behaviour then the trading manager will be held responsible.”
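
Hall’s speech stops at principles, but the control pattern he describes is straightforward to sketch. The Python fragment below is purely illustrative (the class, limits and strategy hook are hypothetical, not drawn from the speech or any Bank of England guidance): a wrapper that enforces internal stop limits on position size and daily loss, and exposes a kill switch to a human supervisor.

```python
import threading

class RiskControlledTrader:
    """Illustrative wrapper: internal stop limits plus an external kill switch.

    All names and limits here are hypothetical examples, not a real system.
    """

    def __init__(self, strategy, max_position, max_daily_loss):
        self.strategy = strategy              # callable: market_data -> desired order size
        self.max_position = max_position      # internal stop limit on absolute position
        self.max_daily_loss = max_daily_loss  # internal stop limit on daily loss
        self.position = 0.0
        self.daily_pnl = 0.0
        self.kill_switch = threading.Event()  # external control, set by a human supervisor

    def halt(self):
        """External human oversight: stop all trading immediately."""
        self.kill_switch.set()

    def on_pnl(self, pnl):
        """Mark-to-market update fed in from the risk system."""
        self.daily_pnl += pnl

    def step(self, market_data):
        """Run one decision cycle, subject to the risk controls."""
        # Kill switch: a human can stop the engine at any time.
        if self.kill_switch.is_set():
            return None

        # Internal stop limit: stand down once the daily loss budget is breached.
        if self.daily_pnl <= -self.max_daily_loss:
            self.halt()
            return None

        order = self.strategy(market_data)

        # Internal stop limit: clip any order that would breach the position cap.
        max_buy = self.max_position - self.position
        max_sell = -self.max_position - self.position  # most negative order allowed
        order = max(min(order, max_buy), max_sell)

        self.position += order
        return order
```

In this sketch the kill switch is a threading.Event that a supervisor can set from outside the trading loop; the wrapper refuses to trade once it is set or once the loss budget is breached, mirroring Hall’s call for both internal limits and external human control.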

However, Hall also clarified that his views were personal and not necessarily reflective of the Bank of England’s official stance.

In reaction to the speech, Oliver Blower, CEO of fintech VoxSmart, who previously worked on Merrill Lynch and Barclays’ trading desks, said: “When it comes to AI, sure there are risks to adoption and trading desks must proceed with caution. But investment banks can ill afford to ignore it. Heads of fixed income desks, for example, can achieve tangible benefits. By gathering all the disparate information from across the bank, before then deploying AI on top, some of the long-standing pricing and illiquidity issues across bond markets can finally be overcome.”
