Thursday, October 31

Lenders’ rapidly expanding use of generative artificial intelligence is creating new risks for the financial system and could be incorporated into annual stress tests that examine sector resilience, a Bank of England deputy governor said.

“The power and use of AI is growing fast, and we mustn’t be complacent,” Sarah Breeden said on Thursday, adding that while the UK central bank was concerned, it was not ready to change its approach to regulating generative AI.

Some 75 per cent of financial companies are using the fast-evolving technology — up from 53 per cent two years ago — and more than half of the use cases have some degree of automated decision-making, according to a recent BoE survey.

Generative AI systems spew out text, code and video in seconds, and Breeden said the central bank was concerned that, when used for trading, AI could lead to “sophisticated forms of manipulation or more crowded trades in normal times that exacerbate market volatility in stress”.

The BoE could use its annual stress tests of UK banks, which assess how well prepared lenders are for different crisis scenarios, “to understand how AI models used for trading, whether by banks or non-banks, could interact with each other”, said Breeden, who oversees financial stability at the central bank.

The BoE is setting up an “AI consortium” with private sector experts to study the risks.

Breeden warned: “Where such crowded trades are funded through leverage, a shock which causes losses for such trading strategies could be amplified into more serious market stress through feedback loops of forced selling and adverse price moves.”

Her comments at a conference in Hong Kong follow the IMF’s warning in its financial stability report last week that AI could lead to faster swings in financial markets and greater volatility under stress.

Breeden, who took up her role in November last year, said the rules making senior bankers more accountable for the areas they oversee could be adjusted to ensure they are held responsible for decisions made autonomously by AI systems.

“We need to be focused in particular on ensuring that managers of financial firms are able to understand and manage what their AI models are doing as they evolve autonomously beneath their feet,” she said.

While most uses of AI in financial services were “fairly low risk from a financial stability standpoint”, Breeden said, “more significant use cases from a financial stability perspective are emerging”, such as assessing credit risk and algorithmic trading.

The central bank’s survey found that 41 per cent of companies were using AI to optimise internal processes, more than a quarter for customer support and at least a third to combat fraud.

AI was being used for credit risk assessment by 16 per cent of companies, with a further 19 per cent saying they planned to do so in the next three years, the poll found.

Eleven per cent of the groups were using the technology for algorithmic trading, with a further 9 per cent planning to adopt it for this work in the next three years.

Breeden said half of the uses of AI by financial companies were split roughly evenly between “semi-autonomous decision-making”, with some human involvement, and completely automated processes with no human involvement.

“That clearly poses challenges for financial firms’ management and governance, and for supervisors,” she said.

https://www.ft.com/content/d4d212a8-c63a-4b00-9f4c-e06ed59f9279
