UK Parliamentary Panel Warns AI Oversight Gaps Could Expose Financial System to Harm

Decrypt

A UK parliamentary committee has warned that the rapid adoption of artificial intelligence across financial services is outpacing regulators' ability to manage risks to consumers and the financial system, raising concerns about accountability, oversight, and reliance on major technology providers.


In findings ordered to be printed by the House of Commons earlier this month, the Treasury Committee said UK regulators, including the Financial Conduct Authority, the Bank of England, and HM Treasury, are leaning too heavily on existing rules as AI use spreads across banks, insurers, and payment firms.


“By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm,” the committee wrote.


AI is already embedded in core financial functions, the committee said, while oversight has not kept pace with the scale or opacity of those systems.


The findings come as the UK government pushes to expand AI adoption across the economy, with Prime Minister Keir Starmer pledging roughly a year ago to “turbocharge” Britain’s future through the technology.


While noting that “AI and wider technological developments could bring considerable benefits to consumers,” the committee said regulators have failed to provide firms with clear expectations for how existing rules apply in practice.


The committee urged the Financial Conduct Authority to publish comprehensive guidance by the end of 2026 on how consumer protection rules apply to AI use and how responsibility should be assigned to senior executives under existing accountability rules when AI systems cause harm.


Formal minutes are expected to be released later this week.


“To its credit, the UK got out ahead on fintech—the FCA's sandbox in 2015 was the first of its kind, and 57 countries have copied it since. London remains a powerhouse in fintech despite Brexit,” Dermot McGrath, co-founder at Shanghai-based strategy and growth studio ZenGen Labs, told Decrypt.


Yet while that approach “worked because regulators could see what firms were doing and step in when needed,” artificial intelligence “breaks that model completely,” McGrath said.


The technology is already widely used across UK finance. Still, many firms lack a clear understanding of the very systems they rely on, McGrath explained. This leaves regulators and companies to infer how long-standing fairness rules apply to opaque, model-driven decisions.


McGrath argued the larger concern is that unclear rules could hold back firms trying to deploy AI responsibly, to the point where “regulatory ambiguity stifles the firms doing it carefully.”


AI accountability becomes more complex when models are built by tech firms, adapted by third parties, and used by banks, leaving managers responsible for decisions they may struggle to explain, McGrath added.


