“Fool around and find out” is one (G-rated) way to describe how the UK’s financial services sector is embracing AI while its regulators wait and see. That combination could be putting consumers at risk.
AI is currently deeply embedded in the UK’s financial services sector. It decides who gets a loan, how much insurance costs, and how quickly a claim is paid out. But MPs are warning that regulation has failed to keep pace. A new report from the Treasury Select Committee concludes that the Bank of England, the Financial Conduct Authority (FCA) and HM Treasury are exposing consumers and the wider financial system to “potentially serious harm” by taking a largely hands-off approach to AI.
The findings come as AI adoption accelerates across the UK. According to evidence submitted to the Committee, over 75% of UK financial services firms now use AI, with the highest uptake among insurers and international banks. With adoption at that scale, the Committee argues that caution has already given way to complacency.
While the report acknowledges the economic and consumer benefits AI could bring, such as faster services, lower costs, and improved security, it argues that current safeguards are inadequate for the scale and speed of its adoption.
How AI is already shaping financial decisions
Today, insurers and international banks are leading the way when it comes to AI adoption in financial services, applying AI not only to automate admin but to run core functions such as credit assessments, fraud detection, and insurance claims processing.
This level of responsibility would normally come with tight oversight. Instead, MPs found that regulators are relying on existing rules that were never designed with AI in mind.
The FCA and the Bank of England say that frameworks such as the Consumer Duty and the Senior Managers and Certification Regime (SM&CR) are flexible enough to cover AI risks, but the Committee is not convinced. It concludes that firms are being left to interpret the rules for themselves, creating both uncertainty and uneven standards across the sector.
Consumers left in the dark
A central concern of the report is the impact of AI-driven decision-making on consumers. MPs heard extensive evidence that many AI systems used in finance lack transparency, making it difficult for customers to understand why they have been denied credit or offered worse terms.
There are also concerns that automated decision-making could worsen financial exclusion. Automated systems trained on historical data may disadvantage people who already struggle to access financial services, particularly those with irregular incomes or limited credit histories. In urgent situations, such as applications for credit to cover medical treatment, the consequences of unfair decisions could be severe.
Another growing risk comes from unregulated AI-powered “financial advice” tools, including large language models accessed through search engines and chatbots. These tools can now answer questions about money, but they are not regulated as financial advisers and may give consumers misleading or incomplete guidance. MPs warned that such information could steer consumers towards harmful financial decisions.
Fraud is also expected to increase as criminals use AI to amp up scams and impersonation attacks, putting further pressure on consumers and firms alike.
Regulators accused of playing catch-up
Despite these risks, the UK has no AI-specific financial regulation. The FCA and the Bank of England maintain that existing rules, including the Consumer Duty and the SM&CR, provide sufficient protection.
The FCA points to initiatives such as its AI Live Testing service and Supercharged Sandbox as evidence that it is taking action. These allow firms to experiment with AI in controlled environments before deploying it more widely. But while MPs welcomed these efforts, they noted that participation is limited and voluntary.
Instead, the Committee found that regulators rely heavily on monitoring issues as they arise, through practices such as complaint tracking, surveys, and industry engagement. Most importantly, it says they have failed to provide clear, practical guidance on how firms should apply existing rules when using AI.
Many firms told MPs that this reactive approach leaves them uncertain about how existing rules apply to AI, effectively pushing the burden of interpretation onto businesses.
Questions around accountability also remain unresolved. While regulators insist that senior managers are “on the hook” for harm caused by AI, industry groups argue that the opacity of complex models makes meaningful oversight difficult. That uncertainty, MPs say, risks both consumer harm and a negative effect on responsible innovation.
A threat to financial stability
Moving beyond individual consumer harm, the report also raises red flags about systemic risk. Evidence submitted to the inquiry suggested that AI could amplify cyberattacks, increase reliance on a small number of US-based cloud providers, and worsen market volatility by encouraging herd behaviour in trading.
Although the Bank of England and the FCA already conduct cyber and operational resilience stress tests, none are specifically designed for AI-driven failures. The Committee argues this leaves regulators unprepared for how a major AI-related shock might spread through the system.
The report recommends that AI-driven scenarios be incorporated into future system-wide stress tests so regulators are better prepared for a worst-case event.
Critical third parties
The Committee was particularly critical of the Government’s failure to activate the Critical Third Parties (CTP) Regime, which gives regulators new oversight powers over non-financial firms that provide essential services to the financial sector, such as cloud and AI providers.
Although the regime was established more than a year ago, no companies have yet been designated under it. That is despite high-profile incidents such as the major Amazon Web Services (AWS) outage in October 2025, which disrupted UK banks including Lloyds.
The Committee urges HM Treasury to act quickly, warning that heavy dependence on a small number of tech giants is a major vulnerability.
A call for action, not observation
Concluding the report, Committee Chair Dame Meg Hillier said she is “not confident that our financial system is prepared if there was a major AI-related incident.” The message from the Treasury Select Committee is that while AI may offer real opportunities, the current regulatory stance is insufficient.
If AI is already shaping the financial system, regulators can no longer afford to stand on the sidelines. For a sector increasingly run by algorithms, MPs say preparedness cannot wait: without clearer rules, stronger oversight, and proper stress testing, the risks could soon outweigh the rewards.

