Major UK banks are in discussions with regulators as well as finance and national security organisations as the latest Anthropic artificial intelligence (AI) model unearths decades-old vulnerabilities.
At the same time, Anthropic has announced Project Glasswing, which gives a select group of organisations access to the model, known as Claude Mythos Preview AI, so they can develop defences against its misuse.
The AI model’s ability to identify security flaws that have remained undetected for years, despite constant scrutiny from organisations such as banks, is a warning of what AI in the wrong hands could do.
It is not just the banking sector that faces threats if this type of technology is acquired by criminals; any organisation is at risk. According to Anthropic, its Claude Mythos Preview AI “has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser”.
It added that, given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout – for economies, public safety and national security – could be severe.
In a blog post announcing Project Glasswing, which it described as “an urgent attempt to put these capabilities to work for defensive purposes”, Anthropic revealed it would be working with a select group of businesses that will be given access to the AI model.
It said Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks will “use Mythos Preview as part of their defensive security work”.
“Anthropic will share what we learn so the whole industry can benefit,” the AI firm said.
In his blog, Chris Skinner, fintech industry expert and CEO at The Finanser, said this moment feels like an early warning. “Even if Anthropic keeps Mythos tightly restricted, similar capabilities will emerge elsewhere – and probably sooner than many expect,” he said.
“The real challenge isn’t whether this technology exists,” added Skinner. “It’s whether institutions can adapt quickly enough to operate in a world where AI can both defend and attack the foundations of finance.
“We are talking about an AI system that identified zero-day vulnerabilities in place for decades when everyone, including specialists, had no idea they existed.”
One IT security professional in the UK banking sector, who wished to remain anonymous, said: “It has always been possible for vulnerabilities to be found and secured, but the speed at which the AI can detect them means if it falls in the wrong hands, people can find the flaws very quickly and exploit them before software owners can correct the problem.”
In the UK, the Bank of England, the Financial Conduct Authority and the government are in talks with the National Cyber Security Centre over potential vulnerabilities in key IT systems.
According to The Financial Times, regulators are also planning meetings with finance firms to warn them of the risks that the AI model brings.
It said this followed a summons by US Treasury secretary Scott Bessent to the US’s largest banks to discuss the AI model’s ability to detect cyber security vulnerabilities that could be exploited.