Parmy Olson: Anthropic's Mythos is a wake-up call for everyone, not just banks

Parmy Olson, Bloomberg Opinion

Published in Business News

Mythos, a new artificial intelligence model that Anthropic PBC has teased as too dangerous to release, looked at first like a problem for banks. Days after the company announced the new technology, U.S. Treasury Secretary Scott Bessent summoned Wall Street leaders to make sure they were taking precautions to defend their systems, creating invaluable publicity for Anthropic and raising questions about who gets an exclusive peek at its threatening progeny.

The Treasury is now pushing for access to Mythos. One organization that already has it is the UK’s AI Security Institute, which has become the world’s top neutral arbiter of what counts as safe and secure AI.

It found that some of the hype around Mythos is warranted. It is indeed more capable of being used for complex cyberattacks than other AI tools such as OpenAI’s ChatGPT or Google’s Gemini. But it is most perilous for “weakly defended” or simplified systems. Large banks have some of the most secure IT in the world, and while Mythos and other powerful AI models pose a threat in the wrong hands, it’s the much broader array of small and medium-sized companies that look most vulnerable to hackers and bad actors using the tools.

Cyber specialists have long complained that companies treat security as an afterthought, and the result is online services and software that are riddled with bugs, handing hackers a possible way to infiltrate a computer system.

Tech companies have an approach for dealing with this, called “responsible disclosure.” Once a flaw is found in their software, they’ll announce it to the world with a suggested fix, giving their customers time to apply the patch and move on with their lives. Microsoft Corp.’s version of this is Patch Tuesday, which despite its name refers to a monthly disclosure of flaws the company has found in Office 365, Windows and other products.

IT staff at banks like Barclays Plc and Wells Fargo & Co. will take those suggested patches, test them to make sure they don’t break any of their existing systems, get sign-off from management, and then deploy them. That process takes weeks or months.

Up until the advent of generative AI, the process worked just fine because it would typically take an even longer time for bad actors to find a way to attack a system based on the flaw that had been disclosed. They’d have to study the bug and also experiment with different methods for exploiting it.

Artificial intelligence tools have changed all that. Even two years ago, hackers could take the details of a disclosure and paste them into ChatGPT, then tell the bot to scan a public database of source code such as GitHub for other patterns that could be exploited. Let’s say, for instance, that Microsoft announced that its researchers had found a flaw in how Office 365 handles a file. A chatbot could not only suggest how to exploit it but quickly find other software, like Microsoft Outlook or Teams, with similar deficiencies.


This has all got even easier for hackers in the last few months, as AI companies have imbued their models with “agentic” capabilities, effectively giving them the power to act independently. Anthropic’s Claude Cowork, released in January, can now carry out tasks like sending emails and making calendar appointments. For those who want to break into software, such tools won’t just find weaknesses; they’ll try different ways to hammer at them automatically until one method works.

Mythos can even “chain” software bugs into multi-step attacks, something only highly skilled human hackers had been able to do previously. It’s the equivalent of a burglar planning a series of steps for a break-in: finding that first open window, using it to unlock a door from the inside and then disabling the alarm. Each step alone isn’t enough, but together they get full access.

Until now, generative AI’s impact on cybersecurity has been amorphous. There was no single tool that could launch devastating new attacks, but large language models were still harnessed to supercharge old tricks of the trade. Hackers have used chatbots to polish up emails for phishing attacks to look more credible, or real-time avatar generators to create deepfake video calls that trick people into thinking a man in his living room is a young woman.

But agentic AI is set to fuel the act of hacking itself, which has long been an opportunistic pursuit for the unscrupulous. So-called black hats tend not to go after banks, because banks are so secure. Instead they scan the web to spot vulnerabilities, be it a hospital they can infiltrate to make ransomware demands or a mom-and-pop shop. The recent advances in AI are a problem for these softer targets because, the moment a flaw is disclosed by a software provider, they now have precious little time to update and patch their systems.

According to zerodayclock.com, the average time between a software flaw being made public and a working attack being built has collapsed from 771 days in 2018, to less than four hours today.

Anthropic’s disclosure of Mythos certainly benefits its own publicity efforts ahead of an initial public offering, adding to the mystique around the potency of its technology. But it’s also forcing a much-needed reckoning over how the window of time between published IT flaws and their exploitation has effectively vanished. That raises questions over whether “responsible disclosure” is such a smart idea in the first place, and whether the process of patching flaws over weeks and months is now fruitless.

Even Wall Street can’t answer these questions yet, but banks at least have the staffing and the money to work out the difficult structural changes needed to eventually apply patches in near-real time. The bigger problem will be for smaller firms, which need to move just as fast and will require technical and regulatory help that the market can’t yet provide.


©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.

 
