It feels a bit like we have crossed a cyber event horizon with the release of Anthropic’s latest Mythos AI, an LLM optimised for vulnerability detection. Much of the noise centres on whether it enables better hacking at machine speed, which I believe is a bit of a distraction from the practical impact. The more material reality is that it generates more truth than organisations can absorb. Put simply, visibility is now increasing faster than remediation capacity.
These are not theoretical issues. The vulnerabilities being surfaced are real, and many organisations have been living on borrowed time, with risk parked on registers, softened through assurance language and accepted by leadership and stakeholders (largely in ignorance, or even wilful blindness) as tolerable. The uncomfortable truth is that digital environments have always been inherently flawed.
As I wrote back in 2017: ‘For the facts are stark. In every 1,000 lines of code there are between 15–50 bugs that could constitute vulnerabilities’ (paraphrasing from the book ‘Code Complete: A Practical Handbook of Software Construction, Second Edition’). Scale that across a world producing tens of billions of lines of code and the problem was already vast before AI compressed discovery timelines to near zero. Read the full piece in ‘Digital Dementia born of Artificial Intelligence’.
That is not the sum of it: no organisation operates in isolation. Modern supply chains are deep, opaque and tightly coupled. This makes the risk systemic, meaning there is simultaneous exposure across institutions, driven by shared dependencies. Those of us in the industry have recognised that supply chain risk remains the largest ungoverned attack surface and have been warning of such a black swan event.
As for the catalyst of all this hand wringing, the real challenge is not deploying Mythos, it is operationalising its output. Most organisations lack the governance, prioritisation and decision frameworks to convert continuous vulnerability discovery into controlled, business-aligned risk reduction; not to mention being asked to now handle an F1-class engine when most are still operating and skilled to 1.2-litre commuter grade in-house! Extend that across interconnected ecosystems and the scale becomes clear.
This is not a wave, it is an iceberg, and we have only been seeing what sits above the waterline, till now.
Without sounding like a doom monger, there is a practical response as I see it: to shift from vulnerability management to authority-led risk control founded on verifiable trust transparency. That means establishing something akin to a central decisioning layer that ingests outputs from tools like Mythos and converts them into prioritised, business-aligned risk actions, not raw findings, that can be communicated into the supply chain in real time. In practice, this requires, to start with, three things:
- Attack-path based prioritisation, focusing only on vulnerabilities that are realistically exploitable and materially impact critical assets.
- A set of Authoritative Risk Positions (ARPs) that define what must be fixed, what can be tolerated and what is escalated to executive decision.
- A continuous assurance model that validates remediation and provides defensible evidence to stakeholders.
The key to this is the trustworthiness of the verifiable current state in real time, which will need to extend beyond the enterprise into the supply chain, with minimum assurance standards and attestation requirements for key dependencies. The objective is not to fix everything (that is neither realistic nor perhaps necessary) but to create a system where risk is continuously understood, explicitly owned, decisively controlled and backed off onto a real-time response framework that can instil resilience in the face of the inevitability of attacks.
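To make the three requirements above concrete, here is an added example (not from the original post or from any product it mentions) of a decisioning step in Python: findings are kept only if they sit on a path from an entry point to a critical asset, a hypothetical ARP maps each to fix, tolerate or escalate, and a rescan check validates remediation. The asset graph, finding ids and ARP rules are all constructed assumptions.

```python
from collections import deque

# Constructed sample environment: directed "can reach" edges between assets.
EDGES = {
    "internet": ["web-gw"],
    "web-gw": ["app-srv"],
    "app-srv": ["payments-db"],
    "build-agent": ["artifact-store"],
}
ENTRY_POINTS = {"internet"}
CRITICAL_ASSETS = {"payments-db"}

# Constructed findings as a scanner might report them (ids are placeholders).
FINDINGS = [
    {"id": "F-1", "asset": "web-gw", "exploitable": True},
    {"id": "F-2", "asset": "app-srv", "exploitable": False},
    {"id": "F-3", "asset": "build-agent", "exploitable": True},
    {"id": "F-4", "asset": "payments-db", "exploitable": True},
]

def reachable(starts, edges):
    """Breadth-first search: every node reachable from any start node."""
    seen, queue = set(starts), deque(starts)
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def on_attack_path(asset):
    """An entry point reaches the asset AND the asset reaches a critical asset."""
    return asset in reachable(ENTRY_POINTS, EDGES) and bool(
        reachable({asset}, EDGES) & CRITICAL_ASSETS)

def decide(finding):
    """Hypothetical ARP: map one finding to fix / escalate / tolerate."""
    if not on_attack_path(finding["asset"]):
        return "tolerate"   # no route from an entry point to a critical asset
    if not finding["exploitable"]:
        return "escalate"   # on the path, exploitability unproven: owner decides
    return "fix"

def triage(findings):
    return {f["id"]: decide(f) for f in findings}

def verify_remediation(decisions, rescan_ids):
    """Continuous assurance: 'fix' items still present in the latest rescan."""
    return sorted(i for i, d in decisions.items() if d == "fix" and i in rescan_ids)
```

The output of `verify_remediation` is the evidence trail: an empty list means every mandated fix has been confirmed gone by the rescan.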
Of course, if an organisation insists on sticking its head in the sand they can always add another line to their risk register and assure themselves everything is under control, right up until the moment it very publicly goes to s**t …
PostScript
As an afterthought, there is also a degree of naivety in the implicit assumption from providers like Anthropic and OpenAI that organisations will readily operationalise these capabilities at scale (threat actors aside!). The reality, I suspect, is going to be more constrained. Most enterprises, even banks, do not have the budget to run such frontier generation AI engines within their own environments. Equally, the idea that heavily regulated institutions will permit SaaS-based models to interrogate their internal systems, codebases and configurations runs counter to decades of established control principles around data sovereignty, confidentiality and regulatory accountability. For many, this is not simply a technical hurdle; it is a structural and regulatory impasse. Until these constraints are addressed, the practical deployment of such tools will remain limited, regardless of their theoretical capability.
Posted on April 17, 2026