“The future arrives not with a whisper, but with an echo of mistakes we chose not to imagine.”
A storm is gathering in the world’s supply chains. Its name is Agentic AI (Artificial Intelligence). These autonomous systems, tasked with sourcing, negotiating, routing and optimising without human pause, are being heralded as the next great leap in efficiency. Yet beneath the promise lies a shadow that too few leaders are willing to confront.
The business case is seductively obvious: cut costs, shrink lead times, bypass human error. Let machines speak to machines, let contracts be struck at algorithmic speed, let the invisible hand of AI weave the tangled web of global logistics. However, history has taught us that every leap in efficiency carries with it the seeds of fragility, seeds already sown by the poor housekeeping of most digital environments, built as they are on the sands of untold layers of cyber security and technology debt.
When AI agents begin to bargain on your behalf, who will own the liability when they breach sanctions, lock in cartel-like agreements or sign away terms you never intended? When they prune your supply chain to razor-thin efficiency, who will answer when a single disruption topples your operations like dominoes? When they ‘learn’ from poisoned data, will you even know who has bent your decisions to their will? Take a lesson from the ongoing JLR (Jaguar Land Rover) extortion saga: it is a vision of the future many are sleepwalking into. JLR are learning the hard way that the future of their supply chain is not automation, as they thought, but accountability on steroids.
This is not a warning about science fiction. It is a warning about the systemic risks already forming. Agents that cannot explain their reasoning will operate at speeds beyond human oversight. One manipulated decision could cascade through procurement, logistics and compliance, amplified by a thousand machine-to-machine handshakes before you have even convened a crisis call.
Agentic AI is the sharpest of double-edged swords. Its promise of speed and autonomy is mirrored by a peril few dare to voice: the velocity it will grant to adversaries once they seize control. Imagine threat actors not just probing your defences but weaponising your own AI agents, commandeering their tireless logic, their limitless reach and their instant access to decisions you once thought secure.
Lest we forget, Agentic AI is code, and all code is flawed. I wrote about this back in 2016 in ‘Bots & Robotic Software’. Every line is a potential fissure, every library a hidden backdoor, every dependency and API (Application Programming Interface) a waiting exploit. Quoting from the book ‘Code Complete: A Practical Handbook of Software Construction, Second Edition’:
- Industry Average: ‘about 15 – 50 errors per 1,000 lines of delivered code.’
- Microsoft Applications: ‘about 10 – 20 defects per 1,000 lines of code during in-house testing, and 0.5 defect per KLOC (KLOC = 1,000 lines of code) in released product (Moore 1992).’
- Cleanroom development: ‘A technique pioneered by Harlan Mills that has been able to achieve rates as low as 3 defects per 1,000 lines of code during in-house testing and 0.1 defect per 1,000 lines of code in released product (Cobb and Mills 1990). A few projects, for example, the space-shuttle software, achieved a level of 0 defects in 500,000 lines of code using a system of formal development methods, peer reviews and statistical testing.’
Not every defect is an exploitable vulnerability, but with modern software systems comprising millions of lines of code, the maths does not lie about the scale of potential exploits. To imbue such fallible code with agency, permission to act, decide and transact on your behalf, is nothing less than a modern game of Russian Roulette. The chamber will spin, and it will not be empty every time; when it does fire, the shot will echo across your entire supply chain, no flesh wound but an arterial rupture. And when you play Russian Roulette with algorithms, efficiency just pulls the trigger faster.
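The arithmetic is worth making concrete. A minimal sketch below projects the defect densities quoted above onto a hypothetical codebase; the rates come from the figures cited, while the five-million-line system size is an illustrative assumption, not a claim about any particular product.

```python
# Illustrative arithmetic only: projecting the quoted defect densities
# onto an assumed codebase size. Rates are from the figures cited above;
# CODEBASE_KLOC is a hypothetical example.

def expected_defects(kloc: int, defects_per_kloc: float) -> int:
    """Expected latent defects at a given density (defects per 1,000 lines)."""
    return round(kloc * defects_per_kloc)

CODEBASE_KLOC = 5_000  # assumed: a 5-million-line enterprise system

rates = {
    "industry average, lower bound": 15.0,
    "industry average, upper bound": 50.0,
    "released product (Moore 1992)": 0.5,
    "cleanroom released product (Cobb and Mills 1990)": 0.1,
}

for label, rate in rates.items():
    print(f"{label}: ~{expected_defects(CODEBASE_KLOC, rate):,} latent defects")
```

Even at the best released-product rate quoted, 0.1 defects per KLOC, a five-million-line system would still carry around 500 latent defects; at the industry-average lower bound it would carry 75,000.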
What you gain in efficiency, you risk in catastrophe. The very systems built to accelerate your enterprise may, in the wrong hands, accelerate its undoing.
Then there is the geopolitical undertow. If your supply chain depends on Agentic AIs developed under the jurisdiction of foreign powers, you may find not only your procurement decisions but also your sovereignty compromised. The CLOUD Act, the EU AI Act, data localisation laws, extraterritorial controls: these are not abstract risks, but live wires running through the digital veins of your enterprise.
The prophecy is this, organisations that adopt Agentic AI without discipline, governance and humility will be the first to fall when, not if, disruption comes. Not because the technology is inherently bad, but because it will magnify every weakness, accelerate every blind spot and codify every oversight into automated permanence.
The path forward is not rejection, but vigilance, so PLEASE build the guardrails now:
- Insist on human-in-the-loop controls for critical decisions.
- Demand explainability, audit trails and the right to challenge an AI’s choice.
- Stress-test your supply chain as if the AI itself were an adversary.
- Govern these systems as you would govern systemic financial risk, because that is exactly what they represent.
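The first two guardrails can be sketched in code. What follows is a minimal illustration, not a production control: a gate that every agent-proposed action must pass, blocking sanctioned counterparties outright, escalating high-value or unexplained deals to a human, and writing every decision to an audit trail. All names (`AgentAction`, the denylist, the threshold) are assumptions for the example.

```python
# A minimal sketch of a human-in-the-loop gate for agent-proposed actions.
# The denylist and threshold are illustrative; real systems would feed these
# from compliance and risk data.

from dataclasses import dataclass

SANCTIONED_PARTIES = {"acme-shell-co"}   # assumed denylist
AUTO_APPROVE_LIMIT = 50_000              # assumed: deals above this need a human

@dataclass
class AgentAction:
    counterparty: str
    value: float
    rationale: str                       # explainability: the agent must state why

def review(action: AgentAction, audit_log: list) -> str:
    """Return 'approved', 'blocked' or 'escalated'; always leave an audit trail."""
    if action.counterparty in SANCTIONED_PARTIES:
        decision = "blocked"             # hard stop: sanctions are never auto-negotiable
    elif action.value > AUTO_APPROVE_LIMIT or not action.rationale:
        decision = "escalated"           # critical or unexplained: human-in-the-loop
    else:
        decision = "approved"
    audit_log.append((action.counterparty, action.value, decision))
    return decision
```

The design point is that the gate sits outside the agent: the agent proposes, the control decides, and the audit log records the outcome either way, preserving the right to challenge any choice after the fact.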
Agentic AI is shaping the next era of commerce. The question is whether it will shape your organisation into a paragon of resilience or carve its name among the early casualties.
It is not all doom and gloom. The way forward is not blind adoption but disciplined design. Agentic AI must be harnessed within a Zero Trust architecture, where every action, transaction and request is verified continuously and never assumed safe; trust must be earned and qualified. This means identity-first controls, least-privilege access, data contextualisation, immutable audit trails and real-time anomaly detection woven into the very fabric of supply chain systems. Just as importantly, secure-by-design practices, from threat modelling in development through to SBOM (Software Bill of Materials) transparency and continuous red-teaming, must ensure that flaws are caught before adversaries exploit them. The lesson is simple: do not bolt security on after you have handed code agency. Build it in from the start, make it inseparable from the AI’s function and treat every decision as if it could be the one that saves or sinks your enterprise.
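To make the ‘immutable audit trail’ idea tangible, here is a minimal sketch of one common technique: hash-chaining, where each record’s hash covers both its content and its predecessor, so any tampering with history breaks the chain. A real deployment would add digital signatures, trusted time sources and external anchoring; this illustrates only the chaining.

```python
# Minimal hash-chained audit log sketch. Editing any past record
# invalidates every subsequent hash, making tampering detectable.

import hashlib
import json

def append_record(chain: list, record: dict) -> None:
    """Append a record whose hash covers its content and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited record makes verification fail."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

The point of the structure is asymmetry: appending is cheap for the system, but silently rewriting what an agent decided last quarter is computationally evident to any verifier.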
The choice is yours: heed the warning. Those who rush headlong into algorithmic autonomy without foresight will discover too late that they outsourced not just their supply chain but their destiny. There are no excuses for neglecting to do this properly. Yes, building securely carries a cost, but the cost of doing it wrong will be immeasurably greater, measured in regulatory fines, reputational collapse and systemic failure. If you cannot afford to make the right investment, then act like a prudent investor and walk away from the gamble, because betting your enterprise on unsecured Agentic AI is not strategy, it is recklessness dressed up as innovation. As I wrote earlier, don’t let FOMO (Fear of Missing Out) write your security strategy, or it could end up writing your obituary.
So by all means, charge ahead. Roll out Agentic AI across your business and your supply chain as well, slap ‘innovation’ on the corporate slide deck and tell investors you have harnessed the future. Just do not be shocked when the quarterly risk report reads like a cyber-thriller and your audit committee asks why the procurement agent thought it was a good idea to auto-negotiate with a shell company in North Korea. Efficiency is a marvellous thing, right up until it becomes the shortest route from shareholder value to shareholder lawsuit.
September 26th, 2025, 18:49