Why Good Boards Still Make Predictable Cyber Mistakes

Posted on January 19, 2026

After years sitting with business leaders discussing cyber risk, and supporting boards through the rollercoaster of incident response, you start to notice patterns. Not technical ones, human ones: our minds quietly work against us, and managing cyber risk is as much about engineering around that as it is about the technology and controls.

Many of you will be familiar with the assumption that when cyber decisions go sideways it is down to missing data, immature tooling or poor reporting. Over time you start to recognise, or at least sense, that something else is at play: cyber risk is not mismanaged because decision makers lack data, but because loss aversion, distorted probabilities and framing effects override rational judgement exactly when the stakes are highest. The lived result is that CISOs struggle to justify spend before a loss, while the same organisations accept extreme costs after one.

The work of Daniel Kahneman and Amos Tversky on Prospect Theory helped me put language around what I had been observing. Prospect Theory does not describe irrational people; it describes predictable behaviour under risk. Once you see it, you cannot unsee it in cyber governance.

Prospect Theory is a descriptive model of choices under risk, not a full account of the cognitive process by which decisions are generated, and it explains real-world behaviour that classical economics cannot. It says we do not evaluate risk objectively: we anchor on a reference point, fear losses more than we value equivalent gains, and distort probabilities when things feel uncertain. In cyber, too often I see the reference point set at ‘nothing bad has happened yet’. That makes incremental risk feel acceptable and preventative spend feel abstract, and it becomes easy to justify delay with a straight face and a neat risk register. Boards quite reasonably ask for more evidence and justification. Everyone is being prudent.
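
For readers who like to see the machinery, here is a minimal sketch of the two functions at the heart of Prospect Theory, using the median parameter estimates Tversky and Kahneman reported in 1992. The control cost and breach figures are hypothetical, chosen only to show the shape of the pre-incident trap: a gamble with the same expected loss feels cheaper than the certain spend.

```python
# Minimal sketch of Prospect Theory's value and weighting functions,
# using the median parameter estimates from Tversky & Kahneman (1992).
ALPHA = 0.88   # diminishing sensitivity to outcome size
LAMBDA = 2.25  # losses loom ~2.25x larger than equivalent gains
GAMMA = 0.69   # probability-weighting curvature for losses

def value(x: float) -> float:
    """Subjective value of an outcome x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def weight(p: float) -> float:
    """Decision weight a stated probability p actually carries."""
    num = p ** GAMMA
    return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect(outcomes: list[tuple[float, float]]) -> float:
    """Value of a simple prospect given as [(probability, outcome), ...]."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Hypothetical pre-incident choice, reference point 'nothing bad has happened':
# A) fund the control now: a certain 500k spend;
# B) defer, accepting a 50% chance of a 1m incident. Same expected value.
fund_control = prospect([(1.0, -500_000)])    # ~ -233,000 subjective units
defer        = prospect([(0.5, -1_000_000)])  # ~ -195,000: deferral "wins"
print(f"fund: {fund_control:,.0f}  defer: {defer:,.0f}")
```

Both options carry the same expected loss, yet the certain spend scores noticeably worse; diminishing sensitivity in the loss domain is the ‘straight face and a neat risk register’ effect in miniature.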

Then an incident occurs and the entire posture flips. The same leaders who debated a six-figure control approve seven-figure response costs in hours. Authority centralises. Radical actions are suddenly not just acceptable but expected. The data has not changed; the framing has.
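
That flip is the reference point moving. Restating the value function so the snippet runs on its own (same 1992 parameters, hypothetical figures): before the incident, a response-capability spend is coded as a pure cost; after it, with losses already mounting, the identical spend is coded as damage avoided.

```python
ALPHA, LAMBDA = 0.88, 2.25  # Tversky & Kahneman (1992) medians

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

spend = 2_000_000  # hypothetical response/recovery outlay

# Pre-incident frame: the spend is a sure loss against a clean baseline.
print(value(-spend))  # ~ -789,000 subjective units

# Post-incident frame: the reference point has shifted to 'we are already
# losing money', so the same spend reads as loss avoided, i.e. a gain.
print(value(+spend))  # ~ +351,000 subjective units
```

Same cash, same data; only the frame has moved, and with losses weighted about 2.25 times as heavily, the cost frame stings more than twice as hard as the avoided-loss frame reassures.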

It has always been a bit of a spectacle watching boards fixate on dramatic, low-probability scenarios while quietly tolerating the dull, high-frequency failures that actually cause harm. I will not be alone in witnessing how the same risk, framed as ‘resilience investment’ or ‘avoided loss’, produces very different outcomes.
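
The weighting function accounts for the spectacle. Restated standalone, with illustrative probabilities: tiny stated probabilities are inflated and moderate ones deflated, which is precisely a board overweighting the cinematic scenario while discounting the routine failure.

```python
GAMMA = 0.69  # Tversky & Kahneman (1992) median curvature for losses

def weight(p: float) -> float:
    num = p ** GAMMA
    return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

# A cinematic 1-in-1,000 scenario carries roughly 8x its stated probability...
print(weight(0.001))  # ~ 0.0084
# ...while a coin-flip-frequent, unglamorous failure is quietly discounted.
print(weight(0.5))    # ~ 0.45
```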

None of this is irrational in a pejorative sense. It is human. The mistake is pretending that cyber risk is managed purely through logic when it is heavily mediated by psychology, especially under stress.

The most effective organisations I see are rarely the ones with the best tools. They are the ones that design decisions in advance, knowing full well how human judgement behaves once loss enters the room.

Hardware and software can be regarded with a degree of certainty, but the wetware (humans), as many of us have long known, is the eternal challenge in the pursuit of cyber maturity. That pursuit, it turns out, is as much psychological as technical. If we rely on calm, rational judgement arriving only after the breach, cyber risk will remain the only discipline where foresight is consistently funded retrospectively. I remain hopeful that one day we may even fund cyber resilience with the same enthusiasm we reserve for incident response, though experience suggests that day will arrive immediately after the breach.

Until we start designing cyber decisions for how boards and executives actually behave under pressure, we should stop being surprised by incidents that behave exactly as expected.