Because who needs actuarial models when panic will do?
In their paper Fear, Not Risk, Explains Asset Pricing (May 2025), Rob Arnott and Edward McQuarrie upend a core assumption of financial theory: that risk and reward are tightly linked. Their data shows otherwise; reward often correlates weakly, or not at all, with conventional measures like volatility. Instead, emotion, particularly fear, drives markets. Investors act not on neat risk metrics but on the psychological weight of FOMO (fear of missing out) and FOL (fear of loss). Contrast that with the guiding principles of conventional portfolio construction: the efficient frontier from Markowitz’s 1952 work, the convenience of asset classes and the neat shorthand of FE scores. On paper, it all looks beautifully tidy. In the real world, maybe not so much.
Markowitz’s efficient frontier is elegant: there is an optimal set of portfolios that should deliver the maximum expected return for any given level of risk. Now Arnott and McQuarrie’s ‘fear theory’ throws in a plot twist: real-world investor behaviour often bends, breaks or completely ignores the maths. Market conditions, asset types and good old human emotion can all invert the risk/reward relationship, making some risks overpriced and others undervalued.
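For readers who like to see the mechanics, here is a minimal sketch of the mean-variance idea behind the frontier, using two entirely hypothetical assets. The returns, volatilities and correlation below are illustrative assumptions, not data from the paper:

```python
# Minimal sketch of the mean-variance trade-off behind Markowitz's frontier.
# Two hypothetical assets; all figures are illustrative assumptions.

mu_a, mu_b = 0.05, 0.09   # assumed expected annual returns
sd_a, sd_b = 0.08, 0.20   # assumed annual volatilities
rho = 0.25                # assumed correlation between the two assets

# For each mix, compute expected return and portfolio volatility.
# The "frontier" is the set of mixes no other mix beats for the same risk.
for i in range(11):
    w = i / 10            # weight in asset A, remainder in asset B
    ret = w * mu_a + (1 - w) * mu_b
    var = ((w * sd_a) ** 2 + ((1 - w) * sd_b) ** 2
           + 2 * w * (1 - w) * rho * sd_a * sd_b)
    print(f"weight A={w:.1f}  return={ret:.3f}  volatility={var ** 0.5:.3f}")
```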
As for asset classes, the convenient ladder from low to high risk implies that moving up automatically means bigger rewards. ‘Fear theory’ again says not so fast: in a panic, supposedly ‘low risk’ assets can outperform, while the higher rungs may suddenly wobble.
Finally, the FE scores? They are handy, sure, but they are backward-looking and sentiment-sensitive. A high score after a market boom might overstate a fund’s prospects, while a low score during a sell-off might hide a golden opportunity.
The truth is, none of these frameworks is wrong; they are just incomplete. Theory, structure and metrics are essential, but they need a dose of real-world behavioural awareness.
This is not just a market problem. The same flawed logic infects cybersecurity, where risk is often modelled with false precision (probability × impact, threat matrices, patch schedules) while completely ignoring the emotional drivers of decision-making. In cyber, the best defence strategies account for human unpredictability alongside the technical models.
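To make the ‘false precision’ point concrete, here is a minimal sketch of the probability-times-impact style of calculation (sometimes expressed as annualised loss expectancy: frequency × single-loss cost). Every input is an illustrative assumption, which is rather the point:

```python
# Naive "probability x impact" cyber risk scoring, as critiqued above.
# All inputs are illustrative guesses, not real data.

single_loss_cost = 2_000_000  # assumed cost of one breach (GBP)

# Swap in different guessed breach likelihoods and watch the headline move.
for annual_frequency in (0.05, 0.10, 0.20):
    expected_annual_loss = annual_frequency * single_loss_cost
    print(f"assumed likelihood {annual_frequency:.0%} -> "
          f"'expected' annual loss £{expected_annual_loss:,.0f}")

# Doubling a likelihood guess doubles the output, which is why a
# precise-looking figure can encode little more than the modeller's mood.
```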
Though cyber and investment domains differ (digital infrastructure vs. financial assets), both are about making decisions under uncertainty. Both rely on data, are distorted by psychology and ultimately involve betting on the future without full visibility. In that sense, cyber risk is an economic and behavioural problem wrapped in technical clothing.
You would think business decisions on cybersecurity are based on structured frameworks, attack surfaces and maybe a sober talk with IT. Often, however, they are as emotionally driven as meme-stock speculation.
Emotionally distorted cyber decisions are like investors mispricing assets: by ignoring behavioural signals, organisations misjudge cyber risk, treating it as a purely mathematical construct. In cybersecurity, as in economics, Knightian uncertainty covers threats that cannot be measured, only anticipated. Unlike quantifiable risks, these unknowns demand resilience over prediction, pushing cyber strategy toward adaptability rather than statistical control. Emotional noise often outweighs the signal. For example:
- FOL over ‘risk variance’ – A data breach is not just a financial loss; it is a reputational catastrophe. Like investors fearing downside more than models predict, boards dread tail events, leading to over-investment in compliance theatre or paralysis on transformation projects.
- FOMO in tech adoption – Fear of being left behind drives reckless adoption of Artificial Intelligence (AI) tools, cloud-first platforms or quantum-ready cryptography, even before the risks are assessed. Like chasing the next big stock, it is driven by narrative, not risk modelling. This is of increasing concern in critical service sectors like healthcare, where AI adoption is running riot. Whilst AI is transforming healthcare, innovation cannot come at the cost of safety or transparency. If a system influences clinical care, it must meet medical-grade oversight. Products that behave like medical devices but sidestep regulation risk undermining patient safety, clinician trust and fair competition.
- Mispriced security spend – Budgets are often based on frameworks, not actual threats. But a single viral breach or peer failure triggers spending spikes that no risk matrix could have forecast. In many cases, fear outpaces logic.
So yes, cyber risk has real, measurable components (intrusion attempts, patch delays, Multi-Factor Authentication (MFA) adoption), but boardroom action rarely hinges on a CVSS score. It hinges on whether the CEO read about a breach that morning and how red their ears turned.
If we accept that fear, not just risk, shapes behaviour, then our cyber governance needs to account for that. We need models that reflect how people actually react, not how they are supposed to. To which end, here are some principles to build on:
- Track emotional triggers – Do not just monitor technical stats; track breach headlines, social media noise and peer incidents. These often shape executive urgency more than any dashboard ever could.
- Score narrative risk – Supplement probability models with ‘story risk’, e.g. ‘Imagine this ends up on the BBC with your name on it’. Worst-case narrative thinking can shift priorities more effectively than mean loss estimates (a toy sketch of this follows after the list).
- Run emotionally realistic exercises – Scenario planning should not just cover technical impact; it should simulate brand damage, investor fallout and internal blame. Think leadership-focused awareness development using the likes of Gold Team exercises with a PR twist.
- Guard against FOMO-driven decisions – When leadership pushes rapid adoption of the next shiny tech, governance should step in with structured review, not to block but to slow down emotional overreach, as highlighted by AI adoption in the healthcare and finance sectors.
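For illustration only, here is a toy sketch of what ‘story risk’ scoring might look like in practice. The scenarios, likelihoods, impacts and narrative weights are hypothetical assumptions, not a published methodology:

```python
# Hypothetical "story risk" scoring: a conventional probability-x-impact
# figure is nudged by a narrative weight capturing how headline-worthy the
# scenario would be. Every value below is an illustrative assumption.

scenarios = [
    # (name, likelihood, impact in GBP, narrative weight 1.0-2.0)
    ("Ransomware on patient records", 0.10, 3_000_000, 1.9),
    ("Expired TLS certificate outage", 0.30, 200_000, 1.1),
    ("Third-party SaaS data leak", 0.15, 1_000_000, 1.6),
]

for name, likelihood, impact, story_weight in scenarios:
    base = likelihood * impact        # conventional expected loss
    adjusted = base * story_weight    # nudged by how bad the headline reads
    print(f"{name}: base £{base:,.0f} -> narrative-adjusted £{adjusted:,.0f}")
```

The design choice is deliberate: the narrative weight does not replace the probability model, it sits alongside it, so the board sees both the statistical estimate and the ‘what if this makes the news’ framing.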
The takeaway: cybersecurity governance must be rooted in evidence, but it must also confront the reality of human fear. It is not just about patches and controls; it is about recognising that the humans in the loop (executives, CISOs, boards) respond to stories, social pressure and the psychological weight of ‘what if’.
While often dismissed as a relic of cybersecurity marketing, FUD (Fear, Uncertainty and Doubt) captures real psychological forces that shape executive behaviour and risk decisions. Far from being mere hype, there is now evidence that these emotional drivers reflect the messy reality of how organisations respond to perceived threats, and acknowledging them may be a useful inject when building cyber strategies that are both technically sound and behaviourally aware.
Smart firms blend technical rigour with emotional intelligence. The rest? They buy five tools after reading one headline and call it strategy.
At least until the next breach.