Will the Next Cyber War Be Fought Inside Machine Minds?

Posted on January 3, 2026

In yesterday's Financial Times interview, AI pioneer Yann LeCun made waves by declaring that current large language models (LLMs) are a ‘dead end’ for achieving true machine intelligence, or what he terms Advanced Machine Intelligence (AMI), as opposed to mere generative pattern completion. While LLMs such as ChatGPT and Meta's Llama have transformed search, writing and creative tooling, LeCun argues they remain constrained to language alone and lack the grounded understanding of the physical world essential for general reasoning and planning.

This critique highlights an important nuance often missed in public debate: LLMs excel at statistical prediction, but they do not inherently learn causal models of reality or maintain persistent memory and agency, capabilities that many researchers believe are prerequisites for Artificial General Intelligence (AGI) and ultimately Artificial Superintelligence (ASI). To bridge this gap, LeCun champions world-model architectures such as Video Joint Embedding Predictive Architecture (V-JEPA), which train systems to anticipate latent states from video and spatial inputs, theoretically enabling physical reasoning beyond text alone.
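
The core idea, predicting in latent space rather than pixel space, can be illustrated at toy scale. The sketch below is not Meta's V-JEPA implementation: the encoder is a fixed random linear map and the predictor is fit by least squares, purely to show where the training signal lives in a joint-embedding predictive setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed random linear map from observation space to latent
# space. In a real JEPA-style model this is a learned deep network; here it
# only serves to illustrate the shape of the objective.
D_OBS, D_LAT = 16, 4
W_enc = rng.normal(size=(D_LAT, D_OBS)) / np.sqrt(D_OBS)

def encode(x):
    return W_enc @ x

# A "video" is a sequence of observations; the predictor must anticipate the
# latent state of the next frame from the latent state of the current one.
frames = rng.normal(size=(10, D_OBS))
z = np.array([encode(f) for f in frames])

# Linear predictor fit by least squares: z[t+1] ≈ z[t] @ P.
# The key JEPA point: the loss is computed between latent embeddings,
# not between raw observations.
Z_in, Z_out = z[:-1], z[1:]
P, *_ = np.linalg.lstsq(Z_in, Z_out, rcond=None)

pred = Z_in @ P
latent_loss = float(np.mean((pred - Z_out) ** 2))
print(f"latent prediction loss: {latent_loss:.4f}")
```

In the full architecture both encoder and predictor are trained jointly on masked video, but the division of labour is the same: the model is rewarded for anticipating abstract states of the world, not for reconstructing every pixel.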

However, even proponents of world models concede that such systems cannot achieve ASI in isolation. They typically lack the symbolic abstraction, long-term strategic memory and meta-reasoning that well-trained LLMs provide. The future of AI likely lies in hybrid stacks that combine causal world models, powerful language abstraction layers and yet-to-be-defined memory-and-agency subsystems. These would allow AI to both understand the world and reason about it in human-relevant terms, integrating perception, language and long-horizon planning. Only such integrated architectures, neither LLMs nor world models alone, can plausibly shoulder the cognitive load of true AGI or ASI.

A hybrid AI stack that fuses LLMs, world models such as V-JEPA and an emergent memory-and-agency layer will introduce a fundamentally new class of cyber risk, because it collapses the traditional boundary between software, decision-making and autonomous action.

Unlike today’s stateless or narrowly scoped AI systems, such models will maintain persistent internal world representations, learn from interaction and adapt goals over time, creating attack surfaces not just in code or data, but in belief formation, memory poisoning and goal manipulation. Adversaries will target long-term memory stores, induce subtle causal mis-learning through environment or sensor manipulation, and exploit agency layers to trigger self-initiated actions that appear legitimate but are strategically misaligned. Classic controls (patching, access management, model validation) will prove insufficient when compromise can occur via experience, context or training drift rather than exploits.

The most concerning challenge is epistemic compromise: attacks that do not crash systems or exfiltrate data but quietly distort an AI’s understanding of reality, incentives or trust relationships. In effect, cyber defence will shift from protecting systems to protecting cognition itself, requiring new assurance primitives: attested memory integrity, causal model validation, behavioural anomaly detection and continuous epistemic auditing, analogous to stress-testing financial institutions for systemic fragility rather than isolated failures.
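
One of those primitives, attested memory integrity, can at least be sketched today. Below is a minimal hash-chained memory log in which every stored "belief" commits to the hash of its predecessor, so silently rewriting an earlier entry breaks verification of the whole chain. The entry format and function names are illustrative assumptions, not an existing agent framework's API.

```python
import hashlib
import json

def _entry_hash(content, prev_hash):
    # Canonical serialisation so the hash is stable across runs.
    payload = json.dumps({"content": content, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_memory(log, content):
    # Each new entry commits to the hash of the previous one.
    prev_hash = log[-1]["hash"] if log else "genesis"
    log.append({"content": content, "prev": prev_hash,
                "hash": _entry_hash(content, prev_hash)})
    return log

def verify_memory(log):
    # Re-derive every link; any tampered entry invalidates the chain.
    prev_hash = "genesis"
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["content"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_memory(log, "user prefers encrypted backups")
append_memory(log, "backup server requires MFA")

print(verify_memory(log))                    # intact chain verifies: True
log[0]["content"] = "backups are optional"   # simulated memory poisoning
print(verify_memory(log))                    # tampering detected: False
```

A real deployment would anchor the chain head in tamper-resistant hardware or an external attestation service; the point of the sketch is only that integrity of learned memory is checkable at all, which is the precondition for the continuous epistemic auditing described above.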

So we plough on regardless and build machines that can remember, reason and act, while promising ourselves we’ll add the security later. History assures us this is a winning strategy: trust first, controls second, lessons learned post-incident. Excuse the cynicism, but the reality is that if cyber is not embedded at the DNA level, governing how memory forms, how agency is exercised and how reality is learned, the failure will not be a breach, it will be a lapse in judgement. When it happens, we will solemnly agree it was unforeseeable, right up until the system explains, quite convincingly, why it did exactly what it was allowed to do.