I have developed a habit of scanning the opposite horizon when attention converges too tightly on one idea or trend. It has served me well: mass focus and groupthink create distraction, which in turn creates blind spots that can be exploited by actors who thrive in the absence of attention.
So when I find myself caught up in the Artificial Intelligence (AI) stream of thought, I force myself to look around, engage some peripheral thinking, and get off trend. This catalysed an interesting insight.
There is a certain irony to technological revolutions: they tend to eat their own prophets. Having been intimately involved in the cloud revolution and witnessed how it rendered the on-premises server rooms of the 2000s obsolete, I sense a quieter, almost philosophical revolt stirring in AI. Its name? Algorithmic minimalism.
Reflecting on past technology breakthroughs suggests that the next great leap in AI may not come from more power, as Nvidia would have us believe, but from the rediscovery of restraint: the application of human intelligence to craft gracefully efficient algorithms.
While Nvidia reigns supreme as the high priest of the GPU (Graphics Processing Unit) temple, selling compute by the kilowatt and the GPU kilogram, I suspect there is a growing chorus of engineers and researchers asking a heretical question: do we really need all this hardware and GPU horsepower to be intelligent? Well, the internet behemoths will be hoping so, given the hundreds of billions in private corporate debt they are racking up to fuel what they are selling to investors as the future: GPU-centric datacentres.
For years, the GPU economy has rested on a seductive creed: more data + more parameters + more compute = more intelligence. For a time, it held true. The world watched in awe as larger and larger models produced smarter, more capable AI systems. However, somewhere between GPT-3 and GPT-5, the law of diminishing returns started to bite: each new model offered a little less insight for a lot more electricity.
Enter algorithmic minimalism, the art of doing more with less. It rejects the assumption that intelligence must be brute-forced and instead pursues efficiency through elegance: sparse models, dynamic parameter use, retrieval-augmented learning, and adaptive training that mimics biological efficiency rather than digital excess.
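To make "sparse models and dynamic parameter use" concrete, here is a minimal illustrative sketch (my own toy example, not any particular lab's architecture) of top-k expert gating, the idea behind mixture-of-experts models: for each input, a gate scores a pool of cheap sub-networks and only the best k actually run, so most parameters sit idle on any given forward pass. The function names and the 4-expert setup are assumptions for the demo.

```python
import numpy as np

def top_k_gating(scores, k=2):
    """Keep only the k highest-scoring experts; zero out the rest.
    Returns weights normalised so the active experts sum to 1."""
    idx = np.argsort(scores)[-k:]                # indices of the k best experts
    weights = np.zeros_like(scores, dtype=float)
    weights[idx] = np.exp(scores[idx])           # softmax over the survivors only
    weights /= weights.sum()
    return weights

def sparse_forward(x, experts, gate_scores, k=2):
    """Blend the outputs of only the k selected experts."""
    w = top_k_gating(gate_scores, k)
    out = np.zeros_like(x, dtype=float)
    for i, expert in enumerate(experts):
        if w[i] > 0:                             # inactive experts never execute
            out += w[i] * expert(x)
    return out

# Toy demo: 4 "experts", each a cheap linear map; only 2 run per input.
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.standard_normal((3, 3)): x @ W for _ in range(4)]
x = rng.standard_normal(3)
gate_scores = rng.standard_normal(4)
y = sparse_forward(x, experts, gate_scores, k=2)
```

The compute saving is the point: with k=2 of 4 experts, half the parameters are never touched for this input, and the ratio improves as the expert pool grows.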
Consider DeepSeek, a potent early signal that the reductionist approach can work, and evidence that US tariffs and export controls may have inadvertently helped forge a new AI frontier. With a model architecture built on clever optimisation rather than compute gluttony, DeepSeek managed to achieve performance parity with far larger, more power-hungry peers. It is a bridgehead proof point that intelligent design can rival sheer horsepower, and that innovation in algorithmic architecture can flatten the playing field. In doing so, DeepSeek may have cracked open the door to entirely new AI economics. The US may rue the day …
The implications are profound. Just as cloud computing once replaced corporate server farms with virtualised efficiency, algorithmic minimalism could render today’s GPU arms race unsustainable. Imagine AI systems that run locally, train incrementally and draw power more like a smartphone than a substation. The advantage shifts from those who hoard silicon to those who craft elegance: edge compute and peer-to-peer (P2P) ingenuity harvesting the untold compute cycles in our very pockets.
That is not just disruption, it is inversion. Compute becomes the commodity; code becomes the crown jewel.
So while the silicon giants continue to stack pyramids of GPUs in their hyperscale deserts, I suspect that somewhere out there, a leaner algorithm, perhaps born in a lab like DeepSeek’s, is quietly preparing to prove that intelligence is not proportional to wattage. (Do get in touch if you read this, I would love to champion such excellence).
Posted on November 10, 2025