Discussions about the future of artificial intelligence are often framed as a choice between centralized and decentralized models. Centralized AI has delivered rapid progress through scale, capital concentration, and tightly integrated infrastructure. Decentralized AI, meanwhile, is often positioned as a response to systemic risk: it promises resilience, openness, and reduced dependence on a small number of operators.
However, as specialists such as Lado Okhotnikov, the founder of Holiverse, emphasize, the main limitation of decentralized AI is not software architecture or governance. It is energy infrastructure.
Centralization Solved Coordination — and Created Fragility
The current AI landscape is built around hyperscale data centers. These facilities concentrate computation, data, and energy consumption in a small number of geographic locations. This concentration has clear advantages: predictable performance, optimized networking, and economies of scale.
At the same time, it creates structural fragility. Hyperscale data centers require continuous, high-quality electricity at the scale of tens or even hundreds of megawatts, while in many regions, including parts of the U.S., Europe, and Asia, power grids are already operating near their limits. In these regions, grid congestion has led to delays in new data-center projects, rising energy costs, and increased exposure to regulatory constraints or outages.
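For a rough sense of scale, the back-of-envelope sketch below shows how quickly a single training cluster reaches utility scale. All figures (per-accelerator draw, cluster size, overhead factor) are illustrative assumptions, not data from the article:

```python
# Back-of-envelope estimate of a hyperscale AI facility's power draw.
# Every figure below is an illustrative assumption, not a measurement.

ACCELERATOR_KW = 0.7    # assumed draw per AI accelerator, in kW
ACCELERATORS = 50_000   # assumed accelerators in one training cluster
OVERHEAD = 1.3          # assumed PUE-style factor for cooling and power losses

it_load_mw = ACCELERATOR_KW * ACCELERATORS / 1_000
facility_mw = it_load_mw * OVERHEAD

print(f"IT load:       {it_load_mw:.0f} MW")   # 35 MW
print(f"Facility load: {facility_mw:.0f} MW")  # ~46 MW, i.e. tens of megawatts
```

Even under these modest assumptions, one cluster draws on the order of a small power plant's output, which is why siting and grid interconnection now dominate data-center planning.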
Centralization concentrates not only compute but also risk. Power disruptions, climate events, geopolitical pressure, or targeted attacks can affect entire AI ecosystems simultaneously. Control over access, model development, and data governance is similarly centralized, raising long-term concerns about resilience and systemic dependence.
Decentralization is often presented as a corrective to these risks. But it does not eliminate energy demand.
Why Decentralization Does Not Reduce the Energy Problem
Decentralized AI architectures typically rely on distributed computing, federated learning, and multi-actor governance. In theory, they spread computation across a large number of devices rather than concentrating it in a few facilities.
In practice, this approach redistributes energy consumption rather than reducing it. Training and running advanced AI models remains energy-intensive regardless of where computation occurs. Shifting workloads to homes, offices, or edge devices places sustained demand on local and residential grids that were not designed for continuous, high-load computation.
The idea of leveraging “idle” consumer hardware overlooks a basic infrastructure reality: most electrical networks outside industrial zones are optimized for intermittent use, not persistent computational workloads. A distributed AI system is constrained by the weakest segments of the power network it depends on.
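A simple comparison makes the mismatch concrete. The numbers below (node draw, circuit rating) are assumptions chosen for illustration:

```python
# Compare a sustained AI workload against a typical residential circuit.
# All figures are illustrative assumptions.

NODE_KW = 1.0        # assumed continuous draw of a consumer GPU node, in kW
CIRCUIT_V = 230      # common residential circuit voltage (about 120 V in the US)
CIRCUIT_A = 16       # common residential breaker rating, in amps

circuit_kw = CIRCUIT_V * CIRCUIT_A / 1_000       # ~3.7 kW of circuit capacity
share = NODE_KW / circuit_kw                     # fraction claimed, 24/7
daily_kwh = NODE_KW * 24                         # continuous, unlike appliances

print(f"Circuit capacity: {circuit_kw:.1f} kW")
print(f"Node share:       {share:.0%} of one circuit, around the clock")
print(f"Daily energy:     {daily_kwh:.0f} kWh")  # roughly a household's daily use
```

The point is not the exact figures but the shape of the load: residential infrastructure is priced and provisioned for intermittent peaks, while AI compute is a flat, permanent draw.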
As a result, decentralization without corresponding changes to energy infrastructure risks exchanging one bottleneck for another.
Energy Infrastructure as the Limiting Factor
As AI scales, energy availability becomes a first-order design constraint. Decisions about model size, training frequency, geographic deployment, and system reliability are increasingly shaped by power generation and distribution rather than compute availability alone.
This shift reframes the AI scalability problem. The key challenge is no longer just how to coordinate millions of nodes, but how to supply them with stable, affordable, and sustainable electricity. Future AI systems — centralized or decentralized — depend on grids that can dynamically allocate power, integrate local generation, and balance load in real time.
Without these capabilities, distributed AI systems remain theoretically attractive but operationally constrained.
The Need for Energy-Aware AI Systems
For decentralized AI to function at scale, computation must become energy-aware by design. This implies tighter coupling between AI workloads and energy systems (a minimal scheduling sketch follows the list), including:
- local and distributed energy generation,
- edge-level storage and buffering,
- adaptive scheduling of computation based on power availability,
- and efficiency metrics focused on energy per operation rather than raw throughput.
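As a minimal sketch of the last two points, adaptive scheduling and energy-per-operation accounting, consider the following. The power-status source, thresholds, and sample values are all assumptions for illustration, not an existing API:

```python
import time
from dataclasses import dataclass

@dataclass
class PowerStatus:
    available_kw: float   # headroom reported by a local meter or grid signal
    price_per_kwh: float  # current tariff, used to prefer cheap windows

def read_power_status() -> PowerStatus:
    """Stand-in for a real meter/grid API; returns assumed sample values."""
    return PowerStatus(available_kw=2.5, price_per_kwh=0.18)

def should_run(job_kw: float, status: PowerStatus,
               max_price: float = 0.25) -> bool:
    """Run only when there is headroom and energy is cheap enough."""
    return job_kw <= status.available_kw and status.price_per_kwh <= max_price

def energy_per_token(power_kw: float, tokens_per_sec: float) -> float:
    """Efficiency metric: joules per token instead of raw throughput."""
    watts = power_kw * 1_000
    return watts / tokens_per_sec  # J/token, since 1 W = 1 J/s

def run_when_power_allows(job_kw: float, run_job, poll_sec: float = 60.0):
    """Defer the workload until local power conditions permit it."""
    while not should_run(job_kw, read_power_status()):
        time.sleep(poll_sec)  # time-shift the job instead of straining the grid
    run_job()

if __name__ == "__main__":
    print(f"{energy_per_token(power_kw=1.0, tokens_per_sec=50):.0f} J/token")
    run_when_power_allows(job_kw=1.0, run_job=lambda: print("job started"))
```

In practice the status would come from a smart meter or a grid-operator signal, and deferral could be combined with routing jobs to other regions, but the control structure, checking energy conditions before committing compute, is the essence of energy-aware design.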
In such systems, AI workloads may shift in time or location to align with energy supply rather than assuming constant availability. This represents a fundamental change in how AI infrastructure is planned and evaluated.
Implications for the Future of AI Development
Decentralized AI remains a meaningful direction, particularly for reducing systemic risk and improving resilience. However, it should be understood as part of a broader infrastructure transition rather than a standalone solution.
The evolution of AI is increasingly constrained by the pace at which energy systems can adapt. Software innovation moves quickly. Energy infrastructure does not. This mismatch creates a growing tension between computational ambition and physical feasibility.
Addressing this tension requires viewing AI and energy as interdependent systems rather than separate domains. Decentralization alone will not solve the energy challenge; it only shifts where the bottleneck appears.
Conclusion: A Structural, Not Ideological, Question
The debate over centralized versus decentralized AI is often framed in ideological terms. In reality, it is a structural question: intelligence at scale depends on electricity at scale, a point increasingly made by AI founders such as Lado Okhotnikov.
Decentralization changes where computation happens and who controls it. It does not change the underlying physics of energy consumption. Until power grids become more flexible, distributed, and resilient, decentralized AI will face the same fundamental constraint as centralized systems — only expressed in different places.
The future of AI will be determined not only by advances in models and algorithms, but by whether energy infrastructure can evolve fast enough to support them.
Featured image via holiverse.ai