

Beyond Centralized Limits: Unpacking the First Globally Distributed 100B+ Parameter AI Model
While the world was still catching its breath from the release of Intellect-1, a 10B parameter decentralized model that made headlines earlier this year, 0G Labs raised the bar by an order of magnitude.
This monumental achievement was based on DiLoCoX, a groundbreaking framework that has successfully trained a 107B parameter foundation model across a decentralized network with constrained bandwidth (e.g., 1 Gbps links between segregated clusters). The training process achieves 357x greater communication efficiency than conventional AllReduce-based approaches.
DiLoCoX is the brainchild of Michael Heinrich (0G Co-Founder and CEO), Ming Wu (0G CTO), and other leading experts in the fields of AI and distributed networks. Here’s a rundown of our novel approach to AI model training, and what it means for the future of decentralized intelligence.
DiLoCoX: A New Framework for Decentralized Model Training
Our new research paper outlines DiLoCoX (short for Distributed Low-Communication Exchange), developed in collaboration with China Mobile, the world’s largest mobile network operator by subscriber count. This novel framework is designed to solve one of the biggest barriers to decentralized AI: the high communication overhead required to train large models across fully distributed infrastructure.
Instead of relying on centralized superclusters with ultra-fast connectivity, DiLoCoX introduces a modular architecture optimized for real-world bandwidth constraints.
Key innovations include:
- Pipeline Parallelism, which splits models across nodes and overlaps computation with communication to maximize utilization.
- Dual Optimizer Policy, allowing nodes to perform local updates while still aligning with global model objectives.
- One-Step-Delay Overlap, enabling training to continue without waiting for all nodes to sync, reducing idle time.
- Adaptive Gradient Compression, which reduces the size of transmitted updates while preserving accuracy.
These techniques make it possible to train a model at a scale of over 100B parameters with minimal degradation in convergence — all within an accessible, bandwidth-limited environment.
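To make the Dual Optimizer Policy concrete, here is a minimal toy sketch of the general pattern it builds on: each node runs many cheap local optimizer steps, and only an averaged parameter delta (a "pseudo-gradient") crosses the network once per round, which a second, outer optimizer then applies to the global model. This is an illustrative simplification on a toy quadratic loss, not the paper's actual implementation; the loss, learning rates, and round counts below are assumptions chosen for the demo.

```python
import numpy as np

def local_steps(w, data, lr=0.1, steps=5):
    """Inner optimizer: plain SGD on a toy local loss 0.5 * ||w - data||^2."""
    w = w.copy()
    for _ in range(steps):
        grad = w - data          # gradient of the toy quadratic loss
        w -= lr * grad
    return w

def train(num_nodes=4, rounds=30, outer_lr=0.7, dim=3, seed=0):
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(num_nodes, dim))  # each node's local "dataset"
    global_w = np.zeros(dim)
    for _ in range(rounds):
        # Each node performs several local updates with zero communication.
        deltas = [local_steps(global_w, t) - global_w for t in targets]
        # Only the averaged delta (the "pseudo-gradient") is exchanged,
        # once per round instead of once per inner step.
        pseudo_grad = np.mean(deltas, axis=0)
        # Outer optimizer: here a simple scaled step along the pseudo-gradient.
        global_w = global_w + outer_lr * pseudo_grad
    return global_w, targets.mean(axis=0)

w, optimum = train()
print(np.allclose(w, optimum, atol=1e-2))  # converges to the global optimum
```

For this separable quadratic, the global optimum is the mean of the nodes' targets, and the sketch reaches it while communicating 5x less often than per-step synchronization would. The real framework layers pipeline parallelism, delayed overlap, and compression on top of this basic inner/outer structure.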
From Private Models to Public Goods
When Prime Intellect released Intellect-1 earlier this year, it was rightly celebrated as a major step forward for decentralized AI. But while it proved that decentralized model training is possible, 0G’s work with DiLoCoX shows just how far that concept can go: in scale, in performance, and in how these systems can be owned and accessed.
Here’s how the 0G model stands apart:
- Unrivaled performance at scale, with DiLoCoX achieving 10x Intellect-1’s model size and 357x greater communication efficiency, setting a new benchmark for decentralized AI training.
- Cutting-edge AI on everyday infrastructure, with DiLoCoX trained on standard 1 Gbps networks. By combining compression, pipeline parallelism, and asynchronous updates, DiLoCoX proves that frontier-scale model development no longer requires centralized superclusters.
- Verifiable, democratic model training with full transparency into data, weights, and convergence history. Even contributors with basic bandwidth and compute can join training clusters and shape the future of decentralized AI.
- Composable, responsible AI models, where agent developers can plug into open-source models rather than rely on black-box APIs, with 0G powering every layer of the AI stack.
In short, with DiLoCoX, 0G is proving that the future of AI doesn’t belong to the few who control the infrastructure, but to the global community that chooses to build it together.
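To give a feel for why compression matters on 1 Gbps links, here is a toy top-k sparsification sketch: only the largest-magnitude gradient entries are transmitted as (index, value) pairs, shrinking each update dramatically. This is a simplified stand-in for illustration only; the paper's Adaptive Gradient Compression is more sophisticated, and the sizes and sparsity level below are assumptions chosen for the demo.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries; ship (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    """Rebuild a full-size update with zeros where entries were dropped."""
    out = np.zeros(size)
    out[idx] = vals
    return out

rng = np.random.default_rng(1)
grad = rng.normal(size=10_000)                  # stand-in gradient vector
idx, vals = topk_compress(grad, k=100)          # keep 1% of entries
restored = topk_decompress(idx, vals, grad.size)

# Payload: 100 (index, value) pairs instead of 10,000 dense values,
# a ~50x reduction in what crosses the network for this round.
ratio = grad.size / (2 * len(vals))
print(ratio)
```

Production systems typically pair sparsification like this with error feedback, accumulating the dropped residual locally so nothing is permanently lost, which is how accuracy is preserved despite the smaller payloads.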
Help Build the Open AI Economy
By showing that it’s possible to train frontier-scale models outside the walls of centralized infrastructure, 0G has set a new precedent for what open, verifiable AI can look like in practice. DiLoCoX isn’t just a framework for scaling; it’s a foundation for participation, ownership, and composability. It gives builders, researchers, and communities the tools to create and coordinate AI in ways that were never possible before.
As we move toward mainnet, the implications will only grow. From community-funded training campaigns to fully tokenized AI agents operating onchain, 0G is laying the groundwork for a future where anyone can help shape, own, and benefit from the AI they interact with. The infrastructure is now ready, thanks to 0G. The only question is what you’ll choose to build with it.
Learn more about DiLoCoX here
Interested in building on 0G? Reach out here