Welcome to ØG Research

At ØG, we connect advanced AI with Web3, driving innovation in decentralized AI through cutting-edge research and collaboration across blockchain ecosystems.
Who We Are

Optimizing Model Training in Decentralized AI Systems with Scalable Frameworks and Algorithms for Efficient and Collaborative Global Learning.

Communication Optimization:

Decentralized learning requires frequent sharing of intermediate results, which becomes costly with large models and data. To reduce communication overhead, we will explore advanced techniques like lossless gradient compression and quantization.
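As a concrete illustration of the quantization direction, here is a minimal Python sketch of uniform 8-bit gradient quantization before transmission, with dequantization on the receiving node. The function names and payload format are illustrative assumptions, not part of any ØG implementation.

```python
# Minimal sketch: uniform 8-bit gradient quantization before communication.
# Illustrative only; function names are hypothetical, not an ØG API.
import numpy as np

def quantize_gradient(grad: np.ndarray, bits: int = 8):
    """Map float gradients to unsigned integers plus a (min, scale) header."""
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / (2**bits - 1) or 1.0
    q = np.round((grad - g_min) / scale).astype(np.uint8)
    return q, g_min, scale  # payload is ~4x smaller than float32

def dequantize_gradient(q: np.ndarray, g_min: float, scale: float) -> np.ndarray:
    """Reconstruct an approximate float gradient on the receiving node."""
    return q.astype(np.float32) * scale + g_min

grad = np.random.randn(1024).astype(np.float32)
q, g_min, scale = quantize_gradient(grad)
recovered = dequantize_gradient(q, g_min, scale)
print("max quantization error:", np.abs(grad - recovered).max())
```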

Local Computation Optimization:

Training at scale imposes heavy computation loads, especially on edge devices. We aim to ease this by integrating efficient training pipelines, parallelization methods, and model pruning to enable faster and lighter training.
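To make the pruning idea concrete, below is a minimal sketch of magnitude-based weight pruning in Python; the sparsity level and tensor shapes are arbitrary placeholders, not recommended settings.

```python
# Minimal sketch: magnitude-based weight pruning to lighten local training.
# Illustrative only; the sparsity level and shapes are placeholders.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping roughly (1 - sparsity) of them."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

w = np.random.randn(256, 128).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.7)
print("nonzero fraction:", np.count_nonzero(w_pruned) / w_pruned.size)
```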

Performance Optimization with Heterogeneous Data:

Variations in local data can cause model drift and reduce accuracy. We plan to develop algorithms that adapt learning rates per neuron and use smart round selection to improve training and aggregation under heterogeneous data conditions.
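One possible reading of per-neuron learning-rate adaptation is sketched below: each output neuron accumulates its own gradient statistics and scales its step size accordingly. The Adagrad-style rule and class name are chosen here purely for illustration.

```python
# Minimal sketch: per-neuron learning-rate adaptation under heterogeneous data.
# Each output neuron keeps its own accumulated gradient norm (Adagrad-style);
# the names and the exact update rule are illustrative assumptions.
import numpy as np

class PerNeuronSGD:
    def __init__(self, shape, base_lr=0.1, eps=1e-8):
        self.base_lr = base_lr
        self.eps = eps
        self.accum = np.zeros(shape[0])  # one accumulator per output neuron

    def step(self, weights: np.ndarray, grads: np.ndarray) -> np.ndarray:
        self.accum += np.linalg.norm(grads, axis=1) ** 2
        lr = self.base_lr / (np.sqrt(self.accum) + self.eps)  # per-neuron rate
        return weights - lr[:, None] * grads

opt = PerNeuronSGD(shape=(64, 32))
w = np.random.randn(64, 32)
g = np.random.randn(64, 32)
w = opt.step(w, g)
```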

Performance Optimization with Dynamic Environment:

In real-world decentralized systems, nodes join and leave unpredictably. We will design asynchronous protocols and schedulers to predict node behavior and prioritize high-quality updates, ensuring stable and accurate model training.
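The sketch below illustrates one way such a scheduler might weight asynchronous updates, discounting stale contributions and favoring high-quality ones. The weighting formula and the update schema are assumptions for illustration only.

```python
# Minimal sketch: staleness-aware aggregation of asynchronous node updates.
# The weighting scheme and field names are illustrative assumptions.
import numpy as np

def aggregate_async(global_model, updates, current_round, alpha=0.5):
    """Blend in each update, down-weighting stale or low-quality contributions."""
    model = global_model.copy()
    for upd in updates:
        staleness = current_round - upd["round_sent"]
        weight = alpha * upd["quality"] / (1.0 + staleness)
        model += weight * (upd["params"] - model)
    return model

global_model = np.zeros(10)
updates = [
    {"params": np.ones(10), "round_sent": 9, "quality": 0.9},      # fresh, high quality
    {"params": 2 * np.ones(10), "round_sent": 4, "quality": 0.6},  # stale
]
print(aggregate_async(global_model, updates, current_round=10))
```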

Model Alignment in Decentralized AI Systems

Contemporary large language models demonstrate exceptional text interpretation and generation capabilities. However, they also raise ethical risks, as they can inadvertently generate inappropriate, biased, harmful, or non-factual content. These risks are exacerbated in decentralized AI ecosystems, where each node’s training data is neither controllable nor filtered. It is therefore important to perform model alignment, ensuring that model outputs align with human values. We will make several efforts toward this task.

01. Enhanced Learning Algorithms with Human Preference

The most common approach to alignment is to integrate human preferences, as a proxy for human values, into model optimization, e.g., through Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). We aim to apply these strategies in decentralized settings and make them more efficient. Possible improvements include constructing higher-quality preference datasets, designing advanced reward functions, and sharing human feedback across participating nodes in a distributed manner.
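For reference, the per-pair DPO objective can be computed directly from the policy's and a frozen reference model's log-probabilities of the chosen and rejected responses, as in the minimal sketch below; the numeric log-probabilities are placeholders.

```python
# Minimal sketch: the Direct Preference Optimization (DPO) objective for one
# preference pair, computed from precomputed log-probabilities. In practice
# these values come from the policy and a frozen reference model scoring the
# chosen and rejected responses; the numbers below are placeholders.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * [(logp_c - ref_c) - (logp_r - ref_r)])"""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(dpo_loss(logp_chosen=-12.3, logp_rejected=-15.1,
               ref_logp_chosen=-13.0, ref_logp_rejected=-14.2))
```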

02. Self-Regulation and Correction

In social psychology, perspective-taking is an important emotional-intelligence skill that encourages individuals to use self-awareness to regulate their behavior. Inspired by this principle, we will propose new alignment strategies that guide the model to automatically inspect its own responses, identify any content misaligned with human values, and rectify it. A new end-to-end regulation-and-correction pipeline with advanced prompts will be established to achieve this goal.
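A minimal sketch of such a pipeline is shown below, assuming a generic `generate` callable for chat completion; the prompts and the two-pass inspect-then-correct structure are illustrative, not a finalized design.

```python
# Minimal sketch of an inspect-and-correct loop. `generate` stands in for any
# chat-completion call; prompts and loop structure are illustrative assumptions.
INSPECT_PROMPT = (
    "Review the following response. List any content that is harmful, biased, "
    "or non-factual, or reply 'OK' if none is found.\n\nResponse:\n{response}"
)
CORRECT_PROMPT = (
    "Rewrite the response so that it no longer contains the issues listed, "
    "while preserving useful content.\n\nResponse:\n{response}\n\nIssues:\n{issues}"
)

def self_correct(generate, prompt: str, max_rounds: int = 2) -> str:
    response = generate(prompt)
    for _ in range(max_rounds):
        issues = generate(INSPECT_PROMPT.format(response=response))
        if issues.strip().upper() == "OK":
            break  # the model judges its own output as aligned
        response = generate(CORRECT_PROMPT.format(response=response, issues=issues))
    return response
```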

03. Decentralized Debating for Alignment

Debating is another popular alignment method, where multiple models (or agents) debate with each other to produce the most accurate and valuable content. This approach is a natural fit for decentralized AI, where there are multiple models from different nodes ready for debate. We will implement this strategy in real-world, large-scale decentralized scenarios and make adaptations to enhance alignment efficiency and effectiveness.
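A minimal sketch of a round-based debate loop is shown below, assuming each node exposes its model as a callable and a separate judge produces the final answer; both interfaces are assumptions for illustration.

```python
# Minimal sketch: a round-based debate among models hosted on different nodes.
# `node_models` is any list of callables mapping a prompt to a reply, and
# `judge` picks or synthesizes the final answer; both are assumptions here.
def decentralized_debate(node_models, judge, question: str, rounds: int = 2) -> str:
    answers = [m(question) for m in node_models]  # initial positions
    for _ in range(rounds):
        transcript = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
        answers = [
            m(f"{question}\n\nOther agents said:\n{transcript}\n\nRevise your answer.")
            for m in node_models
        ]
    return judge(question, answers)
```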

Technical Breakthroughs from the ØG Research Lab

Detecting Communication Deadlocks in Deep Learning Jobs (June 23, 2025)

Reduction Fusion for Optimized Distributed Data-Parallel Computations via Inverse Recomputation (June 25, 2025)

A Low-Communication Large-Scale Training Framework for Decentralized Cluster (June 26, 2025)

Backdoor Attack against Scaffold Federated Learning (June 26, 2025)

Have an innovative idea you’d like us to consider?

We’re open to fresh perspectives—share your concept or proposal with us.

New Blockchain System Empowered by Multi-Agent Technology

LLM-based multi-agent systems are rising in popularity for managing complex tasks through coordinated AI agents. Their structure aligns well with blockchain, where each node can act as an agent. We will explore key applications of this integration to enhance functionality and efficiency.

Smart Contract Management:

Smart contracts are a critical blockchain component for automating transaction execution, and it is promising to analyze, manage, and optimize them in a distributed manner. We could implement a multi-agent solution within the blockchain, with each agent focusing on a different functionality; their collaboration could significantly augment the blockchain with comprehensive smart contract services.
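A minimal sketch of this division of labor is shown below, with hypothetical role-specific agents (security, gas, correctness) whose reports are collected for a given contract; the roles and interfaces are assumptions, not a finalized design.

```python
# Minimal sketch: dividing smart-contract services across specialized agents.
# Agent roles, interfaces, and the fan-out step are illustrative assumptions.
from typing import Callable, Dict

def review_contract(source: str, agents: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Fan the contract source out to role-specific agents and collect reports."""
    return {role: agent(source) for role, agent in agents.items()}

# Each agent could wrap an LLM prompted for its specialty.
agents = {
    "security": lambda src: f"security findings for {len(src)}-byte contract",
    "gas": lambda src: "gas-optimization suggestions",
    "correctness": lambda src: "spec-conformance report",
}
print(review_contract("contract Token { ... }", agents))
```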

Anomaly Detection:

During blockchain execution, malicious entities may attempt to interfere with transactions, consensus mechanisms, or cross-node communications. It is thus vital to introduce security schemes that monitor the system and detect anomalies. We will design and develop multi-agent systems to achieve this goal: by having different agents focus on different aspects of system events and coordinating their decisions, we can greatly enhance the trustworthiness of the blockchain environment.
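A minimal sketch of such coordination is shown below: hypothetical monitor agents each inspect one aspect of an event, and a simple quorum rule combines their verdicts. The event schema, agent roles, and thresholds are placeholders.

```python
# Minimal sketch: coordinating anomaly verdicts from specialized monitor agents.
# The event schema, agent roles, and the quorum rule are assumptions.
def detect_anomaly(event: dict, monitors: dict, quorum: float = 0.5) -> bool:
    """Flag the event if the fraction of monitors reporting it exceeds `quorum`."""
    votes = [monitor(event) for monitor in monitors.values()]  # each returns bool
    return sum(votes) / len(votes) > quorum

monitors = {
    "transactions": lambda e: e.get("value", 0) > 1_000_000,  # unusual transfer size
    "consensus":    lambda e: e.get("fork_depth", 0) > 2,     # deep reorg
    "network":      lambda e: e.get("peer_churn", 0.0) > 0.5, # sudden peer loss
}
print(detect_anomaly({"value": 5_000_000, "fork_depth": 3}, monitors))
```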