Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

Gongbo Zhang, Wen Wang, Ye Tian, Li Yuan

2026-04-30

Summary

This paper introduces TIDE, a method for building smaller, more efficient diffusion language models by transferring knowledge from larger ones, even when the two models are built very differently.

What's the problem?

Large diffusion language models are powerful but require billions of parameters, making them slow and expensive to run. Existing methods for compressing these models only work when the compressed version shares the original's structure; they don't address transferring knowledge between models that differ in overall architecture, attention mechanism, or even tokenizer (how they break text into smaller pieces).

What's the solution?

The researchers developed TIDE, which combines three techniques. First, it adjusts how strongly the smaller model (the student) learns from the larger one (the teacher) over the course of training, leaning on the teacher at the noise levels where its predictions are most reliable. Second, it improves the teacher's predictions when large parts of the input are hidden, by splitting the masked positions so the teacher always sees some extra context. Third, it uses a new objective for comparing the two models' outputs even though they break text into tokens differently, which keeps learning stable and filters out noise. With these techniques, they distilled 8- and 16-billion-parameter teachers into a 0.6-billion-parameter student while maintaining good performance.
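The summary doesn't spell out the exact schedule, but the first technique can be sketched as a timestep- and progress-dependent weight on an ordinary distillation loss. Everything below (the function names, the power-law shapes, the alpha/beta knobs) is an illustrative assumption, not the authors' formula:

```python
import torch
import torch.nn.functional as F

def tidal_weight(t: torch.Tensor, progress: float,
                 alpha: float = 2.0, beta: float = 1.0) -> torch.Tensor:
    """Hypothetical TIDAL-style weight (illustrative, not the paper's formula).

    Down-weights the teacher at heavily masked diffusion timesteps,
    where its predictions are least reliable, and ramps distillation
    up as training progresses.

    t:        diffusion timesteps in [0, 1], shape (batch,); t=1 is fully masked
    progress: fraction of training completed, in [0, 1]
    """
    reliability = (1.0 - t) ** alpha   # trust the teacher more at low noise
    ramp = progress ** beta            # lean on the teacher more over time
    return reliability * ramp

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 t: torch.Tensor, progress: float) -> torch.Tensor:
    """Token-level KL to the teacher, scaled per example by the TIDAL weight."""
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="none",
    ).sum(-1).mean(-1)                 # (batch, seq, vocab) -> (batch,)
    return (tidal_weight(t, progress) * kl).mean()
```

The design point this sketch captures is that the weight falls toward zero at heavily masked timesteps, where the teacher itself is guessing, so the student isn't trained to imitate unreliable targets.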

Why it matters?

This work is important because it enables smaller, faster, and more accessible language models without sacrificing too much accuracy. This is especially significant for tasks like code generation, where the new method showed substantial gains, potentially putting capable AI tools within reach of more people and less powerful hardware.

Abstract

Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines outperforms the baseline by an average of 1.53 points across eight benchmarks, yielding notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.
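The abstract describes CompDemo only at a high level (complementary mask splitting to enrich the teacher's context), so the following is a rough sketch of that idea rather than the paper's implementation; the helper name and the random-partition strategy are assumptions:

```python
import torch

def complementary_splits(masked_pos: torch.Tensor, n_splits: int = 2):
    """Rough sketch of complementary mask splitting (names and strategy assumed).

    Partitions the masked positions into disjoint subsets. When the
    teacher scores one subset, the ground-truth tokens of the other
    subsets are revealed to it, so it never predicts under the full
    (heavy) mask.

    masked_pos: 1-D LongTensor of masked token indices
    Returns a list of (targets, revealed) index pairs, one per split.
    """
    perm = masked_pos[torch.randperm(masked_pos.numel())]
    chunks = torch.chunk(perm, n_splits)
    splits = []
    for i, targets in enumerate(chunks):
        others = [c for j, c in enumerate(chunks) if j != i]
        revealed = torch.cat(others) if others else targets.new_empty(0)
        splits.append((targets, revealed))
    return splits
```

In this sketch, with two complementary halves the teacher predicts each masked token while seeing the tokens of the other half, so a 90%-masked input is effectively scored at 45% masking per pass.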