
Fairy2i: Training Complex LLMs from Real LLMs with All Parameters in {±1, ±i}

Feiyu Wang, Xinyu Tan, Bokai Huang, Yihao Zhang, Guoan Wang, Peizhuang Cong, Tong Yang

2025-12-15


Summary

This paper introduces a new method called Fairy2i that makes large language models, like the ones powering chatbots, much more efficient to run without significantly sacrificing their performance.

What's the problem?

Large language models are incredibly powerful, but they require a lot of computer memory and processing power, making them expensive and difficult to use, especially on everyday devices. To reduce these demands, researchers store the models' numbers with fewer bits, a process called quantization. However, pushing quantization toward just one or two bits per weight usually degrades accuracy severely. Complex-valued models, such as iFairy, hold up better under such extreme quantization, but until now they had to be trained from scratch, so you couldn't simply reuse the existing, well-trained real-valued models that are already available.

What's the solution?

Fairy2i solves this problem by converting existing, real-valued language models into an equivalent complex-valued ("widely-linear") form, and it proves mathematically that this conversion loses no information. It then quantizes the complex weights with a phase-aware scheme whose entire codebook is the four fourth roots of unity (1, i, -1, -i), so each weight only needs to record which of four phases it uses. Finally, it quantizes in recursive rounds, with each round correcting the error left by the previous one, and because multiplying by 1, i, -1, or -i only flips signs or swaps real and imaginary parts, inference can proceed by simple accumulation without full multiplications. Essentially, it takes a model you already have and makes it far cheaper to run without having to train a new one from scratch.
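To make the quantization idea concrete, here is a minimal numpy sketch of snapping complex weights to the {1, i, -1, -i} codebook and then correcting the leftover error with a second quantization round. The function names, the single per-matrix scale, and the two-stage greedy loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Fourth roots of unity: the entire codebook used for the quantized weights.
CODEBOOK = np.array([1, 1j, -1, -1j])

def quantize_phase(W):
    """Snap each complex weight to its nearest fourth root of unity and
    fit one real scale for the whole matrix (an illustrative choice)."""
    k = np.round(np.angle(W) / (np.pi / 2)).astype(int) % 4
    Q = CODEBOOK[k]
    # Least-squares scale s minimizing ||W - s*Q||_F.
    s = np.real(np.vdot(Q, W)) / np.real(np.vdot(Q, Q))
    return s, Q

def residual_quantize(W, n_stages=2):
    """Greedy residual quantization: each stage quantizes whatever error
    the previous stages left behind."""
    stages, R = [], W.copy()
    for _ in range(n_stages):
        s, Q = quantize_phase(R)
        stages.append((s, Q))
        R = R - s * Q
    return stages, R  # R is the remaining (unquantized) error

def apply_stages(stages, x):
    """Apply the quantized layer. Multiplying by {1, i, -1, -i} only negates
    or swaps real/imaginary parts, so real inference can accumulate without
    multiplications; numpy matmul stands in for that here."""
    return sum(s * (Q @ x) for s, Q in stages)

# Tiny demo on random data.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
stages, err = residual_quantize(W, n_stages=2)
print("approximation error:", np.linalg.norm(W @ x - apply_stages(stages, x)))
```

Each extra stage stores another codeword per weight, so the effective bit width grows with the number of rounds while the approximation error shrinks; the paper reports results at an effective 2-bit precision.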

Why it matters?

This work is important because it lets us enjoy the efficiency benefits of complex-valued models without the huge cost of training them from the beginning. It means powerful language models can run on less powerful hardware, like phones or laptops, and at the same effective 2-bit precision it significantly outperforms existing real-valued binary and ternary quantization methods, bringing us closer to making AI more accessible and practical for everyone.

Abstract

Large language models (LLMs) have revolutionized artificial intelligence, yet their massive memory and computational demands necessitate aggressive quantization, increasingly pushing representations toward the theoretical limit of a single bit. While complex-valued LLMs, such as iFairy, offer a superior chance for low-bit representation compared to real-valued counterparts, they require training from scratch, preventing the utilization of the vast ecosystem of pre-trained real-valued foundation models. Here we present Fairy2i, a universal framework that transforms pre-trained real-valued layers into an equivalent widely-linear complex form, enabling extremely low-bit quantization while reusing existing checkpoints. By proving a lossless mathematical equivalence between real and widely-linear maps, we convert standard Transformers into the complex domain and employ a phase-aware quantization scheme with a highly efficient codebook of fourth roots of unity. Furthermore, we introduce a recursive residual quantization mechanism that iteratively minimizes quantization error, allowing inference to proceed via efficient multiplication-free accumulation. We demonstrate that Fairy2i restores the performance of LLaMA-2 7B at an effective 2-bit precision to levels nearly comparable with full-precision baselines, significantly outperforming state-of-the-art real-valued binary and ternary quantization methods. This work bridges the gap between the representational efficiency of complex-valued arithmetic and the practical utility of pre-trained models, paving a new way for efficient inference on commodity hardware.
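For readers who want to see what a "lossless equivalence between real and widely-linear maps" can look like, the following is a standard identity from widely-linear algebra; the particular block partition of W and the packing of half the real coordinates into the imaginary part are assumptions for illustration and may differ from the paper's exact construction.

```latex
% Split x = (x_1, x_2) and y = (y_1, y_2) in half and pack them as
% complex vectors z = x_1 + i x_2 and w = y_1 + i y_2. Then the real
% linear map y = Wx is exactly a widely-linear complex map:
\[
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \begin{pmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{pmatrix}
  \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
\quad\Longleftrightarrow\quad
w = A z + B \bar{z},
\]
\[
A = \tfrac{1}{2}\bigl[(W_{11} + W_{22}) + i\,(W_{21} - W_{12})\bigr],
\qquad
B = \tfrac{1}{2}\bigl[(W_{11} - W_{22}) + i\,(W_{21} + W_{12})\bigr].
\]
```

Because A and B are computed directly from the blocks of W, the complex form reproduces the original layer exactly before any quantization is applied, which is why existing checkpoints can be reused.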