Bidirectional Normalizing Flow: From Data to Noise and Back
Yiyang Lu, Qiao Sun, Xianbang Wang, Zhicheng Jiang, Hanhong Zhao, Kaiming He
2025-12-19
Summary
This paper introduces BiFlow, a new type of normalizing flow, a class of generative models for creating realistic data such as images. Normalizing flows work by transforming data into a simple 'noise' distribution and then reversing that process to generate new samples.
What's the problem?
Existing normalizing flows rely on being exactly invertible, meaning the transformation can be undone precisely. Recent advances combining these flows with Transformer networks have run into a problem: the 'undoing' step, called causal decoding, produces outputs one element at a time, which makes generating new data very slow. It's like building something with LEGOs where every step has to be perfectly reversible, making the whole process cumbersome.
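The sequential bottleneck can be seen in a toy autoregressive flow. This is an illustrative sketch, not the paper's model: the forward map is a strictly causal linear coupling, and inverting it exactly requires one step per dimension, just as causal decoding needs one network evaluation per token.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # number of dimensions ("tokens")

# Strictly causal coupling: output i depends only on inputs before i.
W = np.tril(rng.normal(size=(D, D)), k=-1)
b = rng.normal(size=D)

def ar_forward(x):
    """Forward pass, data -> noise: fully parallel (one matrix multiply)."""
    return x + W @ x + b

def ar_inverse(z):
    """Exact analytic inverse: x[i] needs x[:i], so it must run
    sequentially. This D-step loop is the toy analogue of slow
    causal decoding."""
    x = np.zeros_like(z)
    for i in range(D):
        x[i] = z[i] - W[i] @ x - b[i]
    return x

x = rng.normal(size=D)
z = ar_forward(x)
x_rec = ar_inverse(z)
print(np.max(np.abs(x_rec - x)))  # exact inverse: error ~ 0
```

Because `W` is strictly lower triangular, the forward pass is a single parallel computation, but the exact inverse is forced into a dimension-by-dimension loop; with Transformer-sized models and thousands of tokens, that loop is the bottleneck the paper describes.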
What's the solution?
BiFlow removes the invertibility requirement. Instead of needing a perfect 'undo' function, it *learns* an approximate reverse transformation. This is like having someone help you rebuild the LEGO creation even if they don't remember every step exactly: it's faster and more flexible. Dropping the exact-inverse constraint allows more complex forward transformations and much faster data generation.
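A minimal numerical sketch of this idea, under assumed toy choices rather than the paper's architecture: take a forward map that is invertible in principle but has no closed-form inverse, and fit a reverse model by regression on (noise, data) pairs. The degree-7 polynomial here stands in for BiFlow's learned reverse network.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    # Monotone, hence invertible in principle, but with no simple
    # analytic inverse -- the situation BiFlow targets.
    return x + 0.5 * np.tanh(x)

x = rng.normal(size=2000)   # toy "data"
z = forward(x)              # forward process: data -> noise side

# Learn a reverse model g(z) ~= x by least-squares regression,
# a stand-in for training a reverse network on (z, x) pairs.
coeffs = np.polyfit(z, x, deg=7)
x_hat = np.polyval(coeffs, z)

mae = np.mean(np.abs(x_hat - x))
print(mae)  # small: the learned inverse is approximate but accurate
```

The learned reverse never needs the forward map to be analytically undone, and it reconstructs all samples in one parallel evaluation rather than a sequential loop, which is the source of the speedup the paper reports.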
Why does it matter?
BiFlow significantly speeds up the process of generating data with normalizing flows, making it up to 100 times faster than previous methods. It also produces higher quality images and performs competitively with other state-of-the-art generative models, potentially revitalizing interest in normalizing flows as a powerful tool for creating realistic data.
Abstract
Normalizing Flows (NFs) have been established as a principled framework for generative modeling. Standard NFs consist of a forward process and a reverse process: the forward process maps data to noise, while the reverse process generates samples by inverting it. Typical NF forward transformations are constrained by explicit invertibility, ensuring that the reverse process can serve as their exact analytic inverse. Recent developments in TARFlow and its variants have revitalized NF methods by combining Transformers and autoregressive flows, but have also exposed causal decoding as a major bottleneck. In this work, we introduce Bidirectional Normalizing Flow (BiFlow), a framework that removes the need for an exact analytic inverse. BiFlow learns a reverse model that approximates the underlying noise-to-data inverse mapping, enabling more flexible loss functions and architectures. Experiments on ImageNet demonstrate that BiFlow, compared to its causal decoding counterpart, improves generation quality while accelerating sampling by up to two orders of magnitude. BiFlow yields state-of-the-art results among NF-based methods and competitive performance among single-evaluation ("1-NFE") methods. Following recent encouraging progress on NFs, we hope our work will draw further attention to this classical paradigm.