MIST: Mutual Information Via Supervised Training

German Gritsai, Megan Richards, Maxime Méloux, Kyunghyun Cho, Maxime Peyrard

2025-11-25

Summary

This paper introduces a new way to estimate mutual information, a concept used to measure how much knowing one piece of information tells you about another. Instead of relying on traditional formulas, they use a neural network to *learn* how to estimate this value directly from data.
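As a toy illustration (not part of the paper), mutual information between two discrete variables can be computed directly from their joint probability table: it sums, over all outcomes, how much the joint probability deviates from what independence would predict.

```python
import numpy as np

def mutual_information(joint):
    """MI in nats from a discrete joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero-probability cells
    return float((joint[mask] * np.log(joint[mask] / (px * py)[mask])).sum())

# Independent variables carry zero information about each other:
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # → 0.0
# Correlated variables yield positive MI:
print(mutual_information([[0.4, 0.1], [0.1, 0.4]]))      # → ≈0.193 nats
```

This formula-based route only works when the joint distribution is known and small; the paper's whole point is estimating MI from raw samples, where no such table is available.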

What's the problem?

Calculating mutual information accurately can be really difficult, especially when you don't have a lot of data or when dealing with complex relationships between variables. Existing methods often struggle with these situations and can be slow or unreliable. They either make strong assumptions about the data that aren't always true, or they require a lot of computational power.

What's the solution?

The researchers trained a neural network, called MIST, to predict mutual information (MI) directly from samples. They trained it on a meta-dataset of 625,000 artificially generated joint distributions whose true MI was already known, so the network learned to recognize patterns and estimate MI without relying on a specific formula. A two-dimensional attention mechanism lets it handle varying numbers of samples and dimensions while staying invariant to the order of the input samples. Instead of returning a single guess, the model is trained with a quantile regression loss, so it outputs a range of plausible MI values that doubles as a measure of uncertainty. Finally, because MI is unchanged by invertible transformations, normalizing flows can adapt the synthetic training data to match different kinds of real-world data.
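The training recipe can be sketched in miniature. The code below is a simplified illustration, not the paper's actual pipeline: it builds a tiny meta-dataset from bivariate Gaussians (whose MI has a closed form, so ground-truth labels are free) and defines the pinball loss that underlies quantile regression. All function names and sizes here are made up for the example.

```python
import numpy as np

def gaussian_mi(rho):
    """Closed-form MI (nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * np.log(1.0 - rho**2)

def make_meta_dataset(n_tasks, n_samples, rng):
    """Draw (sample, true-MI) training pairs from random bivariate Gaussians.

    A toy stand-in for the paper's 625,000-distribution meta-dataset:
    each task is a joint distribution with analytically known MI.
    """
    tasks = []
    for _ in range(n_tasks):
        rho = rng.uniform(-0.95, 0.95)
        cov = np.array([[1.0, rho], [rho, 1.0]])
        xy = rng.multivariate_normal(np.zeros(2), cov, size=n_samples)
        tasks.append((xy, gaussian_mi(rho)))
    return tasks

def pinball_loss(pred_quantiles, target, taus):
    """Quantile-regression (pinball) loss: penalizes under- and over-estimates
    asymmetrically so the network learns quantiles of the MI distribution
    rather than a single point estimate."""
    diff = target - np.asarray(pred_quantiles)
    return float(np.mean(np.maximum(taus * diff, (taus - 1.0) * diff)))

rng = np.random.default_rng(0)
tasks = make_meta_dataset(n_tasks=3, n_samples=256, rng=rng)
```

Gaussians are used here only because their MI is known exactly; the paper's meta-dataset covers a far richer family of synthetic distributions, and normalizing flows can reshape it toward other data modalities without changing the MI labels.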

Why it matters?

This approach outperforms classical estimators across many sample sizes and dimensions, including on distributions it never saw during training, and its inference is orders of magnitude faster than existing neural methods. Its quantile-based uncertainty intervals are also better calibrated than bootstrap confidence intervals. Because the estimator is trainable and fully differentiable, it can be embedded directly into larger machine learning pipelines. This opens up possibilities for applications that rely on understanding relationships between data, like image processing, natural language processing, and more.

Abstract

We propose a fully data-driven approach to designing mutual information (MI) estimators. Since any MI estimator is a function of the observed sample from two random variables, we parameterize this function with a neural network (MIST) and train it end-to-end to predict MI values. Training is performed on a large meta-dataset of 625,000 synthetic joint distributions with known ground-truth MI. To handle variable sample sizes and dimensions, we employ a two-dimensional attention scheme ensuring permutation invariance across input samples. To quantify uncertainty, we optimize a quantile regression loss, enabling the estimator to approximate the sampling distribution of MI rather than return a single point estimate. This research program departs from prior work by taking a fully empirical route, trading universal theoretical guarantees for flexibility and efficiency. Empirically, the learned estimators largely outperform classical baselines across sample sizes and dimensions, including on joint distributions unseen during training. The resulting quantile-based intervals are well-calibrated and more reliable than bootstrap-based confidence intervals, while inference is orders of magnitude faster than existing neural baselines. Beyond immediate empirical gains, this framework yields trainable, fully differentiable estimators that can be embedded into larger learning pipelines. Moreover, exploiting MI's invariance to invertible transformations, meta-datasets can be adapted to arbitrary data modalities via normalizing flows, enabling flexible training for diverse target meta-distributions.