Understanding Hallucinations in Diffusion Models through Mode Interpolation

Sumukh K Aithal, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter

2024-06-14

Summary

This paper explores a specific problem in image generation models called diffusion models, focusing on a phenomenon known as 'hallucinations.' These are samples the model generates that could never occur in the training data and often look unrealistic.

What's the problem?

Diffusion models sometimes create outputs that are completely unlike anything they were trained on; these are called hallucinations. The paper traces this to 'mode interpolation': the models smoothly interpolate between nearby modes (clusters) in their training data, producing samples that lie entirely outside the support of the original training distribution. Understanding why this occurs is important for improving the reliability of these models.
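
To make the mechanism concrete, here is a minimal, hypothetical NumPy sketch (not from the paper or its codebase): it evaluates the exact denoiser E[x0 | x_t] for a toy 1D data distribution with two narrow Gaussian modes at one low noise level, then smooths that curve as a stand-in for a smooth learned network. The mode locations, noise level, and smoothing width are illustrative assumptions.

```python
import numpy as np

# Toy training distribution: two narrow 1D Gaussian modes at -1 and +1.
modes = np.array([-1.0, 1.0])
mode_std = 0.05

# One (low) noise level of the forward process x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps.
abar = 0.99

def exact_denoiser(x_t):
    """Closed-form E[x0 | x_t] for a Gaussian-mixture data distribution."""
    means_t = np.sqrt(abar) * modes               # component means in x_t space
    var_t = abar * mode_std**2 + (1.0 - abar)     # component variances in x_t space
    # Posterior responsibility of each mode given x_t (numerically stable softmax).
    logits = -0.5 * (x_t[:, None] - means_t[None, :]) ** 2 / var_t
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Per-component posterior mean of x0, then mixture-weighted average.
    x0_given_mode = modes[None, :] + (np.sqrt(abar) * mode_std**2 / var_t) * (
        x_t[:, None] - means_t[None, :]
    )
    return (w * x0_given_mode).sum(axis=1)

xs = np.linspace(-2.0, 2.0, 4001)
dx = xs[1] - xs[0]
x0_exact = exact_denoiser(xs)

# A smoothed copy of the exact curve, standing in for a smooth learned approximation.
kernel_sigma = 0.2  # smoothing width in x units (illustrative)
half = int(4 * kernel_sigma / dx)
kernel = np.exp(-0.5 * (np.arange(-half, half + 1) * dx / kernel_sigma) ** 2)
kernel /= kernel.sum()
padded = np.pad(x0_exact, half, mode="edge")
x0_smooth = np.convolve(padded, kernel, mode="same")[half:-half]

def band_width(y):
    """Width of the input region whose denoised output lies *between* the modes."""
    return dx * np.count_nonzero(np.abs(y) < 0.5)

print(f"exact denoiser:    in-between band ~ {band_width(x0_exact):.3f}")
print(f"smoothed denoiser: in-between band ~ {band_width(x0_smooth):.3f}")
```

The exact denoiser switches between the two modes over a very narrow band of inputs, while the smoothed version maps a noticeably wider band of noisy inputs to values between the modes; those in-between outputs are exactly the kind of interpolated, out-of-support samples the paper calls hallucinations.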

What's the solution?

The authors investigate this issue by conducting experiments on simple datasets, such as one-dimensional and two-dimensional Gaussian distributions. They show that the learned denoiser is a smooth approximation of a sharply changing (effectively discontinuous) function, and that this smoothing creates a region between modes where hallucinations arise. They also find that the model effectively knows when it is generating such outputs: the variance of its predictions over the final sampling steps is unusually high for hallucinated samples. Using this variance as a simple metric, they remove over 95% of hallucinations at generation time while retaining about 96% of valid, in-support samples.
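
The summary does not spell out the exact filtering rule, so the snippet below is a hedged sketch of one plausible implementation: record the model's predicted clean sample (x̂0) at every reverse step, measure its variance over the final steps, and drop samples whose variance exceeds a threshold. The window length and threshold here are assumptions, not values from the paper, and the placeholder arrays exist only to show the shapes involved.

```python
import numpy as np

def trajectory_variance(x0_preds, window=50):
    """Per-sample variance of predicted x0 over the last `window` reverse steps.

    x0_preds: array of shape (num_steps, num_samples, ...) holding the model's
    predicted clean sample at each backward step.
    """
    tail = x0_preds[-window:]                      # final few steps of the trajectory
    # Variance over time, summed over any remaining data dimensions.
    return tail.var(axis=0).reshape(tail.shape[1], -1).sum(axis=1)

def filter_hallucinations(samples, x0_preds, threshold, window=50):
    """Keep only samples whose late-trajectory variance stays below `threshold`."""
    scores = trajectory_variance(x0_preds, window=window)
    keep = scores < threshold
    return samples[keep], keep

# Illustrative usage with stand-in trajectories (e.g., 1000 samples of a 1-D toy dataset):
num_steps, num_samples = 500, 1000
x0_preds = np.random.randn(num_steps, num_samples, 1) * 0.01
samples = x0_preds[-1]
kept, mask = filter_hallucinations(samples, x0_preds, threshold=0.05)
print(f"kept {mask.sum()} of {num_samples} samples")
```

According to the paper, a cutoff on this kind of late-step variance removes over 95% of hallucinated samples while keeping about 96% of in-support ones; the exact statistic and threshold used there may differ from this sketch.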

Why it matters?

This research is significant because it helps improve the reliability of diffusion models in generating realistic images. By understanding and reducing hallucinations, AI systems become more trustworthy and useful for applications like art creation, video game design, and virtual reality. The authors also show that filtering out hallucinations helps stabilize recursive training on synthetic data, which would otherwise drift toward model collapse.

Abstract

Colloquially speaking, image generation models based upon diffusion processes are frequently said to exhibit "hallucinations," samples that could never occur in the training data. But where do such hallucinations come from? In this paper, we study a particular failure mode in diffusion models, which we term mode interpolation. Specifically, we find that diffusion models smoothly "interpolate" between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution; this phenomenon leads diffusion models to generate artifacts that never existed in real data (i.e., hallucinations). We systematically study the reasons for, and the manifestation of, this phenomenon. Through experiments on 1D and 2D Gaussians, we show how a discontinuous loss landscape in the diffusion model's decoder leads to a region where any smooth approximation will cause such hallucinations. Through experiments on artificial datasets with various shapes, we show how hallucination leads to the generation of combinations of shapes that never existed. Finally, we show that diffusion models in fact know when they go out of support and hallucinate. This is captured by the high variance in the trajectory of the generated sample during the final few steps of the backward sampling process. Using a simple metric to capture this variance, we can remove over 95% of hallucinations at generation time while retaining 96% of in-support samples. We conclude our exploration by showing the implications of such hallucination (and its removal) on the collapse (and stabilization) of recursive training on synthetic data with experiments on MNIST and 2D Gaussians datasets. We release our code at https://github.com/locuslab/diffusion-model-hallucination.
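
To tie the pieces together, the following is a self-contained toy of the 1D Gaussian setting: a DDPM-style ancestral sampler that uses the closed-form denoiser for a two-mode mixture, records the predicted x̂0 at every backward step, and reports the late-step variance for samples that end up between the modes versus on them. Everything here (noise schedule, mode placement, step count, window size) is an illustrative assumption, and the analytic denoiser merely stands in for a trained network, so this will not reproduce the paper's numbers; it only shows how such a setup can be instrumented with the late-step variance signal described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data distribution: two narrow 1D Gaussian modes.
modes, mode_std = np.array([-1.0, 1.0]), 0.05

# DDPM-style variance schedule (illustrative values).
T = 500
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abars = np.cumprod(alphas)

def denoise(x_t, abar):
    """Closed-form E[x0 | x_t] for the two-mode mixture (stand-in for a network)."""
    means_t = np.sqrt(abar) * modes
    var_t = abar * mode_std**2 + (1.0 - abar)
    logits = -0.5 * (x_t[:, None] - means_t[None, :]) ** 2 / var_t
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    x0_i = modes[None, :] + (np.sqrt(abar) * mode_std**2 / var_t) * (
        x_t[:, None] - means_t[None, :]
    )
    return (w * x0_i).sum(axis=1)

# Ancestral sampling, recording the predicted x0 at every backward step.
n = 2000
x = rng.standard_normal(n)
x0_track = np.empty((T, n))
for t in reversed(range(T)):
    abar_t = abars[t]
    abar_prev = abars[t - 1] if t > 0 else 1.0
    x0_hat = denoise(x, abar_t)
    x0_track[t] = x0_hat
    # Mean and variance of the DDPM posterior q(x_{t-1} | x_t, x0_hat).
    mean = (
        np.sqrt(abar_prev) * betas[t] / (1 - abar_t) * x0_hat
        + np.sqrt(alphas[t]) * (1 - abar_prev) / (1 - abar_t) * x
    )
    var = betas[t] * (1 - abar_prev) / (1 - abar_t)
    x = mean + (np.sqrt(var) * rng.standard_normal(n) if t > 0 else 0.0)

# Late-step variance metric (same idea as the filter sketched earlier).
late_var = x0_track[:50].var(axis=0)   # indices 49..0 are the final backward steps
between = np.abs(x) < 0.5              # samples that landed between the two modes
print("samples between modes:", int(between.sum()), "of", n)
if between.any():
    print("mean late-step variance between modes:", float(late_var[between].mean()))
print("mean late-step variance on modes:       ", float(late_var[~between].mean()))
```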