
Wavelet Latent Diffusion (WaLa): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings

Aditya Sanghi, Aliasghar Khani, Pradyumna Reddy, Arianna Rampini, Derek Cheung, Kamal Rahimi Malekshan, Kanika Madan, Hooman Shayani

2024-11-13


Summary

This paper introduces Wavelet Latent Diffusion (WaLa), a new method for generating 3D shapes that compresses them into compact wavelet-based encodings, letting a billion-parameter model create high-quality shapes quickly while using less computing power.

What's the problem?

The problem is that existing large-scale 3D generative models require a lot of computational resources and still struggle to capture fine details and complex geometries at high resolutions. This makes it hard to create detailed 3D shapes without excessive data and processing time.

What's the solution?

To tackle this issue, the authors introduce WaLa, which encodes 3D shapes into compact wavelet-based representations, compressing a 256^3 signed distance field into a 12^3 × 4 latent grid. That works out to a compression ratio of roughly 2427× with minimal loss of detail (a quick check of that figure appears below). The compact encoding lets them train generative models with about one billion parameters that produce high-quality 3D shapes quickly, typically within two to four seconds. They also show that their method outperforms existing approaches in generation quality and efficiency.
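As a sanity check, the compression ratio follows directly from the two grid sizes quoted in the paper. This small Python snippet (ours, not the authors') just redoes the arithmetic:

```python
# Back-of-the-envelope check of WaLa's reported compression ratio.
sdf_values = 256 ** 3         # values in a 256^3 signed distance field
latent_values = 12 ** 3 * 4   # values in the 12^3 x 4 latent grid
print(sdf_values / latent_values)  # ~2427.3, matching the paper's 2427x
```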

Why it matters?

This research is important because it advances the field of 3D modeling by making it possible to generate complex shapes more efficiently. By reducing the amount of data needed and speeding up the generation process, WaLa could have significant applications in areas like video game design, virtual reality, and computer-aided design, where creating detailed 3D models quickly is essential.

Abstract

Large-scale 3D generative models require substantial computational resources yet often fall short in capturing fine details and complex geometries at high resolutions. We attribute this limitation to the inefficiency of current representations, which lack the compactness required to train generative models effectively. To address this, we introduce a novel approach called Wavelet Latent Diffusion, or WaLa, that encodes 3D shapes into wavelet-based, compact latent encodings. Specifically, we compress a 256^3 signed distance field into a 12^3 × 4 latent grid, achieving an impressive 2427× compression ratio with minimal loss of detail. This high level of compression allows our method to efficiently train large-scale generative networks without increasing inference time. Our models, both conditional and unconditional, contain approximately one billion parameters and successfully generate high-quality 3D shapes at 256^3 resolution. Moreover, WaLa offers rapid inference, producing shapes within two to four seconds depending on the condition, despite the model's scale. We demonstrate state-of-the-art performance across multiple datasets, with significant improvements in generation quality, diversity, and computational efficiency. We open-source our code and, to the best of our knowledge, release the largest pretrained 3D generative models across different modalities.
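To make the wavelet idea concrete, here is a minimal sketch (not the authors' released code) of the classical 3D discrete wavelet transform that such encodings build on. The paper's 12^3 × 4 latent grid comes from learned encodings on top of a wavelet representation; the snippet below shows only the transform itself, and the PyWavelets library, the 'bior6.8' wavelet, the three-level decomposition, and the toy sphere SDF are all illustrative assumptions.

```python
# Illustrative sketch only: a 3D discrete wavelet transform of a toy SDF.
# WaLa's compact latents are learned on top of a wavelet representation;
# the wavelet and level choices here are assumptions for demonstration.
import numpy as np
import pywt  # PyWavelets

# Toy signed distance field of a sphere (radius 0.5) on a 256^3 grid.
axis = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Three-level 3D wavelet decomposition. With 'periodization' mode each
# level halves every axis, so the coarse band coeffs[0] is 32^3.
coeffs = pywt.wavedecn(sdf, wavelet="bior6.8", mode="periodization", level=3)
coarse = coeffs[0]
print(sdf.size, "->", coarse.size)  # 16777216 -> 32768 (512x smaller)
```

Keeping mostly the coarse band (with detail coefficients heavily pruned or further encoded) is what makes wavelet representations of signed distance fields so compact, which is the property WaLa exploits before its diffusion model ever runs.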