LLaNA: Large Language and NeRF Assistant
Andrea Amaduzzi, Pierluigi Zama Ramirez, Giuseppe Lisanti, Samuele Salti, Luigi Di Stefano
2024-06-18

Summary
This paper introduces LLaNA, an assistant that connects large language models (LLMs) with Neural Radiance Fields (NeRFs) to understand and describe 3D objects. LLaNA can perform new tasks such as NeRF captioning and NeRF question answering.
What's the problem?
Multimodal large language models are good at understanding both images and 3D data, but neither modality alone holistically captures an object: images record appearance from limited viewpoints, while 3D data such as point clouds capture geometry without photorealistic appearance. NeRFs are a newer representation that encodes both the appearance and geometry of an object within the weights of a small neural network, but integrating them with language models has been challenging. Existing approaches first render images from the NeRF or extract explicit 3D structures, which adds computation and can discard information.
What's the solution?
To solve this problem, the authors developed LLaNA, which directly processes the weights of a NeRF's MLP, without rendering images or reconstructing explicit 3D data structures. This lets the assistant extract information about the object a NeRF represents more efficiently. They also built a dataset of NeRFs paired with text annotations, generated with no human intervention, and used it to create benchmarks for NeRF-language tasks. The results show that processing NeRF weights directly performs better than pipelines that first extract 2D or 3D representations from the NeRF.
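To make the core idea concrete, the sketch below shows one way to feed NeRF weights to an LLM in PyTorch: flatten each layer of the NeRF's MLP, encode it, and project the result into the language model's token-embedding space so the LLM can attend to the NeRF like a prompt prefix. This is a minimal illustration under stated assumptions, not the paper's architecture (the paper builds on a pretrained NeRF weight encoder, nf2vec, rather than the per-layer linear encoders here), and every name in the snippet is hypothetical.

    import torch
    import torch.nn as nn

    class NerfWeightEncoder(nn.Module):
        """Hypothetical meta-encoder: maps the weights of a NeRF MLP to
        tokens in the language model's embedding space."""
        def __init__(self, layer_dims, embed_dim=4096, hidden_dim=1024):
            super().__init__()
            # One linear map per NeRF layer: (out*in + out) params -> hidden vector
            self.layer_encoders = nn.ModuleList(
                nn.Linear(out_d * in_d + out_d, hidden_dim)
                for in_d, out_d in layer_dims
            )
            # Projector into the LLM's token-embedding space
            self.projector = nn.Sequential(
                nn.Linear(hidden_dim, embed_dim),
                nn.GELU(),
                nn.Linear(embed_dim, embed_dim),
            )

        def forward(self, nerf_mlp):
            tokens = []
            for enc, layer in zip(self.layer_encoders, nerf_mlp):
                # Flatten this layer's weight matrix and bias into one vector
                flat = torch.cat([layer.weight.flatten(), layer.bias.flatten()])
                tokens.append(self.projector(enc(flat)))
            # One soft token per NeRF layer, to be prepended to the text prompt
            return torch.stack(tokens)

    # Toy NeRF MLP: positionally encoded 3D point (63-d) -> density + RGB (4-d)
    dims = [(63, 256), (256, 256), (256, 4)]
    nerf = nn.ModuleList(nn.Linear(i, o) for i, o in dims)
    encoder = NerfWeightEncoder(dims)
    nerf_tokens = encoder(nerf)
    print(nerf_tokens.shape)  # torch.Size([3, 4096])

In a setup like this, the resulting tokens would be prepended to the embedded question or captioning prompt, and the LLM answers conditioned on them; the point is that no image ever has to be rendered from the NeRF.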
Why it matters?
This research is important because it opens up new possibilities for combining language understanding with advanced 3D representations. By creating an assistant like LLaNA, we can improve how AI interacts with and describes complex visual information, which has applications in areas like virtual reality, gaming, and robotics. This advancement could lead to smarter AI systems that can better understand and communicate about the world around us.
Abstract
Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and photorealistic appearance of objects. This paper investigates the feasibility and effectiveness of ingesting NeRF into MLLM. We create LLaNA, the first general-purpose NeRF-language assistant capable of performing new tasks such as NeRF captioning and Q&A. Notably, our method directly processes the weights of the NeRF's MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention. Based on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs.