Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions

Jiarui Zhang, Ollie Liu, Tianyu Yu, Jinyi Hu, Willie Neiswanger

2024-12-13

Summary

This paper discusses Euclid, a new model designed to improve how AI understands and describes geometric details in images. It uses synthetic data to enhance the performance of multimodal large language models (MLLMs) in visual perception tasks.

What's the problem?

Multimodal large language models have advanced significantly, but they still struggle with low-level visual perception, particularly when it comes to accurately describing geometric details in images. This limitation affects their usefulness in important fields like robotics and medical image analysis, where precise visual understanding is crucial.

What's the solution?

To tackle this problem, the authors introduce a benchmark called Geoperception to evaluate how well MLLMs can transcribe geometric information from images. They conduct experiments to identify effective strategies for improving performance, such as using high-fidelity synthetic data and a structured, multi-stage training approach called a data curriculum, which enables models to learn geometry tasks they fail to learn from scratch. This leads to the development of Euclid, a family of models specifically optimized for low-level geometric perception. Although trained entirely on synthetic data, Euclid generalizes well to novel geometric shapes, outperforming the best closed-source model, Gemini-1.5-Pro, by up to 58.56% on certain Geoperception tasks.
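The data curriculum described above trains the model in stages, from simple geometric primitives to harder composite tasks, with each stage continuing from the previous stage's weights. The sketch below illustrates that idea in minimal form; the stage names, task labels, and `train_step` hook are illustrative assumptions, not the authors' actual training code.

```python
# Minimal sketch of a multi-stage data curriculum: train on easy synthetic
# geometry tasks first, then progressively harder ones. All names here are
# hypothetical; the paper's actual stages and tasks may differ.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    tasks: list[str]  # synthetic task identifiers for this stage

def build_curriculum() -> list[Stage]:
    """Order synthetic geometry tasks from simple to complex (illustrative)."""
    return [
        Stage("stage-1-primitives", ["point-on-line", "line-length"]),
        Stage("stage-2-relations", ["parallelism", "perpendicularity"]),
        Stage("stage-3-composites", ["angle-transcription", "multi-shape-scene"]),
    ]

def run_curriculum(train_step, curriculum: list[Stage]) -> list[str]:
    """Train sequentially; each stage implicitly reuses the previous weights."""
    completed = []
    for stage in curriculum:
        for task in stage.tasks:
            train_step(task)  # fine-tune on this stage's synthetic data
        completed.append(stage.name)
    return completed
```

The key design point is sequencing: rather than mixing all task difficulties at once, the model only sees harder compositions after mastering the primitives they build on.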

Why it matters?

This research matters because it improves the ability of AI systems to accurately interpret and describe geometric information, a prerequisite for applications like robotics and medical image analysis. By targeting low-level visual perception with synthetic data and curriculum-based training, Euclid demonstrates that strong geometric perception can be achieved without real-world training images, paving the way for advances in fields that depend on precise visual understanding.

Abstract

Multimodal large language models (MLLMs) have made rapid progress in recent years, yet continue to struggle with low-level visual perception (LLVP) -- particularly the ability to accurately describe the geometric details of an image. This capability is crucial for applications in areas such as robotics, medical image analysis, and manufacturing. In this paper, we first introduce Geoperception, a benchmark designed to evaluate an MLLM's ability to accurately transcribe 2D geometric information from an image. Using this benchmark, we demonstrate the limitations of leading MLLMs, and then conduct a comprehensive empirical study to explore strategies for improving their performance on geometric tasks. Our findings highlight the benefits of certain model architectures, training techniques, and data strategies, including the use of high-fidelity synthetic data and multi-stage training with a data curriculum. Notably, we find that a data curriculum enables models to learn challenging geometry understanding tasks which they fail to learn from scratch. Leveraging these insights, we develop Euclid, a family of models specifically optimized for strong low-level geometric perception. Although purely trained on synthetic multimodal data, Euclid shows strong generalization ability to novel geometry shapes. For instance, Euclid outperforms the best closed-source model, Gemini-1.5-Pro, by up to 58.56% on certain Geoperception benchmark tasks and 10.65% on average across all tasks.