POINTS1.5: Building a Vision-Language Model towards Real World Applications
Yuan Liu, Le Tian, Xiao Zhou, Xinyu Gao, Kavio Yu, Yang Yu, Jie Zhou
2024-12-12

Summary
This paper introduces POINTS1.5, a new vision-language model designed to understand images and answer questions about them, with a focus on real-world applications such as optical character recognition and complex diagram analysis.
What's the problem?
While recent vision-language models have made great progress on tasks such as optical character recognition and diagram analysis, they still face practical limitations. Many rely on vision encoders with a fixed input resolution, which forces large or unusually shaped images to be split into tiles; they offer weak support for languages beyond English, such as Chinese, because open-source Chinese training data is scarce; and the visual instruction tuning datasets they are trained on often contain noisy or low-quality samples.
What's the solution?
To address these issues, the authors developed POINTS1.5, which enhances the previous model POINTS1.0 with several key improvements. First, it replaces the fixed-resolution CLIP vision encoder with a NaViT-style encoder that can handle images of any resolution without breaking them into tiles (see the sketch below). Second, it adds Chinese language support by collecting a large number of images from the Internet and annotating them with a combination of manual and automatic methods. Finally, the authors apply rigorous filtering to the visual instruction tuning data so that only high-quality samples are used for training. Together, these changes allow POINTS1.5 to give more accurate and detailed responses to complex questions about images.
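To make the first improvement concrete, here is a minimal sketch of what NaViT-style dynamic-resolution patchification looks like: the whole image is turned into a single variable-length sequence of patches, so no fixed-size resize or tiling is needed. This is an illustrative assumption, not the authors' code; the patch size and the use of PIL/NumPy are chosen only for the example.

```python
# Sketch of dynamic-resolution patchification (illustrative, not the paper's code).
import numpy as np
from PIL import Image

PATCH = 14  # assumed ViT patch size


def patchify_native(img: Image.Image) -> np.ndarray:
    """Return a (num_patches, PATCH*PATCH*3) array for an image of any size."""
    # Round each side down to a multiple of the patch size, roughly keeping aspect ratio.
    w, h = img.size
    w, h = max(PATCH, w - w % PATCH), max(PATCH, h - h % PATCH)
    arr = np.asarray(img.convert("RGB").resize((w, h)), dtype=np.float32) / 255.0
    # Cut the full image into one sequence of patches -- no splitting into tiles.
    rows, cols = h // PATCH, w // PATCH
    patches = arr.reshape(rows, PATCH, cols, PATCH, 3).transpose(0, 2, 1, 3, 4)
    return patches.reshape(rows * cols, PATCH * PATCH * 3)


# A 1000x700 image yields (994 // 14) * (700 // 14) = 71 * 50 patches in a single sequence.
```

The key point is that the sequence length grows with the image, so a high-resolution document page keeps its fine detail instead of being downscaled or chopped into independent tiles.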
Why it matters?
This research is significant because it shows that a relatively small vision-language model can handle sophisticated real-world tasks, such as reading documents and analyzing complex scenes, in both English and Chinese. By improving how the model handles high-resolution images and by curating its training data, POINTS1.5 can be applied in various fields such as education, virtual reality, and content creation, making the technology more effective and accessible.
Abstract
Vision-language models have made significant strides recently, demonstrating superior performance across a range of tasks, e.g. optical character recognition and complex diagram analysis. Building on this trend, we introduce a new vision-language model, POINTS1.5, designed to excel in various real-world applications. POINTS1.5 is an enhancement of POINTS1.0 and incorporates several key innovations: i) We replace the original CLIP vision encoder, which had a fixed image resolution, with a NaViT-style vision encoder that supports native dynamic high resolution. This allows POINTS1.5 to process images of any resolution without needing to split them into tiles. ii) We add bilingual support to POINTS1.5, significantly enhancing its capability in Chinese. Due to the scarcity of open-source Chinese datasets for vision-language models, we collect numerous images from the Internet and annotate them using a combination of manual and automatic methods. iii) We propose a set of rigorous filtering methods for visual instruction tuning datasets. We comprehensively evaluate all these filtering methods, and choose the most effective ones to obtain the final visual instruction tuning set. Thanks to these innovations, POINTS1.5 significantly outperforms POINTS1.0 and demonstrates strong performance across a range of real-world applications. Notably, POINTS1.5-7B is trained on fewer than 4 billion tokens and ranks first on the OpenCompass leaderboard among models with fewer than 10 billion parameters.
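To illustrate point iii), the sketch below shows one kind of rule-based filter that visual instruction tuning pipelines commonly use: dropping responses with heavy n-gram repetition, a frequent symptom of low-quality model-generated annotations. This is an assumed example of such a filter, not one of the specific methods proposed in the paper; the field names and threshold are hypothetical.

```python
# Illustrative repetition filter for instruction tuning samples (assumed, not the paper's method).
def ngram_repetition_ratio(text: str, n: int = 4) -> float:
    """Fraction of n-grams that are duplicates (0.0 = no repetition, 1.0 = fully repetitive)."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(ngrams)) / len(ngrams)


def filter_samples(samples: list[dict], max_repetition: float = 0.2) -> list[dict]:
    """Keep samples whose 'response' field stays below the repetition threshold."""
    return [s for s in samples if ngram_repetition_ratio(s["response"]) <= max_repetition]
```

In practice, a battery of such filters would be evaluated individually, and only the ones that improve downstream benchmark scores would be kept for building the final tuning set, which is the spirit of the evaluation described in the abstract.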