Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling
Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yimin Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang
2024-12-09

Summary
This paper introduces InternVL 2.5, an advanced multimodal large language model series that improves how machines understand and process both visual and textual information by enhancing training strategies and data quality.
What's the problem?
Despite progress in multimodal models, scaling them effectively to improve performance remains a challenge. Many existing models struggle to perform well across different tasks, especially multi-discipline reasoning, document understanding, and handling varied media such as images and videos.
What's the solution?
The authors developed InternVL 2.5, which builds on the previous version (InternVL 2.0) by keeping the core architecture while improving training and testing strategies and data quality. They systematically explored how factors such as vision-encoder and language-model size, dataset size and quality, and test-time configurations affect performance. Through extensive evaluation on a wide range of benchmarks, they found that InternVL 2.5 rivals leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, it became the first open-source model to surpass 70% on the MMMU multi-discipline reasoning benchmark, aided by a 3.7-point gain from Chain-of-Thought (CoT) reasoning at test time.
Why it matters?
This research is important because it sets a new standard for open-source multimodal models, showing that they can reach performance comparable to commercial models. By improving how machines understand both language and images, InternVL 2.5 can enhance applications in fields like education, healthcare, and content creation, making this technology more accessible and effective.
Abstract
We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0, maintaining its core model architecture while introducing significant enhancements in training and testing strategies as well as data quality. In this work, we delve into the relationship between model scaling and performance, systematically exploring the performance trends in vision encoders, language models, dataset sizes, and test-time configurations. Through extensive evaluations on a wide range of benchmarks, including multi-discipline reasoning, document understanding, multi-image / video understanding, real-world comprehension, multimodal hallucination detection, visual grounding, multilingual capabilities, and pure language processing, InternVL 2.5 exhibits competitive performance, rivaling leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, our model is the first open-source MLLM to surpass 70% on the MMMU benchmark, achieving a 3.7-point improvement through Chain-of-Thought (CoT) reasoning and showcasing strong potential for test-time scaling. We hope this model contributes to the open-source community by setting new standards for developing and applying multimodal AI systems. A HuggingFace demo is available at https://huggingface.co/spaces/OpenGVLab/InternVL
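To make the test-time scaling point concrete, below is a minimal sketch of querying an InternVL 2.5 checkpoint with a Chain-of-Thought style instruction. It assumes the remote-code `AutoModel`/`chat()` interface that OpenGVLab publishes with its InternVL checkpoints on HuggingFace; the model ID, method signature, and example question are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: CoT-style prompting of an InternVL 2.5 checkpoint (assumed interface).
# Assumes the remote-code AutoModel + chat() helper shipped with OpenGVLab's
# InternVL checkpoints; model ID and signature below are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "OpenGVLab/InternVL2_5-8B"  # hypothetical choice of checkpoint size

model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID, trust_remote_code=True, use_fast=False
)

generation_config = dict(max_new_tokens=1024, do_sample=False)

# Text-only query; the CoT instruction asks the model to spend extra tokens
# reasoning before answering, which is the test-time scaling lever the
# abstract credits for the MMMU improvement.
question = (
    "A train travels 120 km in 1.5 hours. What is its average speed in km/h? "
    "Please reason step by step, then state the final answer."
)
response = model.chat(tokenizer, None, question, generation_config)
print(response)
```

Image inputs would additionally require the pixel-value preprocessing helper from the model card; the sketch keeps to a text-only query so it stays self-contained.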