
HoneyBee: Data Recipes for Vision-Language Reasoners

Hritik Bansal, Devendra Singh Sachan, Kai-Wei Chang, Aditya Grover, Gargi Ghosh, Wen-tau Yih, Ramakanth Pasunuru

2025-10-15

Summary

This paper investigates how to build better datasets for training vision-language models (VLMs) to perform reasoning tasks, like answering questions about images. It focuses on understanding what makes a dataset effective for teaching these models to 'think' through problems.

What's the problem?

While VLMs are getting really good at reasoning, it's not clear *why* certain datasets work better than others. Researchers don't fully understand which parts of a dataset – the images, the questions, or the step-by-step solutions provided – are most important for improving a model's reasoning ability. Essentially, building good training data is a bit of a black box.

What's the solution?

The researchers experimented with different ways to create and organize training data. They varied where the image-and-question pairs came from, added auxiliary signals like image captions to help the models, and mixed in examples of text-only reasoning. They also tested scaling up the data along several dimensions: more images, more questions per image, and more step-by-step (chain-of-thought) solutions per question. Based on these experiments, they built HoneyBee, a large reasoning-focused dataset with 2.5 million chain-of-thought examples covering 350,000 image-question pairs, and trained VLMs on it. They also developed a test-time strategy that makes the models cheaper to run when answering questions, without losing accuracy.
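To make the scaling idea concrete, here is a minimal sketch in Python of how the number of training examples grows when both axes are scaled. The toy corpus, field names, and expansion logic are illustrative assumptions, not HoneyBee's actual format or pipeline:

```python
# A minimal sketch of the two data-scaling axes the paper studies:
# unique questions per image, and unique CoT solutions per image-question
# pair. Corpus contents and field names here are invented for illustration.
corpus = {
    "img_001": ["What is the slope of the line?", "Where do the two lines intersect?"],
    "img_002": ["What is the area of the shaded region?"],
}

def expand_dataset(corpus, questions_per_image, cots_per_question):
    """Build one training example per (image, question, CoT) triple."""
    examples = []
    for image_id, questions in corpus.items():
        for question in questions[:questions_per_image]:
            for cot_id in range(cots_per_question):
                # In a real pipeline, each cot_id would index a distinct
                # sampled step-by-step solution for this question.
                examples.append({"image": image_id, "question": question, "cot_id": cot_id})
    return examples

small = expand_dataset(corpus, questions_per_image=1, cots_per_question=1)
large = expand_dataset(corpus, questions_per_image=2, cots_per_question=3)
print(len(small), len(large))  # scaling both axes multiplies the example count: 2 vs. 9
```

The point of the sketch is that the axes multiply: doubling questions per image and tripling CoTs per question yields up to six times as many training examples from the same set of images.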

Why it matters?

This work is important because it replaces guesswork with a clearer picture of how to build effective training datasets for VLMs. Models trained on HoneyBee substantially outperform prior models on complex reasoning tasks; for example, a 3-billion-parameter model beats the previous state of the art by 7.8% on the MathVerse math benchmark. The test-time strategy also cuts decoding cost by 73% without hurting accuracy, making these powerful models more practical to use.
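The paper's specific decoding strategy isn't detailed in this summary, so here is a generic sketch of one common way test-time cost can be cut: adaptive self-consistency, where chain-of-thought answers are sampled one at a time and decoding stops early once one answer clearly dominates, rather than always spending a fixed sampling budget. All names and the toy sampler below are illustrative, not the authors' method:

```python
from collections import Counter

def adaptive_majority_vote(sample_answer, max_samples=16, threshold=0.6, min_samples=3):
    """Draw CoT answers one at a time; stop early once one answer holds at
    least `threshold` of the votes, saving decoding cost versus always
    drawing max_samples. Generic technique, not HoneyBee's actual strategy."""
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_answer()] += 1
        answer, votes = counts.most_common(1)[0]
        if n >= min_samples and votes / n >= threshold:
            return answer, n  # early stop: n decodes instead of max_samples
    return counts.most_common(1)[0][0], max_samples

# Deterministic toy "model" that would answer "42" on its first three samples.
stream = iter(["42", "42", "42", "41"])
ans, used = adaptive_majority_vote(lambda: next(stream), max_samples=4)
print(ans, used)  # "42" after only 3 samples instead of the full budget of 4
```

The design choice is the usual cost-accuracy trade-off: easy questions converge after a few samples, so the expensive full budget is only spent on questions where the model's answers disagree.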

Abstract

Recent advances in vision-language models (VLMs) have made them highly effective at reasoning tasks. However, the principles underlying the construction of performant VL reasoning training datasets remain poorly understood. In this work, we introduce several data curation approaches and study their impacts on VL reasoning capabilities by carefully controlling training and evaluation setups. We analyze the effects of context (image and question pair) sources, implement targeted data interventions, and explore scaling up images, questions, and chain-of-thought (CoT) solutions. Our findings reveal that (a) context source strategies significantly affect VLM performance, (b) interventions such as auxiliary signals from image captions and the inclusion of text-only reasoning yield substantial gains, and (c) scaling all data dimensions (e.g., unique questions per image and unique CoTs per image-question pair) consistently improves reasoning capability. Motivated by these insights, we introduce HoneyBee, a large-scale, high-quality CoT reasoning dataset with 2.5M examples consisting of 350K image-question pairs. VLMs trained with HoneyBee outperform state-of-the-art models across model sizes. For instance, a HoneyBee-trained VLM with 3B parameters outperforms the SOTA model and the base model by 7.8% and 24.8%, respectively, on MathVerse. Furthermore, we propose a test-time scaling strategy that reduces decoding cost by 73% without sacrificing accuracy. Overall, this work presents improved strategies for VL reasoning dataset curation research.