HopChain: Multi-Hop Data Synthesis for Generalizable Vision-Language Reasoning

Shenzhi Wang, Shixuan Liu, Jing Zhou, Chang Gao, Xiong-Hui Chen, Binghai Wang, An Yang, Shiji Song, Bowen Yu, Gao Huang, Junyang Lin

2026-03-23

Summary

This paper focuses on improving how well vision-language models (VLMs) can reason about images and text together, specifically when the reasoning requires multiple steps and careful attention to visual details.

What's the problem?

Current VLMs are good at general tasks, but they struggle with complex questions that require several steps of reasoning based on what's actually *in* an image. When asked to think through a problem step-by-step (called 'Chain of Thought' reasoning), they often make perception errors (misreading the image), reasoning errors, knowledge errors, or outright hallucinations, and these mistakes can compound across steps. The root problem is that the datasets used to train these models don't contain enough multi-step, visually grounded reasoning problems, so the models never learn to handle them well.

What's the solution?

The researchers created a new system called HopChain to automatically generate training data for VLMs. HopChain builds questions that require multiple 'hops' or steps of reasoning, where each step builds on the previous one and is grounded in specific objects or details within an image. The final answer is always a clear number, making it easy to check if the model is correct. They then used this new data, along with existing data, to train two powerful VLMs, Qwen3.5-35B-A3B and Qwen3.5-397B-A17B, and tested how well they performed on a variety of tasks.
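To make the chaining idea concrete, here is a toy sketch (not the authors' implementation; the annotation schema and question templates are invented for illustration) of how a multi-hop query might be composed from per-image object annotations. Each hop consumes the result of the previous one, and the final hop reduces everything to a single number that can be checked automatically.

```python
# Illustrative sketch of HopChain-style multi-hop query synthesis.
# The object schema ({"name": ..., "color": ...}) and question templates
# are assumptions for this example, not the paper's actual pipeline.
from dataclasses import dataclass

@dataclass
class Hop:
    question: str   # sub-question phrased over the previous hop's result
    answer: object  # intermediate result the next hop builds on

def synthesize_query(objects):
    """objects: list of dicts like {"name": "car", "color": "red"}."""
    # Hop 1: establish a set of instances grounded in the image annotations.
    reds = [o for o in objects if o["color"] == "red"]
    hop1 = Hop("Which objects in the image are red?", reds)
    # Hop 2: apply a condition to the set established by hop 1.
    red_cars = [o for o in hop1.answer if o["name"] == "car"]
    hop2 = Hop("Of those, which are cars?", red_cars)
    # Final hop: reduce to one unambiguous number, suitable for
    # verifiable rewards in RLVR training.
    final = Hop("How many such cars are there?", len(hop2.answer))
    query = " ".join(h.question for h in (hop1, hop2, final))
    return query, final.answer

query, gold = synthesize_query(
    [{"name": "car", "color": "red"},
     {"name": "car", "color": "blue"},
     {"name": "bus", "color": "red"}]
)
print(gold)  # 1
```

The key property this sketch tries to capture is logical dependence: hop 2 cannot be answered without the set produced by hop 1, so a model cannot shortcut the chain by attending to only one visual cue.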

Why it matters?

This work is important because it shows that creating more challenging, multi-step reasoning data can significantly improve the ability of VLMs to understand and reason about the visual world. The improvements weren't specific to any one test; the models performed better across a wide range of tasks, and the benefits were especially noticeable when the reasoning required many steps. This suggests that HopChain is a useful tool for building more reliable and intelligent VLMs.

Abstract

VLMs show strong multimodal capabilities, but they still struggle with fine-grained vision-language reasoning. We find that long CoT reasoning exposes diverse failure modes, including perception, reasoning, knowledge, and hallucination errors, which can compound across intermediate steps. However, most existing vision-language data used for RLVR does not involve complex reasoning chains that rely on visual evidence throughout, leaving these weaknesses largely unexposed. We therefore propose HopChain, a scalable framework for synthesizing multi-hop vision-language reasoning data specifically for RLVR training of VLMs. Each synthesized multi-hop query forms a logically dependent chain of instance-grounded hops, where earlier hops establish the instances, sets, or conditions needed for later hops, while the final answer remains a specific, unambiguous number suitable for verifiable rewards. We add the multi-hop data synthesized by HopChain to the original RLVR data used to train Qwen3.5-35B-A3B and Qwen3.5-397B-A17B, and compare against RLVR on the original RLVR data alone across 24 benchmarks spanning STEM and Puzzle, General VQA, Text Recognition and Document Understanding, and Video Understanding. Although this multi-hop data is not synthesized to target any specific benchmark, adding it improves 20 out of 24 benchmarks on both models, indicating broad and generalizable gains. To demonstrate that full chained queries are important, we replace them with half-multi-hop or single-hop variants, reducing the 24-benchmark average accuracy by 5.3 and 7.0 points, respectively. Multi-hop training also strengthens long-CoT vision-language reasoning, with gains peaking at more than 50 accuracy points in the ultra-long-CoT regime. These experiments establish HopChain as an effective, scalable framework for synthesizing multi-hop data that improves generalizable vision-language reasoning.
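Because each synthesized query ends in a specific number, the RLVR reward can be computed by a simple programmatic check. The following is a minimal sketch of such a verifier, assuming (the paper does not specify this exact scheme) that the last number in the model's response is taken as its final answer.

```python
import re

def numeric_reward(model_output: str, gold: float) -> float:
    """Toy RLVR-style verifier: reward 1.0 iff the last number appearing
    in the model's output matches the gold answer. The extraction rule
    (take the last number) is an assumption for illustration."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not nums:
        return 0.0
    return 1.0 if abs(float(nums[-1]) - gold) < 1e-6 else 0.0

print(numeric_reward("...so the final count is 7.", 7))  # 1.0
print(numeric_reward("I count 5 of them.", 7))           # 0.0
```

This is why the paper emphasizes that final answers remain "specific, unambiguous" numbers: free-form textual answers would require a fuzzy or model-based grader, whereas numeric answers keep the reward exact and cheap to verify at scale.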