BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain
Navve Wasserman, Matias Cosarinsky, Yuval Golbari, Aude Oliva, Antonio Torralba, Tamar Rott Shaham, Michal Irani
2025-12-11
Summary
This research paper tackles the difficult problem of understanding how the human brain visually perceives and categorizes the world around us, and where in the brain those representations are encoded.
What's the problem?
For a long time, scientists have been trying to figure out how the brain represents what we see. Brain activity is incredibly complex, and the space of visual concepts the brain *could* be representing is enormous, which makes systematic study hard. Most studies have therefore been small-scale, focused on specific brain regions, and relied on researchers manually inspecting the data, which isn't very efficient or comprehensive.
What's the solution?
The researchers developed a new, automated system to analyze brain scans (fMRI data). It works in two steps: first, it finds patterns in brain activity without being told what to look for. Then, it identifies the kinds of images that most strongly trigger each pattern and uses them to produce a plain-language description of what that pattern represents. The system automatically tests several candidate descriptions and picks the most reliable one, which lets it scale to a huge amount of brain data (see the sketch below).
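To make the two steps concrete, here is a rough sketch in the spirit of the pipeline, not the authors' code. It assumes a `responses` matrix of fMRI activity with one row per image and one column per voxel, and uses PCA as a stand-in for whichever unsupervised decomposition the paper actually employs.

```python
# Rough sketch of the two-step idea (not the authors' implementation).
# Assumes `responses` is an (n_images, n_voxels) array of fMRI activity;
# PCA is a stand-in for the paper's unsupervised decomposition method.
import numpy as np
from sklearn.decomposition import PCA

def discover_patterns(responses: np.ndarray, n_patterns: int = 50):
    """Step 1: find candidate voxel patterns without any labels."""
    model = PCA(n_components=n_patterns, random_state=0)
    image_weights = model.fit_transform(responses)  # (n_images, n_patterns)
    voxel_patterns = model.components_              # (n_patterns, n_voxels)
    return image_weights, voxel_patterns

def top_images_for_pattern(image_weights: np.ndarray, pattern: int, k: int = 20):
    """Step 2 (first half): the images that most strongly elicit one pattern."""
    scores = image_weights[:, pattern]
    return np.argsort(scores)[::-1][:k]  # indices of the top-k images
```

In the actual framework, the top images for each pattern then feed into the description-and-scoring stage summarized in the abstract below.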
Why it matters?
This work is important because it allows us to discover thousands of different visual concepts the brain represents, including some that haven't been noticed before. It provides a more complete and automated way to understand how the brain processes visual information, which could eventually help us understand things like visual disorders or even build more intelligent artificial intelligence.
Abstract
Understanding how the human brain represents visual concepts, and in which brain regions these representations are encoded, remains a long-standing challenge. Decades of work have advanced our understanding of visual representations, yet brain signals remain large and complex, and the space of possible visual concepts is vast. As a result, most studies remain small-scale, rely on manual inspection, focus on specific regions and properties, and rarely include systematic validation. We present a large-scale, automated framework for discovering and explaining visual representations across the human cortex. Our method comprises two main stages. First, we discover candidate interpretable patterns in fMRI activity through unsupervised, data-driven decomposition methods. Next, we explain each pattern by identifying the set of natural images that most strongly elicit it and generating a natural-language description of their shared visual meaning. To scale this process, we introduce an automated pipeline that tests multiple candidate explanations, assigns quantitative reliability scores, and selects the most consistent description for each voxel pattern. Our framework reveals thousands of interpretable patterns spanning many distinct visual concepts, including fine-grained representations previously unreported.
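As a rough illustration of the "test multiple candidate explanations, assign reliability scores, and select the most consistent one" step, the sketch below uses CLIP image-text similarity as one plausible reliability score: a candidate description scores highly if it matches a pattern's top-activating images much better than a baseline set of images. The model choice, the scoring rule, and the function names here are assumptions for illustration, not the paper's actual procedure.

```python
# One plausible instantiation of explanation scoring (an assumption, not the
# paper's method): rank candidate descriptions by how much better they match a
# pattern's top images than a random baseline set, using CLIP similarity.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def reliability_scores(candidates, top_images, baseline_images):
    """Score each candidate description for one voxel pattern."""
    def mean_similarity(images):
        inputs = processor(text=candidates, images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_text  # (n_texts, n_images)
        return logits.mean(dim=1)                     # mean over the image set
    # Higher score = the description fits the top images better than baseline.
    return mean_similarity(top_images) - mean_similarity(baseline_images)

def best_explanation(candidates, top_images, baseline_images):
    scores = reliability_scores(candidates, top_images, baseline_images)
    return candidates[int(scores.argmax())], scores
```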