ViStoryBench: Comprehensive Benchmark Suite for Story Visualization
Cailin Zhuang, Ailin Huang, Wei Cheng, Jingwei Wu, Yaoqi Hu, Jiaqi Liao, Zhewei Huang, Hongyuan Wang, Xinyao Liao, Weiwei Cai, Hengyuan Xu, Xuanyang Zhang, Xianfang Zeng, Gang Yu, Chi Zhang
2025-06-02

Summary
This paper introduces ViStoryBench, a new benchmark suite designed to test how well AI models can turn written stories into images, using a wide variety of story types and metrics to measure their performance.
What's the problem?
It is hard to know how good AI models really are at creating visuals from stories, because until now there has been no thorough or fair way to test them across different kinds of stories and visual challenges.
What's the solution?
The researchers created ViStoryBench, which combines diverse datasets with scoring methods that evaluate how well models understand both the story and the visuals they generate. This makes it possible to compare different models side by side and see where each one does well or needs improvement.
Why it matters?
This matters because it pushes AI to get better at combining creativity with understanding, making story-based visuals more accurate and engaging for uses like education, entertainment, and creative projects.
Abstract
ViStoryBench is a comprehensive evaluation benchmark for story visualization frameworks, featuring diverse datasets and metrics to assess model performance across narrative and visual dimensions.