
Bridging Text and Video Generation: A Survey

Nilay Kumar, Priyansh Bhandari, G. Maragatham

2025-10-09


Summary

This paper is a thorough survey of how the technology for creating videos from text descriptions has developed, covering the different methods used along the way and where the field is heading.

What's the problem?

Creating videos from text is really hard! Early attempts struggled to produce videos that actually looked good, stayed consistent from frame to frame, and accurately reflected what the text asked for. Speed and efficiency were also a problem, since generating video takes a lot of computing power. The field needed to move beyond basic approaches to tackle these problems of quality, coherence, and control.

What's the solution?

The researchers examined the different approaches to text-to-video generation, starting with older methods like GANs and VAEs and moving to the newer, more successful Diffusion-Transformer models. They explained how each method works, what problems it solved compared to its predecessors, and why new designs were necessary. They also looked closely at the datasets used to train these models, the computer hardware and training settings required, and how the models are evaluated, pointing out the flaws in current evaluation methods.
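To give a feel for what a "Diffusion-Transformer" model actually does, here is a minimal, hedged sketch (not the survey's own method, and with toy sizes and hypothetical module names) of the core idea: a transformer is trained to predict the noise added to a grid of spatio-temporal latent tokens, conditioned on a text embedding, and a DDPM-style loop then turns pure noise into video latents step by step.

```python
# Toy sketch of text-conditioned Diffusion-Transformer (DiT) style sampling.
# All names, sizes, and the denoiser architecture are illustrative assumptions.
import torch
import torch.nn as nn


class ToyVideoDiT(nn.Module):
    def __init__(self, latent_dim=64, n_heads=4, n_layers=2, n_steps=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=n_heads,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.step_emb = nn.Embedding(n_steps, latent_dim)   # diffusion timestep embedding
        self.text_proj = nn.Linear(latent_dim, latent_dim)  # stand-in for a text encoder
        self.out = nn.Linear(latent_dim, latent_dim)        # predicts the added noise

    def forward(self, noisy_tokens, text_emb, t):
        # noisy_tokens: (batch, frames * patches, latent_dim) spatio-temporal latents
        cond = self.text_proj(text_emb).unsqueeze(1) + self.step_emb(t).unsqueeze(1)
        h = self.backbone(noisy_tokens + cond)
        return self.out(h)


@torch.no_grad()
def sample(model, text_emb, n_tokens=16, latent_dim=64, n_steps=50):
    """DDPM-style ancestral sampling: start from noise, denoise step by step."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, n_tokens, latent_dim)  # pure Gaussian noise
    for t in reversed(range(n_steps)):
        t_batch = torch.full((1,), t, dtype=torch.long)
        eps = model(x, text_emb, t_batch)      # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # a real system would decode these latents back into video frames


if __name__ == "__main__":
    model = ToyVideoDiT()
    fake_text_emb = torch.randn(1, 64)  # placeholder for a real text-encoder output
    latents = sample(model, fake_text_emb)
    print(latents.shape)  # torch.Size([1, 16, 64])
```

In real systems the denoiser is far larger, the text embedding comes from a pretrained language or CLIP-style encoder, and the latents are produced and decoded by a separate video autoencoder; the sketch only shows how the pieces fit together.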

Why it matters?

This work is important because it provides a single, comprehensive resource for anyone wanting to understand or work in the field of text-to-video generation. It highlights the current limitations and suggests areas for future research, which could lead to improvements in many areas like education, entertainment, and tools for people with visual impairments.

Abstract

Text-to-video (T2V) generation technology holds potential to transform multiple domains such as education, marketing, entertainment, and assistive technologies for individuals with visual or reading comprehension challenges, by creating coherent visual content from natural language prompts. From its inception, the field has advanced from adversarial models to diffusion-based models, yielding higher-fidelity, temporally consistent outputs. Yet challenges persist, such as alignment, long-range coherence, and computational efficiency. Addressing this evolving landscape, we present a comprehensive survey of text-to-video generative models, tracing their development from early GANs and VAEs to hybrid Diffusion-Transformer (DiT) architectures, detailing how these models work, what limitations they addressed in their predecessors, and why shifts toward new architectural paradigms were necessary to overcome challenges in quality, coherence, and control. We provide a systematic account of the datasets on which the surveyed text-to-video models were trained and evaluated, and, to support reproducibility and assess the accessibility of training such models, we detail their training configurations, including their hardware specifications, GPU counts, batch sizes, learning rates, optimizers, epochs, and other key hyperparameters. Further, we outline the evaluation metrics commonly used for evaluating such models and present their performance across standard benchmarks, while also discussing the limitations of these metrics and the emerging shift toward more holistic, perception-aligned evaluation strategies. Finally, drawing from our analysis, we outline the current open challenges and propose a few promising future directions, laying out a perspective for future researchers to explore and build upon in advancing T2V research and applications.
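As a concrete example of the evaluation metrics the abstract refers to, one widely used family measures text-video alignment as the average CLIP similarity between the prompt and each generated frame (often called CLIPSIM). The sketch below is a hedged illustration of that idea only: the two encoder functions are hypothetical placeholders, whereas a real pipeline would use a pretrained text/image encoder such as CLIP.

```python
# Illustrative CLIPSIM-style text-video alignment score.
# encode_text / encode_frame are placeholder (hypothetical) encoders, not real CLIP.
import torch
import torch.nn.functional as F


def encode_text(prompt: str) -> torch.Tensor:
    """Placeholder text encoder; returns a unit-norm embedding."""
    torch.manual_seed(hash(prompt) % (2**31))
    return F.normalize(torch.randn(512), dim=0)


def encode_frame(frame: torch.Tensor) -> torch.Tensor:
    """Placeholder image encoder; returns a unit-norm embedding."""
    return F.normalize(frame.flatten()[:512], dim=0)


def clip_sim_score(prompt: str, video: torch.Tensor) -> float:
    """Average per-frame cosine similarity between the prompt and the video.

    video: (num_frames, channels, height, width)
    """
    text_emb = encode_text(prompt)
    frame_embs = torch.stack([encode_frame(f) for f in video])
    sims = frame_embs @ text_emb  # cosine similarity, since embeddings are unit-norm
    return sims.mean().item()


if __name__ == "__main__":
    fake_video = torch.randn(16, 3, 64, 64)  # 16 generated frames (random stand-ins)
    print(clip_sim_score("a dog surfing a wave", fake_video))
```

Frame-averaged similarity scores like this capture prompt fidelity but say little about temporal coherence or perceptual quality, which is part of the metric limitations the survey discusses.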