Spatiotemporal Skip Guidance for Enhanced Video Diffusion Sampling
Junha Hyung, Kinam Kim, Susung Hong, Min-Jung Kim, Jaegul Choo
2024-12-02

Summary
This paper introduces Spatiotemporal Skip Guidance (STG), a training-free method that improves the quality of videos generated by diffusion models without the need for auxiliary models or extra training.
What's the problem?
While diffusion models can generate high-quality images and videos, the guidance techniques commonly used to boost quality, such as classifier-free guidance (CFG), reduce diversity and suppress motion, which can make the resulting videos look unnatural. Alternatives like Autoguidance avoid this trade-off, but they require training an additional weak model, which is impractical for large-scale video models.
What's the solution?
STG addresses these issues with a simple, training-free sampling guidance method. During sampling, it skips selected spatiotemporal layers of the transformer to obtain an aligned but degraded "weak" prediction from the same model, then guides the sample away from that weak prediction, as sketched below. Because the weak model is simulated by self-perturbation, quality improves without auxiliary models, extensive training, or the loss of diversity and smooth motion associated with CFG.
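A minimal sketch of the idea follows, assuming a transformer whose blocks live in `model.blocks` and each map a single hidden-state tensor to a tensor; names such as `skip_blocks`, `skip_indices`, and `stg_scale` are illustrative assumptions, not the authors' code. At each denoising step the same model is run twice, once intact and once with a few spatiotemporal blocks skipped, and the final prediction is extrapolated away from the skipped ("weak") output, in the same spirit as CFG extrapolating away from the unconditional prediction.

```python
# Hypothetical sketch of STG-style guidance (not the authors' implementation).
import torch


def skip_blocks(model, block_indices):
    """Temporarily replace selected transformer blocks with identity modules.

    Assumes each block takes and returns a single hidden-state tensor,
    so nn.Identity is a valid stand-in for a skipped layer."""
    saved = {i: model.blocks[i] for i in block_indices}
    for i in block_indices:
        model.blocks[i] = torch.nn.Identity()
    return saved


def restore_blocks(model, saved):
    """Put the original blocks back after the weak forward pass."""
    for i, block in saved.items():
        model.blocks[i] = block


@torch.no_grad()
def stg_denoise_step(model, x_t, t, cond, skip_indices, stg_scale=1.0):
    # Full ("strong") prediction from the unmodified model.
    eps_full = model(x_t, t, cond)

    # Weak prediction: same model, same inputs, with a few spatiotemporal
    # layers skipped (self-perturbation instead of a separately trained weak model).
    saved = skip_blocks(model, skip_indices)
    eps_weak = model(x_t, t, cond)
    restore_blocks(model, saved)

    # Extrapolate away from the degraded prediction, analogous to CFG/Autoguidance.
    return eps_full + stg_scale * (eps_full - eps_weak)
```

Which layers to skip and how large the guidance scale should be are model-specific hyperparameters; the sketch simply treats them as inputs.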
Why it matters?
This research is significant because it provides a more efficient way to generate high-quality videos, making it easier for creators and developers to produce visually appealing content. By improving video generation techniques, STG can be applied in various fields such as entertainment, gaming, and virtual reality, ultimately enhancing user experiences.
Abstract
Diffusion models have emerged as a powerful tool for generating high-quality images, videos, and 3D content. While sampling guidance techniques like CFG improve quality, they reduce diversity and motion. Autoguidance mitigates these issues but demands extra weak model training, limiting its practicality for large-scale models. In this work, we introduce Spatiotemporal Skip Guidance (STG), a simple training-free sampling guidance method for enhancing transformer-based video diffusion models. STG employs an implicit weak model via self-perturbation, avoiding the need for external models or additional training. By selectively skipping spatiotemporal layers, STG produces an aligned, degraded version of the original model to boost sample quality without compromising diversity or dynamic degree. Our contributions include: (1) introducing STG as an efficient, high-performing guidance technique for video diffusion models, (2) eliminating the need for auxiliary models by simulating a weak model through layer skipping, and (3) ensuring quality-enhanced guidance without compromising sample diversity or dynamics, unlike CFG. For additional results, visit https://junhahyung.github.io/STGuidance.
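For readers who want the mechanics, the two guidance rules can be contrasted as follows. This is a sketch in standard epsilon-prediction notation, assuming STG follows the same extrapolation pattern as CFG and Autoguidance; the paper's exact formulation and symbols may differ. Here the hatted term denotes the forward pass with spatiotemporal layers skipped.

```latex
% Sketch of the two guidance rules (requires amssymb for \varnothing).
% CFG extrapolates the conditional prediction away from the unconditional one;
% STG extrapolates away from a layer-skipped "weak" prediction of the same model.
\begin{align}
  \epsilon_{\mathrm{CFG}} &= \epsilon_\theta(x_t, \varnothing)
    + w\,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr) \\
  \epsilon_{\mathrm{STG}} &= \epsilon_\theta(x_t, c)
    + s\,\bigl(\epsilon_\theta(x_t, c) - \hat{\epsilon}_\theta(x_t, c)\bigr)
\end{align}
```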