Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models
Xin Ma, Yaohui Wang, Gengyu Jia, Xinyuan Chen, Yuan-Fang Li, Cunjian Chen, Yu Qiao
2024-07-23

Summary
This paper introduces Cinemo, a new method for creating smooth and controllable animations from still images using motion diffusion models. It focuses on improving how well these animations maintain the details of the original image while allowing users to control the animation's motion.
What's the problem?
Creating animations from static images can be challenging because existing methods often struggle to keep the details of the original image consistent over time. Additionally, ensuring that the animation looks smooth and matches user-defined motions can be difficult. Many current techniques either lose important information from the original image or produce choppy animations.
What's the solution?
Cinemo addresses these issues with three strategies applied during training and inference. First, during training it learns the distribution of motion residuals, the frame-to-frame changes relative to the input image, rather than generating each frame directly, which helps preserve the input image's appearance over time. Second, it conditions generation on a motion-intensity score derived from the structural similarity index (SSIM), giving users simple and precise control over how strong the motion appears. Third, at inference time it refines the initial noise with a technique based on the discrete cosine transform, which smooths out sudden motion changes. Together, these strategies produce high-quality animations that are both consistent and easy for users to control; a sketch of the first two ideas follows below.
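To make the first two strategies concrete, here is a minimal PyTorch sketch under stated assumptions: the names `global_ssim`, `motion_intensity`, and `training_targets` are illustrative rather than taken from the paper's code, the residual is taken against the first frame, and the SSIM is a simplified global (non-windowed) variant rather than the standard windowed one.

```python
import torch

def global_ssim(x: torch.Tensor, y: torch.Tensor, data_range: float = 1.0) -> torch.Tensor:
    """Simplified global (non-windowed) SSIM over the last three dims
    (channels, height, width); returns one score per leading index."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    dims = (-3, -2, -1)
    mu_x, mu_y = x.mean(dims), y.mean(dims)
    var_x, var_y = x.var(dims, unbiased=False), y.var(dims, unbiased=False)
    cov = ((x - mu_x[..., None, None, None]) * (y - mu_y[..., None, None, None])).mean(dims)
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def motion_intensity(video: torch.Tensor) -> torch.Tensor:
    """One motion-intensity scalar per clip: 1 minus the mean SSIM of each
    later frame against the first frame. Lower similarity = stronger motion."""
    first = video[:, :1].expand_as(video[:, 1:])
    return 1.0 - global_ssim(video[:, 1:], first).mean(dim=1)

def training_targets(video: torch.Tensor):
    """Motion residuals (each frame minus the first frame) plus the intensity
    condition; the denoiser learns the residual distribution instead of
    predicting the frames themselves."""
    residuals = video - video[:, :1]            # (B, T, C, H, W)
    return residuals, motion_intensity(video)   # residuals, (B,)

# Usage on a dummy batch of two 16-frame RGB clips with values in [0, 1].
video = torch.rand(2, 16, 3, 64, 64)
residuals, intensity = training_targets(video)
```

Conditioning on a similarity-derived score like this is what gives the intensity control its intuitive scale: a user asks for more or less deviation from the still image, rather than tuning an abstract latent quantity.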
Why it matters?
This research is important because it enhances the ability to animate images in a way that preserves their original details and allows for precise user input. This could have many applications, such as in video games, movies, or social media content creation, where high-quality animations are desired. By improving the technology behind image animation, Cinemo can help artists and creators bring their ideas to life more effectively.
Abstract
Diffusion models have achieved great progress in image animation due to their powerful generative capabilities. However, maintaining spatio-temporal consistency with the detailed information of the input static image over time (e.g., its style, background, and objects) and ensuring smoothness in animated video narratives guided by textual prompts remain challenging. In this paper, we introduce Cinemo, a novel image animation approach towards achieving better motion controllability, as well as stronger temporal consistency and smoothness. In general, we propose three effective strategies at the training and inference stages of Cinemo to accomplish our goal. At the training stage, Cinemo focuses on learning the distribution of motion residuals, rather than directly predicting subsequent frames, via a motion diffusion model. Additionally, a structural similarity index-based strategy is proposed to give Cinemo better controllability over motion intensity. At the inference stage, a noise refinement technique based on the discrete cosine transformation is introduced to mitigate sudden motion changes. These three strategies enable Cinemo to produce highly consistent, smooth, and motion-controllable results. Compared to previous methods, Cinemo offers simpler and more precise user controllability. Extensive experiments against several state-of-the-art methods, including both commercial tools and research approaches, across multiple metrics demonstrate the effectiveness and superiority of our proposed approach.
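The abstract describes the inference-time refinement only at a high level, but the core idea of DCT-based noise blending can be sketched as follows. Everything in this snippet is an assumption for illustration: the function name `dct_low_freq_blend`, the square low-frequency mask, and the `cutoff` fraction are hypothetical choices, not the paper's actual formulation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_low_freq_blend(noise: np.ndarray, reference: np.ndarray,
                       cutoff: float = 0.25) -> np.ndarray:
    """Keep the high-frequency DCT coefficients of `noise` but take the
    low-frequency block from `reference` (e.g. a noised encoding of the
    input image), so the refined noise inherits the image's coarse layout.

    noise, reference: 2-D arrays (apply per channel and per frame).
    cutoff: fraction of each axis treated as low frequency (assumed value).
    """
    n_dct = dctn(noise, norm="ortho")
    r_dct = dctn(reference, norm="ortho")
    h, w = noise.shape
    kh, kw = max(1, int(h * cutoff)), max(1, int(w * cutoff))
    mixed = n_dct.copy()
    mixed[:kh, :kw] = r_dct[:kh, :kw]  # low frequencies come from the reference
    return idctn(mixed, norm="ortho")

# Usage: refine the initial sampling noise for one 64x64 channel.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
reference = rng.standard_normal((64, 64))  # stand-in for a noised input-image latent
refined = dct_low_freq_blend(noise, reference)
```

Blending in frequency space rather than pixel space lets the refinement anchor the coarse structure of the animation to the input image while leaving the high-frequency components, which the sampler needs as genuine noise, untouched.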