Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation

Fangyuan Mao, Aiming Hao, Jintao Chen, Dongxia Liu, Xiaokun Feng, Jiashu Zhu, Meiqi Wu, Chubin Chen, Jiahong Wu, Xiangxiang Chu

2025-08-12

Summary

This paper introduces Omni-Effects, a new system for creating visual effects in videos that lets you control both what kind of effect shows up and exactly where it appears on the screen. It can combine several different effects in a single model without mixing them up, and it lets users guide the effects with text prompts and spatial information.

What's the problem?

The problem is that current video generation methods usually train for only one effect at a time, which makes it hard to create videos with multiple effects that appear exactly where you want them. When trying to combine many effects in one model, the different effects can interfere with each other, and it's challenging to place these effects precisely on different parts of the screen.

What's the solution?

Omni-Effects solves this with two main innovations. First, it uses a LoRA-based Mixture of Experts module, where separate expert branches specialize in different effects, which keeps the effects from interfering with each other. Second, it uses a Spatial-Aware Prompt that blends spatial location information into the text instructions to control exactly where each effect appears. An Independent-Information Flow mechanism keeps the control signal for each effect separate, avoiding unwanted blending between them. The team also built a large dataset and an evaluation framework to measure how well the system works.
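To make the first idea concrete, here is a minimal toy sketch of a LoRA-based Mixture-of-Experts layer. This is an illustrative assumption about the general technique, not the authors' actual implementation: the class names, shapes, and hard per-effect routing are all hypothetical.

```python
# Toy sketch of a LoRA-based Mixture of Experts (assumed structure, not the
# paper's code): a shared frozen weight plus one low-rank adapter per effect.
import numpy as np

rng = np.random.default_rng(0)

class LoRAExpert:
    """One low-rank adapter (x @ A @ B) specialized for a single effect."""
    def __init__(self, dim, rank=4):
        self.A = rng.normal(0.0, 0.02, (dim, rank))
        # B starts at zero, so an untrained expert is a no-op (standard LoRA init).
        self.B = np.zeros((rank, dim))

    def __call__(self, x):
        return x @ self.A @ self.B

class LoRAMoE:
    """Frozen base weight plus per-effect LoRA branches.

    Each requested effect activates only its own expert, so the effects'
    trainable parameters stay separated -- the kind of isolation the paper
    uses to prevent cross-effect interference.
    """
    def __init__(self, dim, effect_names, rank=4):
        self.W = rng.normal(0.0, 0.02, (dim, dim))  # shared, frozen weight
        self.experts = {name: LoRAExpert(dim, rank) for name in effect_names}

    def __call__(self, x, active_effects):
        out = x @ self.W
        for name in active_effects:  # hard routing: one branch per effect
            out = out + self.experts[name](x)
        return out

# Usage: request two of three effects; only their branches are applied.
layer = LoRAMoE(dim=8, effect_names=["explode", "melt", "levitate"])
x = rng.normal(size=(2, 8))
y = layer(x, active_effects=["explode", "melt"])
```

Because each effect owns its own `A` and `B` matrices, training one effect never updates another effect's parameters, which is the intuition behind using separate expert branches to stop effects from interfering.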

Why it matters?

This matters because it lets creators build complex videos with many different visual effects, with precise control over where and how each one appears. That can make producing high-quality, customized visual content faster and easier, which is useful for movies, games, advertising, and more.

Abstract

Omni-Effects is a unified framework that enables the generation of prompt-guided and spatially controllable composite visual effects using LoRA-based Mixture of Experts and Spatial-Aware Prompt with Independent-Information Flow.