
RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers

Min Zhao, Guande He, Yixiao Chen, Hongzhou Zhu, Chongxuan Li, Jun Zhu

2025-02-25


Summary

This paper introduces RIFLEx, a simple trick that lets AI video generators produce videos longer than the ones they were trained on, without needing any extra training.

What's the problem?

Current video generation models can create high-quality, minute-long videos, but producing even longer videos that stay coherent over time is a major challenge. When existing methods try to stretch a model beyond its training length, the results tend to break down: the video either repeats the same motion over and over or the motion slows down unnaturally.

What's the solution?

The researchers analyzed the frequency components in the positional embeddings that tell the model where each frame sits in time. They found one "intrinsic" frequency that mainly governs what happens when the model generates beyond its training length. RIFLEx simply reduces this intrinsic frequency, which suppresses the repetition while keeping motion consistent, and requires no other changes to the model. With no training at all, this lets state-of-the-art video diffusion transformers generate videos twice as long; with minimal fine-tuning that doesn't even require long videos, it improves quality further and enables videos three times as long.

Why it matters?

This matters because generating longer, coherent videos is one of the biggest remaining challenges in video generation. Since RIFLEx is training-free, anyone using a state-of-the-art video diffusion transformer can get longer videos essentially for free. The analysis of how individual frequency components drive extrapolation behavior could also guide future work on extending video and sequence models beyond their training lengths.

Abstract

Recent advancements in video generation have enabled models to synthesize high-quality, minute-long videos. However, generating even longer videos with temporal coherence remains a major challenge, and existing length extrapolation methods lead to temporal repetition or motion deceleration. In this work, we systematically analyze the role of frequency components in positional embeddings and identify an intrinsic frequency that primarily governs extrapolation behavior. Based on this insight, we propose RIFLEx, a minimal yet effective approach that reduces the intrinsic frequency to suppress repetition while preserving motion consistency, without requiring any additional modifications. RIFLEx offers a true free lunch--achieving high-quality 2× extrapolation on state-of-the-art video diffusion transformers in a completely training-free manner. Moreover, it enhances quality and enables 3× extrapolation by minimal fine-tuning without long videos. Project page and codes: https://riflex-video.github.io/.
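The core idea in the abstract — locate the positional-embedding frequency component whose period roughly matches the training length, then lower that frequency so no full period fits inside the extrapolated length — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the RoPE-style frequency spectrum, the rule for picking the "intrinsic" component, and the clamping rule are all assumptions based on the abstract's description.

```python
import numpy as np

def rope_frequencies(dim, base=10000.0):
    """Standard RoPE-style frequency spectrum: theta_j = base^(-2j/dim)."""
    return base ** (-np.arange(0, dim, 2) / dim)

def riflex_frequencies(dim, train_len, extrap_factor, base=10000.0):
    """Sketch of a RIFLEx-style adjustment (illustrative, not the paper's code).

    Picks the component whose period is closest to the training length as
    the "intrinsic" frequency, then reduces it so that no full period fits
    inside the extrapolated length, suppressing temporal repetition.
    """
    freqs = rope_frequencies(dim, base).copy()
    periods = 2 * np.pi / freqs
    # Assumed selection rule: intrinsic component completes ~one full
    # period within the training length.
    k = int(np.argmin(np.abs(periods - train_len)))
    # Assumed scaling rule: stretch its period to cover the target length.
    target_len = extrap_factor * train_len
    freqs[k] = min(freqs[k], 2 * np.pi / target_len)
    return freqs
```

For example, with `riflex_frequencies(128, train_len=64, extrap_factor=2)`, only the single intrinsic component is slowed down; all other frequencies are untouched, which is what makes the method a minimal modification rather than a global rescaling of the embedding.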