FakeParts: a New Family of AI-Generated DeepFakes

Gaetan Brison, Soobash Daiboo, Samy Aimeur, Awais Hussain Sani, Xi Wang, Gianni Franchi, Vicky Kalogeiton

2025-08-29

Summary

This research focuses on a new, more realistic type of deepfake called 'FakeParts,' which subtly alters only parts of a real video instead of creating an entirely fake one. The paper highlights how difficult these partial deepfakes are to spot and introduces a new dataset to help improve detection methods.

What's the problem?

Current deepfake detection systems are effective at finding videos that are entirely fabricated. These new 'FakeParts' deepfakes, however, blend real and fake content seamlessly: because only small portions of a genuine video are changed, they deceive both people and existing AI detection tools, creating a significant vulnerability.

What's the solution?

To tackle this problem, the researchers created a large dataset called 'FakePartsBench.' This dataset contains over 25,000 videos with pixel-level and frame-level annotations pinpointing exactly where and when each video has been manipulated. This allows researchers to train and test new detection methods specifically designed for these partial deepfakes. They also conducted user studies showing how much harder FakeParts are for both people and current AI models to identify.
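As a rough illustration of how frame-level annotations like these could be used, here is a minimal sketch of scoring a detector against per-frame ground truth. The label format and the helper function are assumptions for illustration, not the paper's actual API or evaluation protocol.

```python
# Hypothetical sketch: evaluating a detector against frame-level
# manipulation annotations (1 = manipulated frame, 0 = authentic).
# The label lists and this metric are illustrative assumptions.

def frame_level_accuracy(pred_labels, true_labels):
    """Fraction of frames where the detector's fake/real call
    matches the ground-truth annotation."""
    assert len(pred_labels) == len(true_labels) and true_labels
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return correct / len(true_labels)

# Toy example: a 6-frame clip where frames 2-4 are manipulated.
truth = [0, 0, 1, 1, 1, 0]
preds = [0, 0, 1, 0, 1, 0]   # detector misses one manipulated frame
print(frame_level_accuracy(preds, truth))  # 5 of 6 frames correct
```

A partial deepfake makes this kind of localized scoring necessary: a whole-video real/fake label would hide exactly the frames where a detector fails.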

Why it matters?

This work is important because FakeParts represent a growing threat: they are more believable and harder to detect than traditional deepfakes, so they could be used to spread misinformation or cause harm more effectively. By identifying this weakness and providing a dataset for improvement, the researchers are helping to develop better tools to protect against this new form of deception.

Abstract

We introduce FakeParts, a new class of deepfakes characterized by subtle, localized manipulations to specific spatial regions or temporal segments of otherwise authentic videos. Unlike fully synthetic content, these partial manipulations, ranging from altered facial expressions to object substitutions and background modifications, blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address the critical gap in detection capabilities, we present FakePartsBench, the first large-scale benchmark dataset specifically designed to capture the full spectrum of partial deepfakes. Comprising over 25K videos with pixel-level and frame-level manipulation annotations, our dataset enables comprehensive evaluation of detection methods. Our user studies demonstrate that FakeParts reduces human detection accuracy by over 30% compared to traditional deepfakes, with similar performance degradation observed in state-of-the-art detection models. This work identifies an urgent vulnerability in current deepfake detection approaches and provides the necessary resources to develop more robust methods for partial video manipulations.