Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing
Hanhui Wang, Yihua Zhang, Ruizheng Bai, Yue Zhao, Sijia Liu, Zhengzhong Tu
2024-11-28

Summary
This paper introduces FaceLock, a new method designed to protect personal images from malicious editing by ensuring that edited portraits cannot be recognized or linked back to the original person.
What's the problem?
Advances in diffusion-based image editing have made it easy to alter portraits in harmful ways, threatening people's privacy and identity. Existing protection methods typically add adversarial perturbations to block such edits, but they often fail against the wide variety of possible editing requests, leaving personal images vulnerable.
What's the solution?
FaceLock takes a different tack: rather than trying to prevent edits outright, it optimizes an adversarial perturbation on the original photo so that any edited output has its biometric information destroyed or significantly altered. It does this by folding facial recognition and visual perception into the perturbation optimization, so that even after editing, the image can no longer be traced back to the real person. Experiments show the method outperforms previous protection approaches across diverse editing attempts.
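To make the idea concrete, here is a minimal sketch of this kind of perturbation optimization, assuming PyTorch. The names face_encoder (standing in for a pretrained face-recognition model such as ArcFace) and edit_fn (standing in for a differentiable diffusion editor), along with the epsilon budget and step counts, are illustrative assumptions; the loss captures only the identity term described above, not the paper's full objective.

```python
import torch
import torch.nn.functional as F

def optimize_perturbation(image, face_encoder, edit_fn,
                          epsilon=8 / 255, steps=100, lr=1e-2):
    """Search for a small perturbation so that editing the protected
    image yields a face that no longer matches the original identity."""
    original_emb = face_encoder(image).detach()  # biometric signature to destroy
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        protected = (image + delta).clamp(0, 1)
        edited = edit_fn(protected)              # simulate a malicious edit
        edited_emb = face_encoder(edited)

        # Minimize similarity between the edited face and the original
        # identity (the paper additionally incorporates a visual-perception
        # term, omitted here for brevity).
        loss = F.cosine_similarity(edited_emb, original_emb, dim=-1).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Keep the perturbation imperceptible via an L-infinity budget.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (image + delta).clamp(0, 1).detach()
```

With toy stand-ins (e.g., a small linear encoder and an identity edit function) the loop runs end to end; in practice the encoder would be a pretrained face-recognition network and edit_fn a diffusion-based editor through which gradients can be propagated.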
Why it matters?
This research is important because it enhances the protection of individuals' identities in an age where image editing tools are widely available. By providing a reliable way to safeguard personal images from malicious edits, FaceLock helps maintain privacy and security in digital spaces, which is increasingly crucial as technology continues to evolve.
Abstract
Recent advancements in diffusion models have made generative image editing more accessible, enabling creative edits but raising ethical concerns, particularly regarding malicious edits to human portraits that threaten privacy and identity security. Existing protection methods primarily rely on adversarial perturbations to nullify edits but often fail against diverse editing requests. We propose FaceLock, a novel approach to portrait protection that optimizes adversarial perturbations to destroy or significantly alter biometric information, rendering edited outputs biometrically unrecognizable. FaceLock integrates facial recognition and visual perception into perturbation optimization to provide robust protection against various editing attempts. We also highlight flaws in commonly used evaluation metrics and reveal how they can be manipulated, emphasizing the need for reliable assessments of protection. Experiments show FaceLock outperforms baselines in defending against malicious edits and is robust against purification techniques. Ablation studies confirm its stability and broad applicability across diffusion-based editing algorithms. Our work advances biometric defense and sets the foundation for privacy-preserving practices in image editing. The code is available at: https://github.com/taco-group/FaceLock.
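The abstract's point about unreliable metrics suggests evaluating protection biometrically rather than with pixel-level scores such as PSNR. Below is a hedged sketch of such a check; face_encoder and the 0.3 threshold are illustrative assumptions, not the paper's evaluation protocol.

```python
import torch.nn.functional as F

def is_identity_protected(original, edited, face_encoder, threshold=0.3):
    """Return True if the edited image no longer biometrically matches
    the original person, i.e., the cosine similarity of their face
    embeddings falls below a chosen (here, assumed) threshold."""
    sim = F.cosine_similarity(face_encoder(original),
                              face_encoder(edited), dim=-1)
    return bool((sim < threshold).all())
```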