If you're concerned about your photos being used to generate deepfake videos, adversarial perturbations applied in multiple domains (color and frequency) can effectively block modern video generation models while remaining imperceptible to humans.
This paper presents Anti-I2V, a defense that protects photos from being misused by image-to-video (I2V) generation models to create fake videos. Rather than adding pixel-space noise alone, it crafts adversarial perturbations across multiple color spaces and frequency domains to disrupt video generation, targeting both conventional and newer Transformer-based architectures.
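To make the multi-domain idea concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): it nudges an image in a color domain (the chroma channels of a YUV representation) and in a frequency domain (high-frequency 2D DCT coefficients), then clips the total change to a small per-pixel budget `eps` so it stays near-imperceptible. All function names, the BT.601 color matrix choice, and the budget value are illustrative assumptions.

```python
import numpy as np

# BT.601 RGB<->YUV conversion matrix (an illustrative color-space choice).
_M = np.array([[0.299, 0.587, 0.114],
               [-0.14713, -0.28886, 0.436],
               [0.615, -0.51499, -0.10001]])

def rgb_to_yuv(img):
    # img: float array in [0, 1], shape (H, W, 3)
    return img @ _M.T

def yuv_to_rgb(img):
    return img @ np.linalg.inv(_M).T

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def perturb(img, eps=4.0 / 255.0, rng=None):
    # Hypothetical two-domain perturbation; real methods would optimize
    # the perturbation against a video model's loss instead of using noise.
    rng = np.random.default_rng(0) if rng is None else rng
    h, w, _ = img.shape
    dh, dw = dct_matrix(h), dct_matrix(w)

    # Color-domain step: nudge only the chroma (U, V) channels.
    yuv = rgb_to_yuv(img)
    yuv[..., 1:] += rng.uniform(-eps, eps, size=yuv[..., 1:].shape)
    out = yuv_to_rgb(yuv)

    # Frequency-domain step: nudge the high-frequency DCT quadrant.
    mask = np.zeros((h, w))
    mask[h // 2:, w // 2:] = 1.0
    for c in range(3):
        coeffs = dh @ out[..., c] @ dw.T
        coeffs += mask * rng.uniform(-eps, eps, size=coeffs.shape)
        out[..., c] = dh.T @ coeffs @ dw

    # Imperceptibility budget: clip total change to +/- eps per pixel.
    return np.clip(np.clip(out, img - eps, img + eps), 0.0, 1.0)

img = np.full((8, 8, 3), 0.5)
adv = perturb(img)
```

In a real attack, the perturbation direction would come from gradients of the target video generator rather than random noise; the two-domain structure is what lets the defense touch statistics that pixel-space noise alone does not.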