Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces

Document length: 14 pages

Abstract: Deepfake represents a category of face-swapping attacks that leverage machine learning models such as autoencoders or generative adversarial networks (GANs). Although the concept of face swapping is not new, recent technical advances make fake content (e.g., images, videos) more realistic and imperceptible to humans. Various detection techniques for Deepfake attacks have been explored. These methods, however, are passive measures, as they mitigate the attack only after high-quality fake content has been generated. More importantly, we would like to stay ahead of attackers with robust defenses. This work takes an offensive measure to impede the generation of high-quality fake images or videos. Specifically, we propose novel transformation-aware adversarially perturbed faces as a defense against GAN-based Deepfake attacks. Unlike naive adversarial faces, our approach leverages differentiable random image transformations during perturbation generation. We also propose an ensemble-based approach to enhance the robustness of the defense against GAN-based Deepfake variants in the black-box setting. We show that training a Deepfake model on adversarial faces leads to a significant degradation in the quality of the synthesized faces. This degradation is twofold. On the one hand, the synthesized faces exhibit more visual artifacts, so they appear more obviously fake or less convincing to human observers. On the other hand, the synthesized faces can easily be detected based on various metrics.
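The transformation-aware idea sketched in the abstract — averaging the adversarial gradient over differentiable random transformations before perturbing the face — can be illustrated with a toy NumPy example. Everything here is a hypothetical stand-in (a linear map `W` in place of a Deepfake model's encoder, random diagonal scaling in place of image transformations, a single FGSM-style step); the paper's actual models, transformations, and optimization procedure are not specified in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "encoder" W, target features y,
# and a flattened "face" x that we want to protect.
W = rng.normal(size=(8, 16))
y = rng.normal(size=8)
x = rng.normal(size=16)

def random_transform():
    """Sample a differentiable random transform (here: random per-pixel scaling)."""
    return np.diag(rng.uniform(0.8, 1.2, size=16))

def loss(x_in, T):
    """Model fitting loss on a transformed input: ||W T x - y||^2."""
    r = W @ (T @ x_in) - y
    return float(r @ r)

def grad(x_in, T):
    """Analytic gradient of the loss w.r.t. the input x."""
    return 2.0 * T.T @ W.T @ (W @ (T @ x_in) - y)

# Expectation-over-transformation gradient, then one FGSM-style step
# that *increases* the model's loss on the protected face.
Ts = [random_transform() for _ in range(32)]
g = np.mean([grad(x, T) for T in Ts], axis=0)
eps = 0.1
x_adv = x + eps * np.sign(g)

base = np.mean([loss(x, T) for T in Ts])
adv = np.mean([loss(x_adv, T) for T in Ts])
print(adv > base)  # the perturbed face is harder for the model to fit
```

Because the perturbation is optimized in expectation over the sampled transforms, it remains adversarial even when the attacker's training pipeline applies similar transformations — which is the intuition behind making the adversarial faces "transformation-aware".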
