Generating Adversarial Patterns in Facial Recognition with Visual Camouflage

  • 1. Key Laboratory of Social Computing and Cognitive Intelligence of Ministry of Education; School of Control Science and Engineering; School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, Liaoning, China; 2. Systems Engineering Institute, Academy of Military Sciences, Beijing 100141, China

Received date: 2023-07-06

Accepted date: 2023-07-27

Online published: 2023-12-21

Abstract

Deep neural networks, and face recognition models in particular, have been shown to be vulnerable to adversarial examples. However, existing attack methods for face recognition systems either fail against black-box models, are not universal, require cumbersome deployment, or lack camouflage and are easily noticed by the human eye. In this paper, we propose an adversarial pattern generation method for face recognition that achieves universal black-box attacks by pasting the generated pattern onto the frame of a pair of goggles. To achieve visual camouflage, we use a generative adversarial network (GAN): the generator is enlarged to balance the performance conflict between concealment and adversarial strength, a perceptual loss based on VGG19 constrains the color style and enhances the GAN's learning ability, and a fine-grained meta-learning adversarial attack strategy carries out the black-box attacks. Extensive visualization results demonstrate that, compared with existing methods, the proposed method generates samples that are both camouflaged and adversarial, while extensive quantitative experiments show that the generated samples achieve a high attack success rate against black-box models.
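The VGG19-based perceptual loss mentioned above follows the standard construction in which "style" (including color statistics) is compared through Gram matrices of feature maps. As a minimal NumPy sketch of that mechanism only, the snippet below assumes the feature maps have already been extracted by a VGG19-style network (the extractor itself is omitted); the function names `gram_matrix` and `perceptual_style_loss` are illustrative, not taken from the paper.

```python
import numpy as np

def gram_matrix(feat: np.ndarray) -> np.ndarray:
    """Gram matrix of a (C, H, W) feature map.

    Captures channel co-activation statistics, which correlate with
    color and texture "style" rather than spatial content.
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    # Normalize by the number of elements so the loss scale is
    # comparable across layers of different sizes.
    return (f @ f.T) / (c * h * w)

def perceptual_style_loss(feat_gen: np.ndarray, feat_ref: np.ndarray) -> float:
    """Squared Frobenius distance between Gram matrices of the
    generated pattern's and the reference image's feature maps."""
    g_gen = gram_matrix(feat_gen)
    g_ref = gram_matrix(feat_ref)
    return float(np.sum((g_gen - g_ref) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.standard_normal((4, 8, 8))   # stand-in for a VGG19 feature map
    print(perceptual_style_loss(ref, ref))  # identical features -> 0.0
```

In practice this term would be summed over several VGG19 layers and weighted against the adversarial objective, so that the generator is penalized when the pattern's color statistics drift from the target style.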

Cite this article

BAO Qirui, MEI Haiyang, WEI Huilin, L Zheng, WANG Yuxin, YANG Xin. Generating Adversarial Patterns in Facial Recognition with Visual Camouflage [J]. Journal of Shanghai Jiaotong University (Science), 2025, 30(5): 911-922. DOI: 10.1007/s12204-023-2692-x

