J Shanghai Jiaotong Univ Sci ›› 2025, Vol. 30 ›› Issue (5): 911-922. DOI: 10.1007/s12204-023-2692-x


Generating Adversarial Patterns in Facial Recognition with Visual Camouflage

BAO Qirui1, MEI Haiyang1, WEI Huilin2, LÜ Zheng1, WANG Yuxin1, YANG Xin1

  1. Key Laboratory of Social Computing and Cognitive Intelligence of Ministry of Education; School of Control Science and Engineering; School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, Liaoning, China; 2. Systems Engineering Institute, Academy of Military Sciences, Beijing 100141, China
  Received: 2023-07-06; Accepted: 2023-07-27; Online: 2025-09-26; Published: 2023-12-21


Key words: face recognition, adversarial attack, black-box attack, camouflage pattern

Abstract: Deep neural networks, especially face recognition models, have been shown to be vulnerable to adversarial examples. However, existing attack methods for face recognition systems either cannot attack black-box models, are not universal and are cumbersome to deploy, or lack camouflage and are easily detected by the human eye. In this paper, we propose an adversarial pattern generation method for face recognition and achieve universal black-box attacks by pasting the pattern on the frame of a pair of goggles. To achieve visual camouflage, we use a generative adversarial network (GAN). We enlarge the GAN's generator to balance the performance conflict between concealment and adversarial strength, use a VGG19-based perceptual loss to constrain the color style and enhance the GAN's learning ability, and adopt a fine-grained meta-learning adversarial attack strategy to carry out black-box attacks. Extensive visualization results demonstrate that, compared with existing methods, the proposed method generates samples that are both camouflaged and adversarial. Meanwhile, extensive quantitative experiments show that the generated samples achieve a high attack success rate against black-box models.
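
The abstract names the VGG19-based perceptual loss only at a high level. Below is a minimal PyTorch sketch, not the authors' implementation, of such a loss: frozen VGG19 features of the generated pattern are compared against a reference texture to constrain its color style. The tapped layer depths and the L1 criterion are assumptions for illustration.

    # Minimal sketch of a VGG19-based perceptual loss (layer taps and the
    # L1 criterion are assumptions, not the paper's exact configuration).
    import torch
    import torch.nn as nn
    from torchvision import models

    class PerceptualLoss(nn.Module):
        def __init__(self, taps=(3, 8, 17, 26)):  # assumed feature depths
            super().__init__()
            vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
            self.features = vgg.features.eval()
            for p in self.features.parameters():
                p.requires_grad_(False)            # frozen feature extractor
            self.taps = set(taps)
            self.l1 = nn.L1Loss()

        def forward(self, generated, reference):
            loss, x, y = 0.0, generated, reference
            for i, layer in enumerate(self.features):
                x, y = layer(x), layer(y)
                if i in self.taps:                 # compare intermediate features
                    loss = loss + self.l1(x, y)
            return loss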

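The fine-grained meta-learning attack strategy is likewise only named. A simplified, hypothetical training step under common meta-learning-attack assumptions might look like the sketch below: an ensemble of white-box surrogate face models is split into meta-train and meta-test roles each iteration. The helpers paste_on_goggles and embedding_sim, the dodging objective, and the split scheme are all illustrative placeholders, not the authors' code.

    # Simplified sketch of one meta-learning attack step (all names and the
    # train/test split are assumptions). A held-out surrogate at each step
    # encourages the pattern to transfer to unseen (black-box) recognizers.
    import torch
    import torch.nn.functional as F

    def paste_on_goggles(face, patch, mask):
        # Hypothetical placement: blend the pattern into the goggle-frame
        # region (mask is 1 on the frame, 0 elsewhere).
        return face * (1 - mask) + patch * mask

    def embedding_sim(model, adv, clean):
        # Cosine similarity of face embeddings; a dodging attack minimizes it.
        return F.cosine_similarity(model(adv), model(clean)).mean()

    def meta_step(generator, z, face, mask, surrogates, inner_lr=0.01):
        patch = generator(z)                          # candidate pattern
        adv = paste_on_goggles(face, patch, mask)
        meta_train, meta_test = surrogates[:-1], surrogates[-1:]

        # Inner step: adapt the patch against the meta-train surrogates.
        loss_tr = sum(embedding_sim(m, adv, face) for m in meta_train)
        grad = torch.autograd.grad(loss_tr, patch, create_graph=True)[0]
        adapted = patch - inner_lr * grad             # differentiable update

        # Outer step: the adapted patch must also fool the held-out surrogate.
        adv2 = paste_on_goggles(face, adapted, mask)
        loss_te = sum(embedding_sim(m, adv2, face) for m in meta_test)
        return loss_tr + loss_te                      # backprop into generator

Rotating which surrogate is held out, rather than attacking all surrogates jointly, is what distinguishes a meta-learning attack from a plain ensemble attack.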