Generating Adversarial Patterns in Facial Recognition with Visual Camouflage

Qirui Bao, Haiyang Mei, Huilin Wei, Zheng Lü, Yuxin Wang, Xin Yang

Journal of Shanghai Jiaotong University (Science) (2024)

Abstract
Deep neural networks, and face recognition models in particular, have been shown to be vulnerable to adversarial examples. However, existing attack methods for face recognition systems either cannot attack black-box models, are not universal, require cumbersome deployment, or lack camouflage and are easily detected by the human eye. In this paper, we propose an adversarial pattern generation method for face recognition and achieve universal black-box attacks by pasting the pattern onto the frame of goggles. To achieve visual camouflage, we use a generative adversarial network (GAN). We enlarge the GAN's generator to balance the conflict between concealment and adversarial effectiveness, apply a VGG19-based perceptual loss to constrain the color style and strengthen the GAN's learning ability, and adopt a fine-grained meta-learning strategy to carry out black-box attacks. Extensive visualization results demonstrate that, compared with existing methods, the proposed method generates samples that are both camouflaged and adversarial. Meanwhile, extensive quantitative experiments show that the generated samples achieve a high attack success rate against black-box models.
Keywords
face recognition, adversarial attacks, black-box attack, camouflage pattern