Transferable Adversarial Example Generation Method for Face Verification

SUN Junmei, PAN Zhenxiong, LI Xiumei, YUAN Long, ZHANG Xin

孙军梅, 潘振雄, 李秀梅, 袁珑, 张鑫. 面向人脸验证的可迁移对抗样本生成方法[J]. 电子与信息学报, 2023, 45(5): 1842-1851. doi: 10.11999/JEIT220358
Citation: SUN Junmei, PAN Zhenxiong, LI Xiumei, YUAN Long, ZHANG Xin. Transferable Adversarial Example Generation Method for Face Verification[J]. Journal of Electronics & Information Technology, 2023, 45(5): 1842-1851. doi: 10.11999/JEIT220358


doi: 10.11999/JEIT220358
Article information
    Author biographies:

    SUN Junmei: female, Ph.D., associate professor; research interests include deep learning and intelligent software systems

    PAN Zhenxiong: male, master's student; research interests include deep learning and adversarial examples

    LI Xiumei: female, Ph.D., professor; research interests include time-frequency analysis and its applications, compressed sensing, and deep learning

    YUAN Long: male, master's student; research interests include deep learning, adversarial attack and defense, and object detection

    ZHANG Xin: male, master's student; research interests include deep learning and medical image processing

    Corresponding author:

    LI Xiumei, lixiumei@hznu.edu.cn

  • CLC number: TN911.73; TP391.41

Funds: The National Natural Science Foundation of China (61801159, 61571174), The Science and Technology Plan Project of Hangzhou (20201203B124)
  • Abstract: In the face verification task of face recognition models, traditional adversarial attack methods cannot quickly generate realistic, natural-looking adversarial examples, and white-box attacks crafted on a single model transfer poorly to other face recognition models. This paper proposes TAdvFace, a transferable adversarial example generation method based on generative adversarial networks. TAdvFace uses an attention generator to strengthen facial feature extraction, a Gaussian filtering operation to increase the smoothness of adversarial examples, and an automatic adjustment strategy to tune the weight of the identity discrimination loss, so that high-quality, transferable adversarial examples can be generated quickly for different face images. Experimental results show that adversarial examples produced by TAdvFace, trained in a white-box setting on a single model, achieve good attack performance against a variety of face recognition models and commercial API models, demonstrating good transferability.
  • Figure 1  Face verification process

    Figure 2  TAdvFace network architecture

    Figure 3  Structure of the attention generator and the SE module

    Figure 4  Relationship between the identity discrimination loss weight and cosine similarity

    Figure 5  Adversarial examples generated by each attack method

    Figure 6  Variants of SE module placement
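The verification step illustrated in Figure 1 reduces to comparing two face embeddings by cosine similarity and thresholding the result, which is also the "cosine pair" quantity used by Algorithm 1. A minimal sketch in plain Python (the embeddings, the 0.6 threshold, and the function names are illustrative, not taken from the paper):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(emb1, emb2, threshold=0.6):
    """Declare 'same identity' when similarity reaches the threshold."""
    return cosine_similarity(emb1, emb2) >= threshold

# Nearby embeddings verify as the same person; orthogonal ones do not.
assert verify([1.0, 0.0], [0.9, 0.1])
assert not verify([1.0, 0.0], [0.0, 1.0])
```

An untargeted attack succeeds when the adversarial example drives this similarity below the model's decision threshold for a genuine pair.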

    Algorithm 1  Automatic adjustment algorithm
     Input: adversarial loss weight ${\lambda _i}$; target face image $y$; maximum number of iterations $T$; adversarial example ${x^t}$
     (1) for $t = 0$ to $T - 1$ do
     (a)  ${\lambda _i} = 10$
     (b)  generate adversarial example ${x^t}$ within $\varepsilon $
     (c)  cosine pair = CosineSimilarity$({x^t}, y)$
     (d)  if cosine pair < 0.6 then
     (e)   ${\lambda _i} = 1.1 \times {\lambda _i}$ $\triangleright$ cosine pair ∈ [–1, 0.6]
     (f)   if cosine pair < 0.4 then
     (g)    ${\lambda _i} = 1.1 \times {\lambda _i}$ $\triangleright$ cosine pair ∈ [–1, 0.4]
     (h)    if cosine pair < 0 then
     (i)     ${\lambda _i} = 1.1 \times {\lambda _i}$ $\triangleright$ cosine pair ∈ [–1, 0]
     (j)     if cosine pair < –0.2 then
     (k)      ${\lambda _i} = 1.05 \times {\lambda _i}$ $\triangleright$ cosine pair ∈ [–1, –0.2]
     (l)      if cosine pair < –0.4 then
     (m)       ${\lambda _i} = 1.05 \times {\lambda _i}$ $\triangleright$ cosine pair ∈ [–1, –0.4]
     (n)  update $L$ with the new ${\lambda _i}$ $\triangleright$ compute the total loss
     (o)  $t = t + 1$
     (2) end for
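Algorithm 1's nested conditions collapse into a simple schedule: each threshold the cosine pair falls below contributes one more multiplication of the weight, by 1.1 for the first three thresholds and 1.05 for the last two. A sketch in plain Python (the function and variable names are illustrative; only the thresholds and factors come from Algorithm 1):

```python
# Threshold/factor schedule from Algorithm 1: every threshold the
# cosine pair falls below applies one more multiplicative factor.
SCHEDULE = [(0.6, 1.1), (0.4, 1.1), (0.0, 1.1), (-0.2, 1.05), (-0.4, 1.05)]

def adjust_weight(cosine_pair, lambda_i=10.0):
    """Scale the identity discrimination loss weight for one iteration."""
    for threshold, factor in SCHEDULE:
        if cosine_pair < threshold:
            lambda_i *= factor
    return lambda_i

# A still-similar pair (0.7) keeps the base weight of 10; a strongly
# dissimilar pair (-0.5) is scaled by 1.1^3 * 1.05^2.
print(adjust_weight(0.7))             # 10.0
print(round(adjust_weight(-0.5), 4))  # 14.6743
```

Intuitively, the lower the cosine similarity already is, the harder the weight pushes the identity loss, while the smaller 1.05 factors near the bottom of the range keep the weight from growing without bound.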

    Table 1  Untargeted attack success rate (%) of the adversarial examples generated by each attack method against face recognition models

    | Attack method | FaceNet | SphereFace | InsightFace | VGG-Face | API-Baidu | API-Face++ | API-Xfyun |
    | PGD-8 | 99.90 | 26.29 | 26.29 | 21.61 | 72.21 | 22.40 | 6.31 |
    | PGD-16 | 99.90 | 52.95 | 52.54 | 32.67 | 94.57 | 56.25 | 16.95 |
    | FGM-0.01 | 87.47 | 17.74 | 13.47 | 16.00 | 50.75 | 10.35 | 2.71 |
    | FGM-0.04 | 91.60 | 26.76 | 28.22 | 18.26 | 74.81 | 26.74 | 7.62 |
    | FLM | 100.00 | 24.01 | 16.40 | 18.13 | 74.85 | 20.32 | 5.24 |
    | GFLM | 99.83 | 33.51 | 26.40 | 23.07 | 89.62 | 42.48 | 11.78 |
    | AdvFace | 100.00 | 63.04 | 51.51 | 57.13 | 94.18 | 59.71 | 13.05 |
    | TAdvFace (ours) | 99.98 | 78.59 | 58.22 | 73.47 | 98.37 | 74.22 | 28.14 |

    Table 2  Image quality metrics and generation time of the adversarial examples produced by each attack method

    | Metric | PGD-8 | PGD-16 | FGM-0.01 | FGM-0.04 | FLM | GFLM | AdvFace | TAdvFace (ours) |
    | ↑SSIM | 0.89±0.01 | 0.75±0.03 | 0.82±0.07 | 0.82±0.07 | 0.82±0.05 | 0.62±0.10 | 0.91±0.02 | 0.92±0.02 |
    | ↑PSNR (dB) | 34.11±0.39 | 29.23±0.41 | 19.01±3.27 | 18.99±3.24 | 23.25±1.81 | 19.50±2.34 | 28.62±2.67 | 29.80±4.08 |
    | ↓LPIPS | 0.037±0.012 | 0.086±0.021 | 0.073±0.041 | 0.072±0.041 | 0.033±0.010 | 0.058±0.025 | 0.020±0.006 | 0.020±0.007 |
    | ↓Time (s) | 8.27 | 7.86 | 0.01 | 0.01 | 0.12 | 0.53 | 0.01 | 0.01 |
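Of the metrics in Table 2, PSNR is the most direct to reproduce: it is the log-scaled ratio of the squared peak pixel value to the mean squared error between the clean image and the adversarial example. A sketch in plain Python (8-bit pixel range assumed; this is not the paper's evaluation code):

```python
import math

def psnr(clean, adversarial, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two flat pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, adversarial)) / len(clean)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform perturbation of 1 gray level gives MSE = 1, about 48.13 dB.
print(round(psnr([10, 20, 30], [11, 21, 31]), 2))  # 48.13
```

Higher PSNR means a smaller perturbation; the ~29.8 dB reported for TAdvFace corresponds to a visibly small but nonzero residual.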

    Table 3  Untargeted attack success rate (%) of adversarial examples under different Gaussian kernel sizes

    | Kernel size | FaceNet | SphereFace | InsightFace | VGG-Face | API-Baidu | API-Face++ |
    | / (no filtering) | 99.72 | 26.73 | 16.16 | 29.63 | 67.23 | 17.22 |
    | 1×1 | 99.69 | 24.50 | 15.30 | 28.03 | 64.61 | 15.71 |
    | 3×3 | 99.70 | 25.12 | 15.30 | 26.93 | 65.74 | 16.03 |
    | 5×5 | 99.70 | 38.67 | 21.19 | 41.68 | 78.56 | 27.69 |
    | 7×7 | 99.79 | 65.71 | 40.25 | 65.14 | 95.60 | 60.46 |
    | 9×9 | 99.78 | 87.73 | 62.74 | 79.24 | 98.51 | 83.46 |

    Table 4  Image quality metrics of adversarial examples under different Gaussian kernel sizes

    | Kernel size | ↑SSIM | ↑PSNR | ↓LPIPS |
    | / (no filtering) | 0.9732±0.013 | 34.40±4.82 | 0.0053±0.002 |
    | 1×1 | 0.9730±0.013 | 34.31±4.85 | 0.0052±0.002 |
    | 3×3 | 0.9725±0.014 | 34.37±4.82 | 0.0052±0.002 |
    | 5×5 | 0.9656±0.017 | 33.32±4.75 | 0.0073±0.003 |
    | 7×7 | 0.9406±0.026 | 30.63±4.60 | 0.0170±0.006 |
    | 9×9 | 0.8415±0.042 | 25.08±4.15 | 0.0676±0.019 |
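Tables 3 and 4 vary the size of the Gaussian kernel used to smooth the perturbation: larger kernels concentrate the perturbation in low frequencies, trading image quality for transferability. A sketch of how such a normalized kernel is built (plain Python; the paper does not specify sigma, so OpenCV's size-derived heuristic is assumed here for illustration):

```python
import math

def gaussian_kernel(size, sigma=None):
    """Build a normalized 2D Gaussian kernel of shape size x size."""
    if sigma is None:
        # OpenCV's heuristic for deriving sigma from the kernel size
        # (an assumption here, not specified by the paper).
        sigma = 0.3 * ((size - 1) * 0.5 - 1) + 0.8
    center = (size - 1) / 2
    kernel = [[math.exp(-((i - center) ** 2 + (j - center) ** 2) / (2 * sigma ** 2))
               for j in range(size)] for i in range(size)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

# Weights sum to 1, so smoothing redistributes the perturbation
# without changing its overall energy budget per pixel.
k = gaussian_kernel(7)
print(round(sum(sum(row) for row in k), 6))  # 1.0
```

Convolving the generator's raw perturbation with this kernel is what produces the smoothness effect measured across the 1×1 to 9×9 rows of the two tables.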

    Table 5  Untargeted attack success rate (%) of adversarial examples at different embedding stages

    | Embedding stage | FaceNet | SphereFace | InsightFace | VGG-Face | API-Baidu | API-Face++ |
    | / (none) | 99.79 | 65.71 | 40.25 | 65.14 | 95.60 | 60.46 |
    | R256-1 | 99.68 | 67.64 | 38.52 | 61.09 | 94.43 | 58.02 |
    | R256-2 | 99.79 | 61.42 | 36.07 | 59.09 | 93.07 | 52.07 |
    | R256-3 | 99.80 | 66.91 | 37.63 | 65.78 | 93.97 | 56.98 |
    | All | 99.82 | 72.27 | 44.17 | 68.18 | 95.25 | 62.01 |

    Table 6  Attack success rate (%) of adversarial examples under different SE module placements

    | Placement | FaceNet | SphereFace | InsightFace | VGG-Face | API-Baidu | API-Face++ |
    | Standard combination | 99.73 | 68.34 | 38.04 | 62.95 | 93.98 | 56.24 |
    | Before the convolutions | 99.82 | 69.14 | 41.18 | 67.26 | 95.85 | 63.45 |
    | Between the convolutions | 99.70 | 65.84 | 38.84 | 64.03 | 94.76 | 57.96 |
    | In the skip connection | 99.82 | 72.27 | 44.17 | 68.18 | 95.25 | 62.01 |
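The SE (squeeze-and-excitation) module compared in Table 6 recalibrates channels: global-average-pool each channel ("squeeze"), pass the result through a small bottleneck ending in a sigmoid ("excitation"), then rescale the channels by the resulting weights. A minimal sketch with numpy (the layer sizes and random weights are illustrative stand-ins for learned parameters; which placement performs best is an empirical finding of Table 6, not something this sketch demonstrates):

```python
import numpy as np

def se_module(x, reduction=2, rng=np.random.default_rng(0)):
    """Squeeze-and-Excitation over a (C, H, W) feature map."""
    c = x.shape[0]
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    w1 = rng.standard_normal((c // reduction, c))  # bottleneck FC (illustrative weights)
    w2 = rng.standard_normal((c, c // reduction))  # expansion FC
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid channel weights in (0, 1)
    return x * scale[:, None, None]                # reweight each channel

features = np.ones((4, 8, 8))
out = se_module(features)
print(out.shape)  # (4, 8, 8)
```

Because the sigmoid weights lie strictly in (0, 1), the module can only attenuate channels, steering the generator's capacity toward the facial regions that matter for the identity loss.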

    Table 7  Untargeted attack success rate (%) under different identity discrimination loss weight strategies

    | Weight strategy | FaceNet | SphereFace | InsightFace | VGG-Face | API-Baidu | API-Face++ |
    | ${\lambda _i}$ = 10 | 99.82 | 72.27 | 44.17 | 68.18 | 95.25 | 62.01 |
    | Automatic adjustment | 99.98 | 78.59 | 58.22 | 73.47 | 98.37 | 74.22 |
  • [1] ZHONG Yaoyao and DENG Weihong. Towards transferable adversarial attack against deep face recognition[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 1452–1466. doi: 10.1109/TIFS.2020.3036801
    [2] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [3] GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [4] MIYATO T, DAI A M, and GOODFELLOW I J. Adversarial training methods for semi-supervised text classification[C]. The 5th International Conference on Learning Representations, Toulon, France, 2017.
    [5] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [6] DABOUEI A, SOLEYMANI S, DAWSON J, et al. Fast geometrically-perturbed adversarial faces[C]. 2019 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, USA, 2019: 1979–1988.
    [7] DONG Yinpeng, SU Hang, WU Baoyuan, et al. Efficient decision-based black-box adversarial attacks on face recognition[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 7709–7714.
    [8] HANSEN N and OSTERMEIER A. Completely derandomized self-adaptation in evolution strategies[J]. Evolutionary Computation, 2001, 9(2): 159–195.
    [9] YANG Xiao, YANG Dingcheng, DONG Yinpeng, et al. Delving into the adversarial robustness on face recognition[EB/OL]. https://arxiv.org/pdf/2007.04118v1.pdf, 2022.
    [10] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139–144. doi: 10.1145/3422622
    [11] XIAO Chaowei, LI Bo, ZHU Junyan, et al. Generating adversarial examples with adversarial networks[C]. The 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 2018: 3905–3911.
    [12] YANG Lu, SONG Qing, and WU Yingqi. Attacks on state-of-the-art face recognition using attentional adversarial attack generative network[J]. Multimedia Tools and Applications, 2021, 80(1): 855–875. doi: 10.1007/s11042-020-09604-z
    [13] QIU Haonan, XIAO Chaowei, YANG Lei, et al. SemanticAdv: Generating adversarial examples via attribute-conditioned image editing[C]. 2020 16th European Conference on Computer Vision, Glasgow, UK, 2020: 19–37.
    [14] JOSHI A, MUKHERJEE A, SARKAR S, et al. Semantic adversarial attacks: Parametric transformations that fool deep classifiers[C]. The 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 4773–4783.
    [15] MIRJALILI V, RASCHKA S, and ROSS A. PrivacyNet: Semi-adversarial networks for multi-attribute face privacy[J]. IEEE Transactions on Image Processing, 2020, 29: 9400–9412. doi: 10.1109/TIP.2020.3024026
    [16] ZHU Zheng’an, LU Yunzhong, and CHIANG C K. Generating adversarial examples by makeup attacks on face recognition[C]. 2019 IEEE International Conference on Image Processing, Taipei, China, 2019: 2516–2520.
    [17] DEB D, ZHANG Jianbang, and JAIN A K. AdvFaces: Adversarial face synthesis[C]. 2020 IEEE International Joint Conference on Biometrics, Houston, USA, 2020: 1–10.
    [18] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011–2023. doi: 10.1109/TPAMI.2019.2913372
    [19] SHARMA Y, DING G W, and BRUBAKER M A. On the effectiveness of low frequency perturbations[C]. Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 2019: 3389–3396.
    [20] YI Dong, LEI Zhen, LIAO Shengcai, et al. Learning face representation from scratch[EB/OL]. https://arxiv.org/abs/1411.7923.pdf, 2022.
    [21] HUANG G B, MATTAR M, BERG T, et al. Labeled faces in the wild: A database for studying face recognition in unconstrained environments[EB/OL]. http://vis-www.cs.umass.edu/papers/lfw.pdf, 2022.
Figures (6) / Tables (8)
Metrics
  • Article views: 655
  • Full-text HTML views: 359
  • PDF downloads: 133
  • Cited by: 0
Publication history
  • Received: 2022-03-31
  • Revised: 2022-08-26
  • Accepted: 2022-09-06
  • Available online: 2022-09-09
  • Issue published: 2023-05-10
