Membership Inference Attacks Based on Graph Neural Network Model Calibration

XIE Lixia, SHI Jingchen, YANG Hongyu, HU Ze, CHENG Xiang

Citation: XIE Lixia, SHI Jingchen, YANG Hongyu, HU Ze, CHENG Xiang. Membership Inference Attacks Based on Graph Neural Network Model Calibration[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240477

doi: 10.11999/JEIT240477
Funds: The Civil Aviation Joint Research Fund Key Project of the National Natural Science Foundation of China (U2433205), the National Natural Science Foundation of China (62201576, U1833107), and the Jiangsu Provincial Basic Research Program Natural Science Foundation Youth Fund (BK20230558)
Article information
    Author biographies:

    XIE Lixia: Female, Master, Professor. Her research interests include network and information security.

    SHI Jingchen: Male, Master's student. His research interests include artificial intelligence security.

    YANG Hongyu: Male, Ph.D., Professor, Ph.D. supervisor. His research interests include network and system security, software security, and network security situation awareness.

    HU Ze: Male, Ph.D., Lecturer. His research interests include artificial intelligence, natural language processing, and network information security.

    CHENG Xiang: Male, Ph.D., Lecturer. His research interests include network and system security, network security situation awareness, and APT attack detection.

    Corresponding author:

    YANG Hongyu, yhyxlx@hotmail.com

  • CLC number: TN915.08; TP309

  • Abstract: Graph Neural Network (GNN) models are often under-confident in their predictions, which makes membership inference attacks against them difficult to mount and leads to a high attack miss rate. To address this problem, this paper proposes a membership inference attack method based on GNN model calibration (MIAs-MC). First, a causal-inference-based GNN model calibration method is designed, which constructs the causal association graphs used to train GNN models through attention-based causal graph extraction, decoupling of the causal graph from the non-causal graph, a backdoor-path adjustment strategy, and a causal association graph generation process. Second, a shadow GNN model is built from a shadow causal association graph drawn from the same data distribution as the target causal association graph, in order to mimic the prediction behavior of the target GNN model. Finally, the posterior probabilities output by the shadow GNN model are used to build an attack dataset and train the attack model, and the posterior probabilities that the target GNN model outputs for a target node are used to infer whether that node belongs to the training data of the target GNN model. Experimental results on four datasets show that, in two attack modes and against GNN models with different architectures, the proposed method achieves an attack accuracy of up to 92.6% and an attack miss rate as low as 6.7%; these metrics are better than those of the baseline attack methods, showing that the method can effectively carry out membership inference attacks.
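    The attack pipeline summarized above follows the classic shadow-training recipe: the shadow GNN's posteriors on known member and non-member nodes form the attack dataset, an attack model is trained on them, and the target GNN's posteriors are then classified as member or non-member. The following minimal Python sketch illustrates that step; the MLP attack model and the descending sort of each posterior vector are illustrative assumptions, since this page does not specify the attack model's architecture or feature encoding.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_attack_model(member_posteriors, nonmember_posteriors):
        # Build the attack dataset from the shadow GNN's posterior probabilities:
        # label 1 = node was in the shadow model's training graph, 0 = it was not.
        X = np.vstack([member_posteriors, nonmember_posteriors])
        y = np.concatenate([np.ones(len(member_posteriors)),
                            np.zeros(len(nonmember_posteriors))])
        # Sort each posterior vector in descending order so the attack features
        # do not depend on class ordering (a common convention in MIA work).
        X = np.sort(X, axis=1)[:, ::-1]
        attack = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        attack.fit(X, y)
        return attack

    def infer_membership(attack, target_posteriors):
        # Apply the attack model to the target GNN's posteriors for queried nodes.
        X = np.sort(target_posteriors, axis=1)[:, ::-1]
        return attack.predict(X)  # 1 = inferred training member, 0 = non-member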
  • Figure 1  Architecture of the MIAs-MC attack method

    Figure 2  Model calibration method based on causal inference

    Figure 3  Structural causal model in GNNs

    Figure 4  Attack accuracy of MIAs under attack mode 2

    Figure 5  Attack precision of MIAs under attack mode 2

    Figure 6  Attack miss rate of MIAs under attack mode 2

    1  Causal association graph generation algorithm

     Input: the GNN model's initial training subgraph G (with target and shadow training subgraphs Gt, Gs ⊂ G), number of iterations T
     Output: target causal association graph Gtarget, shadow causal association graph Gshadow
     (1) for t = 1 to T
     (2)   Gc, Gu ← Attention(G) // causal graph extraction
     (3)   Lc, Lu ← Decouple(G) // decouple the causal graph from the non-causal graph and generate the corresponding loss terms
     (4)   Lcau ← BackdoorAdjustment(Gc, Gu) // backdoor-path adjustment, generating the backdoor-adjustment loss term
     (5)   L ← Lc + Lu + Lcau // compute the total model loss
     (6)   θt+1 ← Update(θt) // update the model parameters
     (7)   Gt+1 ← Gt // iteratively update the causal attention graph
     (8) end for
     (9) Gtarget, Gshadow ← GT // generate the target causal association graph and the shadow causal association graph
     (10) return the target causal association graph Gtarget and the shadow causal association graph Gshadow
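    The listing above is pseudocode; the loss terms Lc, Lu, Lcau and the Attention / Decouple / BackdoorAdjustment modules are not defined on this page. A rough PyTorch-style sketch of one training iteration (steps (2) to (6)) is given below. It assumes a hypothetical calibration model whose forward pass returns a causal-branch prediction, a non-causal-branch prediction, and an intervened prediction that pairs causal features with shuffled non-causal features, which is one common way to approximate backdoor adjustment; the concrete loss forms are illustrative, not the paper's exact definitions.

    import torch
    import torch.nn.functional as F

    def calibration_step(model, data, optimizer):
        # One iteration of the causal calibration loop (steps (2)-(6) above).
        # Assumed model interface: model(x, edge_index) returns
        #   logits_c - predictions from the causal subgraph branch
        #   logits_u - predictions from the non-causal subgraph branch
        #   logits_i - predictions after the backdoor-style intervention
        model.train()
        optimizer.zero_grad()
        logits_c, logits_u, logits_i = model(data.x, data.edge_index)
        mask, y = data.train_mask, data.y

        # Lc: the causal branch should predict the node labels
        loss_c = F.cross_entropy(logits_c[mask], y[mask])

        # Lu: the non-causal branch should carry no label information,
        # so push its predictive distribution towards uniform
        num_classes = logits_u.size(-1)
        uniform = torch.full_like(logits_u[mask], 1.0 / num_classes)
        loss_u = F.kl_div(F.log_softmax(logits_u[mask], dim=-1), uniform,
                          reduction='batchmean')

        # Lcau: predictions should stay correct after the intervention
        loss_cau = F.cross_entropy(logits_i[mask], y[mask])

        loss = loss_c + loss_u + loss_cau  # the paper may weight these terms
        loss.backward()
        optimizer.step()
        return loss.item()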

    Table 1  Statistics of the datasets

    Dataset    Classes  Nodes   Edges    Node feature dimension  Nodes used
    Cora       7        2 708   5 429    1 433                   2 520
    CiteSeer   6        3 327   4 732    3 703                   2 400
    PubMed     3        19 717  44 338   500                     18 000
    Flickr     7        89 250  449 878  500                     42 000
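    This page does not state how the datasets were loaded or split. As one common possibility, PyTorch Geometric's built-in Planetoid and Flickr datasets provide graphs matching the statistics in Table 1 (note that PyTorch Geometric counts each undirected edge twice, so its edge counts are double the values above). A minimal sketch:

    from torch_geometric.datasets import Planetoid, Flickr

    # Citation graphs (Cora, CiteSeer, PubMed) and the Flickr graph from Table 1
    datasets = {
        'Cora': Planetoid(root='data/Planetoid', name='Cora'),
        'CiteSeer': Planetoid(root='data/Planetoid', name='CiteSeer'),
        'PubMed': Planetoid(root='data/Planetoid', name='PubMed'),
        'Flickr': Flickr(root='data/Flickr'),
    }
    for name, ds in datasets.items():
        g = ds[0]
        # num_edges counts directed edges, i.e. twice the undirected edge count
        print(name, ds.num_classes, g.num_nodes, g.num_edges, g.num_node_features)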

    Table 2  Attack results of MIAs-MC under attack mode 1

    Dataset  GNN architecture  Accuracy  Precision  AUC  Recall  F1-score
    Cora GCN 0.926 0.920 0.912 0.913 0.912
    GAT 0.911 0.914 0.910 0.911 0.911
    GraphSAGE 0.905 0.908 0.904 0.905 0.905
    SGC 0.914 0.923 0.915 0.914 0.914
    CiteSeer GCN 0.918 0.912 0.917 0.918 0.918
    GAT 0.857 0.879 0.857 0.857 0.855
    GraphSAGE 0.933 0.936 0.931 0.933 0.933
    SGC 0.930 0.938 0.929 0.930 0.930
    PubMed GCN 0.750 0.784 0.750 0.751 0.743
    GAT 0.642 0.686 0.643 0.642 0.621
    GraphSAGE 0.748 0.754 0.747 0.748 0.748
    SGC 0.690 0.702 0.691 0.690 0.690
    Flickr GCN 0.841 0.846 0.841 0.841 0.841
    GAT 0.786 0.801 0.787 0.786 0.785
    GraphSAGE 0.732 0.764 0.732 0.732 0.725
    SGC 0.907 0.916 0.908 0.907 0.907
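    For reference, the columns in Tables 2 and 3 are the standard binary classification metrics of the attack model, with member nodes as the positive class; the miss rate reported in the abstract and Figure 6 is taken here to be the false-negative rate, i.e. 1 - Recall. A small helper, assuming scikit-learn:

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score)

    def attack_metrics(y_true, y_pred, y_score):
        # y_true:  1 = member of the target GNN's training data, 0 = non-member
        # y_pred:  hard membership decisions of the attack model
        # y_score: predicted membership probability (used for AUC)
        recall = recall_score(y_true, y_pred)
        return {
            'Accuracy': accuracy_score(y_true, y_pred),
            'Precision': precision_score(y_true, y_pred),
            'AUC': roc_auc_score(y_true, y_score),
            'Recall': recall,
            'F1-score': f1_score(y_true, y_pred),
            'Miss rate': 1.0 - recall,
        }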

    Table 3  Attack results of the baseline attack method under attack mode 1

    Dataset  GNN architecture  Accuracy  Precision  AUC  Recall  F1-score
    Cora GCN 0.763 0.770 0.764 0.763 0.763
    GAT 0.721 0.728 0.718 0.721 0.720
    GraphSAGE 0.825 0.837 0.825 0.825 0.824
    SGC 0.806 0.812 0.808 0.806 0.807
    CiteSeer GCN 0.860 0.865 0.859 0.860 0.860
    GAT 0.772 0.775 0.769 0.772 0.771
    GraphSAGE 0.858 0.875 0.859 0.858 0.827
    SGC 0.863 0.868 0.862 0.863 0.863
    PubMed GCN 0.647 0.655 0.647 0.647 0.647
    GAT 0.593 0.612 0.593 0.593 0.580
    GraphSAGE 0.554 0.560 0.553 0.554 0.553
    SGC 0.664 0.685 0.665 0.664 0.658
    Flickr GCN 0.774 0.805 0.775 0.774 0.769
    GAT 0.601 0.613 0.602 0.601 0.598
    GraphSAGE 0.689 0.755 0.688 0.689 0.668
    SGC 0.877 0.893 0.878 0.877 0.876

    Table 4  Accuracy differences between the shadow model and the target model on the Cora dataset (%)

    GNN architecture  Training acc. difference (baseline attack)  Test acc. difference (baseline attack)  Training acc. difference (after calibration)  Test acc. difference (after calibration)
    GCN        0.32  3.97  0.79  0.95
    GAT        3.65  1.99  1.91  2.22
    GraphSAGE  0.32  4.92  0.16  0.80
    SGC        0.66  0.47  1.11  1.70

    Table 5  Accuracy differences between the shadow model and the target model on the PubMed dataset (%)

    GNN architecture  Training acc. difference (baseline attack)  Test acc. difference (baseline attack)  Training acc. difference (after calibration)  Test acc. difference (after calibration)
    GCN        1.45  0.75  1.15  0.14
    GAT        0.36  1.15  0.82  0.51
    GraphSAGE  0.20  5.12  0.13  3.15
    SGC        1.58  0.60  0.73  0.56
Publication history
  • Received: 2024-06-12
  • Revised: 2025-02-17
  • Published online: 2025-02-26
