
Character-level Adversarial Samples Generation Approach for Chinese Text Classification

ZHANG Shunxiang, WU Houyue, ZHU Guangli, XU Xin, SU Mingxing

Citation: ZHANG Shunxiang, WU Houyue, ZHU Guangli, XU Xin, SU Mingxing. Character-level Adversarial Samples Generation Approach for Chinese Text Classification[J]. Journal of Electronics & Information Technology, 2023, 45(6): 2226-2235. doi: 10.11999/JEIT220563


doi: 10.11999/JEIT220563
Details
    Author biographies:

    ZHANG Shunxiang: male, professor; research interests: affective computing, person-relation mining

    WU Houyue: male, M.S. candidate; research interests: adversarial sample generation, relation extraction

    ZHU Guangli: female, associate professor; research interests: affective computing, complex network analysis

    XU Xin: male, M.S. candidate; research interests: natural language processing, causal relation extraction

    SU Mingxing: male, M.S. candidate; research interests: natural language processing, affective computing

    Corresponding author:

    ZHANG Shunxiang, sxzhang@aust.edu.cn

  • CLC number: TN915.08; TP391.1

Character-level Adversarial Samples Generation Approach for Chinese Text Classification

Funds: The National Natural Science Foundation of China (62076006), The University Synergy Innovation Program of Anhui Province (GXXT-2021-008), The Graduate Students Scientific Research Project of Anhui Province (YJS20210402)
  • Abstract: Adversarial sample generation is a technique that adds small perturbations to inputs so that a neural network makes wrong predictions; it can be used to probe the robustness of text classification models. Existing adversarial sample generation methods for Chinese mainly rely on substitutions such as traditional-character and homophone replacement, which introduce large perturbations and produce low-quality adversarial samples. To address these problems, this paper proposes a character-level adversarial sample generation method (PGAS) that produces high-quality adversarial samples under small perturbations by substituting polyphonic characters. First, a polyphone dictionary is built and the polyphonic characters are annotated; then the polyphonic characters in the input text are replaced; finally, adversarial attack experiments are conducted in the black-box setting. Experiments on several sentiment classification datasets, against a range of recent classification models, verify the effectiveness of the method.
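The three-step pipeline in the abstract (dictionary lookup, polyphone substitution, black-box querying) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the dictionary entries, the `toy_classifier` stand-in for the black-box model, and the greedy one-character search are all assumptions made for the example.

```python
# Sketch of a PGAS-style black-box attack:
# (1) look up polyphonic characters in a dictionary,
# (2) substitute them one at a time,
# (3) query the black-box classifier and keep the perturbation if the label flips.

# (1) A tiny hand-made polyphone dictionary: character -> candidate substitutes.
# Real entries would come from an annotated polyphone dictionary.
POLYPHONE_DICT = {
    "行": ["荇"],  # hypothetical substitute sharing a pronunciation
    "好": ["郝"],
}

def toy_classifier(text):
    """Stand-in for the black-box model: returns 1 (positive) if '好' appears."""
    return 1 if "好" in text else 0

def pgas_attack(text, classifier):
    """Greedily substitute one polyphonic character at a time; stop on label flip."""
    original_label = classifier(text)
    for i, ch in enumerate(text):
        for sub in POLYPHONE_DICT.get(ch, []):
            candidate = text[:i] + sub + text[i + 1:]
            if classifier(candidate) != original_label:
                return candidate  # successful adversarial sample
    return None  # attack failed under this one-substitution budget

adv = pgas_attack("服务很好", toy_classifier)
print(adv)  # 服务很郝 — a single-character perturbation flips the toy label
```

The greedy loop mirrors the black-box setting: only the classifier's output labels are queried, never its gradients.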
  • Figure 1  Overall framework of the PGAS model

    Figure 2  Example of the replacement-vector description in the PGAS algorithm

    Figure 3  Transfer of pronunciation 1 and pronunciation 2 in the coordinate system

    Figure 4  Three pinyin-variation cases of polyphonic characters

    Figure 5  WMD and IMD distributions of the adversarial samples generated by different methods

    Table 1  Experimental datasets

    | Item | Hotel reviews | Weibo comments | Product reviews |
    | Task type | Sentiment polarity classification | Sentiment polarity classification | Sentiment polarity classification |
    | Number of classes | 2 | 2 | 2 |
    | Training set (items) | 4120 | 70000 | 42130 |
    | Test set (items) | 1766 | 30000 | 18056 |
    | Polyphonic characters (count) | 255695 | 2739145 | 66585441 |

    Table 2  Comparison results on the hotel review dataset (%)

    | Model | No attack | WordHandling | | CWordAttacker | | DeepWordBug | | FastWordBug | | PGAS (ours) | |
    | | Acc. | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop |
    | SVM | 76.35 | 72.19 | 4.16 | 71.03 | 5.32 | 69.18 | 7.17 | 70.15 | 6.20 | 52.36 | 23.99 |
    | LSTM | 83.21 | 76.25 | 6.96 | 74.29 | 8.92 | 72.51 | 10.70 | 75.22 | 7.99 | 62.17 | 21.04 |
    | MemNet | 77.12 | 70.31 | 6.81 | 72.59 | 4.53 | 70.15 | 6.97 | 69.19 | 7.93 | 58.63 | 18.49 |
    | IAN | 86.31 | 81.25 | 5.06 | 83.26 | 3.05 | 78.32 | 7.99 | 78.29 | 8.02 | 64.92 | 21.39 |
    | AOA | 79.91 | 71.26 | 8.65 | 73.29 | 6.62 | 68.25 | 11.66 | 70.53 | 9.38 | 60.15 | 19.76 |
    | AEN-GloVe | 86.32 | 79.81 | 6.51 | 81.07 | 5.25 | 77.16 | 9.16 | 80.09 | 6.23 | 68.37 | 17.95 |
    | LSTM+SynATT | 88.61 | 83.59 | 5.02 | 82.56 | 6.05 | 78.39 | 10.22 | 81.37 | 7.24 | 61.84 | 26.77 |
    | TD-GAT | 78.36 | 72.20 | 6.16 | 73.21 | 5.15 | 72.19 | 6.17 | 71.24 | 7.12 | 60.23 | 18.13 |
    | ASGCN | 82.97 | 77.18 | 5.79 | 77.41 | 5.56 | 71.05 | 11.92 | 73.08 | 9.89 | 61.08 | 21.89 |
    | CNN | 82.36 | 74.21 | 8.15 | 76.38 | 5.98 | 69.91 | 12.45 | 69.51 | 12.85 | 59.39 | 22.97 |
    | pos-ACNN-CNN | 76.28 | 70.15 | 6.13 | 72.53 | 3.75 | 68.25 | 8.03 | 66.19 | 10.09 | 58.18 | 18.10 |
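In Tables 2–4, each "Drop" column is simply the clean (no-attack) accuracy minus the post-attack accuracy for that method. A quick sanity check of this arithmetic on the SVM row of Table 2:

```python
# "Drop" = clean accuracy - post-attack accuracy (SVM row, Table 2).
clean_acc = 76.35
attacked = {"WordHandling": 72.19, "CWordAttacker": 71.03,
            "DeepWordBug": 69.18, "FastWordBug": 70.15, "PGAS": 52.36}
drops = {method: round(clean_acc - acc, 2) for method, acc in attacked.items()}
print(drops["PGAS"])  # 23.99, matching the table's largest drop
```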

    Table 3  Comparison results on the Weibo comment dataset (%)

    | Model | No attack | WordHandling | | CWordAttacker | | DeepWordBug | | FastWordBug | | PGAS (ours) | |
    | | Acc. | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop |
    | SVM | 74.25 | 68.32 | 5.93 | 69.04 | 5.21 | 66.03 | 8.22 | 63.51 | 10.74 | 53.21 | 21.04 |
    | LSTM | 79.66 | 72.58 | 7.08 | 71.22 | 8.44 | 68.25 | 11.41 | 69.79 | 9.87 | 59.74 | 19.92 |
    | MemNet | 73.28 | 66.82 | 6.46 | 65.39 | 7.89 | 64.51 | 8.77 | 59.21 | 14.07 | 54.09 | 19.19 |
    | IAN | 80.39 | 74.41 | 5.98 | 76.28 | 4.11 | 73.28 | 7.11 | 74.07 | 6.32 | 59.74 | 20.65 |
    | AOA | 77.21 | 68.25 | 8.96 | 63.05 | 14.16 | 62.89 | 14.32 | 64.19 | 13.02 | 53.66 | 23.55 |
    | AEN-GloVe | 85.31 | 74.29 | 11.02 | 73.08 | 12.23 | 74.85 | 10.46 | 76.28 | 9.03 | 66.23 | 19.08 |
    | LSTM+SynATT | 89.07 | 72.14 | 16.93 | 75.44 | 13.63 | 77.60 | 11.47 | 81.18 | 7.89 | 67.04 | 22.03 |
    | TD-GAT | 83.06 | 76.33 | 6.73 | 73.98 | 9.08 | 72.56 | 10.50 | 74.61 | 8.45 | 54.39 | 28.67 |
    | ASGCN | 80.17 | 69.19 | 10.98 | 71.04 | 9.13 | 69.04 | 11.13 | 62.88 | 17.29 | 56.18 | 23.99 |
    | CNN | 76.33 | 68.38 | 7.95 | 66.37 | 9.96 | 69.71 | 6.62 | 69.44 | 6.89 | 57.20 | 19.13 |
    | pos-ACNN-CNN | 70.94 | 61.25 | 9.69 | 59.37 | 11.57 | 61.43 | 9.51 | 60.07 | 10.87 | 59.33 | 11.61 |

    Table 4  Comparison results on the product review dataset (%)

    | Model | No attack | WordHandling | | CWordAttacker | | DeepWordBug | | FastWordBug | | PGAS (ours) | |
    | | Acc. | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop | Acc. | Drop |
    | SVM | 73.28 | 64.21 | 9.07 | 66.07 | 7.21 | 63.19 | 10.09 | 65.03 | 8.25 | 54.64 | 18.64 |
    | LSTM | 77.04 | 69.07 | 7.97 | 67.45 | 9.59 | 68.17 | 8.87 | 68.29 | 8.75 | 53.21 | 23.83 |
    | MemNet | 82.36 | 73.85 | 8.51 | 71.04 | 11.32 | 73.22 | 9.14 | 72.94 | 9.42 | 63.02 | 19.34 |
    | IAN | 74.07 | 62.25 | 11.82 | 66.38 | 7.69 | 65.31 | 8.76 | 65.83 | 8.24 | 56.41 | 17.66 |
    | AOA | 78.25 | 69.44 | 8.81 | 68.51 | 9.74 | 67.14 | 11.11 | 68.07 | 10.18 | 54.20 | 24.05 |
    | AEN-GloVe | 81.33 | 70.03 | 11.30 | 73.09 | 8.24 | 70.25 | 11.08 | 72.55 | 8.78 | 55.97 | 25.36 |
    | LSTM+SynATT | 85.60 | 76.49 | 9.11 | 78.21 | 7.39 | 73.21 | 12.39 | 74.54 | 11.06 | 61.08 | 24.52 |
    | TD-GAT | 84.92 | 72.17 | 12.75 | 75.60 | 9.32 | 75.09 | 9.83 | 74.60 | 10.32 | 62.04 | 22.88 |
    | ASGCN | 83.64 | 76.05 | 7.59 | 79.03 | 4.61 | 74.32 | 9.32 | 76.59 | 7.05 | 64.29 | 19.35 |
    | CNN | 75.91 | 63.21 | 12.70 | 64.24 | 11.67 | 65.39 | 10.52 | 65.02 | 10.89 | 59.31 | 16.60 |
    | pos-ACNN-CNN | 86.49 | 72.71 | 13.78 | 77.30 | 9.19 | 79.02 | 7.47 | 72.74 | 13.75 | 68.03 | 18.46 |

    Table 5  WMD and IMD distributions of the adversarial samples generated by different methods (items)

    | Metric | Range | WordHandling | CWordAttacker | DeepWordBug | FastWordBug | PGAS (ours) |
    | WMD | 0-0.2 | 21 | 36 | 12 | 7 | 2000 |
    | | 0.2-0.4 | 623 | 380 | 272 | 253 | 0 |
    | | 0.4-0.6 | 860 | 789 | 325 | 397 | 0 |
    | | 0.6-0.8 | 256 | 473 | 531 | 529 | 0 |
    | | 0.8-1 | 240 | 266 | 860 | 814 | 0 |
    | IMD | 0-0.2 | 1630 | 1368 | 1409 | 1091 | 364 |
    | | 0.2-0.4 | 341 | 269 | 468 | 680 | 1352 |
    | | 0.4-0.6 | 29 | 182 | 90 | 197 | 235 |
    | | 0.6-0.8 | 0 | 181 | 33 | 25 | 49 |
    | | 0.8-1 | 0 | 0 | 0 | 7 | 0 |
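Table 5 groups each method's adversarial samples into five equal-width distance bins (0-0.2 up to 0.8-1) by their WMD/IMD distance from the original text. A minimal sketch of that binning step, with made-up distance values; in practice the distances would come from a Word Mover's Distance implementation over word embeddings:

```python
import numpy as np

# Illustrative distances between originals and their adversarial samples.
distances = np.array([0.05, 0.15, 0.30, 0.55, 0.72, 0.95])

# Five equal-width bins matching the rows of Table 5.
bins = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
counts, _ = np.histogram(distances, bins=bins)
print(counts.tolist())  # [2, 1, 1, 1, 1]
```

Smaller distances mean the adversarial text stays closer to the original, which is why a concentration in the 0-0.2 WMD bin indicates a low-perturbation attack.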
Publication history
  • Received: 2022-05-07
  • Revised: 2022-07-09
  • Published online: 2022-07-14
  • Issue published: 2023-06-10
