Candidate Label-Aware Partial Label Learning Algorithm

Hongchang CHEN, Tian XIE, Chao GAO, Shaomei LI, Ruiyang HUANG

Citation: Hongchang CHEN, Tian XIE, Chao GAO, Shaomei LI, Ruiyang HUANG. Candidate Label-Aware Partial Label Learning Algorithm[J]. Journal of Electronics & Information Technology, 2019, 41(10): 2516-2524. doi: 10.11999/JEIT181059


doi: 10.11999/JEIT181059
Funds: The National Natural Science Foundation of China (61601513)
Details
    About the authors:

    Hongchang CHEN: Male, born in 1964, professor and Ph.D. supervisor. His research interests include communication and information systems, and big data processing and analysis

    Tian XIE: Male, born in 1994, master's student. His research interest is machine learning

    Chao GAO: Male, born in 1982, Ph.D. His research interests include computer vision and machine learning

    Shaomei LI: Female, born in 1982, Ph.D. Her research interests include computer vision and machine learning

    Ruiyang HUANG: Male, born in 1986, Ph.D. His research interest is network big data analysis

    Corresponding author:

    Tian XIE, xietianxt@foxmail.com

  • CLC number: TP18

  • Abstract: In partial label learning, the ground-truth label of each instance is hidden in a set of candidate labels. When measuring the similarity between instances, existing partial label learning algorithms rely only on instance features and make no use of the information carried by the candidate label sets. This paper proposes a Candidate Label-Aware Partial Label Learning (CLAPLL) algorithm, which effectively incorporates candidate label set information into the similarity measure during graph construction. First, the similarity between the candidate label sets of instances is computed based on the Jaccard distance and on linear reconstruction. Then, a similarity graph is built by combining instance similarity with label-set similarity, and learning and prediction are performed with existing graph-based partial label learning algorithms. Experimental results on 3 synthetic datasets and 6 real-world datasets show that, compared with the baseline algorithms, the proposed method improves disambiguation accuracy by 0.3%~16.5% and classification accuracy by 0.2%~2.8%.
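For readers unfamiliar with the Jaccard measure applied to candidate label sets, the following minimal Python sketch illustrates the idea; the direct set-based formulation and the variable names are illustrative assumptions, not a reproduction of the paper's Eq. (3).

```python
def jaccard_label_similarity(S_i, S_j):
    """Jaccard similarity between two candidate label sets:
    |intersection| / |union| (0.0 when both sets are empty)."""
    S_i, S_j = set(S_i), set(S_j)
    union = S_i | S_j
    return len(S_i & S_j) / len(union) if union else 0.0

# Example: two instances sharing 2 of 3 distinct candidate labels.
print(jaccard_label_similarity({0, 3}, {0, 3, 7}))  # 0.666...
```

A larger overlap between two candidate label sets yields a value closer to 1, which is what lets the label-set information reinforce (or weaken) the purely feature-based similarity between instances.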
  • Figure 1  Disambiguation effect of using candidate label set information

    Figure 2  Disambiguation accuracy with varying parameter $p$

    Figure 3  Classification accuracy with varying parameter $p$

    Figure 4  Disambiguation accuracy with varying parameter $r$

    Figure 5  Classification accuracy with varying parameter $r$

    Figure 6  Disambiguation accuracy with varying parameter $\alpha$

    Figure 7  Classification accuracy with varying parameter $\alpha$

    Figure 8  Disambiguation accuracy with varying parameter $k$

    Figure 9  Classification accuracy with varying parameter $k$

    Table 1  Pseudocode of the candidate label-aware partial label learning algorithm

     Input: partial label dataset $D = \{(X_i, S_i) \mid 1 \le i \le m\}$, number of nearest neighbors $k$, label similarity weight $\alpha$
     Training stage:
     1 Apply Z-score normalization to the feature matrix $\mathbf{X} \in \mathbb{R}^{m \times d}$;
     2 Compute $\mathbf{w}_j$ according to Eq. (1);
     3 Build the instance similarity graph $G_i(V, E)$ from $\mathbf{w}_j$;
     4 switch v
       case Jaccard: compute $\mathbf{u}_j$ according to Eq. (3) and build the candidate label set similarity graph $G_c(i, j)$ (CAP-J algorithm);
       case linear: compute $\mathbf{u}_j$ according to Eq. (4) and build the candidate label set similarity graph $G_c(i, j)$ (CAP-L algorithm);
       end switch
     5 Compute the final similarity graph $G(i, j)$ according to Eq. (7);
     6 Perform disambiguation with an existing graph-based partial label learning algorithm to obtain the disambiguation result $\hat{D} = \{(X_i, \hat{y}_i) \mid 1 \le i \le m\}$;
     Testing stage:
     7 For an unseen instance $x^*$, compute its classification result according to Eq. (8);
     Output: disambiguation result $\hat{D} = \{(X_i, \hat{y}_i) \mid 1 \le i \le m\}$ and classification result $y^*$.
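As a rough, non-authoritative illustration of the training stage (steps 1-5, CAP-J variant), the Python sketch below builds the combined similarity graph. The Gaussian-weighted kNN feature graph and the convex combination with weight $\alpha$ are assumptions standing in for Eqs. (1), (3) and (7), which are not reproduced on this page.

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_similarity_graph(X, S, k=10, alpha=0.5):
    """Sketch of the CAP-J training-stage graph construction (steps 1-5).

    X : (m, d) feature matrix; S : list of m candidate label sets.
    The kNN weighting and the (1-alpha)/alpha combination are illustrative
    choices, not the paper's exact formulas.
    """
    m = X.shape[0]
    # Step 1: Z-score normalization of the features.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

    # Steps 2-3: Gaussian-weighted k-nearest-neighbour feature graph.
    dist = cdist(X, X)
    sigma = dist.mean()
    G_feat = np.zeros((m, m))
    for i in range(m):
        nn = np.argsort(dist[i])[1:k + 1]          # skip the point itself
        G_feat[i, nn] = np.exp(-dist[i, nn] ** 2 / (2 * sigma ** 2))

    # Step 4 (CAP-J): Jaccard similarity between candidate label sets,
    # evaluated on the same neighbourhoods.
    G_label = np.zeros((m, m))
    for i in range(m):
        for j in np.nonzero(G_feat[i])[0]:
            union = len(S[i] | S[j])
            G_label[i, j] = len(S[i] & S[j]) / union if union else 0.0

    # Step 5: combine the two graphs into the final similarity graph.
    return (1 - alpha) * G_feat + alpha * G_label
```

The resulting graph would then be handed to an existing graph-based partial label learning algorithm (e.g. the kNN-, IPAL- or LALO-style disambiguation used in the experiments) for step 6 and for prediction on unseen instances.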

    Table 2  Complexity comparison between the baseline and the proposed algorithms

    Algorithm | Complexity | Actual complexity
    Baseline algorithm | $O(d^2 n^3 \lg n)$ | $O(d^2 n^3 \lg n)$
    Proposed algorithm (CAP-J) | $O(d^2 n^3 \lg n + (s+1)k^2)$ | $O(d^2 n^3 \lg n)$
    Proposed algorithm (CAP-L) | $O(d^2 n^3 \lg n + (sk+1)k^2)$ | $O(d^2 n^3 \lg n)$

    Table 3  Characteristics of the real-world partial label datasets

    Dataset | #Instances | #Features | #Class labels | Candidate labels (avg / min / max)
    Lost | 1122 | 108 | 16 | 2.23 / 1 / 3
    Birdsong | 4998 | 38 | 13 | 2.18 / 1 / 4
    MSRCv2 | 1758 | 48 | 23 | 3.16 / 1 / 7
    FG-NET | 1002 | 262 | 78 | 7.48 / 2 / 11
    Yahoo! News | 22991 | 163 | 219 | 1.91 / 1 / 5
    Soccer Player | 17472 | 279 | 171 | 2.09 / 1 / 11

    Table 4  Characteristics of the synthetic partial label datasets

    Dataset | #Instances | #Features | #Class labels
    Ecoli | 336 | 7 | 8
    Movement | 360 | 90 | 15
    CTG | 2126 | 21 | 10
    Parameter settings: p = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, r = {1, 2, 3, 4, 5}

    Table 5  Disambiguation accuracy (%, mean±std.) of different algorithms on the real-world partial label datasets

    Algorithm | Lost | MSRCv2 | BirdSong | FG-NET | Soccer Player | Yahoo! News
    PLKNN | 67.54±0.09 | 51.00±0.09 | 68.69±0.04 | 11.06±0.13 | 52.60±0.02 | 66.06±0.02
    CAP-JKNN | 73.60±0.10 | 62.19±0.08 | 77.14±0.04 | 14.71±0.15 | 69.55±0.01 | 80.00±0.02
    CAP-LKNN | 73.38±0.13 | 61.88±0.09 | 76.67±0.04 | 14.81±0.17 | 69.22±0.02 | 79.78±0.05
    PLKNN (supervised) | 84.93±0.04 | 73.07±0.02 | 84.29±0.14 | 14.94±0.05 | 90.65±0.03 | 91.21±0.03
    IPAL | 84.01±0.15 | 70.58±0.15 | 83.61±0.04 | 15.28±0.19 | 67.65±0.03 | 84.99±0.05
    CAP-JIPAL | 85.58±0.17 | 71.25±0.20 | 84.22±0.04 | 15.40±0.19 | 67.94±0.02 | 85.33±0.04
    CAP-LIPAL | 85.39±0.24 | 70.92±0.12 | 84.40±0.05 | 14.86±0.17 | 67.89±0.07 | 85.21±0.03
    IPAL (supervised) | 85.43±0.32 | 76.43±0.22 | 85.92±0.10 | 15.53±0.18 | 71.43±0.05 | 86.43±0.06
    LALO | 75.05±1.24 | 59.42±0.89 | 78.14±0.75 | 15.92±0.69 | – | –
    CAP-JLALO | 76.80±1.11 | 59.48±1.09 | 78.02±0.81 | 15.69±0.75 | – | –
    CAP-LLALO | 80.22±1.08 | 59.72±0.82 | 78.24±0.64 | 15.76±0.94 | – | –
    LALO (supervised) | 84.53±1.53 | 60.04±1.14 | 79.25±0.88 | 16.13±0.62 | – | –

    Table 6  Classification accuracy (%, mean±std.) of different algorithms on the real-world partial label datasets

    Algorithm | Lost | MSRCv2 | BirdSong | FG-NET | Soccer Player | Yahoo! News
    PLKNN | 61.48±0.78 | 44.12±0.36 | 64.66±0.23 | 5.58±0.42 | 49.55±0.04 | 58.30±0.06
    CAP-JKNN | 64.01±0.65 | 46.35±0.38 | 66.01±0.26 | 6.24±0.38 | 50.77±0.09 | 61.18±0.05
    CAP-LKNN | 63.58±0.72 | 46.14±0.48 | 65.88±0.21 | 5.74±0.56 | 50.43±0.09 | 60.50±0.12
    PLKNN (supervised) | 69.26±0.48 | 51.33±0.30 | 68.49±0.13 | 6.98±0.21 | 54.26±0.05 | 61.53±0.08
    IPAL | 73.18±0.79 | 53.08±0.33 | 71.09±0.33 | 5.28±0.55 | 54.84±0.10 | 65.88±0.14
    CAP-JIPAL | 73.95±0.68 | 53.35±0.50 | 71.34±0.30 | 5.45±0.60 | 55.00±0.10 | 66.02±0.16
    CAP-LIPAL | 73.44±0.68 | 52.61±0.71 | 71.60±0.26 | 5.89±0.57 | 54.46±0.18 | 66.02±0.18
    IPAL (supervised) | 75.04±0.82 | 55.71±0.46 | 72.05±0.27 | 5.95±0.62 | 55.38±0.13 | 66.83±0.15
    LALO | 72.15±3.04 | 50.13±2.03 | 72.99±1.54 | 6.11±1.61 | – | –
    CAP-JLALO | 73.02±2.88 | 49.23±2.10 | 73.00±1.62 | 5.96±1.19 | – | –
    CAP-LLALO | 74.84±2.20 | 50.27±3.19 | 73.37±1.50 | 6.76±1.64 | – | –
    LALO (supervised) | 76.68±2.19 | 52.31±2.49 | 74.87±1.26 | 7.03±1.29 | – | –
Publication history
  • Received: 2018-11-20
  • Revised: 2019-04-21
  • Available online: 2019-05-16
  • Published: 2019-10-01
