Person Re-identification Based on Attribute Hierarchy Recognition

Hongchang CHEN, Yancheng WU, Shaomei LI, Chao GAO

Citation: Hongchang CHEN, Yancheng WU, Shaomei LI, Chao GAO. Person Re-identification Based on Attribute Hierarchy Recognition[J]. Journal of Electronics & Information Technology, 2019, 41(9): 2239-2246. doi: 10.11999/JEIT180740


doi: 10.11999/JEIT180740
Funds: The National Natural Science Foundation of China (61601513)
Details
    Author biographies:

    Hongchang CHEN: Male, born in 1964, professor and Ph.D. supervisor. His research interests include communication and information systems, and computer vision

    Yancheng WU: Male, born in 1994, master's student. His research interest is computer vision

    Shaomei LI: Female, born in 1982, Ph.D., lecturer. Her research interests include communication and information systems, and computer vision

    Chao GAO: Male, born in 1982, Ph.D., lecturer. His research interests include communication and information systems, and computer vision

    Corresponding author:

    Yancheng WU, wuyc1994@163.com

  • CLC number: TP391.41

  • Abstract: To improve the performance of person re-identification, this paper proposes an attention-based neural network model that recognizes pedestrian attributes level by level. Compared with existing algorithms, the model has three main advantages. First, in the feature-extraction part of the network, an attention model designed for pedestrian-attribute recognition extracts attribute information together with each attribute's saliency. Second, in the recognition part of the network, the attention model recognizes attributes hierarchically, according to their saliency and the amount of information they carry. Third, the correlations between attributes are analyzed, and the recognition results of one level are used to adjust the recognition strategy of the next level, which improves the recognition accuracy of small-target attributes and, in turn, the accuracy of person re-identification. Experimental results show that, compared with existing methods, the proposed model effectively improves the Rank-1 accuracy of person re-identification, reaching 93.1% on the Market1501 dataset and 81.7% on the DukeMTMC dataset.
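    The per-attribute attention branch described in the abstract can be pictured with a short sketch. The following PyTorch-style code is a minimal illustration only, not the authors' released implementation; the module and variable names (SpatialAttention, AttributeHead) are assumptions introduced here for readability.

        import torch
        import torch.nn as nn

        class SpatialAttention(nn.Module):
            # Predicts a per-location weight map and reweights the backbone
            # feature map, so each attribute classifier can focus on its own
            # salient region.
            def __init__(self, in_channels):
                super().__init__()
                self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)
                self.sigmoid = nn.Sigmoid()

            def forward(self, feat):                  # feat: (N, C, H, W)
                mask = self.sigmoid(self.conv(feat))  # (N, 1, H, W) saliency map
                return feat * mask, mask              # attended features + map

        class AttributeHead(nn.Module):
            # One attribute classifier fed by its own attention branch.
            def __init__(self, in_channels, num_classes):
                super().__init__()
                self.attention = SpatialAttention(in_channels)
                self.pool = nn.AdaptiveAvgPool2d(1)
                self.fc = nn.Linear(in_channels, num_classes)

            def forward(self, feat):
                attended, mask = self.attention(feat)
                logits = self.fc(self.pool(attended).flatten(1))
                return logits, mask

    In this reading, the mean activation of each mask would give the saliency score by which attributes are assigned to levels, and feeding one level's predictions into the next level's classifier input would be one way to realize the correlation-driven adjustment; both details are sketched here only at the level of the abstract's description.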
  • Figure 1  Overall network architecture

    Figure 2  Network structure of the attention model

    Figure 3  Recognition results at each level of the hierarchical network

    Figure 4  Correlations among selected attributes in the dataset and their co-occurrence probabilities

    Figure 5  Comparison of network parameters and improvement effects

    Table 1  Attribute categories in the Market1501 dataset

    Attribute class (G)            | Attribute values                                              | Number (k)
    Gender                         | male, female                                                  | 2
    Age                            | child, teenager, adult, old                                   | 4
    Hair length                    | long, short                                                   | 2
    Length of lower-body clothing  | long, short                                                   | 2
    Type of lower-body clothing    | pants, dress                                                  | 2
    Wearing hat                    | yes, no                                                       | 2
    Carrying bag                   | yes, no                                                       | 2
    Carrying backpack              | yes, no                                                       | 2
    Carrying handbag               | yes, no                                                       | 2
    Color of upper-body clothing   | black, white, red, purple, yellow, gray, blue, green          | 8
    Color of lower-body clothing   | black, white, pink, purple, yellow, gray, blue, green, brown  | 9
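    For reference, the attribute groups of Table 1 map directly onto a plain data structure. The sketch below (Python, with illustrative key names not taken from the paper's code) shows how the class count k of each group is simply the length of its value list.

        # Market1501 attribute groups from Table 1; key names are illustrative.
        MARKET1501_ATTRIBUTES = {
            "gender": ["male", "female"],
            "age": ["child", "teenager", "adult", "old"],
            "hair_length": ["long", "short"],
            "lower_body_length": ["long", "short"],
            "lower_body_type": ["pants", "dress"],
            "wearing_hat": ["yes", "no"],
            "carrying_bag": ["yes", "no"],
            "carrying_backpack": ["yes", "no"],
            "carrying_handbag": ["yes", "no"],
            "upper_body_color": ["black", "white", "red", "purple",
                                 "yellow", "gray", "blue", "green"],
            "lower_body_color": ["black", "white", "pink", "purple",
                                 "yellow", "gray", "blue", "green", "brown"],
        }

        # Number of classes k per attribute group, as listed in Table 1.
        num_classes = {group: len(values)
                       for group, values in MARKET1501_ATTRIBUTES.items()}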

    Table 2  Recognition accuracy of each attribute on the Market1501 dataset (%)

    Pedestrian attribute | gender | age   | hair  | L.slv | L.low | S.cloth | B.pack | H.bag | bag   | hat   | C.up  | C.low | mean
    Baseline network     | 82.18  | 85.32 | 80.12 | 92.48 | 71.58 | 85.67   | 79.57  | 81.54 | 79.66 | 70.56 | 91.23 | 87.81 | 82.31
    Proposed algorithm   | 90.27  | 88.15 | 91.54 | 93.55 | 87.25 | 90.48   | 89.77  | 87.65 | 84.67 | 87.39 | 92.44 | 93.48 | 89.72

    Table 3  Recognition accuracy of each attribute on the DukeMTMC dataset (%)

    Pedestrian attribute | gender | hat   | boots | L.up  | B.pack | H.bag | bag   | C.shoes | C.up  | C.low | mean
    Baseline network     | 82.47  | 75.48 | 76.14 | 73.58 | 71.58  | 69.42 | 78.31 | 68.54   | 62.17 | 51.24 | 70.89
    Proposed algorithm   | 83.59  | 87.24 | 84.56 | 76.33 | 77.11  | 75.32 | 83.78 | 72.19   | 74.88 | 62.18 | 77.72

    Table 4  Person re-identification results on the Market1501 dataset (%)

    Method               | Rank-1 | mAP
    XQDA[11]             | 43.8   | 22.2
    SCS[12]              | 51.9   | 26.3
    DNS[13]              | 61.0   | 35.6
    G-SCNN[14]           | 65.8   | 39.5
    MSCAN[15]            | 80.3   | 57.5
    PDC[16]              | 84.1   | 63.4
    JLML[17]             | 85.1   | 65.5
    HA-CNN[8]            | 91.2   | 75.7
    Baseline network     | 82.4   | 61.2
    Baseline network-R1  | 85.7   | 66.7
    Baseline network-R2  | 88.4   | 70.3
    Baseline network-C1  | 86.9   | 68.5
    Proposed algorithm   | 93.1   | 76.2

    Table 5  Person re-identification results on the DukeMTMC dataset (%)

    Method               | Rank-1 | mAP
    BoW+KISSME[18]       | 25.1   | 12.2
    LOMO+XQDA[11]        | 30.8   | 17.0
    ResNet50[19]         | 65.2   | 45.0
    ResNet50+LSRO[20]    | 67.7   | 47.1
    JLML[17]             | 73.3   | 56.4
    HA-CNN[8]            | 80.5   | 63.8
    Baseline network     | 73.6   | 55.7
    Baseline network-R1  | 75.3   | 57.4
    Baseline network-R2  | 77.8   | 60.8
    Baseline network-C1  | 78.3   | 61.2
    Proposed algorithm   | 81.7   | 65.9
  • References
    [1] FARENZENA M, BAZZANI L, PERINA A, et al. Person re-identification by symmetry-driven accumulation of local features[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 2360–2367.
    [2] ZENG Mingyong, WU Zemin, TIAN Chang, et al. Fusing appearance statistical features for person re-identification[J]. Journal of Electronics & Information Technology, 2014, 36(8): 1844–1851. doi: 10.3724/SP.J.1146.2013.01389
    [3] CHEN Hongchang, CHEN Lei, LI Shaomei, et al. Person re-identification of adaptive blocks based on saliency fusion[J]. Journal of Electronics & Information Technology, 2017, 39(11): 2652–2660. doi: 10.11999/JEIT170162
    [4] KÖSTINGER M, HIRZER M, WOHLHART P, et al. Large scale metric learning from equivalence constraints[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 2288–2295.
    [5] LI Wei, ZHAO Rui, XIAO Tong, et al. DeepReID: Deep filter pairing neural network for person re-identification[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 152–159.
    [6] LIU Hao, FENG Jiashi, QI Meibin, et al. End-to-end comparative attention networks for person re-identification[J]. IEEE Transactions on Image Processing, 2017, 26(7): 3492–3506. doi: 10.1109/TIP.2017.2700762
    [7] MATSUKAWA T and SUZUKI E. Person re-identification using CNN features learned from combination of attributes[C]. The 23rd International Conference on Pattern Recognition, Cancun, Mexico, 2016: 2428–2433.
    [8] LI Wei, ZHU Xiatian, and GONG Shaogang. Harmonious attention network for person re-identification[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 2285–2294.
    [9] LIN Yutian, ZHENG Liang, ZHENG Zhedong, et al. Improving person re-identification by attribute and identity learning[EB/OL]. http://arxiv.org/abs/1703.07220, 2017.
    [10] XU K, BA J, KIROS R, et al. Show, attend and tell: Neural image caption generation with visual attention[C]. 2015 International Conference on Machine Learning, New York, USA, 2015: 2048–2057.
    [11] LIAO Shengcai, HU Yang, ZHU Xiangyu, et al. Person re-identification by local maximal occurrence representation and metric learning[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 2197–2206.
    [12] CHEN Dapeng, YUAN Zejian, CHEN Badong, et al. Similarity learning with spatial constraints for person re-identification[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 1268–1277.
    [13] ZHANG Li, XIANG Tao, and GONG Shaogang. Learning a discriminative null space for person re-identification[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 1239–1248.
    [14] VARIOR R R, HALOI M, and WANG Gang. Gated Siamese convolutional neural network architecture for human re-identification[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 791–808.
    [15] LI Dangwei, CHEN Xiaotang, ZHANG Zhang, et al. Learning deep context-aware features over body and latent parts for person re-identification[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 7398–7407.
    [16] SU Chi, LI Jianing, ZHANG Shiliang, et al. Pose-driven deep convolutional model for person re-identification[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 3980–3989.
    [17] LI Wei, ZHU Xiatian, and GONG Shaogang. Person re-identification by deep joint learning of multi-loss classification[C]. The 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 2017: 2194–2200.
    [18] WANG Hanxiao, GONG Shaogang, and XIANG Tao. Highly efficient regression for scalable person re-identification[EB/OL]. http://arxiv.org/abs/1612.01341, 2016.
    [19] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [20] ZHENG Zhedong, ZHENG Liang, and YANG Yi. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 3774–3782.
Publication history
  • Received: 2018-07-20
  • Revised: 2019-03-03
  • Available online: 2019-04-17
  • Issue published: 2019-09-10
