
Multi-scale Cross-Modality Person Re-identification Method Based on Shared Subspace Features

WANG Fengsui, YAN Tao, LIU Furong, QIAN Yaping, XU Yue

Citation: WANG Fengsui, YAN Tao, LIU Furong, QIAN Yaping, XU Yue. Multi-scale Cross-Modality Person Re-identification Method Based on Shared Subspace Features[J]. Journal of Electronics & Information Technology, 2023, 45(1): 325-334. doi: 10.11999/JEIT211212


doi: 10.11999/JEIT211212
Details
    About the authors:

    WANG Fengsui: Male, Ph.D., Associate Professor, Master's supervisor. His research interests include image and video information processing and computer vision

    YAN Tao: Male, Master's student. His research interests include image processing and pattern recognition

    LIU Furong: Female, Master's student. Her research interests include image processing and pattern recognition

    QIAN Yaping: Female, Master's student. Her research interests include image processing and pattern recognition

    XU Yue: Female, Master's student. Her research interests include image processing and pattern recognition

    Corresponding author:

    WANG Fengsui, fswang@ahpu.edu.cn

  • CLC number: TN911.73; TP391.4


Funds: The Natural Science Foundation of Anhui Province, China (2108085MF197, 1708085MF154), The Natural Science Foundation of the Anhui Higher Education Institutions of China (KJ2019A0162), The Open Research Fund of Anhui Key Laboratory of Detection Technology and Energy Saving Devices, Anhui Polytechnic University (DTESD2020B02), The Graduate Science Foundation of the Anhui Higher Education Institutions of China (YJS20210448, YJS20210449)
  • Abstract: Cross-modality person re-identification (Re-ID) is a highly challenging problem for intelligent surveillance systems. Existing cross-modality methods mainly learn discriminative modality-shared features from either global or local representations, and few attempt to fuse the two. This paper proposes a new Multi-granularity Shared-Feature Fusion (MSFF) network that combines global and local features to learn representations of both modalities at different granularities. Multi-scale, multi-level features are extracted from the backbone, and the coarse-grained information of the global representation cooperates with the fine-grained information of the local representations to form more discriminative feature descriptors. In addition, to help the network extract more effective shared features, an improved subspace shared-feature module is proposed for the embedding of the two modalities, replacing the conventional modality-specific feature-weight embedding. The module is placed early in the backbone, so that the features of the two modalities are mapped into the same subspace and the backbone produces richer shared weights. Experimental results on two public datasets demonstrate the effectiveness of the proposed method: in the most difficult all-search single-shot mode on the SYSU-MM01 dataset, the mean average precision (mAP) reaches 60.62%.
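The subspace shared-feature idea described in the abstract — mapping both modalities into one subspace with a single weight matrix shared across modalities, placed before the backbone — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation; all dimensions and function names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: each modality-specific stem embeds its input into
# C_IN dims; one shared matrix then projects into the common subspace.
D_IN, C_IN, C_SHARED = 32, 64, 128

def modality_stem(x, w):
    """Modality-specific embedding (a plain linear map in this sketch)."""
    return x @ w

# Separate stems for the visible and infrared modalities...
w_vis = rng.standard_normal((D_IN, C_IN)) * 0.1
w_ir = rng.standard_normal((D_IN, C_IN)) * 0.1
# ...but ONE projection shared by both, defining the common subspace.
w_shared = rng.standard_normal((C_IN, C_SHARED)) * 0.1

def shared_subspace(x_vis, x_ir):
    """Map both modalities into the same subspace with shared weights."""
    f_vis = modality_stem(x_vis, w_vis) @ w_shared
    f_ir = modality_stem(x_ir, w_ir) @ w_shared
    return f_vis, f_ir

x_vis = rng.standard_normal((4, D_IN))  # batch of 4 visible-light samples
x_ir = rng.standard_normal((4, D_IN))   # batch of 4 infrared samples
f_vis, f_ir = shared_subspace(x_vis, x_ir)
print(f_vis.shape, f_ir.shape)  # both land in the same 128-d subspace
```

Because `w_shared` is used for both branches, any gradient update to it is driven by both modalities at once, which is the intuition behind producing richer shared weights in the backbone that follows.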
  • Figure 1. Detailed partitioning of a pedestrian from coarse to fine granularity

    Figure 2. Overall network structure of the proposed algorithm

    Figure 3. Sample images from the SYSU-MM01 dataset

    Figure 4. Comparison with other methods in the all-search single-shot mode on SYSU-MM01

    Figure 5. Performance for different λ values and different positions of the subspace shared-feature module in the MSFF network

    Table 1. Structure settings of the three network branches

    Branch      | Part | Map size | Dims      | Feature
    Part-global | 1    | 12×4     | 512       | Fg
    Part-1      | 1+2  | 24×8     | 512×2+512 | Fg1/Fp1
    Part-2      | 1+3  | 24×8     | 512×3+512 | Fg2/Fp2
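The branch layout in Table 1 — one global representation alongside 2-stripe and 3-stripe horizontal partitions pooled into 512-d vectors — can be illustrated with a simple stripe-pooling sketch. The shapes follow the table; the helper name and the random feature map are hypothetical:

```python
import numpy as np

def stripe_pool(fmap, n_parts):
    """Split a (C, H, W) feature map into n horizontal stripes and
    average-pool each stripe to a C-dim vector."""
    stripes = np.split(fmap, n_parts, axis=1)  # H must divide evenly
    return [s.mean(axis=(1, 2)) for s in stripes]

# Per Table 1, the part branches operate on 24x8 maps with 512 channels.
fmap = np.random.default_rng(1).standard_normal((512, 24, 8))

f_global = stripe_pool(fmap, 1)  # Part-global style: 1 vector of 512 dims
f_part1 = stripe_pool(fmap, 2)   # Part-1: 2 stripes -> 512x2 (plus a global 512)
f_part2 = stripe_pool(fmap, 3)   # Part-2: 3 stripes -> 512x3 (plus a global 512)

print(len(f_global), len(f_part1), len(f_part2))  # 1 2 3
```

Concatenating the global vector with the stripe vectors of each branch yields the 512, 512×2+512, and 512×3+512 descriptor sizes listed in the Dims column.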

    Table 2. Comparison with other methods in the all-search mode on SYSU-MM01 (%)

    Method              | Single-shot              | Multi-shot
                        | R-1   R-10  R-20  mAP    | R-1   R-10  R-20  mAP
    One-stream[17]      | 12.04 49.68 66.74 13.67  | 19.13 58.14 75.05 8.59
    Two-stream[17]      | 11.65 47.99 65.50 12.85  | 16.33 58.35 74.46 8.03
    Zero-padding[17]    | 14.80 54.12 71.33 15.95  | –     61.40 78.41 10.89
    TONE[5]             | 12.52 50.72 68.60 14.42  | –
    HCML[5]             | 14.32 53.16 69.17 16.16  | –
    BDTR[16]            | 27.32 66.96 81.07 27.32  | –
    eBDTR[18]           | 27.82 67.34 81.34 28.42  | –
    D2RL[19]            | 28.90 70.60 82.40 29.20  | –
    MAC[20]             | 33.26 79.04 90.09 36.22  | –
    DPMBN[21]           | 37.02 79.46 89.87 40.28  | –
    AlignGAN[22]        | 42.40 85.00 93.70 40.70  | 51.50 89.40 95.70 33.90
    LZM[23]             | 45.00 89.06 –     45.94  | –
    Hi-CMD[24]          | 34.94 77.58 –     35.94  | –
    AGW[25]             | 47.50 84.39 92.14 47.65  | –
    Xmodal[26]          | 49.92 89.79 95.96 50.73  | 47.56 88.13 95.98 36.08
    DDAG[27]            | 54.75 90.39 95.81 53.02  | –
    cm-SSFT[28]         | 67.60 89.20 93.90 63.20  | 64.4  91.2  95.7  62.0
    Baseline (TSLFN)[8] | 59.96 91.50 96.82 54.95  | 62.09 93.74 97.85 48.02
    Ours                | 62.93 93.68 97.67 60.62  | 68.42 95.71 98.22 54.51

    Table 3. Comparison with other methods on the RegDB dataset (%)

    Method              | Visible to Infrared      | Infrared to Visible
                        | R-1   R-10  R-20  mAP    | R-1   R-10  R-20  mAP
    Zero-padding[17]    | 17.75 34.21 44.35 18.90  | 16.63 34.68 44.25 17.82
    HCML[5]             | 24.44 47.53 56.78 20.08  | 21.70 45.02 55.58 22.24
    BDTR[16]            | 33.56 58.61 67.43 32.76  | 32.92 58.46 68.43 31.96
    eBDTR[18]           | 34.62 58.96 68.72 33.46  | 34.21 58.74 68.64 32.49
    AlignGAN[22]        | 57.90 –     –     53.60  | 56.30 –     –     53.40
    MAC[20]             | 36.43 62.36 71.63 37.03  | 36.20 61.68 70.99 36.63
    Xmodal[26]          | 62.21 83.13 91.72 60.18  | –
    DDAG[27]            | 69.34 86.19 91.49 63.46  | 68.06 85.15 90.31 61.80
    cm-SSFT*[28]        | 72.30 –     –     72.90  | 71.00 –     –     71.70
    Baseline (TSLFN)[8] | –                        | –
    Ours                | 78.06 91.36 96.12 72.43  | –

    Table 4. Experimental results for each module of the network (subspace shared-feature module, Part-global, Part-1, Part-2) on SYSU-MM01 in all-search single-shot mode (%)

    Model | R-1   | R-10  | R-20  | mAP
    P     | 56.67 | 91.85 | 97.14 | 55.27
    Pg-1  | 61.63 | 93.02 | 97.51 | 58.41
    Pg-2  | 61.63 | 92.63 | 96.92 | 57.83
    Pg-3  | 57.44 | 91.53 | 97.01 | 55.28
    MSFF  | 62.93 | 93.68 | 97.67 | 60.62
  • [1] WANG Fenhua, ZHAO Bo, HUANG Chao, et al. Person Re-identification based on multi-scale network attention fusion[J]. Journal of Electronics & Information Technology, 2020, 42(12): 3045–3052. doi: 10.11999/JEIT190998
    [2] ZHOU Zhiheng, LIU Kaiyi, HUANG Junchu, et al. Improved metric learning algorithm for person Re-identification based on equidistance[J]. Journal of Electronics & Information Technology, 2019, 41(2): 477–483. doi: 10.11999/JEIT180336
    [3] CHEN Hongchang, WU Yancheng, LI Shaomei, et al. Person Re-identification based on attribute hierarchy recognition[J]. Journal of Electronics & Information Technology, 2019, 41(9): 2239–2246. doi: 10.11999/JEIT180740
    [4] CHEN Yingcong, ZHU Xiatian, ZHENG Weishi, et al. Person Re-identification by camera correlation aware feature augmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(2): 392–408. doi: 10.1109/TPAMI.2017.2666805
    [5] YE Mang, LAN Xiangyuan, LI Jiawei, et al. Hierarchical discriminative learning for visible thermal person Re-identification[C]. The 32nd AAAI Conference on Artificial Intelligence, New Orleans, USA, 2018: 7501–7508.
    [6] DAI Pingyang, JI Rongrong, WANG Haibin, et al. Cross-modality person Re-identification with generative adversarial training[C]. The Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 2018: 677–683.
    [7] YE Mang, SHEN Jianbing, and SHAO Ling. Visible-infrared person Re-identification via homogeneous augmented tri-modal learning[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 728–739. doi: 10.1109/TIFS.2020.3001665
    [8] ZHU Yuanxin, YANG Zhao, WANG Li, et al. Hetero-center loss for cross-modality person Re-identification[J]. Neurocomputing, 2020, 386: 97–109. doi: 10.1016/j.neucom.2019.12.100
    [9] WANG Guanshuo, YUAN Yufeng, CHEN Xiong, et al. Learning discriminative features with multiple granularities for person Re-identification[C]. The 26th ACM International Conference on Multimedia, Seoul, Korea (South), 2018: 274–282.
    [10] BAI Xian, YANG Mingkun, HUANG Tengteng, et al. Deep-person: Learning discriminative deep features for person Re-identification[J]. Pattern Recognition, 2020, 98: 107036. doi: 10.1016/j.patcog.2019.107036
    [11] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [12] ALMAZAN J, GAJIC B, MURRAY N, et al. Re-ID done right: Towards good practices for person Re-identification[EB/OL]. https://arxiv.org/abs/1801.05339, 2018.
    [13] IOFFE S and SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]. The 32nd International Conference on Machine Learning, Lille, France, 2015: 448–456.
    [14] WANG Guan’an, ZHANG Tianzhu, YANG Yang, et al. Cross-modality paired-images generation for RGB-infrared person Re-identification[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 12144–12151.
    [15] NGUYEN D T, HONG H G, KIM K W, et al. Person recognition system based on a combination of body images from visible light and thermal cameras[J]. Sensors, 2017, 17(3): 605. doi: 10.3390/s17030605
    [16] YE Mang, WANG Zheng, LAN Xiangyuan, et al. Visible thermal person Re-identification via dual-constrained top-ranking[C]. The Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 2018: 1092–1099.
    [17] WU Ancong, ZHENG Weishi, YU Hongxing, et al. RGB-infrared cross-modality person Re-identification[C]. The IEEE International Conference on Computer Vision, Venice, Italy, 2017: 5390–5399.
    [18] YE Mang, LAN Xiangyuan, WANG Zheng, et al. Bi-directional center-constrained top-ranking for visible thermal person Re-identification[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 407–419. doi: 10.1109/TIFS.2019.2921454
    [19] WANG Zhixiang, WANG Zheng, ZHENG Yinqiang, et al. Learning to reduce dual-level discrepancy for infrared-visible person Re-identification[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 618–626.
    [20] WU Ancong, ZHENG Weishi, GONG Shaogang, et al. RGB-IR person Re-identification by cross-modality similarity preservation[J]. International Journal of Computer Vision, 2020, 128(6): 1765–1785. doi: 10.1007/s11263-019-01290-1
    [21] XIANG Xuezhi, LV Ning, YU Zeting, et al. Cross-modality person Re-identification based on dual-path multi-branch network[J]. IEEE Sensors Journal, 2019, 19(23): 11706–11713. doi: 10.1109/JSEN.2019.2936916
    [22] WANG Guan’an, ZHANG Tianzhu, CHENG Jian, et al. RGB-infrared cross-modality person Re-identification via joint pixel and feature alignment[C]. The IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 3622–3631.
    [23] BASARAN E, GÖKMEN M, and KAMASAK M E. An efficient framework for visible–infrared cross modality person Re-identification[J]. Signal Processing: Image Communication, 2020, 87: 115933. doi: 10.1016/j.image.2020.115933
    [24] CHOI S, LEE S, KIM Y, et al. Hi-CMD: Hierarchical cross-modality disentanglement for visible-infrared person Re-identification[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 10254–10263.
    [25] YE Mang, SHEN Jianbing, LIN Gaojie, et al. Deep learning for person Re-identification: A survey and outlook[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(6): 2872–2893.
    [26] LI Diangang, WEI Xing, HONG Xiaopeng, et al. Infrared-visible cross-modal person Re-identification with an X modality[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 4610–4617.
    [27] YE Mang, SHEN Jianbing, CRANDALL D J, et al. Dynamic dual-attentive aggregation learning for visible-infrared person Re-identification[C]. The 16th European Conference on Computer Vision, Glasgow, UK, 2020: 229–247.
    [28] LU Yan, WU Yue, LIU Bin, et al. Cross-modality person Re-identification with shared-specific feature transfer[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 13376–13386.
Publication history
  • Received: 2021-11-02
  • Revised: 2022-03-25
  • Accepted: 2022-03-25
  • Available online: 2022-03-30
  • Issue date: 2023-01-17
