
T3FRNet: A Cloth-Changing Person Re-identification via Texture-aware Transformer Tuning Fine-grained Reconstruction Method

ZHUANG Jianjun, WANG Nan

Citation: ZHUANG Jianjun, WANG Nan. T3FRNet: A Cloth-Changing Person Re-identification via Texture-aware Transformer Tuning Fine-grained Reconstruction method[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250476


doi: 10.11999/JEIT250476 cstr: 32379.14.JEIT250476
Details
    About the authors:

    ZHUANG Jianjun: male, professor; research interests include intelligent processing of video signals

    WANG Nan: female, master's student; research interests include computer vision and person re-identification

    Corresponding author:

    ZHUANG Jianjun, jjzhuang@nuist.edu.cn

  • CLC number: TN911.73; TP391.41


Funds: The National Natural Science Foundation of China (62272234) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX25_0503)
  • Abstract: To address the difficulty of extracting effective features and the shortage of training samples in Cloth-Changing person Re-IDentification (CC Re-ID), a cloth-changing person re-identification method fusing triple-perception fine-grained reconstruction is proposed. Texture features produced by a fine-grained texture-aware module are concatenated with deep features to improve recognition under clothing changes; a ResFormer50 backbone embedding a Transformer attention mechanism strengthens the model's perception during image feature extraction; and an Adaptive Hybrid Pooling (AHP) module performs channel-level adaptive aggregation to mine features at a deeper, fine-grained level, so that holistic representation consistency and generalization to clothing changes are achieved jointly. A new Adaptive Fine-grained Reconstruction (AFR) strategy applies fine-grained adversarial perturbation and selective reconstruction and, without relying on explicit supervision, substantially improves the model's robustness and generalization against clothing changes and local-detail perturbations, thereby raising recognition accuracy in real-world scenarios. Extensive experiments demonstrate the effectiveness of the proposed method: under the clothes-changing settings of the LTCC and PRCC datasets, it reaches Rank-1/mAP of 45.6%/19.8% and 70.6%/69.1%, respectively, outperforming comparable state-of-the-art methods.
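    Only the abstract is reproduced on this page, so the internals of the AHP module are not shown. As a rough illustration of the channel-level adaptive aggregation it describes, below is a minimal PyTorch sketch assuming AHP learns a per-channel weight that blends global average pooling with global max pooling; the class name AdaptiveHybridPooling, the sigmoid gating, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaptiveHybridPooling(nn.Module):
    """Hypothetical AHP-style pooling: each channel learns how much to rely on
    global average pooling (holistic context) versus global max pooling
    (salient local cues). Sketch only; not the paper's released code."""

    def __init__(self, channels: int):
        super().__init__()
        # One learnable logit per channel; sigmoid maps it to a blend weight in (0, 1).
        self.mix_logit = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map from the backbone.
        avg = x.mean(dim=(2, 3))           # (B, C) global average pooling
        mx = x.amax(dim=(2, 3))            # (B, C) global max pooling
        w = torch.sigmoid(self.mix_logit)  # (C,) per-channel blend weights
        return w * avg + (1.0 - w) * mx    # channel-level adaptive aggregation


# Example: pool a ResNet50-style feature map into a 2048-dim embedding.
if __name__ == "__main__":
    feat = torch.randn(8, 2048, 24, 12)
    pooled = AdaptiveHybridPooling(2048)(feat)
    print(pooled.shape)  # torch.Size([8, 2048])
```

    Initializing the logits to zero starts every channel at an equal 50/50 mix, so training only shifts a channel toward average or max pooling when doing so reduces the re-identification loss.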
  • Figure 1  Overall structure of the triple-perception fine-grained reconstruction network

    Figure 2  Structure of the Transformer attention mechanism

    Figure 3  Fine-grained texture-aware module

    Figure 4  Adaptive fine-grained reconstruction strategy

    Figure 5  Model performance under different values of the parameter $ \beta $

    Figure 6  Visualization results of different models

    Table 1  Overview of the five benchmark datasets

    Dataset      Cameras  Type   Training (ID/images)  Query (ID/images)  Gallery (ID/images)
    LTCC         12       SC/CC  77/9576               75/493             75/7050
    PRCC         3        SC/CC  150/17896             71/3543            71/3384
    Celeb-reID   -        SC/CC  632/20208             420/2972           420/11006
    DeepChange   17       CC     450/75083             521/17527          521/62956
    Market-1501  6        SC     751/12936             750/3368           750/19732

    Note: SC and CC denote the same-clothes and clothes-changing settings, respectively.

    Table 2  Comparison with state-of-the-art methods on the LTCC and PRCC datasets (%)

    Method          Modality  Clothing labels  LTCC (SC)    LTCC (CC)    PRCC (SC)    PRCC (CC)
                                               Rank-1/mAP   Rank-1/mAP   Rank-1/mAP   Rank-1/mAP
    CAL[7]          RGB       Yes              74.2/40.8    40.1/18.0    100.0/99.8   55.2/55.8
    AIM[8]          RGB       Yes              76.3/41.1    40.6/19.1    100.0/99.9   57.9/58.3
    FDGAN[23]       RGB       Yes              73.4/36.9    32.9/15.4    100.0/99.7   58.3/58.6
    GI-ReID[4]      RGB+ga    No               63.2/29.4    23.7/10.4    -/-          33.3/-
    BMDB[6]         RGB+bs    No               74.3/39.5    41.8/17.9    99.7/97.9    56.6/53.9
    FIRe2[12]       RGB       No               75.9/39.9    44.6/19.1    100.0/99.5   65.0/63.1
    TAPFN[13]       RGB       No               71.9/34.7    40.1/17.4    99.8/98.1    69.1/68.7
    CSSC[11]        RGB       No               78.1/40.2    43.6/18.6    100.0/99.1   65.5/63.0
    MBUNet[9]       RGB       No               67.6/34.8    40.3/15.0    99.8/99.6    68.7/65.2
    ACID[10]        RGB       No               65.1/30.6    29.1/14.5    99.1/99.0    55.4/66.1
    AFL[24]         RGB       No               74.4/39.1    42.1/18.4    100.0/99.7   57.4/56.5
    T3FRNet (ours)  RGB       No               79.8/41.3    45.6/19.8    100.0/99.9   70.6/69.1

    Table 3  Comparison with state-of-the-art methods on the Celeb-reID dataset (%)

    Method          Modality  Rank-1  mAP
    MBUNet[9]       RGB       55.5    12.8
    ACID[10]        RGB       52.5    11.4
    CSSC[11]        RGB       64.5    17.3
    FIRe2[12]       RGB       64.0    18.2
    TAPFN[13]       RGB       61.4    16.9
    SCNet[25]       RGB+sil   62.8    17.5
    T3FRNet (ours)  RGB       64.6    18.4

    Table 4  Comparison with state-of-the-art methods on the DeepChange dataset (%)

    Method          Modality  Clothing labels  Rank-1  mAP
    CAL[7]          RGB       Yes              54.0    19.0
    FIRe2[12]       RGB       No               57.9    20.0
    ACD-Net[5]      RGB+ske   No               56.8    20.6
    SCNet[25]       RGB+sil   No               53.5    18.7
    T3FRNet (ours)  RGB       No               58.0    20.8

    Table 5  Comparison with state-of-the-art methods on the Market-1501 dataset (%)

    Method          Modality  Clothing labels  Rank-1  mAP
    CAL[7]          RGB       Yes              94.7    87.5
    FDGAN[23]       RGB       Yes              95.4    87.0
    GI-ReID[4]      RGB+ga    No               95.6    88.9
    CSSC[11]        RGB       No               95.3    87.6
    FIRe2[12]       RGB       No               95.4    87.7
    AFL[24]         RGB       No               95.5    88.8
    T3FRNet (ours)  RGB       No               96.2    89.3

    Table 6  Ablation experiments on the LTCC and PRCC datasets (%)

    AFR  TA  JP-Loss  ResFormer50  AHP  LTCC Rank-1/mAP  PRCC Rank-1/mAP
    -    -   -        -            -    28.8/11.5        40.2/36.2
    ✓    -   -        -            -    40.1/17.9        58.3/56.0
    ✓    ✓   -        -            -    40.6/18.8        65.2/62.2
    ✓    ✓   ✓        -            -    41.5/19.3        67.6/62.5
    ✓    ✓   ✓        ✓            -    41.8/19.2        69.5/67.5
    ✓    ✓   ✓        ✓            ✓    45.6/19.8        70.6/69.1

    Table 7  Effect of the Transformer attention embedding position (%)

    Embedding position  LTCC Rank-1/mAP  PRCC Rank-1/mAP
    stage1              45.3/19.4        69.8/69.0
    stage2              45.6/19.8        70.6/69.1
    stage3              43.1/19.2        68.9/67.3
    stage4              41.6/19.2        62.0/61.1
  • [1] CHENG Deqiang, JI Guangkai, ZHANG Haoxiang, et al. Cross-modality person re-identification based on multi-granularity fusion and cross-scale perception[J]. Journal on Communications, 2025, 46(1): 108–123. doi: 10.11959/j.issn.1000-436x.2025019. (in Chinese)
    [2] ZHUANG Jianjun and ZHUANG Yuchen. A cross-modal person re-identification method based on hybrid channel augmentation with structured dual attention[J]. Journal of Electronics & Information Technology, 2024, 46(2): 518–526. doi: 10.11999/JEIT230614. (in Chinese)
    [3] ZHANG Peng, ZHANG Xiaolin, BAO Yongtang, et al. Cloth-changing person re-identification: A summary[J]. Journal of Image and Graphics, 2023, 28(5): 1242–1264. doi: 10.11834/jig.220702. (in Chinese)
    [4] JIN Xin, HE Tianyu, ZHENG Kecheng, et al. Cloth-changing person re-identification from a single image with gait prediction and regularization[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 14258-14267. doi: 10.1109/CVPR52688.2022.01388.
    [5] GE Yiyuan, YU Mingxin, CHEN Zhihao, et al. Attention-enhanced controllable disentanglement for cloth-changing person re-identification[J]. The Visual Computer, 2025, 41(8): 5609–5624. doi: 10.1007/s00371-024-03741-4.
    [6] LIU Xuan, HAN Hua, XU Kaiyu, et al. Cloth-changing person re-identification based on the backtracking mechanism[J]. IEEE Access, 2025, 13: 27527–27536. doi: 10.1109/ACCESS.2025.3538976.
    [7] GU Xinqian, CHANG Hong, MA Bingpeng, et al. Clothes-changing person re-identification with RGB modality only[C]. Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 1060–1069. doi: 10.1109/CVPR52688.2022.00113.
    [8] YANG Zhengwei, LIN Meng, ZHONG Xian, et al. Good is bad: Causality inspired cloth-debiasing for cloth-changing person re-identification[C]. Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 1472–1481. doi: 10.1109/CVPR52729.2023.00148.
    [9] ZHANG Guoqing, LIU Jie, CHEN Yuhao, et al. Multi-biometric unified network for cloth-changing person re-identification[J]. IEEE Transactions on Image Processing, 2023, 32: 4555–4566. doi: 10.1109/TIP.2023.3279673.
    [10] YANG Zhengwei, ZHONG Xian, ZHONG Zhun, et al. Win-win by competition: Auxiliary-free cloth-changing person re-identification[J]. IEEE Transactions on Image Processing, 2023, 32: 2985–2999. doi: 10.1109/TIP.2023.3277389.
    [11] WANG Qizao, QIAN Xuelin, LI Bin, et al. Content and salient semantics collaboration for cloth-changing person re-identification[C]. Proceedings of the 2025 IEEE International Conference on Acoustics, Speech and Signal Processing, Hyderabad, India, 2025: 1–5. doi: 10.1109/ICASSP49660.2025.10890451.
    [12] WANG Qizao, QIAN Xuelin, LI Bin, et al. Exploring fine-grained representation and recomposition for cloth-changing person re-identification[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 6280–6292. doi: 10.1109/TIFS.2024.3414667.
    [13] ZHANG Guoqing, ZHOU Jieqiong, ZHENG Yuhui, et al. Adaptive transformer with pyramid fusion for cloth-changing person re-identification[J]. Pattern Recognition, 2025, 163: 111443. doi: 10.1016/j.patcog.2025.111443.
    [14] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [15] LUO Hao, GU Youzhi, LIAO Xingyu, et al. Bag of tricks and a strong baseline for deep person re-identification[C]. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, USA, 2019: 1487–1495. doi: 10.1109/CVPRW.2019.00190.
    [16] CHEN Weihua, CHEN Xiaotang, ZHANG Jianguo, et al. Beyond triplet loss: A deep quadruplet network for person re-identification[C]. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 1320–1329. doi: 10.1109/CVPR.2017.145.
    [17] QIAN Xuelin, WANG Wenxuan, ZHANG Li, et al. Long-term cloth-changing person re-identification[C]. Proceedings of the 15th Asian Conference on Computer Vision, Kyoto, Japan, 2020: 71–88. doi: 10.1007/978-3-030-69535-4_5.
    [18] YANG Qize, WU Ancong, and ZHENG Weishi. Person re-identification by contour sketch under moderate clothing change[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(6): 2029–2046. doi: 10.1109/TPAMI.2019.2960509.
    [19] HUANG Yan, XU Jingsong, WU Qiang, et al. Beyond scalar neuron: Adopting vector-neuron capsules for long-term person re-identification[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(10): 3459–3471. doi: 10.1109/TCSVT.2019.2948093.
    [20] XU Peng and ZHU Xiatian. DeepChange: A long-term person re-identification benchmark with clothes change[C]. Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, Paris, France, 2023: 11162–11171. doi: 10.1109/ICCV51070.2023.01028.
    [21] ZHENG Liang, SHEN Liyue, TIAN Lu, et al. Scalable person re-identification: A benchmark[C]. Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1116–1124. doi: 10.1109/ICCV.2015.133.
    [22] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386.
    [23] CHAN P P K, HU Xiaoman, SONG Haorui, et al. Learning disentangled features for person re-identification under clothes changing[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(6): 1–21. doi: 10.1145/3584359.
    [24] LIU Yuxuan, GE Hongwei, WANG Zhen, et al. Clothes-changing person re-identification via universal framework with association and forgetting learning[J]. IEEE Transactions on Multimedia, 2024, 26: 4294–4307. doi: 10.1109/TMM.2023.3321498.
    [25] GUO Peini, LIU Hong, WU Jianbing, et al. Semantic-aware consistency network for cloth-changing person re-identification[C]. Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, Canada, 2023: 8730–8739. doi: 10.1145/3581783.3612416.
Publication history
  • Received: 2025-05-27
  • Revised: 2025-11-03
  • Accepted: 2025-11-03
  • Available online: 2025-11-12
