Citation: ZHUANG Jianjun, WANG Nan. T3FRNet: A Cloth-Changing Person Re-identification via Texture-aware Transformer Tuning Fine-grained Reconstruction method[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250476

T3FRNet: A Cloth-Changing Person Re-identification via Texture-aware Transformer Tuning Fine-grained Reconstruction method

doi: 10.11999/JEIT250476 cstr: 32379.14.JEIT250476
Funds:  The National Natural Science Foundation of China (62272234), Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX25_0503)
  • Received Date: 2025-05-27
  • Accepted Date: 2025-11-03
  • Rev Recd Date: 2025-11-03
  • Available Online: 2025-11-12
Objective  Compared with conventional person re-identification, Cloth-Changing person Re-Identification (CC Re-ID) cannot rely on the stability of a person’s appearance features over time and therefore demands models with greater robustness and generalization capability for real-world application scenarios. Existing deep feature representation methods can exploit salient regions or attribute information to obtain discriminative features and mitigate the impact of clothing variations, yet their performance often degrades under changing environments. To address the challenges of effective feature extraction and limited training samples in CC Re-ID, this paper proposes a novel Texture-aware Transformer Tuning Fine-grained Reconstruction Network (T3FRNet) that fully exploits fine-grained information in person images, enhances the robustness of feature learning, and reduces the adverse impact of clothing changes on model performance, thereby overcoming performance bottlenecks under scene variations.

Methods  To compensate for the limited local receptive fields of convolutions, T3FRNet incorporates a Transformer-based attention mechanism into the ResNet50 backbone, forming a hybrid architecture named ResFormer50. This design enables spatial relational modeling on top of local features, strengthening the model’s perceptual capacity for feature extraction while balancing efficiency and performance. The fine-grained Texture-Aware (TA) module concatenates processed texture features with deep semantic features, improving recognition under clothing variations. The Adaptive Hybrid Pooling (AHP) module performs channel-wise autonomous aggregation, enabling deeper and more refined mining of feature representations and helping balance global representation consistency against robustness to clothing changes. A novel Adaptive Fine-grained Reconstruction (AFR) strategy introduces adversarial perturbation and selective reconstruction at a fine-grained level; without relying on explicit supervision, it markedly enhances the model’s robustness and generalization against clothing changes and local detail perturbations, improving recognition accuracy in real-world scenarios. Finally, a Joint Perception Loss (JP-Loss) integrates a fine-grained identity robustness loss, a texture feature loss, the widely used identity classification loss, and the triplet loss; this composite loss jointly supervises the model to learn robust fine-grained identity features under challenging cloth-changing conditions.
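The architectural ideas above can be made concrete with a short sketch. The following PyTorch code is only an illustrative approximation: the paper does not publish its implementation, so the Transformer depth and head count, the learnable per-channel mixing used here to stand in for AHP, and all layer sizes are assumptions, not the authors’ exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class AdaptiveHybridPool(nn.Module):
    """Blend global average and max pooling with a learnable per-channel weight
    (an assumed stand-in for the channel-wise aggregation attributed to AHP)."""
    def __init__(self, channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(channels))  # per-channel mixing logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = F.adaptive_avg_pool2d(x, 1).flatten(1)   # (B, C)
        mx = F.adaptive_max_pool2d(x, 1).flatten(1)    # (B, C)
        w = torch.sigmoid(self.alpha)                  # (C,), in [0, 1]
        return w * avg + (1.0 - w) * mx                # (B, C) pooled descriptor


class ResFormer50(nn.Module):
    """ResNet50 with a Transformer encoder inserted after stage 2 (layer2),
    as described above; depth and head count are illustrative choices."""
    def __init__(self, num_heads: int = 8, num_layers: int = 1):
        super().__init__()
        r = models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.layer1, self.layer2 = r.layer1, r.layer2
        self.layer3, self.layer4 = r.layer3, r.layer4
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=512, nhead=num_heads, batch_first=True)  # layer2 emits 512 channels
        self.attn = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.pool = AdaptiveHybridPool(2048)                  # layer4 emits 2048 channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.layer2(self.layer1(self.stem(x)))            # (B, 512, H/8, W/8)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                 # (B, H*W, 512) token sequence
        x = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        x = self.layer4(self.layer3(x))                       # (B, 2048, H/32, W/32)
        return self.pool(x)                                   # (B, 2048) identity descriptor


if __name__ == "__main__":
    feats = ResFormer50()(torch.randn(2, 3, 256, 128))
    print(feats.shape)  # torch.Size([2, 2048])
```

Placing the attention block after stage 2 keeps the token sequence short (1/8 of the input resolution in each spatial dimension), which is consistent with the abstract’s observation that the mechanism adds only a slight computational overhead.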
Results and Discussions  To validate the effectiveness of the proposed model, extensive evaluations are conducted on three widely used CC Re-ID benchmarks, LTCC, PRCC, and Celeb-reID, as well as the large-scale DeepChange dataset (Table 1). Under cloth-changing scenarios, the model achieves Rank-1/mAP scores of 45.6%/19.8% on LTCC, 70.6%/69.1% on PRCC (Table 2), 64.6%/18.4% on Celeb-reID (Table 3), and 58.0%/20.8% on DeepChange (Table 4), outperforming existing state-of-the-art approaches. The TA module effectively extracts latent local texture details within person images and, in conjunction with the AFR strategy, performs fine-grained adversarial perturbation and selective reconstruction. This enhances fine-grained feature representations, enabling the proposed method to also achieve 96.2% Rank-1 and 89.3% mAP on the clothing-consistent Market-1501 dataset (Table 5). The JP-Loss further supports the TA module and the AFR strategy by enabling fine-grained adaptive regulation and clustering of texture-sensitive identity features (Table 6). When the Transformer-based attention mechanism is integrated after stage 2 of ResNet50, the model gains improved local structural perception and global context modeling with only a slight increase in computational overhead, enhancing overall performance (Table 7). Setting the $ \beta $ parameter to 0.5 (Fig. 5) allows the JP-Loss to balance global texture consistency and local fine-grained discriminability, further improving the robustness and accuracy of CC Re-ID. Visualization experiments on the PRCC dataset (Fig. 6) provide intuitive evidence of the model’s superior feature extraction capability and highlight the contribution of the Transformer-based attention mechanism. Finally, the top-10 retrieval results of the baseline model and of T3FRNet under cloth-changing scenarios (Fig. 7) intuitively demonstrate the better stability and accuracy of T3FRNet.

Conclusions  This paper proposes a CC Re-ID method based on T3FRNet, composed of the ResFormer50 backbone, the TA module, the AHP module, the AFR strategy, and the JP-Loss. Extensive experiments on four publicly available cloth-changing benchmarks and one clothing-consistent dataset demonstrate the effectiveness and superiority of the proposed approach: under long-term scenarios, Rank-1/mAP on the LTCC and PRCC datasets improve significantly, by 16.8%/8.3% and 30.4%/32.9%, respectively. The ResFormer50 backbone supports spatial relationship modeling on top of local fine-grained features, while the TA module and the AFR strategy enhance the expressiveness of fine-grained representations. The AHP module balances the model’s sensitivity to local textures with the stability of global features, ensuring strong feature representation alongside robustness. The JP-Loss constrains fine-grained feature representations and performs adaptive regulation, improving generalization in diverse and challenging cloth-changing scenarios. Future work will focus on simplifying the model architecture to reduce computational complexity and latency, aiming for a better balance between high recognition accuracy and deployment efficiency.
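To complement the description of the loss design and the $ \beta $ weighting discussed above, the sketch below combines the widely used identity classification (cross-entropy) and triplet losses with two auxiliary terms in the spirit of the fine-grained identity robustness loss and the texture feature loss. The concrete form of the auxiliary terms (feature-consistency MSE against AFR-perturbed and TA-branch features) and the way $ \beta $ enters the sum are illustrative assumptions, not the paper’s exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointPerceptionLossSketch(nn.Module):
    """Illustrative composite loss: identity cross-entropy + triplet loss plus
    hypothetical robustness and texture terms weighted by beta (0.5 above)."""
    def __init__(self, margin: float = 0.3, beta: float = 0.5):
        super().__init__()
        self.id_loss = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.beta = beta

    def forward(self, logits, labels, anchor, positive, negative,
                feats_clean, feats_perturbed, feats_texture):
        # Widely used supervision: identity classification + triplet ranking.
        base = self.id_loss(logits, labels) + self.triplet(anchor, positive, negative)
        # Hypothetical fine-grained identity robustness term: embeddings should
        # survive the AFR-style perturbation / selective reconstruction.
        robust = F.mse_loss(feats_perturbed, feats_clean)
        # Hypothetical texture feature term: keep the texture branch (TA module
        # output) aligned with the main identity embedding.
        texture = F.mse_loss(feats_texture, feats_clean.detach())
        return base + self.beta * robust + (1.0 - self.beta) * texture


if __name__ == "__main__":
    B, C, D = 4, 10, 2048  # batch size, number of identities, feature dimension
    crit = JointPerceptionLossSketch()
    loss = crit(torch.randn(B, C), torch.randint(0, C, (B,)),
                torch.randn(B, D), torch.randn(B, D), torch.randn(B, D),
                torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
    print(loss.item())
```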