Multi-Scale Region of Interest Feature Fusion for Palmprint Recognition

MA Yuxuan, ZHANG Feifei, LI Guanghui, TANG Xin, DONG Zhengyang

Citation: MA Yuxuan, ZHANG Feifei, LI Guanghui, TANG Xin, DONG Zhengyang. Multi-Scale Region of Interest Feature Fusion for Palmprint Recognition[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250940


doi: 10.11999/JEIT250940 cstr: 32379.14.JEIT250940
Funds: The National Natural Science Foundation of China (62372214), Suzhou Science and Technology Project (SGC2021070)
Detailed information
    About the authors:

    MA Yuxuan: male, Master's student. His research interests include biometric recognition and deep learning

    ZHANG Feifei: male, Master, senior engineer. His research interests include hardware acceleration of image processing algorithms and SoC chip design

    LI Guanghui: male, Ph.D., professor. His research interests include the Internet of Things, edge computing, nondestructive testing, and integrated circuit design verification

    TANG Xin: male, Master's student. His research interests include biometric recognition and deep learning

    DONG Zhengyang: male, Master's student. His research interests include biometric recognition, facial expression recognition, and deep learning

    Corresponding author:

    LI Guanghui, ghli@jiangnan.edu.cn

  • CLC number: TN911.73; TP391.4

  • Abstract: Locating the Region Of Interest (ROI) is a key step in the palmprint recognition pipeline. In practical applications, however, illumination changes and variations in hand pose often cause the ROI localization to drift, which degrades the performance of the recognition system. To mitigate this problem, this paper proposes a novel multi-scale ROI feature fusion mechanism and, based on it, designs a deep learning model with two cooperating branches. The model consists of a feature extraction network and a weight prediction network: the former extracts features in parallel from ROIs at several different scales, while the latter adaptively assigns a weight to the features of each scale. The core idea of the fusion mechanism is that ROIs at different scales share the essential characteristics of the palmprint, such as its principal texture, while each also carries unique scale-specific information. By fusing these features with the predicted weights, the model reinforces the shared essential characteristics and suppresses the noise and redundant information introduced by inaccurate localization, thereby producing more robust features. Comprehensive experiments on several public palmprint datasets, including IITD, MPD, and NTU-CP, show that the recognition accuracy of the model drops only slightly in the presence of significant localization errors, exhibiting far greater error tolerance than conventional single-scale ROI models. In particular, in the NTU-CP localization-error test, the Equal Error Rate (EER) of the model rises only slightly from 1.96% to 5.01%, whereas the EER of every compared model exceeds 10%, which confirms the effectiveness and superiority of the proposed multi-scale ROI feature fusion mechanism.
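The weighted fusion described in the abstract can be made concrete with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' ROI3Net implementation: the class name ScaleWeightedFusion, the small MLP used as the weight prediction branch, and the sizes num_scales=4 and feat_dim=256 are assumptions introduced here only to show how per-scale features could be combined with softmax-normalized weights.

```python
# Minimal sketch of the weighted multi-scale fusion idea described in the abstract.
# Names and sizes (ScaleWeightedFusion, num_scales, feat_dim) are illustrative
# assumptions, not the authors' actual implementation.
import torch
import torch.nn as nn


class ScaleWeightedFusion(nn.Module):
    """Fuse per-scale ROI features with adaptively predicted weights."""

    def __init__(self, num_scales: int = 4, feat_dim: int = 256):
        super().__init__()
        # Weight prediction branch: maps the concatenated per-scale features
        # to one scalar per scale, normalized with softmax.
        self.weight_head = nn.Sequential(
            nn.Linear(num_scales * feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_scales),
        )

    def forward(self, scale_feats: torch.Tensor) -> torch.Tensor:
        # scale_feats: (batch, num_scales, feat_dim), one feature vector per ROI scale.
        b, s, d = scale_feats.shape
        weights = torch.softmax(self.weight_head(scale_feats.reshape(b, s * d)), dim=-1)
        # Weighted sum over the scale dimension -> fused (batch, feat_dim) feature.
        return (weights.unsqueeze(-1) * scale_feats).sum(dim=1)


if __name__ == "__main__":
    fusion = ScaleWeightedFusion(num_scales=4, feat_dim=256)
    feats = torch.randn(2, 4, 256)  # e.g. features from ROIs at scales 1.00/1.25/1.50/1.75
    print(fusion(feats).shape)      # torch.Size([2, 256])
```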
  • Figure 1  Multi-scale ROI feature fusion mechanism

    Figure 2  Multi-scale ROI palmprints

    Figure 3  ROIs at different scales

    Figure 4  Architecture of the ROI3Net model

    Figure 5  Weight heat maps

    Figure 6  ROC curves of different models under normal localization

    Figure 7  ROC curves of different models under localization errors

    Table 1  Experimental results under normal localization (%)

    Method     | IITD         | MPD          | NTU-CP       | REST         | CASIA        | BMPD
               | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1
    Proposed   | 3.60  99.00  | 4.97  99.90  | 1.96  99.90  | 8.59  90.17  | 1.21  99.90  | 6.51  99.88
    CompNet    | 6.32  98.61  | 8.36  99.90  | 3.50  99.65  | 12.70 87.66  | 1.26  99.90  | 8.89  99.62
    CCNet      | 5.67  99.00  | 7.46  99.86  | 2.63  99.65  | 10.84 86.89  | 1.31  99.90  | 8.77  99.75
    CO3Net     | 5.73  99.00  | 8.47  99.82  | 2.54  99.74  | 13.63 84.29  | 1.84  99.77  | 10.21 99.75
    DCPV       | 8.65  95.69  | 11.32 99.71  | 7.37  98.89  | 19.97 80.88  | 4.87  99.36  | 13.80 99.25
    RLANN      | 4.68  99.00  | 7.18  99.78  | 2.78  99.48  | 16.13 82.27  | 1.77  99.80  | 11.01 99.62
    PalmALNet  | 6.23  95.15  | 19.42 96.73  | 7.68  97.53  | 19.88 81.66  | 2.51  99.53  | 14.54 98.75
    MTCC       | 5.57  97.62  | 8.71  99.72  | 4.42  99.57  | 16.05 82.93  | 2.37  99.80  | 13.91 99.38
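All result tables report the Equal Error Rate (EER) and Rank-1 identification accuracy. For reference, the sketch below computes both metrics in a generic way from similarity scores and feature embeddings; it is a standard formulation and not necessarily the exact evaluation protocol used in the paper.

```python
# Generic sketch of the two metrics reported in the tables (EER and Rank-1).
# Standard formulation for illustration; not necessarily the paper's exact protocol.
import numpy as np


def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: the operating point where the false accept rate equals the false reject rate.

    Scores are assumed to be similarities (higher = more likely the same palm).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)


def rank1_accuracy(probe_feats: np.ndarray, probe_labels: np.ndarray,
                   gallery_feats: np.ndarray, gallery_labels: np.ndarray) -> float:
    """Rank-1: fraction of probes whose nearest gallery sample has the same identity."""
    # Cosine similarity between every probe and every gallery feature.
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    nearest = (p @ g.T).argmax(axis=1)
    return float((gallery_labels[nearest] == probe_labels).mean())
```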

    Table 2  Experimental results under localization errors (%)

    Method     | IITD         | MPD          | NTU-CP       | REST         | CASIA        | BMPD
               | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1
    Proposed   | 10.15 90.53  | 6.33  99.58  | 5.01  96.76  | 11.76 87.21  | 2.70  99.43  | 10.60 99.32
    CompNet    | 30.22 57.92  | 12.49 99.08  | 14.52 68.42  | 20.72 79.52  | 7.01  98.63  | 16.72 98.12
    CCNet      | 27.60 61.76  | 12.86 97.96  | 10.11 80.93  | 18.76 81.37  | 6.00  98.73  | 15.98 97.12
    CO3Net     | 28.36 61.76  | 13.56 97.96  | 15.66 58.55  | 22.26 75.27  | 8.10  97.09  | 17.73 96.37
    DCPV       | 29.51 38.61  | 18.86 92.68  | 20.53 44.25  | 21.50 76.83  | 14.27 87.83  | 18.37 93.37
    RLANN      | 14.16 75.84  | 9.98  98.82  | 11.36 75.40  | 19.87 80.62  | 3.21  99.13  | 16.81 97.00
    PalmALNet  | 17.31 29.15  | 17.55 70.91  | 13.08 71.23  | 18.79 81.26  | 4.04  97.96  | 16.60 95.75
    MTCC       | 22.68 30.08  | 11.83 99.20  | 13.34 90.98  | 22.74 78.72  | 3.52  99.06  | 14.98 97.25

    Table 3  Accuracy of the proposed model under different conditions (%)

    Condition              | IITD         | MPD          | NTU-CP       | REST         | CASIA        | BMPD
                           | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1 | EER   Rank-1
    Normal localization    | 3.60  99.00  | 4.97  99.90  | 1.96  99.90  | 8.59  90.17  | 1.21  99.90  | 6.51  99.88
    Localization error     | 10.15 90.53  | 6.33  99.58  | 5.01  96.76  | 11.76 87.21  | 2.70  99.43  | 10.60 99.32
    Affine transformation  | 20.59 73.46  | 8.28  98.21  | 7.99  90.13  | 20.67 64.14  | 5.14  98.33  | 18.21 92.13

    Table 4  Performance comparison of different methods

    Method     | Computation (M) | Parameters (M) | GPU runtime (ms)
    Proposed   | 4927.14         | 38.44          | 6.48
    CompNet    | 1053.19         | 15.04          | 4.98
    CCNet      | 1688.97         | 62.52          | 10.01
    CO3Net     | 2302.40         | 79.63          | 10.70
    DCPV       | 2134.62         | 68.74          | 9.54
    RLANN      | 2450.40         | 43.35          | 7.42
    PalmALNet  | 2030.75         | 28.62          | 6.92
    MTCC       | 640.55          | 4.43           | 2.84

    Table 5  EER results of the scale ablation study (%)

    Scales used              | Test dataset
    1.00  1.25  1.50  1.75   | IITD    MPD     REST    BMPD
                             | 5.60    7.67    13.32   11.39
                             | 5.87    7.91    13.82   10.55
                             | 6.05    8.01    14.37   11.88
                             | 4.81    5.34    10.29   8.74
                             | 5.10    5.73    10.14   7.49
                             | 4.95    5.38    9.11    8.93
                             | 3.60    4.97    8.59    6.51
                             | 4.26    5.16    8.12    7.84
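Table 5 varies which of the ROI scales 1.00, 1.25, 1.50, and 1.75 are fused. For illustration only, the sketch below crops square ROIs of several relative sizes around a common center point and resizes them to one input size; the helper name multi_scale_rois, the base side length, and the use of OpenCV are assumptions and do not reflect how the paper actually extracts its ROIs.

```python
# Illustrative sketch of cropping ROIs at several scales around one center point.
# The base side length, center, and output size are hypothetical values, and
# OpenCV (cv2) is assumed to be available; this is not the paper's ROI method.
import numpy as np
import cv2


def multi_scale_rois(palm_img: np.ndarray, center: tuple, base_size: int = 128,
                     scales=(1.00, 1.25, 1.50, 1.75), out_size: int = 128):
    """Return square ROIs of side base_size*scale, all resized to out_size x out_size."""
    cx, cy = center
    h, w = palm_img.shape[:2]
    rois = []
    for s in scales:
        half = int(round(base_size * s / 2))
        # Clamp the crop window to the image borders before slicing.
        x0, x1 = max(cx - half, 0), min(cx + half, w)
        y0, y1 = max(cy - half, 0), min(cy + half, h)
        rois.append(cv2.resize(palm_img[y0:y1, x0:x1], (out_size, out_size)))
    return rois
```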

    Table 6  Performance gains from the multi-scale ROI feature fusion mechanism on different models (%)

    Method | IITD            | MPD            | NTU-CP          | REST           | CASIA          | BMPD
           | EER     Rank-1  | EER    Rank-1  | EER     Rank-1  | EER    Rank-1  | EER    Rank-1  | EER    Rank-1
    CCNet  | ↓15.44  ↑27.08  | ↓4.02  ↑0.51   | ↓3.45   ↑13.36  | ↓5.60  ↑4.54   | ↓2.75  ↑0.48   | ↓2.71  ↑1.71
    CO3Net | ↓15.78  ↑24.52  | ↓4.58  ↑0.51   | ↓10.80  ↑38.87  | ↓6.94  ↑8.14   | ↓4.28  ↑1.93   | ↓2.81  ↑2.25
    RLANN  | ↓4.34   ↑15.48  | ↓3.00  ↑0.41   | ↓5.61   ↑20.02  | ↓2.34  ↑0.66   | ↓0.40  ↑0.30   | ↓2.30  ↑1.45

    Table 7  Overhead introduced by the multi-scale ROI feature fusion mechanism

    Method | Computation (M) | Parameters (M) | GPU runtime (ms)
    CCNet  | 3443.42         | ↑6.46          | ↑2.22
    CO3Net | 4670.28         | ↑10.46         | ↑3.50
    RLANN  | 5004.04         | ↑2.91          | ↑1.05
  • [1] ZHAO Shuping, FEI Lunke, and WEN Jie. Multiview-learning-based generic palmprint recognition: A literature review[J]. Mathematics, 2023, 11(5): 1261. doi: 10.3390/math11051261.
    [2] AMROUNI N, BENZAOUI A, and ZEROUAL A. Palmprint recognition: Extensive exploration of databases, methodologies, comparative assessment, and future directions[J]. Applied Sciences, 2024, 14(1): 153. doi: 10.3390/app14010153.
    [3] ZHANG D, KONG W K, YOU J, et al. Online palmprint identification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9): 1041–1050. doi: 10.1109/TPAMI.2003.1227981.
    [4] KONG A W K and ZHANG D. Competitive coding scheme for palmprint verification[C]. The 17th International Conference on Pattern Recognition, Cambridge, UK, 2004: 520–523. doi: 10.1109/ICPR.2004.1334184.
    [5] FEI Lunke, XU Yong, TANG Wenliang, et al. Double-orientation code and nonlinear matching scheme for palmprint recognition[J]. Pattern Recognition, 2016, 49: 89–101. doi: 10.1016/j.patcog.2015.08.001.
    [6] JIA Wei, HU Rongxiang, LEI Yingke, et al. Histogram of oriented lines for palmprint recognition[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2014, 44(3): 385–395. doi: 10.1109/TSMC.2013.2258010.
    [7] GENOVESE A, PIURI V, PLATANIOTIS K N, et al. PalmNet: Gabor-PCA convolutional networks for touchless palmprint recognition[J]. IEEE Transactions on Information Forensics and Security, 2019, 14(12): 3160–3174. doi: 10.1109/TIFS.2019.2911165.
    [8] CHAI Tingting, PRASAD S, and WANG Shenghui. Boosting palmprint identification with gender information using DeepNet[J]. Future Generation Computer Systems, 2019, 99: 41–53. doi: 10.1016/j.future.2019.04.013.
    [9] LIANG Xu, YANG Jinyang, LU Guangming, et al. CompNet: Competitive neural network for palmprint recognition using learnable Gabor kernels[J]. IEEE Signal Processing Letters, 2021, 28: 1739–1743. doi: 10.1109/LSP.2021.3103475.
    [10] YANG Ziyuan, HUANGFU Huijie, LENG Lu, et al. Comprehensive competition mechanism in palmprint recognition[J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 5160–5170. doi: 10.1109/TIFS.2023.3306104.
    [11] YANG Ziyuan, XIA Wenjun, QIAO Yifan, et al. CO3Net: Coordinate-aware contrastive competitive neural network for palmprint recognition[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 2514114. doi: 10.1109/TIM.2023.3276506.
    [12] FENG Yulin and KUMAR A. BEST: Building evidences from scattered templates for accurate contactless palmprint recognition[J]. Pattern Recognition, 2023, 138: 109422. doi: 10.1016/j.patcog.2023.109422.
    [13] CHAI Tingting, WANG Xin, LI Ru, et al. Joint finger valley points-free ROI detection and recurrent layer aggregation for palmprint recognition in open environment[J]. IEEE Transactions on Information Forensics and Security, 2025, 20: 421–435. doi: 10.1109/TIFS.2024.3516539.
    [14] SHAO Huikai, ZOU Yuchen, LIU Chengcheng, et al. Learning to generalize unseen dataset for cross-dataset palmprint recognition[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 3788–3799. doi: 10.1109/TIFS.2024.3371257.
    [15] SU Le, FEI Lunke, ZHANG B, et al. Complete region of interest for unconstrained palmprint recognition[J]. IEEE Transactions on Image Processing, 2024, 33: 3662–3675. doi: 10.1109/TIP.2024.3407666.
    [16] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 3–19. doi: 10.1007/978-3-030-01234-2_1.
    [17] KUMAR A and SHEKHAR S. Personal identification using multibiometrics rank-level fusion[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2011, 41(5): 743–752. doi: 10.1109/TSMCC.2010.2089516.
    [18] MATKOWSKI W M, CHAI Tingting, and KONG A W K. Palmprint recognition in uncontrolled and uncooperative environment[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 1601–1615. doi: 10.1109/TIFS.2019.2945183.
    [19] SUN Zhenan, TAN Tieniu, WANG Yunhong, et al. Ordinal palmprint represention for personal identification[C]. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, 2005: 279–284. doi: 10.1109/CVPR.2005.267.
    [20] IZADPANAHKAKHK M, RAZAVI S M, TAGHIPOUR-GORJIKOLAIE M, et al. Novel mobile palmprint databases for biometric authentication[J]. International Journal of Grid and Utility Computing, 2019, 10(5): 465–474. doi: 10.1504/ijguc.2019.102016.
    [21] YANG Ziyuan, KANG Ming, TEOH A B J, et al. A dual-level cancelable framework for palmprint verification and hack-proof data storage[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 8587–8599. doi: 10.1109/TIFS.2024.3461869.
    [22] YANG Ziyuan, LENG Lu, WU Tengfei, et al. Multi-order texture features for palmprint recognition[J]. Artificial Intelligence Review, 2023, 56(2): 995–1011. doi: 10.1007/s10462-022-10194-5.
Publication history
  • Received:  2025-09-22
  • Revised:  2025-12-30
  • Accepted:  2025-12-30
  • Published online:  2026-01-08
