Correspondence Calculation of Non-isometric 3D Point Shapes Based on Smooth Attention and Spectral Up-sampling Refinement

YANG Jun, ZHANG Siyang, WU Yan

Citation: YANG Jun, ZHANG Siyang, WU Yan. Correspondence Calculation of Non-isometric 3D Point Shapes Based on Smooth Attention and Spectral Up-sampling Refinement[J]. Journal of Electronics & Information Technology, 2024, 46(8): 3285-3294. doi: 10.11999/JEIT231180

doi: 10.11999/JEIT231180 cstr: 32379.14.JEIT231180
Funds: The National Natural Science Foundation of China (42261067)
Article information
    About the authors:

    YANG Jun: male, Ph.D., professor, and doctoral supervisor; his research interests include computer graphics, deep learning, and intelligent processing of remote-sensing big data

    ZHANG Siyang: male, M.S.; his research interest is spatial analysis of 3D models

    WU Yan: male, Ph.D.; his research interests include 3D shape correspondence computation and deep learning

    Corresponding author:

    YANG Jun, yangj@mail.lzjtu.cn

  • CLC number: TP391.4

  • Abstract: Correspondence computation between non-isometric 3D point-cloud shapes is easily disturbed by large-scale deformations, which leads to distorted correspondences, low accuracy, and poor smoothness. To address this problem, a new correspondence computation method for non-isometric 3D point-cloud shapes is proposed that combines smooth attention with spectral up-sampling refinement. First, a smooth attention mechanism and a smoothness-aware module are designed from the geometric features of the surface on which each point lies, improving the features' ability to perceive non-rigid transformations in regions with large-scale deformation. Second, a deep functional map module is combined with a smoothness regularization constraint to improve the smoothness of the computed functional maps. Finally, the spectral up-sampling refinement module obtains the final point-wise map through multi-resolution reconstruction. Experimental results show that, compared with existing algorithms, the proposed method achieves the smallest geodesic error of the constructed correspondences on the FAUST, SCAPE, and SMAL datasets, and improves the smoothness and global accuracy of point-wise maps when handling shapes with large-scale deformations. (A generic sketch of the spectral up-sampling iteration is given after the reference list.)
  • Figure 1  Network architecture for shape correspondence computation

    Figure 2  Network architecture of the smoothness-aware module

    Figure 3  Computation procedure of the spectral up-sampling refinement module

    Figure 4  Correspondences between non-isometric shapes constructed by four algorithms on the FAUST dataset

    Figure 5  Correspondences between non-isometric shapes constructed by four algorithms on the SCAPE dataset

    Figure 6  Correspondence results of four algorithms on the SMAL dataset, visualized by texture transfer

    Figure 7  Geodesic error curves of the four algorithms on the three datasets

    Figure 8  Qualitative results of the ablation study

    Figure 9  Quantitative results of the ablation study

  • [1] DENG Bailin, YAO Yuxin, DYKE R M, et al. A survey of non-rigid 3D registration[J]. Computer Graphics Forum, 2022, 41(2): 559–589. doi: 10.1111/cgf.14502.
    [2] OVSJANIKOV M, BEN-CHEN M, SOLOMON J, et al. Functional maps: A flexible representation of maps between shapes[J]. ACM Transactions on Graphics (TOG), 2012, 31(4): 30. doi: 10.1145/2185520.2185526.
    [3] WU Yan, YANG Jun, and ZHAO Jinlong. Partial 3D shape functional correspondence via fully spectral eigenvalue alignment and upsampling refinement[J]. Computers & Graphics, 2020, 92: 99–113. doi: 10.1016/j.cag.2020.09.004.
    [4] LITANY O, REMEZ T, RODOLÀ E, et al. Deep functional maps: Structured prediction for dense shape correspondence[C]. The IEEE International Conference on Computer Vision, Venice, Italy, 2017: 5660–5668. doi: 10.1109/ICCV.2017.603.
    [5] TOMBARI F, SALTI S, and DI STEFANO L. Unique signatures of histograms for local surface description[C]. The 11th European Conference on Computer Vision, Heraklion, Greece, 2010: 356–369. doi: 10.1007/978-3-642-15558-1_26.
    [6] ROUFOSSE J M, SHARMA A, and OVSJANIKOV M. Unsupervised deep learning for structured shape matching[C]. The IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 1617–1627. doi: 10.1109/ICCV.2019.00170.
    [7] MARIN R, RAKOTOSAONA M J, MELZI S, et al. Correspondence learning via linearly-invariant embedding[C]. Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2020: 136.
    [8] ATTAIKI S, PAI G, and OVSJANIKOV M. DPFM: Deep partial functional maps[C]. The 2021 International Conference on 3D Vision, London, United Kingdom, 2021: 175–185. doi: 10.1109/3DV53792.2021.00040.
    [9] DONATI N, CORMAN E, and OVSJANIKOV M. Deep orientation-aware functional maps: Tackling symmetry issues in shape matching[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 732–741. doi: 10.1109/CVPR52688.2022.00082.
    [10] GUAN Yanran, VAN KAICK O, and GUAN Youqing. Research and implementation of triangle mesh parameterization method based on barycentric mapping[J]. Journal of Beijing University of Posts and Telecommunications, 2019, 42(5): 83–90. doi: 10.13190/j.jbupt.2018-266.
    [11] DONATI N, SHARMA A, and OVSJANIKOV M. Deep geometric functional maps: Robust feature learning for shape correspondence[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 8589–8598. doi: 10.1109/CVPR42600.2020.00862.
    [12] EYNARD D, RODOLÀ E, GLASHOFF K, et al. Coupled functional maps[C]. The 2016 Fourth International Conference on 3D Vision, Stanford, USA, 2016: 399–407. doi: 10.1109/3DV.2016.49.
    [13] PINKALL U and POLTHIER K. Computing discrete minimal surfaces and their conjugates[J]. Experimental Mathematics, 1993, 2(1): 15–36. doi: 10.1080/10586458.1993.10504266.
    [14] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, USA, 2017: 6000–6010.
    [15] SHARP N, ATTAIKI S, CRANE K, et al. DiffusionNet: Discretization agnostic learning on surfaces[J]. ACM Transactions on Graphics (TOG), 2022, 41(3): 27. doi: 10.1145/3507905.
    [16] MELZI S, REN Jing, RODOLÀ E, et al. ZoomOut: Spectral upsampling for efficient shape correspondence[J]. ACM Transactions on Graphics (TOG), 2019, 38(6): 155. doi: 10.1145/3355089.3356524.
    [17] BOGO F, ROMERO J, LOPER M, et al. FAUST: Dataset and evaluation for 3D mesh registration[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 3794–3801. doi: 10.1109/CVPR.2014.491.
    [18] ANGUELOV D, SRINIVASAN P, KOLLER D, et al. SCAPE: Shape completion and animation of people[C]. ACM SIGGRAPH 2005 Papers, Los Angeles, USA, 2005: 408–416. doi: 10.1145/1186822.1073207.
    [19] ZUFFI S, KANAZAWA A, JACOBS D W, et al. 3D menagerie: Modeling the 3D shape and pose of animals[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5524–5532. doi: 10.1109/CVPR.2017.586.
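
The spectral up-sampling refinement named in the abstract builds on the ZoomOut idea of reference [16]: a small functional map is alternately converted into a point-wise map and re-estimated in a progressively larger Laplace-Beltrami eigenbasis. The following is a minimal, generic sketch of that iteration, not the authors' implementation; the eigenvector matrices evecs_src/evecs_tgt, the initial map C0, and the omission of area weights are assumptions made for illustration.

```python
# Minimal ZoomOut-style spectral up-sampling sketch (cf. reference [16]).
# Assumes the Laplace-Beltrami eigenvectors of both shapes are precomputed and
# ignores area weights for brevity; variable names are illustrative only.
import numpy as np
from scipy.spatial import cKDTree


def pointwise_from_functional(C, evecs_src, evecs_tgt):
    """Recover a point-wise map T: target -> source from a k x k functional map.

    With the convention evecs_tgt[:, :k] @ C ~ evecs_src[T, :k], each target
    vertex is matched to the source vertex with the nearest spectral embedding.
    """
    k = C.shape[0]
    tree = cKDTree(evecs_src[:, :k])
    _, T = tree.query(evecs_tgt[:, :k] @ C)
    return T


def functional_from_pointwise(T, evecs_src, evecs_tgt, k):
    """Re-estimate a k x k functional map from the point-wise map by least squares."""
    C, *_ = np.linalg.lstsq(evecs_tgt[:, :k], evecs_src[T, :k], rcond=None)
    return C


def spectral_upsampling(C0, evecs_src, evecs_tgt, k_max, step=1):
    """Alternate the two conversions while enlarging the spectral basis."""
    C = C0
    for k in range(C0.shape[0] + step, k_max + 1, step):
        T = pointwise_from_functional(C, evecs_src, evecs_tgt)
        C = functional_from_pointwise(T, evecs_src, evecs_tgt, k)
    return pointwise_from_functional(C, evecs_src, evecs_tgt), C
```

Under these assumptions, a call such as `T, C = spectral_upsampling(C0, evecs_src, evecs_tgt, k_max=120, step=2)` would grow an initial small map into a dense point-wise correspondence; the multi-resolution reconstruction described in the abstract is specific to the paper's module and is not reproduced here.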
Publication history
  • Received: 2023-10-30
  • Revised: 2024-07-10
  • Published online: 2024-07-29
  • Issue published: 2024-08-30
