
Intelligent Vehicle Localization Based on Polarized LiDAR Representation and Siamese Network

TAO Qianwen, HU Zhaozheng, WAN Jinjie, HU Huahua, ZHANG Ming

Citation: TAO Qianwen, HU Zhaozheng, WAN Jinjie, HU Huahua, ZHANG Ming. Intelligent Vehicle Localization Based on Polarized LiDAR Representation and Siamese Network[J]. Journal of Electronics & Information Technology, 2023, 45(4): 1163-1172. doi: 10.11999/JEIT220140


doi: 10.11999/JEIT220140
Details
    Author information:

    TAO Qianwen: female, Ph.D. candidate; research interests include 3D computer vision and intelligent vehicle localization

    HU Zhaozheng: male, Professor; research interests include 3D computer vision, intelligent vehicle-infrastructure cooperative systems, and intelligent vehicle localization

    WAN Jinjie: male, M.S. candidate; research interests include machine learning and intelligent vehicle localization

    HU Huahua: male, M.S. candidate; research interests include 3D computer vision and intelligent vehicle localization

    ZHANG Ming: male, M.S. candidate; research interests include 3D computer vision and intelligent vehicle localization

    Corresponding author:

    HU Zhaozheng, zzhu@whut.edu.cn

  • CLC number: TN249; TP242.6


Funds: The Foundation of Wuhan Science and Technology Bureau (2020010601012165, 2020010602011973), The Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0978), The National Key Research and Development Program of China (2021YFB2501100)
  • Abstract: Intelligent vehicle localization based on 3D Light Detection And Ranging (LiDAR) still suffers from problems in map storage, matching efficiency, and accuracy. This paper proposes a lightweight polarized point-cloud map construction method: a multi-channel image model encodes the 3D point cloud into a polarized point-cloud image; a Siamese network extracts and trains polarized point-cloud fingerprints; and the fingerprints are combined with trajectory poses to build the polarized point-cloud map. A localization method based on polarized map matching is also proposed: the Siamese network models the similarity between the query fingerprint and the map fingerprints for fast coarse map matching; a precise map-sequence matching method based on a second-order Hidden Markov Model (HMM2) then retrieves the nearest map node; finally, the vehicle pose is computed by point cloud registration. The method is evaluated on a self-collected field dataset and the public KITTI dataset. Experimental results show that the map matching accuracy exceeds 96%, the mean localization error is about 30 cm, and the method is robust to different types of LiDAR sensors and different scenes.
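The encoding step in the abstract turns an unordered 3D scan into a multi-channel image. The paper's exact channel layout and image resolution are not reproduced here, so the sketch below uses a standard spherical projection of a LiDAR scan into a range/intensity/height image as an illustrative assumption; `h=16` matches a VLP-16's ring count, and `polarize_scan` is a hypothetical helper name.

```python
import numpy as np

def polarize_scan(points, intensities, h=16, w=900,
                  fov_up=15.0, fov_down=-15.0):
    """Project a LiDAR scan into a multi-channel image.

    Channel layout (an illustrative assumption, not the paper's exact
    model): 0 = range, 1 = intensity, 2 = height (z).
    """
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-9
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                     # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)
    img = np.zeros((h, w, 3), dtype=np.float32)
    img[v, u, 0] = r
    img[v, u, 1] = intensities
    img[v, u, 2] = z
    return img

# Example with 1000 synthetic points
rng = np.random.default_rng(0)
pts = rng.uniform(-20, 20, size=(1000, 3))
img = polarize_scan(pts, rng.uniform(0, 1, 1000))
print(img.shape)  # (16, 900, 3)
```

Encoding each scan as a small fixed-size image is what makes the map "lightweight": a full scan reduces to an h×w×c array that a 2D network can consume directly.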
  • Figure 1  Overall flowchart of the proposed algorithm

    Figure 2  Lightweight polarized representation model for 3D point clouds

    Figure 3  PL-Net framework

    Figure 4  Solving the precise map matching problem with the HMM2 framework

    Figure 5  Experimental platform and scenes of the field dataset

    Figure 6  Polarized point-cloud image generated by a Velodyne VLP-16

    Figure 7  SURF matching results of polarized point-cloud images at different intervals

    Figure 8  Intelligent vehicle localization results and comparison on the field dataset

    Figure 9  Experimental routes of the KITTI dataset

    Figure 10  Polarized point-cloud image generated by a Velodyne HDL-64E

    Figure 11  Intelligent vehicle localization results and comparison on the KITTI dataset

    Table 1  Coarse map matching results based on the Siamese network

    $\Delta m$ (m) | Avg. number of map nodes within $\mu$ | Avg. number of coarse-matching results | $\overline{P}_f$ (%)
    1.0 | 10.0 | 3.0 | 95.42
    1.5 |  6.7 | 2.9 | 96.39
    2.0 |  5.0 | 3.1 | 95.70
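Coarse matching as in Table 1 compares a query fingerprint against every map fingerprint and keeps the best-scoring nodes. The PL-Net architecture and fingerprint dimension are not reproduced here, so the sketch below assumes generic fixed-length fingerprints and approximates the Siamese similarity head with cosine similarity; `coarse_match` and the dimension 64 are illustrative assumptions.

```python
import numpy as np

def coarse_match(query_fp, map_fps, top_k=3):
    """Rank map fingerprints by cosine similarity to the query.

    query_fp: (d,) query fingerprint; map_fps: (n, d) map fingerprints.
    Returns indices of the top_k most similar map nodes.
    """
    q = query_fp / np.linalg.norm(query_fp)
    m = map_fps / np.linalg.norm(map_fps, axis=1, keepdims=True)
    sims = m @ q                       # one similarity score per map node
    return np.argsort(sims)[::-1][:top_k]

rng = np.random.default_rng(1)
map_fps = rng.normal(size=(100, 64))              # 100 map nodes
query = map_fps[42] + 0.05 * rng.normal(size=64)  # query near node 42
print(coarse_match(query, map_fps))               # node 42 ranks first
```

Because the fingerprints are short vectors, this retrieval is a single matrix-vector product over the whole map, which is what keeps the coarse stage fast.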

    Table 2  Matching accuracy of polarized point-cloud images based on SURF features

    $\Delta m$ (m) | $\overline{P}_s$ (%)
    1.0 | 85.09
    1.5 | 95.88
    2.0 | 97.71

    Table 3  Precise map-sequence matching results Pa (%) with varying $\sigma_s$ and $\sigma_e$

    $\sigma_s$ \ $\sigma_e$ | 0.5 | 1.0 | 1.5 | 2.0 | 2.5
    0.8 | 97.99 | 97.99 | 98.57 | 98.28 | 97.99
    1.0 | 97.99 | 97.99 | 98.85 | 97.71 | 97.13
    1.2 | 97.71 | 97.99 | 98.28 | 96.85 | 96.28
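The sequence matching whose parameters Table 3 sweeps keeps the map node whose motion history best explains the coarse-match candidates. The paper's model is a second-order HMM; the sketch below is a deliberately simplified first-order Viterbi over per-frame candidates, with Gaussian-style transition ($\sigma_s$) and emission ($\sigma_e$) terms standing in for the paper's exact probabilities. `viterbi_match`, `v_dt` (expected displacement per frame), and the 1D map positions are all illustrative assumptions.

```python
import numpy as np

def viterbi_match(cand_pos, scores, sigma_s=1.0, sigma_e=1.5, v_dt=2.0):
    """Pick one map node per frame from coarse-match candidates.

    cand_pos: list of (k,) arrays, candidate 1D map positions per frame.
    scores:   list of (k,) similarity scores per frame.
    Transition prior: frame-to-frame displacement should be near v_dt.
    First-order simplification of the paper's second-order HMM.
    """
    logp = [s / sigma_e for s in scores]   # emission log-scores (placeholder)
    back, prev = [], logp[0]
    for t in range(1, len(cand_pos)):
        # log transition score between every candidate pair
        d = cand_pos[t][None, :] - cand_pos[t - 1][:, None]
        trans = -((d - v_dt) ** 2) / (2 * sigma_s ** 2)
        total = prev[:, None] + trans
        back.append(np.argmax(total, axis=0))
        prev = np.max(total, axis=0) + logp[t]
    path = [int(np.argmax(prev))]          # backtrack the best path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy example: vehicle moving ~2 m per frame along the map
cand_pos = [np.array([0.0, 50.0]), np.array([2.0, 49.0]), np.array([4.1, 48.0])]
scores = [np.array([0.9, 0.8]), np.array([0.7, 0.9]), np.array([0.9, 0.6])]
print(viterbi_match(cand_pos, scores))  # picks the motion-consistent candidates
```

Even when a single frame's best similarity score points at a distant node (frame 2 above), the motion prior pulls the decoded path back onto the consistent 0, 2.0, 4.1 m track, which is the intuition behind using a sequence model instead of per-frame matching.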

    Table 4  Precise map-sequence matching results Pa (%) with varying map resolution and vehicle speed

    $v$ (m/s) \ $\Delta m$ (m) | 1.5 | 2.0
    5  | 96.22 | 98.46
    10 | 95.04 | 98.09
    15 | 97.99 | 98.85

    Table 5  Comparison of map matching results of different methods

    Method | $\overline{P}_a$ (%) | Avg. time (ms)
    Proposed method | 98.57 | 58.12
    Ref. [11] | 96.29 | 89.40
    Ref. [12] | 87.43 | 35.80
    Ref. [13] | 95.43 | 395.34
    Loop-closure module of LeGO-LOAM [7] | 55.71 | 544.04

    Table 6  Comparison of localization results of different methods

    Method | MAE (m) | RMSE (m) | Avg. localization time (ms)
    Proposed method | 0.36 | 0.39 | 78.47
    Ref. [11] | 0.46 | 0.48 | 109.75
    Ref. [12] | 1.03 | 0.74 | 59.65
    Ref. [13] | 0.55 | 0.61 | 420.92
    LeGO-LOAM [7] | 0.73 | 0.62 | 100.87

    Table 7  Comparison of map matching results of different methods

    Method | Pa (%) | Avg. time (ms)
    Proposed method | 96.55 | 94.23
    Ref. [11] | 90.02 | 250.03
    Ref. [12] | 94.72 | 117.64
    Ref. [13] | 95.30 | 1068.14
    Loop-closure module of LeGO-LOAM [7] | 89.06 | 610.03

    Table 8  Comparison of localization results of different methods

    Method | MAE (m) | RMSE (m) | Avg. localization time (ms)
    Proposed method | 0.18 | 0.52 | 138.70
    Ref. [11] | 0.25 | 2.65 | 294.41
    Ref. [12] | 0.26 | 0.53 | 162.11
    Ref. [13] | 0.23 | 0.46 | 1116.62
    LeGO-LOAM [7] | 2.65 | 2.78 | 104.78
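The final pose in the pipeline comes from registering the query scan against the matched map node's point cloud; the paper uses multi-channel generalized-ICP [17]. As a hedged illustration of the pose-solving core only, the sketch below computes the closed-form least-squares rigid transform (the Kabsch/SVD step) under the assumption of known correspondences, which real (G)ICP variants iterate with nearest-neighbor matching.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch).

    Assumes known point correspondences; (G)ICP wraps this step in an
    iterative nearest-neighbor matching loop.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: recover a known 5-degree yaw and a small translation
rng = np.random.default_rng(2)
src = rng.uniform(-10, 10, (200, 3))
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.3, 0.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

With noiseless correspondences the recovery is exact; the ~30 cm localization errors reported above come from sensor noise, map discretization, and imperfect correspondences rather than from this algebraic step.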
  • [1] XIAO Zhu, CHEN Yanxun, ALAZAB M, et al. Trajectory data acquisition via private car positioning based on tightly-coupled GPS/OBD integration in urban environments[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(7): 9680–9691. doi: 10.1109/TITS.2021.3105550
    [2] China Association for Science and Technology. CAST releases the major scientific questions and engineering technical problems of 2020[EB/OL]. https://www.cast.org.cn/art/2020/8/16/art_90_130822.html, 2020.
    [3] GUO Xiansheng, ANSARI N, HU Fangzi, et al. A survey on fusion-based indoor positioning[J]. IEEE Communications Surveys & Tutorials, 2020, 22(1): 566–594. doi: 10.1109/COMST.2019.2951036
    [4] LIU Guozhong and HU Zhaozheng. Fast loop closure detection based on holistic features from SURF and ORB[J]. Robot, 2017, 39(1): 36–45. doi: 10.13973/j.cnki.robot.2017.0036
    [5] LI Yicheng, HU Zhaozheng, WANG Xianglong, et al. Construction of a visual map based on road scenarios for intelligent vehicle localization[J]. China Journal of Highway and Transport, 2018, 31(11): 138–146,213. doi: 10.3969/j.issn.1001-7372.2018.11.015
    [6] YAO Meng, JIA Kebin, and SIU Wanchi. Learning-based localization with monocular camera for light-rail system[J]. Journal of Electronics & Information Technology, 2018, 40(9): 2127–2134. doi: 10.11999/JEIT171017
    [7] SHAN Tixiao and ENGLOT B. LeGO-LOAM: Lightweight and ground-optimized LiDAR odometry and mapping on variable terrain[C]. Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018: 4758–4765.
    [8] KOIDE K, MIURA J, and MENEGATTI E. A portable three-dimensional LiDAR-based system for long-term and wide-area people behavior measurement[J]. International Journal of Advanced Robotic Systems, 2019, 16(2): 72988141984153. doi: 10.1177/1729881419841532
    [9] WAN Guowei, YANG Xiaolong, CAI Renlan, et al. Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes[C]. Proceedings of 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018: 4670–4677.
    [10] HU Zhaozheng, LIU Jiahui, HUANG Gang, et al. Integration of WiFi, laser, and map for robot indoor localization[J]. Journal of Electronics & Information Technology, 2021, 43(8): 2308–2316. doi: 10.11999/JEIT200671
    [11] YIN Huan, WANG Yue, DING Xiaqing, et al. 3D LiDAR-based global localization using Siamese neural network[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(4): 1380–1392. doi: 10.1109/TITS.2019.2905046
    [12] KIM G, CHOI S, and KIM A. Scan context++: Structural place recognition robust to rotation and lateral variations in urban environments[J]. IEEE Transactions on Robotics, 2022, 38(3): 1856–1874. doi: 10.1109/TRO.2021.3116424
    [13] CHEN Xieyuanli, LÄBE T, MILIOTO A, et al. OverlapNet: A siamese network for computing LiDAR scan similarity with applications to loop closing and localization[J]. Autonomous Robots, 2022, 46(1): 61–81. doi: 10.1007/s10514-021-09999-0
    [14] BROMLEY J, GUYON I, LECUN Y, et al. Signature verification using a “Siamese” time delay neural network[C]. Proceedings of the 6th International Conference on Neural Information Processing Systems (NIPS), Denver, USA, 1993: 737–744.
    [15] BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding, 2008, 110(3): 346–359. doi: 10.1016/j.cviu.2007.09.014
    [16] FISCHLER M A and BOLLES R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381–395. doi: 10.1145/358669.358692
    [17] SERVOS J and WASLANDER S L. Multi-channel generalized-ICP: A robust framework for multi-channel scan registration[J]. Robotics and Autonomous Systems, 2017, 87: 247–257. doi: 10.1016/j.robot.2016.10.016
    [18] GEIGER A, LENZ P, and URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]. Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, USA, 2012: 3354–3361.
Publication history
  • Received: 2022-02-15
  • Revised: 2022-06-13
  • Available online: 2022-06-22
  • Issue published: 2023-04-10
