Volume 40, Issue 9, Aug. 2018
Citation: Meng YAO, Kebin JIA, Wanchi SIU. Learning-based Localization with Monocular Camera for Light-rail System[J]. Journal of Electronics & Information Technology, 2018, 40(9): 2127-2134. doi: 10.11999/JEIT171017

Learning-based Localization with Monocular Camera for Light-rail System

doi: 10.11999/JEIT171017
Funds:  The National Natural Science Foundation of China (61672064), The Beijing Natural Science Foundation (KZ201610005007)
  • Received Date: 2017-10-31
  • Rev Recd Date: 2018-05-21
  • Available Online: 2018-07-12
  • Publish Date: 2018-09-01
  • Abstract: Vision-based scene recognition and localization modules are widely used in vehicle safety systems. To address the problems of large training-data requirements, high matching complexity, and low tracking precision, this paper proposes a new scene-recognition method based on local key regions and key frames that meets real-time requirements with high accuracy. First, an unsupervised method extracts the salient regions of a single reference sequence captured by a monocular camera as key regions, and binary features with low correlation are extracted within these regions to improve scene-matching accuracy while reducing the computational cost of feature generation and matching. Second, key frames in the reference sequence are selected according to a discrimination score, which narrows the retrieval range of the tracking module and improves efficiency. Field tests are carried out on real data from the Hong Kong light-rail system and on the open Nordland test dataset. The experimental results show that the proposed method achieves fast matching and is 9.8% higher in precision than the global-feature-based SeqSLAM.
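As a rough illustration of the pipeline summarized above, the following is a minimal Python sketch, not the authors' implementation: OpenCV ORB stands in for the paper's low-correlation binary features, hand-supplied (x, y, w, h) boxes stand in for the unsupervised key regions, and a naive pairwise-similarity test stands in for the discrimination score; all thresholds are illustrative assumptions.

import cv2
import numpy as np

# Illustrative sketch only: ORB stands in for the paper's low-correlation
# binary features, the (x, y, w, h) boxes stand in for the unsupervised key
# regions, and the pairwise-similarity test stands in for the discrimination
# score. Thresholds are arbitrary.

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def region_descriptors(gray, regions):
    """Compute binary ORB descriptors only inside the given key regions."""
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for (x, y, w, h) in regions:
        mask[y:y + h, x:x + w] = 255
    _, desc = orb.detectAndCompute(gray, mask)
    return desc

def match_score(desc_q, desc_r, max_dist=40):
    """Fraction of query descriptors with a close Hamming match in the reference."""
    if desc_q is None or desc_r is None:
        return 0.0
    matches = matcher.match(desc_q, desc_r)
    good = [m for m in matches if m.distance < max_dist]
    return len(good) / max(len(desc_q), 1)

def select_key_frames(ref_descs, threshold=0.3):
    """Keep reference frames that are distinctive: their best similarity to any
    other reference frame stays below the threshold."""
    keep = []
    for i, d_i in enumerate(ref_descs):
        best = max((match_score(d_i, d_j)
                    for j, d_j in enumerate(ref_descs) if j != i), default=0.0)
        if best < threshold:
            keep.append(i)
    return keep

def localize(query_desc, key_frame_descs):
    """Return the index and score of the best-matching key frame."""
    scores = [match_score(query_desc, d) for d in key_frame_descs]
    best = int(np.argmax(scores))
    return best, scores[best]

A query frame would then be localized by computing its region-restricted descriptors and calling localize against the key frames retained by select_key_frames; the paper's full method additionally exploits sequence information, which this sketch omits.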
References

    LOWRY S, SÜNDERHAUF N, NEWMAN P, et al. Visual place recognition: A survey[J]. IEEE Transactions on Robotics, 2016, 32(1): 1–19 doi: 10.1109/TRO.2015.2496823
    DAYOUB F, MORRIS T, BEN U, et al. Vision-only autonomous navigation using topometric maps[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2013: 1923–1929.
    MURILLO A C, SINGH G, KOSECKÁ J, et al. Localization in urban environments using a panoramic gist descriptor[J]. IEEE Transactions on Robotics, 2013, 29(1): 146–160 doi: 10.1109/TRO.2012.2220211
    MADDERN W and VIDAS S. Towards robust night and day place recognition using visible and thermal imaging[OL]. https://eprints.qut.edu.au/52646/.
    MCMANUS C, FURGALE P, and BARFOOT T D. Towards lighting-invariant visual navigation: An appearance-based approach using scanning laser-rangefinders[J]. Robotics and Autonomous Systems, 2013, 61(8): 836–852 doi: 10.1016/j.robot.2013.04.008
    LINEGAR C, CHURCHILL W, and NEWMAN P. Made to measure: Bespoke landmarks for 24-hour, all-weather localisation with a camera[C]. IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 2016: 787–794.
    LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91–110 doi: 10.1023/B:VISI.0000029664.99615.94
    BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding, 2008, 110(3): 346–359 doi: 10.1016/j.cviu.2007.09.014
    ROSTEN E, PORTER R, and DRUMMOND T. Faster and better: A machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1): 105–119 doi: 10.1109/TPAMI.2008.275
    CALONDER M, LEPETIT V, STRECHA C, et al. BRIEF: Binary robust independent elementary features[C]. Proceedings of the 11th European Conference on Computer Vision, Crete, Greece, 2010: 778–792.
    RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF[C]. Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 2011: 2564–2571.
    MCMANUS C, UPCROFT B, and NEWMAN P. Scene signatures: Localised and point-less features for localisation[C]. Robotics: Science and Systems, Berkeley, USA, 2014: 1–9.
    HAN Fei, YANG Xue, DENG Yiming, et al. SRAL: Shared representative appearance learning for long-term visual place recognition[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 1172–1179 doi: 10.1109/LRA.2017.2662061
    CARLEVARIS-BIANCO N and EUSTICE R M. Learning visual feature descriptors for dynamic lighting conditions[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, USA, 2014: 2769–2776.
    KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
    SÜNDERHAUF N, SHIRAZI S, JACOBSON A, et al. Place recognition with ConvNet landmarks: Viewpoint-robust, condition-robust, training-free[C]. Proceedings of Robotics: Science and Systems XI, Rome, Italy, 2015.
    ZITNICK C L and DOLLÁR P. Edge boxes: Locating object proposals from edges[C]. Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 391–405.
    ARROYO R, ALCANTARILLA P F, BERGASA L M, et al. Fusion and binarization of CNN features for robust topological localization across seasons[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, 2016: 4656–4663.
    LINEGAR C, CHURCHILL W, and NEWMAN P. Made to measure: Bespoke landmarks for 24-hour, all-weather localisation with a camera[C]. IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 2016: 787–794.
    DALAL N and TRIGGS B. Histograms of oriented gradients for human detection[C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, 2005: 886–893.
    MILFORD M J and WYETH G F. SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights[C]. IEEE International Conference on Robotics and Automation, Saint Paul, USA, 2012: 1643–1649.
    BRESSON G, ALSAYED Z, LI Yu, et al. Simultaneous localization and mapping: A survey of current trends in autonomous driving[J]. IEEE Transactions on Intelligent Vehicles, 2017, 2(3): 194–220 doi: 10.1109/TIV.2017.2749181
    KIM P, COLTIN B, ALEXANDROV O, et al. Robust visual localization in changing lighting conditions[C]. IEEE International Conference on Robotics and Automation, Singapore, 2017: 5447–5452.
    BAI Dongdong, WANG Chaoqun, ZHANG Bo, et al. Sequence searching with CNN features for robust and fast visual place recognition[J]. Computers and Graphics, 2018, 70: 270–280 doi: 10.1016/j.cag.2017.07.019