Fast Light Field Camera Calibration for Industrial Inspection

WANG Xingzheng, LIU Jiehao, WEI Guoyao, CHEN Songwei

Citation: WANG Xingzheng, LIU Jiehao, WEI Guoyao, CHEN Songwei. Fast Light Field Camera Calibration for Industrial Inspection[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1530-1538. doi: 10.11999/JEIT211174

doi: 10.11999/JEIT211174
Funds: The Natural Science Foundation of Guangdong Province (2020A1515011559, 2021A1515012287), The Shenzhen Science and Technology Research Fund (JCYJ20180306174120445, 20200810150441003, ZDYBH201900000002)
Details
    About the authors:

    WANG Xingzheng: male, born in 1983, Associate Professor. His research interests include computational photography, light field imaging and analysis, machine vision inspection, and medical image analysis and recognition

    LIU Jiehao: male, born in 1996, M.S. candidate. His research interests include light field data processing and light field camera calibration

    WEI Guoyao: male, born in 1997, M.S. candidate. His research interests include binocular camera calibration and object detection

    CHEN Songwei: male, born in 1999, M.S. candidate. His research interests include light field data processing and salient object detection

    Corresponding author:

    WANG Xingzheng, xingzheng.wang@szu.edu.cn

  • CLC number: TP274

  • Abstract: Because of the large volume of light field data, existing light field camera calibration algorithms are slow: they cannot quickly recalibrate the parameter changes of light field cameras used in industrial inspection, which lowers inspection efficiency. Based on a sparse light field imaging model, this paper optimizes the light field data and proposes a fast light field camera calibration algorithm. The algorithm uses sharpness as the image quality metric to select high-quality, representative sparse views from the light field data and construct a sparse light field; it then uses the sparse light field to solve for initial camera parameters and refines them to obtain the optimal parameters. Experimental results show that, compared with the best existing calibration algorithm, the proposed method improves the average calibration speed by more than 70%, reducing the average calibration time over five existing datasets from 101.27 s to 30.99 s, while keeping calibration accuracy at the state-of-the-art level, with a calibration error of only 0.0714 mm on the public dataset PlenCalCVPR2013DatasetA.
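    To make the view-selection step concrete, below is a minimal Python sketch (assuming NumPy and OpenCV are available) of one way to build a sparse light field from decoded sub-views. The variance-of-Laplacian sharpness measure, the select_sparse_views helper, and the center-plus-stride selection rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch only (not the authors' released code): score every
# sub-aperture view with a simple sharpness measure, then keep a small
# symmetric grid of views around the sharpest angular position to form
# the sparse light field used for calibration.
import numpy as np
import cv2  # assumed available; used for the variance-of-Laplacian measure


def sharpness(view: np.ndarray) -> float:
    """Variance of the Laplacian as a proxy for sub-view image quality."""
    gray = cv2.cvtColor(view, cv2.COLOR_BGR2GRAY) if view.ndim == 3 else view
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def select_sparse_views(subviews: np.ndarray, grid: int = 3, stride: int = 1):
    """Pick a grid x grid sparse light field from a (U, V, H, W[, C]) array.

    The grid is centered on the angular position with the highest sharpness
    score (typically near the central view), with an angular spacing of
    `stride` between kept views; e.g. grid=3, stride=2 keeps 3x3 views
    spaced two angular steps apart.
    """
    U, V = subviews.shape[:2]
    scores = np.array([[sharpness(subviews[u, v]) for v in range(V)]
                       for u in range(U)])
    cu, cv_ = np.unravel_index(np.argmax(scores), scores.shape)

    # Clamp the center so the whole grid stays inside the angular range.
    half = (grid // 2) * stride
    cu = int(np.clip(cu, half, U - 1 - half))
    cv_ = int(np.clip(cv_, half, V - 1 - half))

    offsets = np.arange(-(grid // 2), grid // 2 + 1) * stride
    return [(cu + du, cv_ + dv) for du in offsets for dv in offsets]
```

    For a 9×9 decoded light field stored as a (9, 9, H, W, 3) array, select_sparse_views(lf, grid=3, stride=2) would return the angular indices of a 3×3 sub-view set with an angular spacing of 2, analogous to the "3×3 s=2" scheme in Table 2; those views could then be passed to a standard light field calibration pipeline, such as the one in Ref. [17], for initial parameter estimation and refinement.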
  • Figure 1  Application of the light field camera in industrial inspection: PCB inspection

    Figure 2  Formation of the light field image, micro-lens sub-images, and light field sub-views

    Figure 3  Light field camera, virtual camera array model, and sub-view quality

    Figure 4  Three sparse light field imaging models

    Figure 5  Checkerboard light field image datasets

    Figure 6  Calibration time versus the number of views

    Figure 7  Comparison of the running time of the four stages in light field camera calibration

    Table 1  Calibration datasets

    Dataset  Images  Size (mm)   Corners  Image size (pixels)  Views
    A        10      3.61×3.61   19×19    3280×3280            9×9
    B        10      3.61×3.61   19×19    3280×3280            9×9
    C        12      7.22×7.22   19×19    3280×3280            9×9
    D        10      7.22×7.22   19×19    3280×3280            9×9
    E        17      35.0×35.1   6×8      3280×3280            9×9

    Table 2  Ray reprojection error (mm) and calibration time (s) of different sparse light field schemes

    Dataset  9×9              7×7             5×5             3×3 s=1         3×3 s=2
    A(10)    0.0749 (109.43)  0.0733 (83.3)   0.0739 (34.85)  0.0714 (19.73)  0.0728 (17.87)
    B(10)    0.0455 (102.47)  0.0439 (73.18)  0.0419 (36.14)  0.0403 (16.52)  0.0432 (16.46)
    C(12)    0.0917 (120.28)  0.0901 (90.34)  0.0872 (42.44)  0.0910 (18.72)  0.0825 (19.09)
    D(10)    0.0845 (131.38)  0.0805 (74.49)  0.0805 (35.36)  0.0760 (16.56)  0.0811 (15.20)
    E(17)    0.1941 (42.81)   0.1883 (46.09)  0.1765 (19.47)  0.1581 (8.39)   0.1838 (7.87)

    Table 3  Calibration errors of the 9×9 dense light field calibration method at different sub-view positions

    Position  1       2       3       4       5       6       7       8       9
    1         0.1271  0.0936  0.0783  0.0826  0.0842  0.0827  0.0828  0.1047  0.1586
    2         0.0854  0.0814  0.0837  0.0821  0.0811  0.0824  0.0835  0.0810  0.0941
    3         0.0795  0.0823  0.0800  0.0762  0.0748  0.0768  0.0811  0.0821  0.0809
    4         0.0789  0.0825  0.0799  0.0732  0.0719  0.0743  0.0791  0.0812  0.0777
    5         0.0785  0.0818  0.0781  0.0728  0.0718  0.0747  0.0797  0.0815  0.0769
    6         0.0798  0.0816  0.0808  0.0758  0.0755  0.0785  0.0857  0.0859  0.0788
    7         0.0824  0.0845  0.0841  0.0815  0.0820  0.0851  0.0903  0.0847  0.0833
    8         0.1009  0.0830  0.0869  0.0909  0.0893  0.0896  0.0868  0.0824  0.0979
    9         0.2154  0.1390  0.0876  0.0871  0.0877  0.0855  0.0839  0.1219  0.1937

    Table 4  Calibration errors of the 3×3 sparse light field calibration method at different sub-view positions

    Position  1       2       3       4       5       6       7       8       9
    1         0.1831  0.1290  0.0869  0.0796  0.0783  0.0803  0.0949  0.1447  0.2174
    2         0.1178  0.0829  0.0765  0.0746  0.0740  0.0741  0.0760  0.0879  0.1363
    3         0.0933  0.0761  0.0732  0.0718  0.0714  0.0714  0.0723  0.0770  0.1051
    4         0.0887  0.0764  0.0738  0.0707  0.0705  0.0709  0.0716  0.0739  0.0954
    5         0.0884  0.0755  0.0732  0.0707  0.0706  0.0714  0.0724  0.0740  0.0934
    6         0.0910  0.0746  0.0742  0.0723  0.0726  0.0734  0.0758  0.0774  0.0972
    7         0.1010  0.0783  0.0751  0.0746  0.0753  0.0760  0.0777  0.0797  0.1098
    8         0.1442  0.0923  0.0803  0.0803  0.0782  0.0772  0.0777  0.0932  0.1470
    9         0.2773  0.1872  0.1106  0.0902  0.0828  0.0848  0.1039  0.1723  0.2604

    Table 5  Comparison of ray reprojection errors (mm) of different methods

    Dataset  Ref. [15]  Ref. [16]  Ref. [17]  3×3 sparse light field
    A(10)    0.218      0.214      0.0749     0.0728
    B(10)    -          -          0.0917     0.0882  (C)
    B(10)    0.147      0.142      0.0455     0.0432
    C(12)    -          -          0.0917     0.0882
    D(10)    -          -          0.0845     0.0811
    E(17)    0.558      0.448      0.194      0.183

    Table 6  Comparison of calibration time (s) of different methods

    Dataset  Ref. [15]  Ref. [16]  Ref. [17]  3×3 sparse light field
    A(10)    1928       183        100.05     32.22
    B(10)    4427       416        100.93     31.25
    C(12)    -          -          90.91      34.08
    D(10)    -          -          113.49     30.78
    E(17)    1355       87         52.66      26.64
  • [1] ZHOU Zhiliang. Research on light field imaging technology[D]. Ph.D. dissertation, University of Science and Technology of China, 2012.
    [2] ZHANG Chunping and WANG Qing. Survey on imaging model and calibration of light field camera[J]. Chinese Journal of Lasers, 2016, 43(6): 264–275. doi: 10.3788/CJL201643.0609004
    [3] FANG Lu and DAI Qionghai. Computational light field imaging[J]. Acta Optica Sinica, 2020, 40(1): 0111001. doi: 10.3788/AOS202040.0111001
    [4] HEINZE C, SPYROPOULOS S, HUSSMANN S, et al. Automated robust metric calibration algorithm for multifocus plenoptic cameras[J]. IEEE Transactions on Instrumentation and Measurement, 2016, 65(5): 1197–1205. doi: 10.1109/TIM.2015.2507412
    [5] ILLGNER K, RESTREPO J, JAISWAL S P, et al. Lightfield imaging for industrial applications[C]. Proceedings of SPIE, 2020, 11525: 1152526.
    [6] LIU Yan and LI Tengfei. Research of the improvement of Zhang's camera calibration method[J]. Optical Technique, 2014, 40(6): 565–570. doi: 10.13741/j.cnki.11-1879/o4.2014.06.017
    [7] CHUCHVARA A, BARSI A, and GOTCHEV A. Fast and accurate depth estimation from sparse light fields[J]. IEEE Transactions on Image Processing, 2020, 29: 2492–2506. doi: 10.1109/TIP.2019.2959233
    [8] HE Bingwei and LI Y F. A novel method for camera calibration using vanishing points[C]. 14th International Conference on Mechatronics and Machine Vision in Practice, Xiamen, China, 2007: 44–47.
    [9] MENG Xianzhe, NIU Shaozhang, WU Xiaomei, et al. Detecting asymmetric cropping based on camera calibration[J]. Journal of Electronics & Information Technology, 2012, 34(10): 2409–2414. doi: 10.3724/SP.J.1146.2012.00357
    [10] LIU Bixia, LI Shaozi, GUO Feng, et al. A new easy fast camera self-calibration technique[J]. Computer Engineering and Science, 2011, 33(1): 88–93. doi: 10.3969/j.issn.1007-130X.2011.01.017
    [11] CHEN Zhaozheng and CHEN Qimei. Video contrast visibility detection algorithm and its implementation based on camera self-calibration[J]. Journal of Electronics & Information Technology, 2010, 32(12): 2907–2912. doi: 10.3724/SP.J.1146.2009.01630
    [12] SU Pochang, SHEN Ju, XU Wanxin, et al. A fast and robust extrinsic calibration for RGB-D camera networks[J]. Sensors, 2018, 18(1): 235. doi: 10.3390/s18010235
    [13] MIKHELSON I V, LEE P G, SAHAKIAN A V, et al. Automatic, fast, online calibration between depth and color cameras[J]. Journal of Visual Communication and Image Representation, 2014, 25(1): 218–226. doi: 10.1016/j.jvcir.2013.03.010
    [14] GARAU N, DE NATALE F G B, and CONCI N. Fast automatic camera network calibration through human mesh recovery[J]. Journal of Real-Time Image Processing, 2020, 17(6): 1757–1768. doi: 10.1007/s11554-020-01002-w
    [15] BOK Y, JEON H G, and KWEON I S. Geometric calibration of micro-lens-based light field cameras using line features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 287–300. doi: 10.1109/TPAMI.2016.2541145
    [16] LIU Yuxuan, MO Fan, ALEKSANDROV M, et al. Accurate calibration of standard plenoptic cameras using corner features from raw images[J]. Optics Express, 2021, 29(1): 158–169. doi: 10.1364/OE.405168
    [17] DANSEREAU D G, PIZARRO O, and WILLIAMS S B. Decoding, calibration and rectification for Lenselet-based Plenoptic cameras[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 1027–1034.
    [18] DIGUMARTI S T, DANIEL J, RAVENDRAN A, et al. Unsupervised learning of depth estimation and visual odometry for sparse light field cameras[C]. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, 2021: 278–285.
    [19] MONTEIRO N B, BARRETO J P, and GASPAR J A. Standard plenoptic cameras mapping to camera arrays and calibration based on DLT[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(11): 4090–4099. doi: 10.1109/TCSVT.2019.2954305
Figures (7) / Tables (6)
Publication history
  • Received:  2021-10-26
  • Revised:  2021-12-21
  • Accepted:  2021-12-28
  • Published online:  2022-01-12
  • Issue published:  2022-05-25
