Research on Bidirectional Attenuation Loss Method for Rotating Object Detection in Remote Sensing Image

ZHANG Zheng, MA Yubo, LIU Chang’an, TIAN Qing

Citation: ZHANG Zheng, MA Yubo, LIU Chang’an, TIAN Qing. Research on Bidirectional Attenuation Loss Method for Rotating Object Detection in Remote Sensing Image[J]. Journal of Electronics & Information Technology, 2023, 45(10): 3578-3586. doi: 10.11999/JEIT220991

doi: 10.11999/JEIT220991
Funds: National Key Research and Development Program (2020YFB1600704)
Details
    Author information:

    ZHANG Zheng: male, Associate Researcher; research interests include artificial intelligence and image processing

    MA Yubo: male, Master's student; research interests include artificial intelligence and image processing

    LIU Chang’an: male, Professor; research interests include artificial intelligence and image processing

    TIAN Qing: male, Professor; research interests include artificial intelligence and image processing

    Corresponding author:

    MA Yubo, 2020316210116@mail.ncut.edu.cn

  • CLC number: TN911; TP753

  • Abstract: Object detection in remote sensing images is one of the hot topics in computer vision. To cope with the complex backgrounds and arbitrarily oriented objects in remote sensing images, mainstream detection models adopt rotated detection. However, the localization loss functions used for rotated detection generally do not vary consistently with the actual skew Intersection-over-Union (IoU). To address this problem, this paper proposes a new bidirectional attenuation loss for rotated object detection. Specifically, the method approximates the skew IoU with a product of Gaussians and attenuates this product in two directions according to the deviation of the predicted position. The bidirectional attenuation loss reflects the change in skew IoU caused by positional deviation, its trend is more consistent with that of the skew IoU, and it performs better than other related methods. Experiments on the DOTAv1.0 dataset show that the proposed method is effective with various base functions and at different precision levels.
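    The abstract only outlines the idea of approximating the skew IoU with a Gaussian product and then attenuating it in two directions. The minimal sketch below illustrates that idea under explicit assumptions: rotated boxes are converted to 2-D Gaussians, a KFIoU-style overlap is computed from the product of the two Gaussians, and a hypothetical linear attenuation decays the score along the long-side and short-side directions of the ground truth (cf. Figure 5), with a farthest-distance scale lam (cf. Table 1). The function names, the exact attenuation formula, and the use of lam are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def box_to_gaussian(cx, cy, w, h, theta):
        # Convert a rotated box (center, width, height, angle in radians) into a
        # 2-D Gaussian: mean = box center, covariance derived from the box shape.
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        S = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
        return np.array([cx, cy]), R @ S @ R.T

    def kalman_overlap(cov_p, cov_g):
        # KFIoU-style surrogate: the product of two Gaussians yields a Gaussian whose
        # covariance acts as the "intersection"; the score is v_i / (v_p + v_g - v_i).
        # This ratio tops out near 1/3 even for a perfect match, so it is only a
        # surrogate for the skew IoU; center deviation is handled by the attenuation below.
        cov_i = cov_p @ np.linalg.inv(cov_p + cov_g) @ cov_g
        vol = lambda c: 4.0 * np.sqrt(np.linalg.det(c))   # area of the box matching the covariance
        v_p, v_g, v_i = vol(cov_p), vol(cov_g), vol(cov_i)
        return v_i / (v_p + v_g - v_i + 1e-9)

    def bidirectional_attenuation(pred, gt, lam=1.75):
        # Hypothetical attenuation: project the center offset onto the short-side (x) and
        # long-side (y) axes of the ground truth and decay linearly in both directions,
        # reaching zero at a farthest distance of lam * (side length / 2).
        (cxp, cyp, *_), (cxg, cyg, wg, hg, tg) = pred, gt
        axes = np.array([[np.cos(tg),  np.sin(tg)],    # short-side (width) direction
                         [-np.sin(tg), np.cos(tg)]])   # long-side (height) direction
        dx, dy = axes @ np.array([cxp - cxg, cyp - cyg])
        att_x = max(0.0, 1.0 - abs(dx) / (lam * wg / 2.0))
        att_y = max(0.0, 1.0 - abs(dy) / (lam * hg / 2.0))
        return att_x * att_y

    def ba_score(pred, gt, lam=1.75):
        # Attenuated overlap score x; a base function of x (see Table 2) then gives the loss.
        _, cov_p = box_to_gaussian(*pred)
        _, cov_g = box_to_gaussian(*gt)
        return kalman_overlap(cov_p, cov_g) * bidirectional_attenuation(pred, gt, lam)

    # Example: a slightly shifted and rotated prediction against its ground-truth box.
    gt = (0.0, 0.0, 20.0, 60.0, 0.0)                 # (cx, cy, w, h, angle in radians)
    pred = (3.0, 5.0, 22.0, 55.0, np.deg2rad(10.0))
    print(ba_score(pred, gt))                        # shrinks toward 0 as the offset grows
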
  • Figure 1  Skew-IoU approximation process of the KFIoU loss

    Figure 2  Skew-IoU behavior under two offset cases

    Figure 3  Skew-IoU approximation process of the bidirectional attenuation loss

    Figure 4  Skew IoU versus the offset of the prediction

    Figure 5  Projection of the center-point offset vector onto the long-side (y-axis) and short-side (x-axis) directions of the ground truth

    Figure 6  Farthest attenuation distance

    Figure 7  Attenuation coefficient versus the offset

    Figure 8  Distribution of object sizes in the dataset

    Figure 9  RetinaNet architecture combined with the bidirectional attenuation loss

    Figure 10  Comparison of the difference between the attenuation coefficient and the normalized skew IoU

    Figure 11  Visualization of detection results on DOTAv1.0

    Table 1  Effect of different $\lambda$ values on the detection performance of RetinaNet

    $\lambda$    mAP(%)
    1.60         70.59
    1.70         70.71
    1.75         71.24
    1.80         71.22
    1.90         70.12

    Table 2  Comparison of object detection accuracy with different loss formulations

    Independent variable $x$    $-\ln(x+\varepsilon)$    $1-x$           ${\rm e}^{1-x}-1$
    KFIoU                       69.35                    70.07           70.30
    BAIoU                       71.01(+1.66)             70.93(+0.86)    71.24(+0.94)
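    Table 2 varies only the base function that maps the IoU-like score $x$ to a loss value. The following is a minimal sketch of the three candidates, assuming $x\in(0,1]$ and a small constant $\varepsilon$ for numerical stability (names are illustrative):

    import math

    EPS = 1e-6  # small constant assumed for the -ln(x + eps) variant

    def loss_log(x):     # -ln(x + eps): very steep as x -> 0, nearly flat near x = 1
        return -math.log(x + EPS)

    def loss_linear(x):  # 1 - x: constant gradient over the whole range
        return 1.0 - x

    def loss_exp(x):     # e^(1 - x) - 1: zero at x = 1, grows faster as x drops
        return math.exp(1.0 - x) - 1.0

    for x in (0.9, 0.5, 0.1):
        print(x, loss_log(x), loss_linear(x), loss_exp(x))
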

    Table 3  Comparison of object detection accuracy with different loss functions

    Loss function    AP50            AP75            AP50:95
    KFIoU            70.30           33.16           34.45
    BAIoU            71.24(+0.94)    36.58(+3.42)    37.06(+2.61)
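    For reference, AP50 and AP75 denote the average precision at skew-IoU thresholds of 0.50 and 0.75, and AP50:95 denotes AP averaged over thresholds from 0.50 to 0.95 in steps of 0.05 (the COCO-style metric [23]). A sketch of that averaging, where ap_at is a hypothetical callable returning the AP at a given threshold:

    import numpy as np

    def ap_50_95(ap_at):
        # Average AP over IoU thresholds 0.50, 0.55, ..., 0.95 (ten values, COCO-style).
        thresholds = np.linspace(0.50, 0.95, 10)
        return float(np.mean([ap_at(t) for t in thresholds]))

    # Example with a dummy curve: AP falls as the IoU threshold tightens.
    print(ap_50_95(lambda t: max(0.0, 0.71 - 1.2 * (t - 0.5))))
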

    Table 4  Comparison of detection accuracy between the bidirectional attenuation loss and classical horizontal-box loss functions

    Loss function    mAP
    SmoothL1         55.03
    GIoU             55.54
    BAIoU            55.44

    Table 5  Detection results of different loss functions on five typical object categories in DOTAv1.0 (%)

    Model        Loss function    Plane    Small vehicle    Large vehicle    –        Helicopter    mAP
    RetinaNet    SmoothL1[7]      70.90    56.30            48.40            64.80    40.31         56.14
                 GWD[18]          79.98    68.55            66.53            74.18    52.75         68.39
                 KFIoU            83.56    74.46            67.69            76.43    59.36         70.30
                 BAIoU            85.13    74.01            70.29            76.02    60.77         71.24
    R3Det        DCL[16]          85.14    71.02            79.56            85.62    54.45         75.15
                 KLD[19]          87.41    73.43            83.07            87.00    60.73         78.32
                 KFIoU            87.61    75.65            84.06            88.38    62.01         79.54
                 BAIoU            87.70    76.45            85.02            89.22    62.34         80.14
  • [1] LI Xiaobo, SUN Wenfang, and LI Li. Ocean moving ship detection method for remote sensing satellite in geostationary orbit[J]. Journal of Electronics & Information Technology, 2015, 37(8): 1862–1867. doi: 10.11999/JEIT141615
    [2] CHEN Qi, LU Jun, ZHAO Lingjun, et al. Harbor detection method of SAR remote sensing images based on feature[J]. Journal of Electronics & Information Technology, 2010, 32(12): 2873–2878. doi: 10.3724/SP.J.1146.2010.00079
    [3] LI Xuan and LIU Yunqing. Oil tank detection in optical remote sensing imagery based on quasi-circular shadow[J]. Journal of Electronics & Information Technology, 2016, 38(6): 1489–1495. doi: 10.11999/JEIT151334
    [4] ZHANG Zheng, MIAO Chunle, LIU Chang’an, et al. DCS-TransUperNet: Road segmentation network based on CSwin transformer with dual resolution[J]. Applied Sciences, 2022, 12(7): 3511. doi: 10.3390/app12073511
    [5] ZHANG Zheng, XU Zhiwei, LIU Chang’an, et al. Cloudformer: Supplementary aggregation feature and mask-classification network for cloud detection[J]. Applied Sciences, 2022, 12(7): 3221. doi: 10.3390/app12073221
    [6] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
    [7] GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448.
    [8] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [9] REDMON J and FARHADI A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv: 1804.02767, 2018.
    [10] BOCHKOVSKIY A, WANG C Y, and LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv: 2004.10934, 2020.
    [11] SHAO Yanhua, ZHANG Duo, CHU Hongyu, et al. A review of YOLO object detection based on deep learning[J]. Journal of Electronics & Information Technology, 2022, 44(10): 3697–3708. doi: 10.11999/JEIT210790
    [12] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
    [13] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327. doi: 10.1109/TPAMI.2018.2858826
    [14] DING Jian, XUE Nan, LONG Yang, et al. Learning RoI transformer for oriented object detection in aerial images[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 2844–2853.
    [15] YANG Xue, YAN Junchi, FENG Ziming, et al. R3Det: Refined single-stage detector with feature refinement for rotating object[C]. Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021: 3163–3171.
    [16] YANG Xue, HOU Liping, ZHOU Yue, et al. Dense label encoding for boundary discontinuity free rotation detection[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 15814–15824.
    [17] CHEN Zhiming, CHEN Ke’an, LIN Weiyao, et al. PIoU loss: Towards accurate oriented object detection in complex environments[C]. 16th European Conference on Computer Vision, Glasgow, UK, 2020: 195–211.
    [18] YANG Xue, YAN Junchi, MING Qi, et al. Rethinking rotated object detection with Gaussian Wasserstein distance loss[C]. Proceedings of the 38th International Conference on Machine Learning, 2021: 11830–11841.
    [19] YANG Xue, YANG Xiaojiang, YANG Jirui, et al. Learning high-precision bounding box for rotated object detection via Kullback-Leibler divergence[C]. 35th Conference on Neural Information Processing Systems, 2021.
    [20] YANG Xue, ZHOU Yue, ZHANG Gefan, et al. The KFIoU loss for rotated object detection[J]. arXiv preprint arXiv: 2201.12558, 2022.
    [21] XIA Guisong, BAI Xiang, DING Jian, et al. DOTA: A large-scale dataset for object detection in aerial images[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3974–3983.
    [22] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303–338. doi: 10.1007/s11263-009-0275-4
    [23] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]. 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 740–755.
Publication history
  • Received: 2022-07-26
  • Revised: 2023-03-30
  • Available online: 2023-04-04
  • Published: 2023-10-31
