A Morphology-Guided Decoupled Framework for Oriented SAR Ship Detection

WANG Zeyu, WANG Qingsong

Citation: WANG Zeyu, WANG Qingsong. A Morphology-Guided Decoupled Framework for Oriented SAR Ship Detection[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250979

doi: 10.11999/JEIT250979 cstr: 32379.14.JEIT250979
Funds: The National Natural Science Foundation of China (62273365); Xiaomi Young Talents Program
Article information
    Author biographies:

    WANG Zeyu: male, graduate student; research interests include deep learning, object detection, and representation learning

    WANG Qingsong: male, professor; research interests include refined remote sensing image processing, collaborative detection and perception, and information fusion

    Corresponding author:

    WANG Qingsong, wangqs5@mail.sysu.edu.cn

  • CLC number: TP75; TP391.41

  • Abstract: Synthetic Aperture Radar (SAR), with its all-day, all-weather observation capability, is widely used in remote sensing detection. However, limited by annotation precision, mainstream SAR object detection methods mostly rely on horizontal bounding box (HBB) annotations and struggle to estimate target angle and scale accurately. Meanwhile, although weakly supervised learning has made progress in angle prediction for optical images, it ignores the imaging geometry specific to SAR and generalizes poorly. To address these challenges, this paper proposes a new oriented ship detection framework that incorporates the SAR imaging mechanism. Its core idea is to decouple the detection task into two independent sub-modules: localization and orientation estimation. The localization module can directly reuse any existing detector trained on HBB annotations, while the orientation estimation module is fully supervised on a purpose-built dataset of morphologically synthesized binary chips. The advantage of this framework is that it endows a model with high-precision oriented bounding box (OBB) prediction in a plug-and-play manner, without modifying the structure of the original detector or retraining it. Experiments show that the proposed method outperforms existing methods that rely only on HBB supervision on multiple datasets, and even surpasses fully supervised methods in some scenarios, demonstrating strong effectiveness and engineering value.
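The decoupled inference flow described in the abstract lends itself to a very small pipeline. The sketch below is a hypothetical Python illustration, not the authors' implementation: `detector` stands in for any off-the-shelf detector trained on horizontal boxes, OpenCV's Otsu thresholding stands in for the binarization step, and `cv2.minAreaRect` stands in for the learned orientation-estimation module that the paper trains on morphologically synthesized binary chips.

```python
# Hypothetical sketch of the decoupled HBB-localization + orientation-estimation flow.
# Not the authors' code: the binarizer and orientation step are simple stand-ins.
import cv2
import numpy as np

def binarize_chip(chip: np.ndarray) -> np.ndarray:
    """Crude ship/sea separation on an 8-bit image chip via Otsu thresholding."""
    gray = chip if chip.ndim == 2 else cv2.cvtColor(chip, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def estimate_orientation(mask: np.ndarray):
    """Stand-in for the learned orientation module: fit a rotated box to the mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    return cx, cy, w, h, angle

def oriented_detections(image: np.ndarray, detector):
    """detector(image) -> iterable of integer (x1, y1, x2, y2) horizontal boxes."""
    results = []
    for x1, y1, x2, y2 in detector(image):
        chip = image[y1:y2, x1:x2]
        mask = binarize_chip(chip)
        if not mask.any():          # no foreground pixels, skip this box
            continue
        cx, cy, w, h, angle = estimate_orientation(mask)
        # Shift the chip-local centre back to full-image coordinates.
        results.append((cx + x1, cy + y1, w, h, angle))
    return results
```

The point of the decoupling is visible here: the orientation step only ever sees a binary chip, so it can be trained on synthetic masks and attached to any horizontal-box localizer without modifying or retraining that localizer.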
  • Fig. 1  Model overview. The training set for module (c) is built from morphologically simulated data; at inference, the binarized chips produced by module (a) are taken as its input to obtain the final oriented detection results.

    Fig. 2  Effect of each simulation condition applied in isolation

    Fig. 3  Visual comparison of detection results from different methods. Green rectangles in the first column are the dataset's box annotations; green rectangles in the second and third columns are targets correctly predicted by the network, and red rectangles are the network's false alarms

    Fig. 4  Effect of sample diversity in the simulated dataset. The dashed line marks the performance obtained when all simulation conditions are used; each result is normalized against the control group for display

    Fig. 5  Dimensionality-reduced visualization of data distributions from different sources. The simulated dataset and HRSID are each randomly subsampled to 5,000 samples; because the SSDD training set contains fewer than 5,000 samples, all of its training samples are used (a minimal sketch of this visualization follows the figure list)

    Fig. 6  Effect of training-sample size on AP50 for different methods
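For readers who want to reproduce a Fig. 5-style comparison, the following is a rough sketch using the umap-learn package [26]. The feature files, the choice of feature extraction, and the 5,000-sample cap are placeholders mirroring the caption, not artifacts or code released with the paper.

```python
# Hypothetical sketch of a Fig. 5-style UMAP comparison; the .npy feature
# files below are placeholders, not files released with the paper.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def subsample(feats: np.ndarray, n: int = 5000) -> np.ndarray:
    """Randomly keep at most n feature vectors, mirroring the 5,000-sample cap."""
    idx = rng.choice(len(feats), size=min(n, len(feats)), replace=False)
    return feats[idx]

# (N, D) feature arrays from each source, e.g. flattened chips or backbone features.
sources = {
    "Simulated": subsample(np.load("sim_features.npy")),
    "HRSID": subsample(np.load("hrsid_features.npy")),
    "SSDD": np.load("ssdd_features.npy"),  # fewer than 5,000 samples, use all
}

# Embed all sources jointly so the 2-D coordinates are comparable across sets.
X = np.concatenate(list(sources.values()))
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

start = 0
for name, feats in sources.items():
    end = start + len(feats)
    plt.scatter(embedding[start:end, 0], embedding[start:end, 1], s=2, label=name)
    start = end
plt.legend()
plt.show()
```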

    Table 1  Performance comparison between the proposed method and typical detectors on the SSDD dataset

    Method               Supervision       AP50   R
    R-RetinaNet[18]      OBB supervision   79.9   84.0
    GWD[19]              OBB supervision   82.5   85.9
    S2Anet[20]           OBB supervision   88.4   90.0
    R-Faster-RCNN[3]     OBB supervision   87.7   89.8
    RoI Transformer[21]  OBB supervision   89.5   91.0
    [14]                 HBB supervision   87.3   89.7
    Proposed method      HBB supervision   89.4   95.6

    Table 2  Performance comparison between the proposed method and typical detectors on the HRSID dataset

    Method               Supervision       AP50   R
    R-RetinaNet[18]      OBB supervision   72.7   79.2
    GWD[19]              OBB supervision   73.3   79.1
    S2Anet[20]           OBB supervision   80.8   83.9
    R-Faster-RCNN[3]     OBB supervision   77.3   82.1
    RoI Transformer[21]  OBB supervision   83.8   86.4
    H2RBox-v2[7]         HBB supervision   56.2   68.6
    Wholly-WOOD[8]       HBB supervision   61.5   64.5
    [14]                 HBB supervision   81.5   85.0
    Proposed method      HBB supervision   84.3   91.9

    Table 3  Impact of the axis-aligned detector on final accuracy

    Dataset  Axis-aligned detector  HBB AP50  OBB AP50  HBB R  OBB R
    SSDD     FCOS[22]               81.9      78.1      91.2   88.8
    SSDD     CenterNet[23]          90.2      88.5      96.7   94.7
    SSDD     Faster-RCNN[3]         90.0      88.1      93.0   90.1
    SSDD     YOLOX[4]               90.3      89.4      99.1   95.6
    HRSID    FCOS[22]               78.4      71.2      87.6   80.3
    HRSID    CenterNet[23]          86.6      77.5      92.3   85.1
    HRSID    Faster-RCNN[3]         79.7      70.3      83.5   78.7
    HRSID    YOLOX[4]               89.0      84.3      96.1   90.8
  • [1] DI BISCEGLIE M and GALDI C. CFAR detection of extended objects in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2005, 43(4): 833–843. doi: 10.1109/TGRS.2004.843190.
    [2] LENG Xiangguang, JI Kefeng, YANG Kai, et al. A bilateral CFAR algorithm for ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(7): 1536–1540. doi: 10.1109/LGRS.2015.2412174.
    [3] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/tpami.2016.2577031.
    [4] GE Zheng, LIU Songtao, WANG Feng, et al. YOLOX: Exceeding YOLO series in 2021[EB/OL]. https://arxiv.org/abs/2107.08430, 2021.
    [5] CHEN Yuming, YUAN Xinbin, WANG Jiabao, et al. YOLO-MS: Rethinking multi-scale representation learning for real-time object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, 47(6): 4240–4252. doi: 10.1109/tpami.2025.3538473.
    [6] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[C]. 9th International Conference on Learning Representations, Austria, 2021.
    [7] YU Yi, YANG Xue, LI Qingyun, et al. H2RBox-v2: Incorporating symmetry for boosting horizontal box supervised oriented object detection[C]. Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, USA, 2023: 2581.
    [8] YU Yi, YANG Xue, LI Yansheng, et al. Wholly-WOOD: Wholly leveraging diversified-quality labels for weakly-supervised oriented object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, 47(6): 4438–4454. doi: 10.1109/TPAMI.2025.3542542.
    [9] HU Fengming, XU Feng, WANG R, et al. Conceptual study and performance analysis of tandem multi-antenna spaceborne SAR interferometry[J]. Journal of Remote Sensing, 2024, 4: 0137. doi: 10.34133/remotesensing.0137.
    [10] YOMMY A S, LIU Rongke, and WU Shuang. SAR image despeckling using refined lee filter[C]. 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 2015: 260–265. doi: 10.1109/IHMSC.2015.236.
    [11] KANG Yuzhuo, WANG Zhirui, ZUO Haoyu, et al. ST-Net: Scattering topology network for aircraft classification in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5202117. doi: 10.1109/tgrs.2023.3236987.
    [12] ZHANG Yipeng, LU Dongdong, QIU Xiaolan, et al. Scattering-point-guided RPN for oriented ship detection in SAR images[J]. Remote Sensing, 2023, 15(5): 1411. doi: 10.3390/rs15051411.
    [13] PAN Dece, GAO Xin, DAI Wei, et al. SRT-Net: Scattering region topology network for oriented ship detection in large-scale SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5202318. doi: 10.1109/tgrs.2024.3351366.
    [14] YUE Tingxuan, ZHANG Yanmei, WANG Jin, et al. A weak supervision learning paradigm for oriented ship detection in SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5207812. doi: 10.1109/TGRS.2024.3375069.
    [15] WEI Shunjun, ZENG Xiangfeng, QU Qizhe, et al. HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation[J]. IEEE Access, 2020, 8: 120234–120254. doi: 10.1109/access.2020.3005861.
    [16] ZHANG Tianwen, ZHANG Xiaoling, LI Jianwei, et al. SAR ship detection dataset (SSDD): Official release and comprehensive data analysis[J]. Remote Sensing, 2021, 13(18): 3690. doi: 10.3390/rs13183690.
    [17] ZHOU Yue, YANG Xue, ZHANG Gefan, et al. MMRotate: A rotated object detection benchmark using PyTorch[C]. Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 2022: 7331–7334. doi: 10.1145/3503161.3548541.
    [18] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2999–3007. doi: 10.1109/iccv.2017.324.
    [19] YANG Xue, ZHANG Gefan, YANG Xiaojiang, et al. Detecting rotated objects as gaussian distributions and its 3-D generalization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4335–4354. doi: 10.1109/tpami.2022.3197152.
    [20] LI Jianfeng, CHEN Mingxu, HOU Siyuan, et al. An improved S2A-net algorithm for ship object detection in optical remote sensing images[J]. Remote Sensing, 2023, 15(18): 4559. doi: 10.3390/rs15184559.
    [21] DING Jian, XUE Nan, LONG Yang, et al. Learning RoI transformer for oriented object detection in aerial images[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 2844–2853. doi: 10.1109/cvpr.2019.00296.
    [22] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: A simple and strong anchor-free object detector[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 1922–1933. doi: 10.1109/tpami.2020.3032166.
    [23] DUAN Kaiwen, BAI Song, XIE Lingxi, et al. CenterNet++ for object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 3509–3521. doi: 10.1109/tpami.2023.3342120.
    [24] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/cvpr.2016.90.
    [25] LIU Zhuang, MAO Hanzi, WU Chaoyuan, et al. A ConvNet for the 2020s[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 11966–11976. doi: 10.1109/cvpr52688.2022.01167.
    [26] HEALY J and MCINNES L. Uniform manifold approximation and projection[J]. Nature Reviews Methods Primers, 2024, 4(1): 82. doi: 10.1038/s43586-024-00363-x.
Article history
  • Received:  2025-09-24
  • Revised:  2025-12-12
  • Accepted:  2025-12-12
  • Published online:  2025-12-25
