
A Network Model for Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion

ZHAO Zijian, XU Shuwen, SHUI Penglang

Citation: ZHAO Zijian, XU Shuwen, SHUI Penglang. A Network Model for Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240818


doi: 10.11999/JEIT240818
Funds: The National Natural Science Foundation of China (62371382)
Details
    About the authors:

    ZHAO Zijian: Male, Ph.D. candidate. His research interests include sea-surface small target recognition, deep learning, machine learning, and target detection

    XU Shuwen: Male, Professor. His research interests include radar target detection and recognition, machine learning, time-frequency analysis, and SAR image processing

    SHUI Penglang: Male, Professor. His research interests include sea clutter modeling, radar target detection and recognition, and image processing

    Corresponding author:

    SHUI Penglang, plshui@xidian.edu.cn

  • CLC number: TN957.52

  • Abstract: Sea-surface small target recognition is an important and challenging problem in maritime radar surveillance. Because sea-surface small targets come in diverse types and the maritime environment is complex and changeable, classifying them effectively is difficult. Under a high-resolution radar system, a sea-surface small target usually occupies only one or a few range cells and lacks sufficient spatial scattering structure information, so the fluctuation of the target's Radar Cross Section (RCS) and the variation of its radial velocity become the main bases for classification. To this end, this paper proposes a classification network model based on multi-domain radar echo data fusion for the classification of sea-surface small targets. Since data in different domains carry their own physical meaning, a time-domain LeNet (T-LeNet) neural network module and a time-frequency feature extraction neural network module are constructed to extract features from the amplitude sequence and from the Time-Frequency Distribution (TFD), i.e., the time-frequency image, of the radar sea echo, respectively. The amplitude sequence mainly reflects the RCS fluctuation of the target, whereas the time-frequency image reflects not only the RCS fluctuation but also the variation of the target's radial velocity. Finally, the IPIX and CSIR databases and a self-measured UAV dataset are used to construct a dataset containing four types of sea-surface small targets: an anchored floating ball, a floating boat, a low-altitude Unmanned Aerial Vehicle (UAV), and a moving speedboat. Experimental results show that the proposed method has good recognition capability.
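As a rough illustration of the two-branch fuse-then-classify architecture the abstract describes, the sketch below pairs a LeNet-style 1D convolutional branch for the amplitude sequence with a ResNet18 branch for the time-frequency image and concatenates the two feature vectors before a 4-class classifier. It assumes PyTorch; the module names (AmplitudeBranch, FusionNet), the layer sizes, the 128-dimensional feature vectors, and the use of torchvision's resnet18 are illustrative assumptions, not the authors' exact T-LeNet design.

```python
# Minimal sketch of a two-branch fusion classifier (assumed PyTorch;
# layer sizes and names are illustrative, not the authors' exact design).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class AmplitudeBranch(nn.Module):
    """LeNet-style 1D CNN over the echo amplitude sequence (T-LeNet stand-in)."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):            # x: (batch, 1, sequence_length)
        return self.fc(self.features(x).flatten(1))


class FusionNet(nn.Module):
    """Fuses amplitude-sequence features with time-frequency-image features."""
    def __init__(self, num_classes: int = 4, out_dim: int = 128):
        super().__init__()
        self.amp_branch = AmplitudeBranch(out_dim)
        tf_backbone = resnet18(weights=None)          # 2D CNN over the TF image
        tf_backbone.fc = nn.Linear(tf_backbone.fc.in_features, out_dim)
        self.tf_branch = tf_backbone
        self.classifier = nn.Linear(2 * out_dim, num_classes)

    def forward(self, amplitude, tf_image):
        feats = torch.cat([self.amp_branch(amplitude), self.tf_branch(tf_image)], dim=1)
        return self.classifier(feats)                 # logits for the 4 target classes


if __name__ == "__main__":
    model = FusionNet()
    amp = torch.randn(8, 1, 1024)        # 8 amplitude sequences of length 1024
    tfd = torch.randn(8, 3, 224, 224)    # 8 three-channel time-frequency images
    print(model(amp, tfd).shape)         # torch.Size([8, 4])
```

Concatenating the two feature vectors and applying a fully connected classifier is the simplest fusion choice; the actual fusion layout used by the paper is the one specified in its Figure 4.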
  • Figure 1  The four types of sea-surface small targets

    Figure 2  Example amplitude sequences and time-frequency images of the four target classes

    Figure 3  Flowchart of the target recognition network

    Figure 4  Network architecture

    Figure 5  Confusion matrix

    Figure 6  Loss and accuracy on the training and validation sets for different experiments

    Table 1  The four target classes and their corresponding radar parameters

    Target type | Data source | Range resolution (m) | PRF (kHz) | Carrier frequency (GHz) | Polarization | Operating mode | Beamwidth (°) | Sea state (level) | Training samples | Test samples
    Anchored floating ball | IPIX93 | 30 | 1 | 9.39 | HH/HV/VH/VV | Dwell mode | 0.9 | 2/3 | 21940 | 6560
    Floating boat | IPIX98 | 30 | 1 | 9.39 | HH/HV/VH/VV | Dwell mode | 0.9 | / | 12768 | 3416
    Low-altitude UAV | Lingshan Island | 3 | 4 | 10.00 | HH/VV | Dwell mode | 1.1 | 2 | 7569 | 1893
    Moving speedboat | CSIR | 15 | 2.5/5 | 6.90 | VV | Tracking mode | 1.8 | 3 | 2920 | 964

    Table 2  Confusion matrix

                  | Predicted target 1 | Predicted target 2 | Predicted target 3 | Predicted target 4
    True target 1 | T1P1 | F1P2 | F1P3 | F1P4
    True target 2 | F2P1 | T2P2 | F2P3 | F2P4
    True target 3 | F3P1 | F3P2 | T3P3 | F3P4
    True target 4 | F4P1 | F4P2 | F4P3 | T4P4
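Table 2 uses TiPi for samples of true target i correctly predicted as target i, and FiPj for samples of true target i misclassified as target j. The accuracy, error, precision, recall, F1-measure, and Kappa reported in Tables 3 and 4 can be derived from such a confusion matrix; in those tables the F1-measure values equal the harmonic mean of the listed precision and recall. The sketch below is a generic Python/NumPy illustration of macro-averaged metrics, not the authors' evaluation code, and the example matrix values are invented (only the row sums match the test sample counts in Table 1).

```python
# Generic sketch: macro-averaged metrics from a 4x4 confusion matrix
# (rows = true class, columns = predicted class; example values are made up).
import numpy as np

def metrics_from_confusion(cm: np.ndarray) -> dict:
    cm = cm.astype(float)
    total = cm.sum()
    accuracy = np.trace(cm) / total
    # Per-class precision (column-wise) and recall (row-wise), then macro-averaged.
    precision_k = np.diag(cm) / cm.sum(axis=0)
    recall_k = np.diag(cm) / cm.sum(axis=1)
    precision, recall = precision_k.mean(), recall_k.mean()
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's Kappa: agreement beyond chance.
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (accuracy - expected) / (1 - expected)
    return dict(accuracy=accuracy, error=1 - accuracy,
                precision=precision, recall=recall, f1=f1, kappa=kappa)

cm = np.array([[6400,   80,   50,  30],   # true target 1
               [ 100, 3250,   40,  26],   # true target 2
               [  60,   30, 1780,  23],   # true target 3
               [  20,   25,   19, 900]])  # true target 4
print(metrics_from_confusion(cm))
```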

    Table 3  Classification results of different experiments under six evaluation metrics

    Method | Accuracy | Error | Precision | Recall | F1-measure | Kappa
    TF image + AlexNet | 0.7773 | 0.2227 | 0.8139 | 0.7912 | 0.8024 | 0.6403
    TF image + Vgg16 [5] | 0.8022 | 0.1978 | 0.8461 | 0.8202 | 0.8330 | 0.6792
    TF image + ResNet18 | 0.8145 | 0.1855 | 0.8487 | 0.8238 | 0.8361 | 0.7006
    Amplitude sequence + T-LeNet | 0.9250 | 0.0750 | 0.9145 | 0.9130 | 0.9138 | 0.8823
    Amplitude sequence + TF image + T-LeNet + AlexNet | 0.9426 | 0.0574 | 0.9440 | 0.9528 | 0.9484 | 0.9106
    Amplitude sequence + TF image + T-LeNet + Vgg16 | 0.9549 | 0.0451 | 0.9558 | 0.9578 | 0.9568 | 0.9296
    Amplitude sequence + TF image + T-LeNet + ResNet18 | 0.9721 | 0.0279 | 0.9708 | 0.9776 | 0.9742 | 0.9567

    Table 4  Classification results of different ResNet networks under six evaluation metrics

    Method | Accuracy | Error | Precision | Recall | F1-measure | Kappa
    TF image + ResNet18 | 0.8145 | 0.1855 | 0.8487 | 0.8238 | 0.8361 | 0.7006
    TF image + ResNet34 | 0.8245 | 0.1755 | 0.8677 | 0.8379 | 0.8525 | 0.7165
    TF image + ResNet50 | 0.8202 | 0.1798 | 0.8619 | 0.8359 | 0.8487 | 0.7100
    Amplitude sequence + TF image + T-LeNet + ResNet18 | 0.9721 | 0.0279 | 0.9708 | 0.9776 | 0.9742 | 0.9567
    Amplitude sequence + TF image + T-LeNet + ResNet34 | 0.9726 | 0.0274 | 0.9729 | 0.9775 | 0.9752 | 0.9574
    Amplitude sequence + TF image + T-LeNet + ResNet50 | 0.9736 | 0.0264 | 0.9707 | 0.9777 | 0.9742 | 0.9589

    Table 5  Number of parameters, training time, test time, and per-sample test time of the networks

    Network | Parameters (M) | Training time (min) | Test time (s) | Per-sample test time (ms)
    T-LeNet | 8.4669 | 7.25 | 23.87 | 1.86
    AlexNet | 61.1048 | 87.13 | 46.97 | 3.66
    Vgg16 | 138.3651 | 216.15 | 55.05 | 4.29
    ResNet18 | 11.1786 | 120.12 | 45.94 | 3.58
    ResNet34 | 21.2867 | 195.19 | 72.65 | 5.66
    ResNet50 | 23.5162 | 330.33 | 136.33 | 10.62
    T-LeNet+AlexNet | 71.7922 | 95.37 | 51.46 | 4.01
    T-LeNet+Vgg16 | 149.0489 | 233.67 | 59.67 | 4.65
    T-LeNet+ResNet18 | 21.7429 | 130.32 | 50.69 | 3.95
    T-LeNet+ResNet34 | 31.8511 | 203.62 | 85.37 | 6.65
    T-LeNet+ResNet50 | 34.4677 | 342.09 | 145.39 | 11.33
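The parameter counts (in millions) and timings in Table 5 are consistent with simple measurements of the kind sketched below: each per-sample test time equals the total test time divided by the 12,833 test samples of Table 1, and the 11.1786 M listed for ResNet18 matches a stock torchvision ResNet18 with its 1000-class head replaced by a 4-class one. The sketch assumes PyTorch and is illustrative only; the authors' actual measurement procedure and hardware are not given here.

```python
# Sketch (assumed PyTorch): count parameters in millions and time per-sample inference.
import time
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"parameters: {params_m:.4f} M")       # ~11.69 M for the stock 1000-class ResNet18

batch = torch.randn(64, 3, 224, 224)         # dummy batch of time-frequency images
with torch.no_grad():
    model(batch)                             # warm-up pass before timing
    start = time.perf_counter()
    for _ in range(10):
        model(batch)
    elapsed = time.perf_counter() - start
print(f"per-sample test time: {elapsed / (10 * batch.shape[0]) * 1e3:.2f} ms")
```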
  • [1] ZHANG Tianwen, ZHANG Xiaoling, KE Xiao, et al. HOG-ShipCLSNet: A novel deep learning network with HOG feature fusion for SAR ship classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5210322. doi: 10.1109/TGRS.2021.3082759.
    [2] GUAN Jian. Summary of marine radar target characteristics[J]. Journal of Radars, 2020, 9(4): 674–683. doi: 10.12000/JR20114.
    [3] NI Jun, ZHANG Fan, YIN Qiang, et al. Random neighbor pixel-block-based deep recurrent learning for polarimetric SAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7557–7569. doi: 10.1109/TGRS.2020.3037209.
    [4] LEE S J, LEE M J, KIM K T, et al. Classification of ISAR images using variable cross-range resolutions[J]. IEEE Transactions on Aerospace and Electronic Systems, 2018, 54(5): 2291–2303. doi: 10.1109/TAES.2018.2814211.
    [5] XU Shuwen, RU Hongtao, LI Dongchen, et al. Marine radar small target classification based on block-whitened time-frequency spectrogram and pre-trained CNN[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5101311. doi: 10.1109/TGRS.2023.3240693.
    [6] GUO Zixun and SHUI Penglang. Anomaly based sea-surface small target detection using K-nearest neighbor classification[J]. IEEE Transactions on Aerospace and Electronic Systems, 2020, 56(6): 4947–4964. doi: 10.1109/TAES.2020.3011868.
    [7] KUO B C, HO H H, LI C H, et al. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7(1): 317–326. doi: 10.1109/JSTARS.2013.2262926.
    [8] YIN Qiang, CHENG Jianda, ZHANG Fan, et al. Interpretable POLSAR image classification based on adaptive-dimension feature space decision tree[J]. IEEE Access, 2020, 8: 173826–173837. doi: 10.1109/ACCESS.2020.3023134.
    [9] JIA Qingwei, DENG Tingquan, WANG Yan, et al. Discriminative label correlation based robust structure learning for multi-label feature selection[J]. Pattern Recognition, 2024, 154: 110583. doi: 10.1016/j.patcog.2024.110583.
    [10] ZHONG Jingyu, SHANG Ronghua, ZHAO Feng, et al. Negative label and noise information guided disambiguation for partial multi-label learning[J]. IEEE Transactions on Multimedia, 2024, 26: 9920–9935. doi: 10.1109/TMM.2024.3402534.
    [11] ZHAO Jie, LING Yun, HUANG Faliang, et al. Incremental feature selection for dynamic incomplete data using sub-tolerance relations[J]. Pattern Recognition, 2024, 148: 110125. doi: 10.1016/j.patcog.2023.110125.
    [12] ZOU Yizhang, HU Xuegang, and LI Peipei. Gradient-based multi-label feature selection considering three-way variable interaction[J]. Pattern Recognition, 2024, 145: 109900. doi: 10.1016/j.patcog.2023.109900.
    [13] SUN Xu, GAO Junyu, and YUAN Yuan. Alignment and fusion using distinct sensor data for multimodal aerial scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5626811. doi: 10.1109/TGRS.2024.3406697.
    [14] WU Xin, HONG Danfeng, and CHANUSSOT J. Convolutional neural networks for multimodal remote sensing data classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5517010. doi: 10.1109/TGRS.2021.3124913.
    [15] DUAN Guoxing, WANG Yunhua, ZHANG Yanmin, et al. A network model for detecting marine floating weak targets based on multimodal data fusion of radar echoes[J]. Sensors, 2022, 22(23): 9163. doi: 10.3390/s22239163.
    [16] Cognitive Systems Laboratory. The IPIX radar database[EB/OL]. http://soma.ece.mcmaster.ca/ipix, 2001.
    [17] The Defense, Peace, Safety, and Security Unit of the Council for Scientific and Industrial Research. The Fynmeet radar database[EB/OL]. http://www.csir.co.ca/small_boat_detection, 2007.
    [18] RICHARD C. Time-frequency-based detection using discrete-time discrete-frequency Wigner distributions[J]. IEEE Transactions on Signal Processing, 2002, 50(9): 2170–2176. doi: 10.1109/TSP.2002.801927.
    [19] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [20] KAYED M, ANTER A, and MOHAMED H. Classification of garments from fashion MNIST dataset using CNN LeNet-5 architecture[C]. 2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), Aswan, Egypt, 2020: 238–243. doi: 10.1109/ITCE48509.2020.9047776.
    [21] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386.
    [22] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
Figures (6) / Tables (5)
Metrics
  • Article views:  223
  • Full-text HTML views:  88
  • PDF downloads:  33
  • Citations: 0
Publication history
  • Received:  2024-09-24
  • Revised:  2025-02-21
  • Available online:  2025-03-06
