Convolutional Mixed Multi-Attention Encoder-Decoder Network for Radar Signal Sorting

CHANG Huaizhao, GU Yingyan, HAN Yunzhi, JIN Benzhou

Citation: CHANG Huaizhao, GU Yingyan, HAN Yunzhi, JIN Benzhou. Convolutional Mixed Multi-Attention Encoder-Decoder Network for Radar Signal Sorting[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT251031

doi: 10.11999/JEIT251031 cstr: 32379.14.JEIT251031
Funds: Supported by the National Natural Science Foundation of China (62371230)
Details
    About the authors:

    CHANG Huaizhao: male, Master's student; research interest: radar signal processing

    GU Yingyan: male, researcher; research interest: situation reasoning technology

    HAN Yunzhi: male, Master's student; research interest: radar signal processing

    JIN Benzhou: male, professor; research interest: radar signal processing

    Corresponding author:

    JIN Benzhou, jinbz@nuaa.edu.cn

  • CLC number: TP301; TN95

  • Abstract: Radar signal sorting is one of the key technologies in electromagnetic environment sensing. As the modulation patterns, operating modes, and cooperation schemes of radar emitters become increasingly complex, spurious pulses, missing pulses, and parameter measurement errors grow ever more prominent during the interception of radar emitters, and the performance of conventional signal sorting methods degrades severely. To address these problems, this paper proposes a convolutional mixed multi-attention encoder-decoder network. Both the encoder and the decoder are built from dual-branch dilated bottleneck modules, which capture multi-scale temporal patterns through parallel dilated convolution paths and progressively enlarge the receptive field to fuse contextual information. A local attention module is embedded between the encoder and decoder to model the temporal dependencies of the pulse sequence and strengthen the global representation. A feature selection module is further introduced into the skip connections to adaptively filter the key information in the multi-stage feature maps. Finally, a classifier performs pulse-by-pulse radar signal classification, thereby achieving sorting. Simulation experiments show that, compared with mainstream baseline methods, the proposed method delivers better sorting performance under complex conditions with high missing-pulse and spurious-pulse probabilities and with pulse time-of-arrival estimation errors.
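The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of how such a design could be wired together; the layer widths, dilation rates (1 and 4), the squeeze-style channel gate used as the "feature selection" stand-in, the single multi-head self-attention at the bottleneck, and the three input features per pulse are all illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a convolutional encoder-decoder for per-pulse sorting.
import torch
import torch.nn as nn


class DualBranchDilatedBlock(nn.Module):
    """Two parallel 1-D convolution branches with different dilation rates,
    concatenated and fused, so the block captures multi-scale temporal context."""
    def __init__(self, in_ch, out_ch, dilations=(1, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch // 2, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm1d(out_ch // 2),
                nn.ReLU(inplace=True),
            ) for d in dilations
        ])
        self.fuse = nn.Conv1d(out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class FeatureSelect(nn.Module):
    """Channel gate on a skip connection: squeeze the encoder feature map and
    re-weight its channels before it is merged into the decoder."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(ch, ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip):
        return skip * self.gate(skip)


class PulseSortingNet(nn.Module):
    """Encoder-decoder over a pulse-parameter sequence (batch, features, pulses)
    with temporal self-attention at the bottleneck and a per-pulse classifier."""
    def __init__(self, in_features=3, n_classes=5, width=64):
        super().__init__()
        self.enc1 = DualBranchDilatedBlock(in_features, width)
        self.enc2 = DualBranchDilatedBlock(width, width * 2)
        self.pool = nn.MaxPool1d(2)
        self.attn = nn.MultiheadAttention(width * 2, num_heads=4, batch_first=True)
        self.select = FeatureSelect(width)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = DualBranchDilatedBlock(width * 2 + width, width)
        self.head = nn.Conv1d(width, n_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.enc1(x)                              # (B, W, T) skip feature
        z = self.enc2(self.pool(s1))                   # (B, 2W, T/2) bottleneck
        a, _ = self.attn(z.transpose(1, 2), z.transpose(1, 2), z.transpose(1, 2))
        d = torch.cat([self.up(a.transpose(1, 2)), self.select(s1)], dim=1)
        return self.head(self.dec(d))                  # per-pulse class logits


if __name__ == "__main__":
    net = PulseSortingNet()
    pulses = torch.randn(2, 3, 128)                    # e.g. TOA/PW/RF per pulse
    print(net(pulses).shape)                           # torch.Size([2, 5, 128])
```

Training such a sketch with a per-pulse cross-entropy (or focal) loss over emitter labels would reproduce the pulse-by-pulse classification framing described in the abstract.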
  • Figure 1  Network architecture

    Figure 2  Dual-branch dilated bottleneck module

    Figure 3  Feature selection module

    Figure 4  Comparison of sorting performance under different TOA estimation errors

    Figure 5  Comparison of sorting performance under different missing-pulse probabilities

    Figure 6  Comparison of sorting performance under different spurious-pulse probabilities

    Figure 7  Comparison of sorting performance in each scenario

    Figure 8  Comparison of generalization performance

    Table 1  Radar parameter settings

    Radar  RF/GHz     PW/μs  PRI/μs                   PRI modulation
    1      8.1~8.5    20~40  540                      Fixed
    2      8.15~8.3   30~45  370, J=0.25              Jittered
    3      8.1~8.45   15~40  110, 280, 800            Staggered
    4      8.25~8.6   25~45  400·20, 600·15, 900·10   Dwell & switch

    Table 2  MFR parameter settings

    Radar  RF/GHz     PW/μs  PRI/μs                          PRI modulation
    1      8.15~8.4   15~45  240                             Fixed
           8.2~8.45   20~50  310, J=0.1                      Jittered
           8.25~8.5   25~40  370, 410, 260                   Staggered
    2      8.1~8.5    20~40  480, J=0.1                      Jittered
           8.2~8.45   30~50  190, 240, 380                   Staggered
           8.15~8.4   25~45  290·20, 330·14, 210·10          Dwell & switch
    3      8.2~8.4    10~25  430, 340                        Staggered
           8.45~8.8   15~40  230·10, 330·8, 180·24, 280·12   Dwell & switch
           8.3~8.55   20~40  320                             Fixed
    4      8.1~8.45   25~45  410·15, 350·20                  Dwell & switch
           8.2~8.5    30~50  390                             Fixed
           8.15~8.6   20~40  150, J=0.2                      Jittered
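Tables 1 and 2 specify the PRI behaviour of each emitter. As a rough illustration of how such pulse trains, together with the non-ideal effects mentioned in the abstract (missing pulses, spurious pulses, TOA error), might be simulated, here is a short NumPy sketch. The interpretation of J as a fractional jitter bound, of entries like 400·20 as a PRI dwelled for a pulse count, and all noise levels are assumptions, not the paper's exact data-generation procedure.

```python
# Hypothetical pulse-train simulation for fixed / jittered / staggered /
# dwell-and-switch PRIs, with missing pulses, spurious pulses, and TOA error.
import numpy as np

rng = np.random.default_rng(0)

def pri_sequence(mode, n_pulses, **kw):
    """Return successive PRIs (in microseconds) for one emitter."""
    if mode == "fixed":                        # e.g. 540, 540, 540, ...
        return np.full(n_pulses, kw["pri"])
    if mode == "jitter":                       # uniform jitter of +/- J*pri
        j = kw["pri"] * kw["J"]
        return kw["pri"] + rng.uniform(-j, j, n_pulses)
    if mode == "stagger":                      # cycle through a fixed PRI list
        return np.resize(np.asarray(kw["pris"], float), n_pulses)
    if mode == "dwell_switch":                 # hold each PRI for a dwell count
        seq = np.concatenate([np.full(c, p) for p, c in kw["dwell"]])
        return np.resize(seq, n_pulses)
    raise ValueError(mode)

def toa_train(pris, start=0.0):
    return start + np.cumsum(pris)

def corrupt(toa, p_miss=0.3, p_spur=0.3, sigma=1.0):
    """Drop pulses, add spurious pulses, and add Gaussian TOA error."""
    keep = toa[rng.random(toa.size) > p_miss]
    n_spur = int(p_spur * toa.size)
    spur = rng.uniform(toa.min(), toa.max(), n_spur)
    noisy = np.concatenate([keep + rng.normal(0, sigma, keep.size), spur])
    flags = np.concatenate([np.zeros(keep.size, int), np.ones(n_spur, int)])
    order = np.argsort(noisy)
    return noisy[order], flags[order]          # flags: 0 = true, 1 = spurious

# Example: radar 1 of Table 1 (fixed 540 us) interleaved with radar 3 (staggered)
toa1 = toa_train(pri_sequence("fixed", 100, pri=540.0))
toa3 = toa_train(pri_sequence("stagger", 100, pris=[110, 280, 800]))
mixed, flags = corrupt(np.sort(np.concatenate([toa1, toa3])))
print(mixed[:5], flags[:5])
```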

    Table 3  Simulation results (%)

    Category           Proposed  TCAN   DCN    GRU-FCN  Bi-GRU
    Overall accuracy   92.58     86.24  84.22  83.81    73.38
    Radar 1            90.72     88.03  83.80  77.69    50.73
    Radar 2            91.65     80.92  85.66  85.81    76.40
    Radar 3            96.98     96.91  97.32  96.42    99.81
    Radar 4            89.72     77.08  63.28  69.21    55.32
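Table 3 reports two kinds of figures: overall per-pulse accuracy over the whole interleaved stream and accuracy restricted to each radar's own pulses. A tiny sketch of that distinction is given below; this is an assumed formulation for illustration, not the authors' evaluation code.

```python
# Hypothetical per-pulse sorting metrics: overall and per-radar accuracy.
import numpy as np

def sorting_accuracy(y_true, y_pred, n_radars):
    overall = float(np.mean(y_true == y_pred))
    per_radar = {r: float(np.mean(y_pred[y_true == r] == r))
                 for r in range(n_radars)}
    return overall, per_radar

y_true = np.array([0, 0, 1, 2, 3, 1, 2, 0])   # emitter label of each pulse
y_pred = np.array([0, 1, 1, 2, 3, 1, 2, 0])   # network prediction per pulse
print(sorting_accuracy(y_true, y_pred, 4))
```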

    Table 4  MFR simulation results (%)

    Category           Proposed  TCAN   DCN    GRU-FCN  Bi-GRU
    Average accuracy   86.82     83.52  82.32  76.72    68.44
    Radar 1            88.30     83.78  84.16  82.99    75.01
    Radar 2            85.48     80.64  78.27  72.71    67.48
    Radar 3            86.89     82.54  82.59  72.86    60.74
    Radar 4            86.48     87.04  84.03  77.95    70.36

    Table 5  Comparison by PRI modulation mode (%)

    Category           Proposed  TCAN   DCN    GRU-FCN  Bi-GRU
    Fixed              90.47     87.77  86.33  79.56    78.67
    Jittered           85.11     85.22  82.40  77.22    72.37
    Staggered          88.68     81.62  84.51  79.48    66.79
    Dwell & switch     85.46     81.90  78.37  72.79    57.78

    Table 6  Comparison of model computational overhead

    Model        Proposed  TCAN     DCN      GRU-FCN
    Parameters   401.68K   329.50K  158.17K  38.15K
    FLOPs        103.88M   165.40M  80.50M   20.02M
    Storage      1.53MB    1.26MB   0.60MB   0.15MB
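The parameter and storage figures in Table 6 are consistent with float32 weights (for example, 401.68K parameters × 4 bytes ≈ 1.53 MB). A minimal sketch of how such a footprint is typically computed in PyTorch follows; the FLOPs column would normally come from a separate profiler and is not reproduced here, and the example module is only a placeholder.

```python
# Hypothetical model footprint: parameter count and float32 storage in MiB.
import torch.nn as nn

def model_footprint(model: nn.Module):
    n_params = sum(p.numel() for p in model.parameters())
    return n_params, n_params * 4 / 2**20      # float32 bytes -> MiB

print(model_footprint(nn.Linear(128, 64)))     # (8256, ~0.03 MiB)
```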

    表  7  消融实验结果(%)

    模型场景一场景二场景三
    Baseline81.476.272.0
    Baseline+A86.182.675.9
    Baseline+A+B88.284.376.2
    Baseline+A+B+C92.687.480.4
    下载: 导出CSV
Publication history
  • Revised: 2025-12-29
  • Accepted: 2025-12-29
  • Available online: 2026-01-05
