HUANG Linxuan, HE Minghao, YU Chunlai, FENG Mingyue, ZHANG Fuqun, ZHANG Yinan. Data Enhancement for Few-shot Radar Countermeasure Reconnaissance via Temporal-Conditional Generative Adversarial Networks[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250280

Data Enhancement for Few-shot Radar Countermeasure Reconnaissance via Temporal-Conditional Generative Adversarial Networks

doi: 10.11999/JEIT250280 cstr: 32379.14.JEIT250280
  • Received Date: 2025-04-16
  • Rev Recd Date: 2025-07-22
  • Available Online: 2025-07-30
  •   Objective  Radar electronic warfare systems rely on Pulse Descriptor Words (PDWs) to represent radar signals, capturing key parameters such as Time of Arrival (TOA), Carrier Frequency (CF), Pulse Width (PW), and Pulse Amplitude (PA). However, in complex electromagnetic environments, the scarcity of PDW data limits the effectiveness of data-driven models in radar pattern recognition. Conventional augmentation methods (e.g., geometric transformations, SMOTE) fall short on three core issues: (1) failure to capture temporal physical laws (e.g., gradual frequency shifts and pulse modulation patterns); (2) distributional inconsistencies (e.g., frequency band overflows and PW discontinuities); and (3) weak coupling between PDWs and modulation types, leading to reduced classification accuracy. This study proposes Time-CondGAN, a temporal-conditional generative adversarial network designed to generate physically consistent, modulation-specific PDW sequences that enhance few-shot radar reconnaissance performance.
  •   Methods  Time-CondGAN integrates three core innovations. (1) Multimodal conditional generation framework: a label encoder maps discrete radar modulation types (e.g., VS, HRWS, TWS in Table 2) into 128-dimensional feature vectors; these vectors are temporally expanded and concatenated with latent noise to enable category-controllable generation. The generator (Fig. 4) employs bidirectional Gated Recurrent Units (GRUs) to capture long-range temporal dependencies. (2) Multi-task discriminator design: the discriminator (Fig. 5) performs joint adversarial discrimination and signal classification; a shared bidirectional GRU with an attention mechanism extracts temporal features, while a classification loss constrains the generator to maintain category-specific distributions. (3) Temporal-statistical joint optimization: training combines a supervisor loss (Eq. 7), in which a bidirectional GRU-based supervisor enforces temporal consistency; a feature matching loss (Eq. 9), which aligns high-level features of real and synthetic signals; and adversarial and classification losses (Eqs. (8)–(9)), which promote distributional realism and accurate category separation. A two-stage training strategy, pre-training (Fig. 2) followed by adversarial training (Fig. 3), improves stability.
  •   Results and Discussions  Time-CondGAN demonstrates strong performance across three core dimensions. (1) Physical plausibility is achieved through accurate modeling of radar signal dynamics. Compared with TimeGAN, Time-CondGAN reduces the Kullback-Leibler (KL) divergence by 28.25% on average: the PW distribution error decreases by 50.20% (KL = 4.68), the TOA interval error decreases by 20.5% (KL = 0.636), and the CF deviation is reduced by 13.29% (KL = 7.93), confirming the suppression of non-physical signal discontinuities. (2) Downstream task enhancement highlights the model's few-shot generation capability. With only 10 real samples, classification accuracy improves markedly: VGG16 accuracy increases by 37.2% (from 43.0% to 59.0%) and LSTM accuracy by 28.6% (from 49.0% to 63.0%), both substantially outperforming conventional data augmentation methods. (3) Ablation studies validate the contribution of key modules: removing the conditional encoder increases the PW KL divergence by 107.4%, excluding the supervisor loss degrades CF continuity by 76.2%, and omitting feature matching results in a 44.6% misalignment in amplitude distribution.
  •   Conclusions  This study establishes Time-CondGAN as an effective solution for radar PDW generation under few-shot conditions, addressing three key limitations: temporal fragmentation is resolved through bidirectional GRU supervision, mode collapse is alleviated via multimodal conditioning, and modulation specificity is maintained by classification constraints. The proposed framework offers operational value, enabling over 35% improvement in recognition accuracy under low-intercept conditions. Future work will incorporate radar propagation physics to refine amplitude modeling (current KL = 8.51), adopt meta-learning approaches for real-time adaptation in dynamic battlefield environments, and extend the model to multi-radar cooperative scenarios to support heterogeneous electromagnetic contexts.
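The conditioning step described in the Methods can be illustrated with a minimal numpy sketch: a class embedding is repeated along the time axis and concatenated with per-step latent noise to form the generator input. The 128-dimensional embedding follows the paper; the class count, latent size, sequence length, and the random lookup table standing in for the learned label encoder are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: EMBED_DIM follows the paper's 128-dim label encoder;
# NUM_CLASSES, LATENT_DIM, and SEQ_LEN are assumed values.
NUM_CLASSES = 3      # e.g. VS, HRWS, TWS modulation types
EMBED_DIM = 128      # label-encoder output dimension
LATENT_DIM = 32      # per-step latent noise size (assumed)
SEQ_LEN = 64         # pulses per generated PDW sequence (assumed)

# Stand-in for the learned label encoder: a lookup table of class embeddings.
label_table = rng.normal(size=(NUM_CLASSES, EMBED_DIM))

def conditional_input(label, seq_len=SEQ_LEN):
    """Repeat the class embedding along the time axis and concatenate
    per-step latent noise, yielding a (T, EMBED_DIM + LATENT_DIM) input."""
    cond = np.tile(label_table[label], (seq_len, 1))   # (T, 128)
    z = rng.normal(size=(seq_len, LATENT_DIM))         # (T, 32)
    return np.concatenate([cond, z], axis=1)           # (T, 160)

print(conditional_input(1).shape)  # (64, 160)
```

Because the condition vector is identical at every time step while the noise varies, a recurrent generator can hold the modulation category fixed across the whole sequence while still producing diverse pulse trains.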
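The KL-divergence figures reported in the Results compare parameter distributions of real and synthetic PDWs. A simple histogram-based estimate can be sketched as follows; the bin count, epsilon smoothing, and simulated pulse-width data are assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def kl_divergence(real, synth, bins=50):
    """Histogram-based KL(real || synth) on a shared bin grid; a small
    epsilon avoids log(0). A rough stand-in for the paper's metric."""
    lo = min(real.min(), synth.min())
    hi = max(real.max(), synth.max())
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(synth, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12   # normalize, then smooth empty bins
    q = q / q.sum() + 1e-12
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
real_pw = rng.normal(10.0, 1.0, 5000)    # simulated pulse widths, e.g. in µs
synth_pw = rng.normal(10.5, 1.5, 5000)   # a mismatched synthetic distribution
print(kl_divergence(real_pw, synth_pw) > kl_divergence(real_pw, real_pw))  # True
```

KL(real || synth) is zero only when the two histograms coincide, so lower values indicate that the generated PW, TOA-interval, or CF distributions track the real ones more closely, which is the sense in which the reported reductions should be read.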


    Figures(11)  / Tables(5)
