ZHU Longjun, YUAN Weiwei, MEN Xuefeng, TONG Wei, WU Qi. Weakly Supervised Recognition of Aerial Adversarial Maneuvers via Contrastive Learning[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250495

Weakly Supervised Recognition of Aerial Adversarial Maneuvers via Contrastive Learning

doi: 10.11999/JEIT250495 cstr: 32379.14.JEIT250495
Funds:  The National Natural Science Foundation of China (T2325018, 62171274), Natural Science Foundation of Jiangsu Province (BK20240641)
  • Received Date: 2025-06-03
  • Rev Recd Date: 2025-08-29
  • Available Online: 2025-09-08
Objective  Accurate recognition of aerial adversarial maneuvers is essential for situational awareness and tactical decision-making in modern air warfare. Conventional supervised approaches face two major challenges: labeled flight data are costly to obtain because collection and annotation demand intensive human effort, and these methods struggle to capture the temporal dependencies inherent in sequential flight parameters. Temporal dynamics are crucial for describing how a maneuver evolves, yet existing models fail to fully exploit this information. To address these challenges, this study proposes a weakly supervised maneuver recognition framework based on contrastive learning. The method leverages a small proportion of labeled data to learn discriminative representations, thereby reducing reliance on extensive manual annotation. The proposed framework improves recognition accuracy in data-scarce scenarios and provides a robust solution for maneuver analysis in dynamic adversarial aerial environments.  Methods  The proposed framework extends the Simple Framework for Contrastive Learning of Visual Representations (SimCLR) to the time-series domain by incorporating five temporal data augmentation strategies: time compression, masking, permutation, scaling, and flipping. These augmentations generate multi-view samples that form positive pairs for contrastive learning, encouraging temporal invariance in the feature space. A customized ResNet-18 encoder extracts hierarchical features from the augmented time series, and a Multi-Layer Perceptron (MLP) projection head maps these features into the contrastive space. The Normalized Temperature-scaled cross-entropy (NT-Xent) loss is adopted to maximize similarity between positive pairs and minimize it between negative pairs, which effectively mitigates pseudo-label noise.
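A minimal numpy sketch of the five temporal augmentations and the NT-Xent objective may make the pipeline concrete. The specific implementations and hyperparameters (compression factor, mask probability, segment count, temperature) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Five temporal augmentations for a sequence x of shape (T, channels) ---
def time_compress(x, factor=2):
    # Keep every `factor`-th step, then edge-pad back to the original length.
    out = x[::factor]
    return np.pad(out, ((0, len(x) - len(out)), (0, 0)), mode="edge")

def mask(x, p=0.1):
    keep = rng.random(len(x)) > p          # zero out roughly p of the time steps
    return x * keep[:, None]

def permute(x, n_seg=4):
    segs = np.array_split(x, n_seg)        # shuffle segment order, keep content
    rng.shuffle(segs)
    return np.concatenate(segs)

def scale(x, sigma=0.1):
    # Multiply each channel by a random factor drawn around 1.0.
    return x * rng.normal(1.0, sigma, size=(1, x.shape[1]))

def flip(x):
    return x[::-1].copy()                  # reverse the time axis

# --- NT-Xent loss over N positive pairs (2N augmented views) ---
def nt_xent(z1, z2, tau=0.5):
    z = np.concatenate([z1, z2])                        # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each view's partner
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Feeding two augmented views of the same flight segment through the encoder and projection head, then minimizing `nt_xent` on the resulting embeddings, is the core of the pre-training stage.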
To further improve recognition performance, a fine-tuning strategy is introduced in which pre-trained features are combined with a task-specific classification head, using a limited amount of labeled data to adapt to downstream recognition tasks. This contrastive learning framework enables efficient analysis of time-series flight data, achieves accurate recognition of fighter aircraft maneuvers, and reduces dependence on large-scale labeled datasets.  Results and Discussions  Experiments are conducted on flight simulation data obtained from DCS World. To address class imbalance, hybrid datasets (Table 1) are constructed, and labeled-data ratios ranging from 2% to 30% are used to evaluate the effectiveness of the weakly supervised framework. The results demonstrate that contrastive learning effectively captures the temporal patterns within flight data. For example, on the D1 dataset, accuracy with the base method increases from 35.83% with 2% labeled data to 89.62% when the fine-tuning ratio reaches 30% (Tables 3–6, Fig. 2(a)–2(c)). To improve recognition of long maneuver sequences, a linear classifier and a voting strategy are introduced. The voting strategy markedly enhances few-shot learning performance: on the D1 dataset, accuracy reaches 54.5% with 2% labeled data and rises to 97.9% at a 30% fine-tuning ratio, a substantial improvement over the base method. On the D6 dataset, which simulates multi-source data fusion scenarios in air combat, the accuracy of the voting method increases from 47.6% with 2% labeled data to 92.8% with 30% labeled data (Fig. 2(d)–2(f)), with a growth rate in the low-data phase 53% higher than that of the base method. On the comprehensive D7 dataset, the accuracy standard deviation of the voting method is only 0.011 (Fig. 2(g), Fig. 3), significantly lower than the 0.015 observed for the base method.
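The voting strategy for long sequences can be sketched as follows: split a long maneuver sequence into overlapping windows, classify each window independently, and take the majority vote as the sequence label. The window size, stride, and classifier interface below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from collections import Counter

def predict_sequence(x, window_clf, win=32, stride=16):
    """Classify a long sequence x of shape (T, channels) by majority vote.

    `window_clf` maps one (win, channels) window to a class label.
    """
    votes = [window_clf(x[s:s + win])
             for s in range(0, len(x) - win + 1, stride)]
    return Counter(votes).most_common(1)[0][0]
```

Because a single noisy window cannot flip the majority, this aggregation suppresses per-window prediction noise, which is consistent with the stability gains reported above.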
The superiority of the proposed framework can be attributed to two factors: the suppression of noise through integration of multiple prediction results using the voting strategy and the extraction of robust features from unlabeled data via contrastive learning pre-training. Together, these techniques enhance generalization and stability in complex scenarios, confirming the effectiveness of the method in leveraging unlabeled data and managing multi-source information.  Conclusions  This study applies the SimCLR framework to maneuver recognition and proposes a weakly supervised approach based on contrastive learning. By incorporating targeted data augmentation strategies and combining self-supervised learning with fine-tuning, the method exploits the latent information in time-series data, yielding substantial improvements in recognition performance under limited labeled data conditions. Experiments on simulated air combat datasets demonstrate that the framework achieves stable recognition across different data categories, offering practical insights for feature learning and model optimization in time-series classification tasks. Future research will focus on three directions: first, integrating real flight data to evaluate the model’s generalization capability in practical scenarios; second, developing dynamically adaptive data augmentation strategies to enhance performance in complex environments; and third, combining reinforcement learning and related techniques to improve autonomous decision-making in dynamic aerial missions, thereby expanding opportunities for intelligent flight operations.
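The fine-tuning stage, training a task-specific head on pre-trained features with limited labels, can be approximated by a linear probe. The sketch below fits a softmax head on frozen encoder features with plain gradient descent; the optimizer and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.1, epochs=200):
    """Fit a softmax classification head on frozen encoder features."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(feats)          # softmax cross-entropy gradient
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def probe_predict(feats, W, b):
    return (feats @ W + b).argmax(axis=1)
```

In the full method the encoder would also be updated during fine-tuning; freezing it, as here, is the cheapest variant and already shows how few labels are needed when the pre-trained features are discriminative.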
    Figures(3)  / Tables(7)

    Article Metrics

    Article views (47) PDF downloads(5) Cited by()
    Proportional views
    Related

    /

    DownLoad:  Full-Size Img  PowerPoint
    Return
    Return