Volume 47, Issue 7, Jul. 2025
SHI Huaifeng, ZHOU Long, PAN Chengsheng, CAO Kangning, LIU Chaofan, LV Miao. Link State Awareness Enhanced Intelligent Routing Algorithm for Tactical Communication Networks[J]. Journal of Electronics & Information Technology, 2025, 47(7): 2127-2139. doi: 10.11999/JEIT241132

Link State Awareness Enhanced Intelligent Routing Algorithm for Tactical Communication Networks

doi: 10.11999/JEIT241132 cstr: 32379.14.JEIT241132
Funds:  The Foundation of Key Laboratory of Intelligent Support Technology for Complex Environments, Ministry of Education (B2202401)
  • Received Date: 2024-12-24
  • Rev Recd Date: 2025-04-07
  • Available Online: 2025-04-25
  • Publish Date: 2025-07-22
Objective  Operational concept iteration, combat style innovation, and the emergence of new combat forces are accelerating the transition of warfare toward intelligent systems. In this context, tactical communication networks must establish end-to-end transmission paths over heterogeneous links, including ultra-shortwave and satellite communications, to meet the differentiated routing requirements of multi-modal services that are sensitive to latency, bandwidth, and reliability. Existing Deep Reinforcement Learning (DRL)-based intelligent routing algorithms rely primarily on single neural network architectures, which inadequately capture the complex dependencies among link states. This limitation reduces the accuracy and robustness of routing decisions under time-varying network conditions. To address this, a link state awareness enhanced intelligent routing algorithm (DRL-SGA) is proposed. By capturing spatiotemporal dependencies in link state sequences, the algorithm improves the adaptability of the routing decision model to dynamic network conditions and enables more effective path selection for multi-modal service transmission.

Methods  The proposed DRL-SGA algorithm incorporates a link state perception enhancement module that integrates a Graph Neural Network (GNN) and an attention mechanism into a Proximal Policy Optimization (PPO) agent framework that collects network state sequences (a minimal illustrative sketch of this architecture is given after the abstract). The module extracts high-order features from these sequences across temporal and spatial dimensions, thereby compensating for the limited global link state awareness of the PPO agent’s Fully Connected Neural Network (FCNN) and improving the adaptability of the routing decision model to time-varying network conditions. The Actor-Critic framework enables periodic interaction between the agent and the network environment, while an experience replay pool continuously refines the policy parameters. This process facilitates the discovery of routing paths that meet the heterogeneous transmission requirements of latency-, bandwidth-, and reliability-sensitive services.

Results and Discussions  The routing decision capability of DRL-SGA is evaluated in a simulated network comprising 47 routing nodes and 61 communication links, and its performance is compared with that of five other routing algorithms under varying traffic intensities. The results show that DRL-SGA adapts better to heterogeneous network environments. At a traffic intensity of 100 kbit/s, DRL-SGA reduces latency by 14.42%–33.57% compared with the other algorithms (Figure 4) and increases network throughput by 2.51%–23.41% (Figure 5). In scenarios characterized by resource constraints or topological changes, DRL-SGA consistently maintains higher service quality and greater adaptability to fluctuations in network state (Figures 7–12). Ablation experiments confirm that the individual components of the link state perception enhancement module each contribute to the algorithm’s perception capability (Table 3).

Conclusions  A link state awareness enhanced intelligent routing algorithm (DRL-SGA) is proposed for tactical communication networks. By extracting high-order features from link state sequences across temporal and spatial dimensions, the algorithm addresses the limited global link state awareness of the PPO agent’s FCNN. Through the Actor-Critic framework and periodic interaction between the agent and the network environment, DRL-SGA iteratively optimizes its routing strategy, improving decision accuracy and robustness under dynamic topology and link conditions. Experimental results show that DRL-SGA meets the differentiated transmission requirements of latency-, bandwidth-, and reliability-sensitive services while adapting better to variations in network state. However, the algorithm may converge slowly when training samples are insufficient in rapidly changing environments. Future work will examine the integration of diffusion models to enrich training data and accelerate convergence.
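
The sketch below is a minimal, illustrative reconstruction (not the authors’ implementation) of how a GNN-plus-attention link state encoder of the kind described in the Methods could feed PPO actor-critic heads. It is written in PyTorch and makes several assumptions not stated in the abstract: per-link state features (e.g., delay, available bandwidth, loss rate), a dense row-normalized adjacency matrix over links, and hypothetical names such as LinkStateEncoder, SimpleGCNLayer, hidden_dim, and num_actions.

```python
# Illustrative sketch only: assumptions noted above, not the paper's code.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One dense graph-convolution step: aggregate neighbor features, then project."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_links, in_dim); adj: (num_links, num_links), row-normalized
        return torch.relu(self.linear(adj @ x))


class LinkStateEncoder(nn.Module):
    """GNN + self-attention extractor standing in for the link state
    perception enhancement module (module names are hypothetical)."""
    def __init__(self, feat_dim: int, hidden_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(feat_dim, hidden_dim)
        self.gcn2 = SimpleGCNLayer(hidden_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, link_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gcn2(self.gcn1(link_feats, adj), adj)   # spatial link features
        h, _ = self.attn(h, h, h)                        # attention across links
        return h.mean(dim=1)                             # pooled global state embedding


class ActorCritic(nn.Module):
    """PPO-style actor-critic heads sharing the link state encoder."""
    def __init__(self, feat_dim: int, num_actions: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = LinkStateEncoder(feat_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, num_actions)  # e.g., next-hop / path choice
        self.critic = nn.Linear(hidden_dim, 1)           # state-value estimate

    def forward(self, link_feats, adj):
        z = self.encoder(link_feats, adj)
        return torch.distributions.Categorical(logits=self.actor(z)), self.critic(z)


if __name__ == "__main__":
    num_links, feat_dim, num_actions = 61, 4, 8          # 61 links, as in the simulation
    adj = torch.eye(num_links)                            # placeholder adjacency
    feats = torch.rand(1, num_links, feat_dim)            # one batch of link states
    dist, value = ActorCritic(feat_dim, num_actions)(feats, adj)
    print(dist.sample().item(), value.item())
```

In the paper’s setting, the action space would correspond to next-hop or candidate-path selection, and the sampled actions, values, and collected transitions would drive the PPO clipped-objective update via the experience replay pool; that training loop is omitted here.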
