Citation: XIA Wei, WEI Hongtu, CHENG Ying, WANG Junting, HU Xiaoxuan. An Expert Chain Construction and Optimization Method for Satellite Mission Planning[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT251018

An Expert Chain Construction and Optimization Method for Satellite Mission Planning

doi: 10.11999/JEIT251018 cstr: 32379.14.JEIT251018
Funds: The National Natural Science Foundation of China (General Program, 72271074)
  • Received Date: 2025-09-28
  • Accepted Date: 2026-01-05
  • Rev Recd Date: 2026-01-05
  • Available Online: 2026-01-09
Objective  Satellite mission planning is a core optimization problem in space resource scheduling. Existing workflows exhibit a semantic gap between business-level natural language requirements and the mathematical models used for planning. In dynamic operational scenarios, model updates such as constraint modification, parameter recalculation, or task attribute adjustment rely heavily on human experts. This dependence leads to slow responses, limited adaptability, and high operational costs. To address these limitations, this paper proposes a Large Language Model (LLM)-driven inference framework based on a Chain of Experts (CoE) and a Dynamic Knowledge Enhancement (DKE) mechanism. The framework enables accurate, efficient, and robust modification of satellite mission planning models from natural language instructions.

Methods  The proposed framework decomposes natural language-driven model modification into a collaborative workflow comprising requirement parsing, task routing, and code generation experts. The requirement parsing expert converts natural language requests into structured modification instructions. The task routing expert assesses task difficulty and dispatches instructions accordingly. The code generation expert produces executable modification scripts for complex, large-scale, or batch operations. To improve accuracy and reduce reliance on manual expert intervention, a DKE mechanism is incorporated. This mechanism adopts a tiered LLM strategy, using a lightweight general model for rapid processing and a stronger reasoning model for complex cases, and constructs a dynamic knowledge base of validated modification cases. Through retrieval-augmented few-shot prompting, historical successful cases are integrated into the reasoning process, enabling continuous self-improvement without model fine-tuning. A sandbox environment performs mathematical consistency checks, including constraint completeness, parameter validity, and solution feasibility, before final acceptance of model updates. (An illustrative sketch of this workflow is given after the Results and Discussions below.)

Results and Discussions  Experiments are conducted on a simulated satellite mission planning dataset comprising 100 heterogeneous satellites and 1,000 point targets with different payload types, resolution requirements, and operational constraints. A test set of 100 natural language modification requests of varying complexity is constructed to represent dynamic real-world adjustment scenarios (Table 1). The proposed CoE with DKE framework is evaluated against three baselines: standard prompting with DeepSeek R1, Chain-of-Thought prompting with DeepSeek R1, and standard prompting with GPT-4o. The proposed method achieves an accuracy of 82% with an average response time of 81.28 s, outperforming all baselines in both correctness and efficiency (Table 2). Accuracy increases by 35 percentage points relative to the best-performing baseline, whereas response time decreases by 53.3% (Table 2). Scalability experiments show that the CoE with DKE framework maintains stable response times across small, medium, and large problem instances, whereas baseline methods exhibit substantial delays as problem size increases (Table 3). Ablation studies indicate that DKE substantially reduces reliance on high-cost reasoning models, improves the general model's ability to resolve complex modifications independently, and increases accuracy without sacrificing efficiency (Table 5).
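The following Python sketch illustrates how the expert chain described in Methods could be organized: requirement parsing, difficulty-based routing between a lightweight and a stronger reasoning model, and retrieval-augmented few-shot prompting from a knowledge base of validated cases. It is a minimal illustration under assumed interfaces; the llm callables, prompt wording, and keyword-overlap retrieval are placeholders, not the authors' implementation.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModificationCase:
    request: str       # natural language modification request
    instruction: str   # structured modification instruction
    script: str        # validated executable modification script

@dataclass
class KnowledgeBase:
    """Dynamic knowledge base of validated modification cases (illustrative)."""
    cases: List[ModificationCase] = field(default_factory=list)

    def retrieve(self, request: str, k: int = 3) -> List[ModificationCase]:
        # Naive keyword-overlap retrieval; a real system would use embedding similarity.
        scored = sorted(
            self.cases,
            key=lambda c: len(set(request.lower().split()) & set(c.request.lower().split())),
            reverse=True,
        )
        return scored[:k]

def chain_of_experts(request: str,
                     light_llm: Callable[[str], str],
                     strong_llm: Callable[[str], str],
                     kb: KnowledgeBase) -> str:
    """Requirement parsing -> task routing -> code generation, with retrieval-augmented prompts."""
    # 1. Requirement parsing expert: natural language request -> structured instruction.
    instruction = light_llm("Parse into a structured modification instruction:\n" + request)
    # 2. Task routing expert: estimate difficulty and pick the model tier.
    difficulty = light_llm("Rate the difficulty (simple/complex) of:\n" + instruction)
    generator = strong_llm if "complex" in difficulty.lower() else light_llm
    # 3. Retrieval-augmented few-shot prompt built from validated historical cases.
    shots = "\n\n".join("Request: " + c.request + "\nScript: " + c.script
                        for c in kb.retrieve(request))
    # 4. Code generation expert: produce an executable modification script.
    return generator("Validated examples:\n" + shots +
                     "\n\nInstruction:\n" + instruction +
                     "\nReturn an executable modification script:")

if __name__ == "__main__":
    # Stub LLMs so the sketch runs without any external API.
    stub = lambda prompt: "simple" if "difficulty" in prompt else "# modification script placeholder"
    print(chain_of_experts("Tighten the revisit-time constraint for optical satellites",
                           stub, stub, KnowledgeBase()))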
Conclusions  This paper presents an LLM-powered reasoning framework that integrates a Chain of Experts workflow with a DKE mechanism to bridge the semantic gap between natural language requirements and formal optimization models in satellite mission planning. Through layered model collaboration, retrieval-augmented prompting, and sandbox-based mathematical verification, the proposed method achieves high accuracy, fast processing, and strong adaptability to dynamic and complex planning scenarios. Experimental results demonstrate its effectiveness in supporting precise model modification and improving operational intelligence. Future work will extend the framework to multimodal inputs and real-world mission environments to further improve robustness and engineering applicability.
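As a complement, the sandbox-based mathematical verification can be pictured as a pre-acceptance gate over a modified model. The sketch below is a hedged illustration of the three checks named in the abstract (constraint completeness, parameter validity, solution feasibility); the dictionary-based model representation and the specific tests are assumptions for exposition, not the paper's actual data structures.

import math
from typing import Callable, Dict, List

Constraint = Callable[[Dict[str, float]], bool]

def sandbox_check(variables: List[str],
                  parameters: Dict[str, float],
                  constraints: Dict[str, Constraint],
                  candidate_solution: Dict[str, float]) -> bool:
    """Run consistency checks on a modified model before accepting the update."""
    # Constraint completeness: every declared constraint must have an evaluable body.
    if any(not callable(body) for body in constraints.values()):
        return False
    # Parameter validity: all parameters must be finite numeric values.
    if any(not isinstance(v, (int, float)) or not math.isfinite(v)
           for v in parameters.values()):
        return False
    # Solution feasibility: a reference assignment must cover all variables
    # and satisfy every constraint of the modified model.
    if set(candidate_solution) != set(variables):
        return False
    return all(body(candidate_solution) for body in constraints.values())

# Example: accept the modified model only if a known reference schedule stays feasible.
accepted = sandbox_check(
    variables=["x1", "x2"],
    parameters={"max_duty_cycle": 0.6},
    constraints={"duty_cycle": lambda s: s["x1"] + s["x2"] <= 0.6},
    candidate_solution={"x1": 0.3, "x2": 0.2},
)
print("accept model update" if accepted else "reject model update")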