2026, 48(2): 651-661.
doi: 10.11999/JEIT250670
Abstract:
Objective Low Earth Orbit (LEO) satellite networks are central to future space-air-ground integrated systems, offering global coverage and low-latency communication. However, their high-speed mobility leads to rapidly changing topologies, and strict onboard cache constraints hinder efficient content delivery. Existing caching strategies often overlook real-time network congestion and content attributes (e.g., freshness), which leads to inefficient resource use and degraded Quality of Service (QoS). To address these limitations, we propose an adaptive cache placement strategy based on congestion awareness. The strategy dynamically couples real-time network conditions, including link congestion and latency, with a content value assessment model that incorporates both popularity and freshness. This integrated approach enhances cache hit rates, reduces backhaul load, and improves user QoS in highly dynamic LEO satellite environments, enabling efficient content delivery even under fluctuating traffic demands and resource constraints.

Methods The proposed strategy combines a dual-threshold congestion detection mechanism with a multi-dimensional content valuation model, and proceeds in three steps. First, satellite nodes monitor link congestion in real time using dual latency thresholds and relay congestion status to downstream nodes through data packets. Second, a two-dimensional content value model is constructed that integrates popularity and freshness: popularity is updated dynamically using an Exponential Weighted Moving Average (EWMA), which balances historical and recent request patterns to capture temporal variations in demand, while freshness is evaluated from the remaining data lifetime, ensuring that expired or near-expired content is deprioritized to maintain cache efficiency and relevance. Third, caching thresholds are adjusted adaptively according to the congestion level, and a hop count control factor is introduced to guide caching decisions.
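The three steps above can be sketched as follows. This is a minimal illustrative sketch only: all function names, thresholds, and weights (EWMA_ALPHA, T_LOW, T_HIGH, the base caching threshold, and the popularity/freshness weighting) are assumptions for exposition, not values from the paper.

```python
# Illustrative sketch of the three-step caching decision.
# All constants below are assumed for demonstration, not taken from the paper.

EWMA_ALPHA = 0.6             # weight on recent requests in the EWMA (assumed)
T_LOW, T_HIGH = 50.0, 120.0  # dual latency thresholds in ms (assumed)

def congestion_level(link_delay_ms: float) -> int:
    """Step 1: dual-threshold congestion detection.
    Returns 0 (uncongested), 1 (mildly congested), or 2 (congested)."""
    if link_delay_ms < T_LOW:
        return 0
    if link_delay_ms < T_HIGH:
        return 1
    return 2

def update_popularity(old_pop: float, recent_requests: float) -> float:
    """Step 2a: EWMA popularity update balancing history and recency."""
    return EWMA_ALPHA * recent_requests + (1 - EWMA_ALPHA) * old_pop

def freshness(remaining_lifetime_s: float, total_lifetime_s: float) -> float:
    """Step 2b: freshness as normalized remaining lifetime;
    expired content is valued at zero."""
    if remaining_lifetime_s <= 0:
        return 0.0
    return remaining_lifetime_s / total_lifetime_s

def content_value(pop: float, fresh: float, w_pop: float = 0.7) -> float:
    """Two-dimensional content value combining popularity and freshness
    (the weighting is assumed)."""
    return w_pop * pop + (1 - w_pop) * fresh

def should_cache(value: float, level: int, hops_from_source: int) -> bool:
    """Step 3: the caching threshold is raised under congestion, and a
    hop-count control factor favors placement closer to the requesting user."""
    base_threshold = 0.5                             # assumed base threshold
    threshold = base_threshold * (1 + 0.5 * level)   # stricter when congested
    hop_factor = 1.0 / (1 + hops_from_source)        # assumed control factor
    return value * (1 + hop_factor) > threshold
```

Under this sketch, a node evaluates `should_cache(content_value(pop, fresh), congestion_level(delay), hops)` for each passing Data packet, so low-value or stale content is filtered out exactly when congestion makes cache space and bandwidth most scarce.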
This coordinated mechanism enables the system to prioritize high-value content while mitigating congestion, thereby improving overall responsiveness and user QoS.

Results and Discussions Simulations conducted on ndnSIM demonstrate the superiority of the proposed strategy over PaCC (Popularity-Aware Closeness-based Caching), LCE (Leave Copy Everywhere), LCD (Leave Copy Down), and Prob (probabilistic caching with probability 0.5). The key findings are as follows. (1) Cache hit rate. The proposed strategy consistently outperforms conventional methods. As shown in Fig. 8, the cache hit rate rises markedly with increasing cache capacity and Zipf parameter, exceeding that of LCE, LCD, and Prob. Specifically, the proposed strategy achieves improvements of 43.7% over LCE, 25.3% over LCD, 17.6% over Prob, and 9.5% over PaCC. Under high content concentration (i.e., larger Zipf parameters), the improvement reaches 29.1% compared with LCE, highlighting the strong capability of the strategy in promoting high-value content distribution. (2) Average routing hop ratio. The proposed strategy also reduces routing hops compared with the baselines. As shown in Fig. 9, the average hop ratio decreases as cache capacity and Zipf parameter increase. Relative to PaCC, the proposed strategy lowers the average hop ratio by 2.24%, indicating that content is cached closer to users, thereby shortening request paths and improving routing efficiency. (3) Average request latency. The proposed strategy achieves consistently lower latency than all baseline methods. As summarized in Table 2 and Fig. 10, the reduction is more pronounced under larger cache capacities and higher Zipf parameters. For instance, with a cache capacity of 100 MB, latency decreases by approximately 2.9%, 5.8%, 9.0%, and 10.3% compared with PaCC, Prob, LCD, and LCE, respectively. When the Zipf parameter is 1.0, latency reductions reach 2.7%, 5.7%, 7.2%, and 8.8% relative to PaCC, Prob, LCD, and LCE, respectively.
Concretely, under a cache capacity of 100 MB and a Zipf parameter of 1.0, the average request latency of the proposed strategy is 212.37 ms, compared with 236.67 ms (LCE), 233.45 ms (LCD), 225.42 ms (Prob), and 218.62 ms (PaCC).

Conclusions This paper presents a congestion-aware adaptive cache placement strategy for LEO satellite networks. By combining real-time congestion monitoring with a multi-dimensional content valuation that considers both dynamic popularity and freshness, the strategy achieves balanced improvements in caching efficiency and network stability. Simulation results show that the proposed method markedly enhances cache hit rates, reduces average routing hops, and lowers request latency compared with existing schemes such as PaCC, Prob, LCD, and LCE. These benefits hold across different cache sizes and request distributions, particularly under resource-constrained or highly dynamic conditions, confirming the strategy's adaptability to LEO environments. The main innovations include a closed-loop feedback mechanism for congestion status, dynamic adjustment of caching thresholds, and hop-aware content placement, which together improve resource utilization and user QoS. This work provides a lightweight and robust foundation for high-performance content delivery in satellite-terrestrial integrated networks. Future extensions will incorporate service-type differentiation (e.g., delay-sensitive vs. bandwidth-intensive services) and orbital prediction to proactively optimize cache migration and updates, further enhancing efficiency and adaptability in 6G-enabled LEO networks.