
2025 Vol. 47, No. 2

Research Overview of Reconfigurable Intelligent Surface Enabled Semantic Communication Systems
ZHU Zhengyu, LIANG Xinyue, SUN Gangcan, NIU Kai, CHU Zheng, YANG Zhaohui, YANG Guangrui, ZHENG Guhan
2025, 47(2): 287-295. doi: 10.11999/JEIT240984
Abstract:
  Objective   The proliferation of Sixth-Generation (6G) wireless network technologies has led to an exponential demand for intelligent devices in applications such as autonomous transportation, environmental monitoring, and consumer robotics. These applications will generate vast amounts of data, reaching zettabytes in scale. Furthermore, they require massive connectivity and low latency over limited spectrum resources, presenting critical challenges to traditional source-channel coding methods. Therefore, the 6G architecture is shifting from a traditional framework focused on high transmission rates to a novel paradigm centered on the intelligent interconnection of all things. Semantic Communication (SemCom) is considered an extension of the Shannon communication paradigm, aiming to extract the meaning from data and filter out unnecessary, irrelevant, or unessential information. As a core paradigm in 6G, SemCom enhances transmission accuracy and spectral efficiency, optimizing service quality. Despite its significant potential, challenges remain in implementing SemCom systems. Reconfigurable Intelligent Surfaces (RIS) are seen as key enablers for 6G networks. RIS can be dynamically deployed in wireless environments to manipulate electromagnetic wave characteristics (such as frequency, phase, and polarization) via programmable reflection and refraction, reshaping wireless channels to amplify signal strength, extend coverage, and optimize performance. Integrating RIS into SemCom systems helps address limitations like coverage voids while enhancing the precision and efficiency of semantic information delivery. This paper proposes an RIS-enabled SemCom framework, with numerical simulations validating its effectiveness in improving system accuracy and robustness.  Methods   This paper integrates RIS into the SemCom system. The transmitted signal reaches the receiver through both the direct link and the RIS-reflected link, mitigating communication interruptions caused by obstructions. Additionally, the Bilingual Evaluation Understudy (BLEU) metric is used to evaluate performance. Simulations compare RIS-enhanced channels with conventional channels (e.g., AWGN and Rayleigh), demonstrating the performance gain of RIS in SemCom systems.  Results and Discussions   A positive correlation is observed between Signal-to-Noise Ratio (SNR) increases and improvements in the BLEU score, where higher BLEU scores indicate better text reconstruction fidelity to the source content, reflecting enhanced semantic accuracy and communication quality (Fig. 4). Under RIS-enhanced channel conditions, SemCom systems not only show higher BLEU scores but also exhibit greater stability, with reduced sensitivity to SNR fluctuations. This validates the advantages of RIS channels in semantic information recovery. The performance gap between RIS and conventional channels widens significantly under low SNR conditions, suggesting that RIS-enabled systems maintain robust communication quality and semantic fidelity even with signal degradation, highlighting their stronger practical competitiveness. Additionally, the comparative analysis shows performance differences across N-gram models (Figs. 4(a) and (b)). Practical implementations, therefore, require model selection based on computational constraints and task requirements, with potential for exploring higher-order N-gram architectures.  Conclusions   This paper systematically examines the evolution of SemCom and the theoretical foundations of RIS. 
SemCom, aimed at overcoming the bandwidth limitations of traditional systems and enabling natural human-machine interactions, has shown transformative potential across various domains. At the same time, the paper highlights RIS’s advantages in improving wireless system performance and its potential integration with SemCom paradigms. A novel RIS-enabled SemCom architecture is proposed, with experimental validation confirming its effectiveness in enhancing information recovery accuracy. Additionally, the paper outlines future research directions for RIS-enhanced SemCom, urging the research community to address emerging challenges.  Prospects   Current research on RIS-enabled SemCom is still in its early stages, primarily focusing on resource allocation, performance enhancement, and architectural design. However, it faces fundamental challenges, such as the lack of Shannon-like theoretical foundations and vulnerabilities in knowledge base synchronization and updating. Three critical challenges emerge: (1) Cross-modal semantic fusion architecture, which requires adaptive frameworks to support diverse 6G services beyond single-modality paradigms; (2) Dynamic knowledge base optimization, requiring efficient update mechanisms to balance semantic consistency with computational and communication overhead; (3) Semantic-aware security protocols, which must incorporate hybrid defenses against AI-specific attacks (e.g., adversarial perturbations) and RIS-enabled channel manipulation threats.
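As a concrete illustration of the BLEU-based evaluation described in this abstract, the sketch below scores a reconstructed sentence against its source at the semantic receiver. It is a minimal self-contained example assuming uniform n-gram weights and a simple brevity penalty; it is not the authors' evaluation code, and the example sentences are invented.

```python
# Minimal sentence-level BLEU sketch for comparing reconstructed text with the
# source text in a semantic communication experiment. N-gram weights, smoothing
# floor, and the example sentences are illustrative assumptions only.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=2):
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)   # tiny floor avoids log(0)
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the channel state is reported", "the channel state is reported"))  # perfect recovery -> 1.0
print(bleu("the channel state is reported", "channel state reported"))         # degraded recovery -> < 1.0
```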
Joint Secure Transmission and Trajectory Optimization for Reconfigurable Intelligent Surface-aided Non-Terrestrial Networks
XU Kexin, LONG Keping, LU Yang, ZHANG Haijun
2025, 47(2): 296-304. doi: 10.11999/JEIT240981
Abstract:
  Objective  The proliferation of technologies such as the Internet of Things, smart cities, and next-generation mobile communications has made Non-Terrestrial Networks (NTNs) increasingly important for global communication. Future communication systems are expected to rely heavily on NTNs to provide seamless global coverage and efficient data transmission. However, current NTNs face challenges, including limited coverage and link quality in direct satellite-to-ground user connections, as well as eavesdropping threats. To address these challenges, a system integrating Reconfigurable Intelligent Surfaces (RIS) with a twin-layer Deep Reinforcement Learning (DRL) algorithm is proposed. This approach aims to satisfy the system’s requirements for high transmission rates and enhanced security, improving the signal strength for legitimate users while facilitating real-time updates and optimization of channel state information in NTNs.  Methods  First, an RIS-aided downlink NTN system using an Unmanned Aerial Vehicle (UAV) as a relay is established. To balance the system’s transmission rate and security requirements, the weighted sum of the satellite-to-UAV transmission rate and the secrecy rate of the legitimate ground user is designed as the system utility, which serves as the optimization objective. A joint optimization method based on the Twin Twin-Delayed Deep Deterministic Policy Gradient (TTD3) algorithm is then proposed. This method jointly optimizes satellite and UAV beamforming, the RIS phase shift matrix, and UAV trajectory. The algorithm divides the optimization problem into two layers for solution. The first-layer DRL optimizes satellite and UAV beamforming, as well as the RIS phase shift matrix. The second-layer DRL optimizes the UAV’s trajectory based on its position, user mobility, and channel state information. The two DRL layers share the same reward function, guiding the agents in each layer to adjust their actions and explore optimal strategies, ultimately enhancing the system’s utility.  Results and Discussions  (1) Compared to the Deep Deterministic Policy Gradient (DDPG), the proposed TTD3 algorithm exhibits smaller dynamic fluctuations, demonstrating greater stability and robustness (Fig. 2). (2) The UAV trajectory and user secrecy rate performance under four different schemes and algorithms show that the proposed method balances service for legitimate users. The UAV trajectory is smoother compared to that based on DDPG, and the overall user secrecy rate is also higher. This confirms that the proposed method can adapt to dynamically changing NTN environments while improving user secrecy rates (Fig. 3, Fig. 4). (3) As the number of RIS reflecting elements increases, the degrees of freedom and precision of beamforming improve. Therefore, the overall user secrecy rates of different algorithms increase, resulting in enhanced system performance (Fig. 5).  Conclusions  This paper investigates an RIS-assisted downlink secure transmission system for NTNs, addressing the presence of eavesdropping threats. To meet the requirements of high transmission rates and security across different scenarios, the optimization objective is formulated as the weighted sum of the transmission rate from the satellite to the UAV and the secrecy rate of legitimate ground users. A TTD3-based joint optimization method for satellite and UAV beamforming, RIS phase shift matrix, and UAV trajectory is proposed. 
By adopting a twin-layer DRL structure, the beamforming and trajectory optimization subproblems are decoupled to maximize system utility. Simulation results validate the effectiveness of the proposed algorithm. Additionally, comparisons across different algorithms, RIS element counts, and schemes in high-security-demand scenarios demonstrate that the TTD3 algorithm is well-suited for dynamically changing NTN environments and can significantly enhance system transmission performance. Future research will explore integrating emerging technologies, such as federated learning and meta-learning, to achieve distributed, low-latency policy optimization, thereby facilitating network resource optimization and interference analysis in large-scale, multi-satellite, and multi-UAV complex scenarios.
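To make the optimization objective concrete, the snippet below evaluates the system utility described in the Methods: a weighted sum of the satellite-to-UAV rate and the legitimate user's secrecy rate, which the two DRL layers could use as a shared reward. The SNR values and the weight are illustrative placeholders, not parameters from the paper.

```python
# Toy evaluation of the weighted-sum system utility used as the shared reward.
# All SNRs are in linear scale; the weight is an assumed trade-off parameter.
import math

def secrecy_rate(snr_legit, snr_eve):
    # Non-negative gap between the legitimate user's rate and the eavesdropper's rate.
    return max(0.0, math.log2(1 + snr_legit) - math.log2(1 + snr_eve))

def system_utility(snr_sat_uav, snr_user, snr_eve, weight=0.5):
    rate_sat_uav = math.log2(1 + snr_sat_uav)            # satellite-to-UAV link rate
    return weight * rate_sat_uav + (1 - weight) * secrecy_rate(snr_user, snr_eve)

print(system_utility(snr_sat_uav=10.0, snr_user=6.0, snr_eve=1.5))
```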
Secure Transmission Scheme for Reconfigurable Intelligent Surface-enabled Cooperative Simultaneous Wireless Information and Power Transfer Non-Orthogonal Multiple Access System
JI Wei, LIU Ziqing, LI Fei, LI Ting, LIANG Yan, SONG Yunchao
2025, 47(2): 305-314. doi: 10.11999/JEIT240822
Abstract:
  Objective  The Reconfigurable Intelligent Surface (RIS) is emerging as a promising technology because it provides passive beamforming gains and can be seamlessly integrated into existing wireless networks without altering physical layer standards. The integration of RIS with other advanced technologies offers new opportunities for communication network design. In the context of future large-scale Internet of Things (IoT) systems, users are expected to have diverse requirements. These differences in structure and function lead to two distinct receiver operation modes: Power Splitting (PS) and Time Switching (TS). Furthermore, users’ service needs may vary, including energy harvesting and information transmission. In practice, IoT terminals often face energy constraints. Additionally, the network typically operates in an open wireless environment, where the inherent broadcasting nature of wireless channels may introduce security vulnerabilities. To address the diverse service demands in large-scale IoT networks and ensure secure information transmission, this study proposes an RIS-enabled secure transmission scheme for a cooperative Simultaneous Wireless Information and Power Transfer Non-Orthogonal Multiple Access (SWIPT-NOMA) system.  Methods  The RIS is strategically deployed to assist transmission during both the direct and cooperative transmission stages. The goal is to maximize the secrecy rate of the strong NOMA user, subject to the information rate requirements of the weak NOMA user, the energy harvesting needs of the strong NOMA user, and the base station’s minimum transmission power. To solve this multivariable-coupled, non-convex optimization problem, an alternating iterative optimization algorithm is applied. The algorithm optimizes the base station’s active beamforming, the RIS’s passive beam phase shift matrix in the direct transmission stage, the RIS’s active beam phase shift matrix in the cooperative transmission stage, and the PS coefficient of the strong user. These parameters are iteratively adjusted until convergence is achieved.  Results and Discussions  The convergence of the algorithm is demonstrated in (Fig. 3). As the number of RIS elements increases and the number of iterations grows, the secrecy rate of the strong user (U2) gradually improves until it converges. To evaluate the effectiveness of the proposed scheme, it is compared with several benchmark schemes: (1) The random PS coefficient scheme, where RIS is used in both the direct and cooperative transmission stages, and the PS coefficients for strong user U2 are randomly generated. (2) The random RIS phase shift matrix scheme, where RIS enables both transmission stages, with phase shift matrices for both stages randomly generated. (3) The Semi-Definite Relaxation (SDR) scheme, in which RIS is used in both transmission stages, and the phase shift matrices are optimized using the SDR method. (4) The RIS-enabled direct transmission scheme, where RIS is used only in the direct transmission stage. The impact of the number of base station antennas on the system’s secrecy rate is shown in (Fig. 4), and the effect of the number of RIS elements on the secrecy rate is explored in (Fig. 5). Compared to the other baseline schemes, the proposed scheme achieves a higher secrecy rate for the strong user.  
Conclusions  This paper addresses the challenge of diverse service requirements for users in future large-scale IoT networks and the security of information transmission by designing a secure transmission scheme for an RIS-enabled cooperative SWIPT-NOMA communication system. RIS assists communication in both the direct and cooperative transmission stages. The secrecy rate of the strong user is maximized while considering the information rate requirements of weak NOMA users, the energy harvesting needs of strong NOMA users, and the base station’s minimum transmission power. The proposed optimization problem is a non-convex, multi-variable problem, which is difficult to solve directly. To address this, the problem is divided into several sub-problems, and the active beamforming of the base station, the passive beam phase shift matrix of the RIS in the direct transmission stage, the active beam phase shift matrix of the RIS in the cooperative transmission stage, and the power splitting coefficient of the strong user are iteratively optimized until convergence. Simulation results demonstrate that the secrecy rate of the proposed scheme outperforms that of the scheme where RIS is enabled only in the direct transmission stage. Compared to other baseline schemes, the proposed scheme further enhances the secrecy rate for strong users.
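The alternating iterative procedure used here follows a generic block-coordinate pattern: fix all but one variable block, solve the resulting subproblem, and cycle until the secrecy-rate objective stops improving. The skeleton below captures that loop on a toy problem; the per-block solver is a placeholder standing in for the paper's convex subproblems (base-station beamforming, the two RIS phase-shift matrices, and the PS coefficient), not the authors' implementation.

```python
# Generic alternating (block-wise) optimization loop with a toy usage example.
def alternating_optimization(blocks, objective, solve_block, max_iter=50, tol=1e-4):
    prev = objective(blocks)
    for _ in range(max_iter):
        for name in blocks:                      # update one block with the others fixed
            blocks[name] = solve_block(name, blocks)
        cur = objective(blocks)
        if abs(cur - prev) < tol:                # stop once the objective converges
            break
        prev = cur
    return blocks, prev

# Toy usage: coordinate ascent on a simple concave surrogate objective.
def objective(b):
    return -((b["x"] - 1.0) ** 2 + (b["y"] + 2.0) ** 2)

def solve_block(name, b):
    # Closed-form best response for this toy objective.
    return 1.0 if name == "x" else -2.0

blocks, value = alternating_optimization({"x": 0.0, "y": 0.0}, objective, solve_block)
print(blocks, value)
```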
Tradeoff between Age of Information and Energy Efficiency for Intelligent Reflecting Surface Assisted Short Packet Communications
ZHANG Yangyi, GUAN Xinrong, WANG Quan, DENG Cheng, ZHU Zeyuan, CAI Yueming
2025, 47(2): 315-323. doi: 10.11999/JEIT240666
Abstract:
  Objective:   In monitoring Internet of Things (IoT) systems, it is essential for sensor devices to transmit collected data to the Access Point (AP) promptly. The timely transmission of information can be enhanced by increasing transmission power, as higher power levels tend to improve the reliability of data transfer. However, sensor devices typically have limited transmission power, and beyond a certain threshold, increases in power yield diminishing returns in terms of transmission timeliness. Therefore, effectively managing transmission power to balance timeliness and Energy Efficiency (EE) is crucial for sensor devices. This paper investigates the trade-off between the Age of Information (AoI) and EE in multi-device monitoring systems, where sensor devices communicate monitoring data to the AP using short packets with the support of an Intelligent Reflecting Surface (IRS). To address packet collisions that occur when multiple devices access the same resource block, an access control protocol is developed, and closed-form expressions are derived for both the average AoI and EE. Based on these expressions, the average AoI-EE ratio is introduced as a metric that can be minimized to achieve an optimal balance between AoI and EE through transmission power optimization.  Methods:   Deriving the closed-form expression for the average AoI is challenging due to two factors. Firstly, obtaining the exact distribution of the composite channel gain is difficult. Secondly, in short-packet communications, the packet error rate expression involves a complementary cumulative distribution function with a complex structure, complicating the averaging process. However, the Moment Matching (MM) technique can approximate the probability distribution of the composite channel gain as a gamma distribution. To address the second challenge, a linear function is used to approximate the packet error rate, yielding an approximate expression for the average packet error rate. Additionally, to examine the relationship between the ratio of average AoI and EE with transmission power, the second derivative of this ratio is calculated and analyzed. Finally, the optimal transmission power is determined using the binary search algorithm.  Results and Discussions:   Firstly, the paper examines the division of a time slot into varying numbers of resource blocks and analyzes the corresponding AoI performance. The findings indicate that AoI performance does not increase monotonically with an increase in the number of resource blocks. Specifically, while a greater number of resource blocks enhances the probability of device access, it concurrently reduces the size of each resource block, leading to an increase in packet error rates during information transmission. Therefore, it is essential to strategically plan the number of resource blocks allocated for each time slot. Additionally, the results demonstrate that the AoI performance of the proposed access control scheme exceeds that of traditional random access and periodic sampling schemes. In the random access scheme, devices occupy resource blocks at random, which may lead to multiple devices occupying the same block and resulting in transmission collisions that compromise the reliability of information transmission. Conversely, while devices in the periodic sampling scheme can reliably access resource blocks within each cycle, one cycle includes multiple time slots, thus necessitating a prolonged wait for information transmission. 
Moreover, it is noted that at lower information transmission power levels, the periodic sampling scheme can achieve higher EE. This is attributed to the low transmission power resulting in substantially higher packet error rates across all schemes; however, the periodic sampling scheme manages to secure larger resource blocks, leading to lower packet error rates and a reduced likelihood of energy waste during signal transmission. As information transmission power increases, the advantages of the periodic sampling scheme begin to diminish, and the EE of the proposed access control scheme ultimately exceeds that of the periodic sampling scheme. Finally, the paper investigates the relationship between the ratio of average AoI and EE with the information transmission power. The analysis reveals that this ratio is a convex function that initially decreases and subsequently increases with rising transmission power, indicating the existence of an optimal power level that minimizes the ratio.  Conclusions:   This study examines the trade-off between timeliness and EE in IRS-assisted short-packet communication systems. An access control protocol is proposed to mitigate packet collisions, and both timeliness and EE are analyzed. The ratio of average AoI to EE is introduced as a metric to balance AoI and EE, with optimization of transmission power shown to minimize this ratio. Simulation results validate the theoretical analysis and demonstrate that the proposed access control protocol achieves an improved AoI-EE trade-off. Future research will focus on optimizing the deployment location of the IRS to further enhance the balance between timeliness and EE.
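Because the average AoI-EE ratio is convex in the transmit power, a binary (bisection) search on its derivative suffices to locate the minimizing power, as the Methods indicate. The sketch below demonstrates that step on a stand-in convex curve; f(P) is an invented toy ratio, not the paper's closed-form average AoI/EE expression.

```python
# Bisection on the numerical derivative of a toy convex AoI-to-EE ratio f(P).
def f(p):
    return 1.0 / p + 0.05 * p            # toy ratio: decreases, then increases with power

def d(p, eps=1e-6):
    return (f(p + eps) - f(p - eps)) / (2 * eps)

def bisect_min(lo, hi, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d(mid) > 0:                    # past the minimum: shrink the upper bound
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p_opt = bisect_min(0.1, 100.0)
print(p_opt, f(p_opt))                    # toy optimum is at sqrt(1/0.05) ~ 4.47
```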
Performance and Optimal Placement Analysis of Intelligent Reflecting Surface-assisted Wireless Networks
SHU Feng, LAI Sihao, LIU Chuan, GAO Wei, DONG Rongen, WANG Yan
2025, 47(2): 324-333. doi: 10.11999/JEIT240488
Abstract:
  Objective:   Previous studies have extensively examined the performance of Intelligent Reflecting Surface (IRS)-assisted wireless communications by varying the location of the IRS. However, relocating the IRS alters the sum of the distances between the IRS and the base station, as well as the distances to users, leading to discrepancies in reflective channel transmission distances, which introduces a degree of unfairness. Additionally, the assumption that the path loss exponents for the base station-to-IRS and IRS-to-user channels are equal is overly idealistic. In practical scenarios, the user’s height is typically much lower than that of the base station, and the IRS may be positioned closer to either the base station or the user. This disparity results in significantly different path loss exponents for the two channels. Consequently, this paper focuses on identifying the optimal deployment location of the IRS while keeping the total distance fixed. The IRS is modeled to move along an ellipse or ellipsoidal surface defined by the base station and the user as focal points. The analysis provides insights into the optimal deployment of the IRS while taking into account a broader range of application scenarios, specifically addressing different path loss exponents for the base station-to-IRS and IRS-to-user channels given a predetermined sum of the transmit powers.  Methods:   Utilizing concepts of phase alignment and the law of large numbers, closed-form expressions for the achievable rate of both passive and active IRS-assisted wireless networks are initially derived for two scenarios: the line-of-sight channel and the Rayleigh channel. Following this, the study analyzes how the path loss exponents from the base station to the IRS and from the IRS to the user impact the optimal deployment location of the IRS.  Results and Discussions:   The achievable rate of a passive IRS-assisted wireless network, considering IRS locations under both line-of-sight and Rayleigh channels, is illustrated. It is evident that the optimal deployment location of the IRS is nearest to either the base station or the user when β1=β2. When β1>β2, the optimal deployment location of the IRS is obtained solely at the base station, while the least effective deployment location shifts progressively closer to the user. Conversely, a contrasting result is obtained when β1<β2. The above results verify the correctness of the theoretical derivation in Section 3.1.3. The achievable rate of an active IRS-assisted wireless network as a function of IRS location under line-of-sight and Rayleigh channels is depicted. The figure indicates that when β1=β2, the system’s achievable rate under the line-of-sight channel exceeds that of the Rayleigh channel, with the optimal deployment location of the active IRS positioned in proximity to the user. When β1>β2 (fixed β2, increasing β1), the optimal deployment location of the active IRS progressively approaches the base station. When β1<β2, the optimal deployment location shifts closer to the user. The optimal deployment location of the IRS for IRS-assisted wireless networks operating under a Rayleigh channel, reflecting variations in the path loss exponent β, is portrayed. Notably, for passive IRS systems, regardless of the path loss exponent variations, the optimal deployment locations across three different cases yield consistent conclusions with those derived. 
For the active IRS, when β1=β2=β and β increases, the optimal deployment location gradually moves away from the user, ultimately approaching the location m (directly above the midpoint of the line connecting the base station and the user). When β1>β2, the optimal deployment position of the IRS increasingly aligns with the base station along the elliptical trajectory, whereas when β1<β2, it shifts towards the user. The optimal deployment location of the active IRS under both line-of-sight and Rayleigh channels as a function of the IRS reflected power PI is displayed. The analysis indicates that, under both channel conditions, the optimal deployment location of the active IRS progressively moves closer to the base station along the elliptical trajectory as PI increases. When β1=β2 and PI=PB, the optimal deployment location of the active IRS maintains an equal distance from both the base station and the user. The system’s achievable rate in relation to the distance r from the base station to the active IRS, accounting for different user noise powers σU2 and amplification noise powers σI2 of the active IRS, is presented. When fixing σI2 and gradually increasing σU2, the optimal deployment location of the active IRS is situated closer to the user. Conversely, when fixing σU2 and gradually increasing σI2, the optimal deployment location gradually approaches the base station. Additionally, whichever noise level increases, the system’s achievable rate tends to decline.  Conclusions:   This paper examines the maximization of the system achievable rate by varying the deployment locations of passive and active IRSs in line-of-sight and Rayleigh channel transmission scenarios. In the analysis, fixed positions are assumed for both the base station and the user, with the sum of the base station-to-IRS and IRS-to-user distances kept constant. Phase alignment and the law of large numbers are employed to derive a closed-form expression for the achievable rate. Theoretical analysis and simulation results provide several key insights: When β1<β2, the optimal deployment locations for both passive and active IRS are close to the user, and the least favorable deployment location for the passive IRS moves progressively closer to the base station as the difference between β1 and β2 increases. When β1=β2, the optimal deployment location for the active IRS remains near the user, while the passive IRS can be effectively placed near either the base station or the user. When β1>β2, the optimal deployment location of the passive IRS remains close to the base station. As the difference between β1 and β2 increases, the optimal deployment location of the active IRS gradually shifts closer to the base station. Additionally, as the amplification noise of the active IRS increases, its optimal deployment location moves closer to the base station. Conversely, when the noise at the user increases, the optimal deployment location of the active IRS shifts closer to the user.
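The qualitative placement behavior for the passive IRS can be reproduced with a very small sweep: with the sum of the two hop distances fixed, the cascaded path loss d1^β1·d2^β2 determines where the achievable rate peaks. The constants below are arbitrary stand-ins chosen only to expose the trend, not values or channel models from the paper.

```python
# Sweep of a toy passive-IRS cascaded link while the IRS moves along the
# ellipse with the base station and user as foci (d1 + d2 = D fixed).
# The best placement is near the BS or the user when beta1 = beta2, near the
# BS when beta1 > beta2, and near the user when beta1 < beta2.
import numpy as np

D = 200.0                                 # fixed BS-to-IRS plus IRS-to-user distance (m)
d1 = np.linspace(10.0, D - 10.0, 400)     # BS-to-IRS distance along the ellipse
d2 = D - d1

def rate(beta1, beta2, gain=1e9):
    snr = gain / (d1 ** beta1 * d2 ** beta2)   # cascaded path loss of the reflected link
    return np.log2(1 + snr)

for b1, b2 in [(2.0, 2.0), (2.5, 2.0), (2.0, 2.5)]:
    r = rate(b1, b2)
    print(f"beta1={b1}, beta2={b2}: best d1 = {d1[np.argmax(r)]:.1f} m")
```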
A Joint Beamforming Method Based on Cooperative Co-evolutionary in Reconfigurable Intelligent Surface-Assisted Unmanned Aerial Vehicle Communication System
ZHONG Weizhi, WAN Shiqing, DUAN Hongtao, FAN Zhenxiong, LIN Zhipeng, HUANG Yang, MAO Kai
2025, 47(2): 334-343. doi: 10.11999/JEIT240561
Abstract:
  Objective:   High-quality wireless communication enabled by Unmanned Aerial Vehicles (UAVs) is set to play a crucial role in the future. In light of the limitations posed by traditional terrestrial communication networks, the deployment of UAVs as nodes within aerial access networks has become a vital component of emerging technologies in Beyond Fifth Generation (B5G) and Sixth Generation (6G) communication systems. However, the presence of infrastructure obstructions, such as trees and buildings, in complex urban environments can hinder the Line-of-Sight (LoS) link between UAVs and ground users, leading to a significant degradation in channel quality. To address this challenge, researchers have proposed the integration of Reconfigurable Intelligent Surfaces (RIS) into UAV communication systems, providing an energy-efficient and flexible passive beamforming solution. RIS consists of numerous adjustable electromagnetic units, with each element capable of independently configuring various phase shifts. By adjusting both the amplitude and phase of incoming signals, RIS can intelligently reflect signals from multiple transmission paths, thereby achieving directional signal enhancement or nulling through beamforming. Given the limitations of conventional joint beamforming methods—such as their exclusive focus on optimizing the RIS phase shift matrix and lack of universality—a novel joint beamforming approach based on a Cooperative Co-Evolutionary Algorithm (CCEA) is proposed. This method aims to enhance Spectrum Efficiency (SE) in multi-user scenarios involving RIS-assisted UAV communications.  Methods:   The proposed approach begins by optimizing the RIS phase shift matrix, followed by the design of the beam shape for RIS-reflected waves. This process modifies the spatial energy distribution of RIS reflections to improve the Signal-to-Interference-plus-Noise Ratio (SINR) at the receiver. To address challenges in existing optimization algorithms, an Evolutionary Algorithm (EA) is introduced for the first time, and a cooperative co-evolutionary structure based on EA is developed to decouple joint beamforming subproblems. The central concept of CCEA revolves around decomposing complex problems into several subproblems, which are then solved through distributed parallel evolution among subpopulations. The evaluation of individuals within each subpopulation, representing solutions to their respective subproblems, relies on collaboration among different populations. Specifically, this involves merging individuals from one subpopulation with representative individuals from others to create composite solutions. Subsequently, the overall fitness of these composite solutions is assessed to evaluate individual performance within each subpopulation.   Results and Discussions:   The simulation results demonstrate that, in comparison to joint beamforming, which focuses solely on designing the RIS phase shift matrix, further optimizing the shape of the reflected beam from the RIS significantly enhances the accuracy and effectiveness of the main lobe coverage over the user's position, resulting in improved SE. Although Maximum Ratio Transmission (MRT) precoding can maximize the output SINR of the desired signal, it may also lead to considerable inter-user interference, which subsequently diminishes the SE. Therefore, the implementation of joint beamforming is essential. 
The optimization algorithms proposed in this paper are effective for both the actual amplitude-phase shift model and the ideal RIS amplitude-phase shift model. However, factors such as dielectric loss associated with the actual circuit structure of the RIS can attenuate the strength of the reflected wave reaching the client, thereby reducing the SINR at the receiving end and ultimately lowering the SE. Additionally, the increase in SE achievable through Deep Reinforcement Learning (DRL) and Alternating Optimization (AO) is limited when compared to CCEA. Unlike the optimization of individual action strategies employed in DRL, the CCEA algorithm produces a greater variety of solutions by utilizing crossover and mutation among individuals within the population, thereby mitigating the risk of local optimization. Moreover, CCEA can optimize the spatial distribution of the reflected waves through a more sophisticated design of the RIS reflecting beam shape. This results in an enhanced signal intensity at the receiving end, allowing for a higher SE compared to AO and DRL, which primarily focus on optimizing the RIS phase shift matrix.  Conclusions:   In light of the limitations observed in previous joint beamforming optimization methods, this paper introduces a novel joint beamforming optimization approach based on CCEA. This method effectively decomposes the joint beam optimization problem into two distinct sub-problems: the design of the RIS reflection beam waveform and the beamforming design at the transmitter. These sub-problems are addressed through independent parallel evolution, utilizing two separate sub-populations. Notably, for RIS passive beamforming, this approach innovatively optimizes the RIS phase shift matrix alongside the design of the RIS reflected beam shape for the first time. Numerical simulation results indicate that, compared to joint beamforming strategies that focus solely on optimizing the RIS phase shift matrix, a more meticulous design of the RIS reflected waveform can significantly alter the intensity distribution of reflected waves in 3D space. This alignment enables the reflected beam to converge on the user’s location while mitigating interference, thereby enhancing the system’s SE. Furthermore, the CCEA algorithm demonstrates the capability to achieve effective coverage of RIS reflected beams for users, regardless of varying base station and user locations. The optimization process leads to a reduction in Peak Side Lobe Level (PSLL) and an improvement in SE by at least 5 dB, showing its spatial applicability across diverse scenarios. Future research will aim to further investigate the application of evolutionary algorithms and swarm intelligence optimization techniques in joint beamforming optimization, as well as explore the potential of RIS beam waveform design to optimize communication systems, adapting to increasingly complex and diversified communication requirements.
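The cooperative co-evolutionary structure described above can be pictured with a small skeleton: two subpopulations evolve in parallel, and each individual is evaluated by pairing it with a representative of the other subpopulation. The toy fitness and mutation-only operator below are illustrative stand-ins for the joint SE objective and the paper's evolutionary operators, not the authors' CCEA implementation.

```python
# Minimal cooperative co-evolution sketch: two subpopulations, each scored by
# collaboration with the other subpopulation's current representative.
import random

def fitness(x, y):
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)   # toy joint objective, maximum at (3, -1)

def evolve(pop, score, sigma=0.3, keep=10):
    survivors = sorted(pop, key=score, reverse=True)[:keep]   # keep the fittest individuals
    children = [p + random.gauss(0, sigma) for p in survivors]  # mutate them to explore
    return survivors + children

pop_x = [random.uniform(-5, 5) for _ in range(20)]
pop_y = [random.uniform(-5, 5) for _ in range(20)]
for _ in range(100):
    rep_x = max(pop_x, key=lambda x: fitness(x, pop_y[0]))    # representative of subpopulation X
    rep_y = max(pop_y, key=lambda y: fitness(pop_x[0], y))    # representative of subpopulation Y
    pop_x = evolve(pop_x, lambda x: fitness(x, rep_y))        # score X against Y's representative
    pop_y = evolve(pop_y, lambda y: fitness(rep_x, y))        # score Y against X's representative
print(round(pop_x[0], 2), round(pop_y[0], 2))                 # should approach 3.0 and -1.0
```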
Design of 95~105 GHz SiGe BiCMOS Wideband Digitally Controlled Attenuator for Metasurface Antenna
LUO Jiang, ZHANG Wenzhu, CHENG Qiang
2025, 47(2): 344-352. doi: 10.11999/JEIT240059
Abstract:
  Objective   The W-band spectrum, spanning from 75 to 110 GHz, offers valuable spectrum resources, making it well-suited for high-speed wireless communication, radar detection, and biomedical imaging. Its lower atmospheric attenuation, compared to the commonly used 60 GHz band, further enhances its suitability for these applications. As modern wireless devices and electronic systems operate in increasingly complex electromagnetic environments, the demands on antenna systems are growing. These systems must independently control both radiated and scattered electromagnetic waves. Active phased array antennas, which integrate numerous transmit/receive (T/R) modules, provide precise control over the amplitude and phase of radiation elements, enabling superior manipulation of the radiated electromagnetic field. Meanwhile, rapidly advancing intelligent metasurface technology allows programmable, real-time modulation of scattered electromagnetic wave characteristics such as amplitude, phase, frequency, and polarization. This technology has attracted significant interest in the fields of communications, radar, and antenna systems. Consequently, metasurface antennas with active T/R modules offer a novel technological approach for efficient beam control of both radiated and scattered electromagnetic fields, providing new insights into solving complex electromagnetic environment problems. Digitally controlled attenuators (DSAs) are essential in metasurface antenna array systems, serving as millimeter-wave signal amplitude control modules. They primarily compensate for amplitude errors introduced by phase shifters or other components while also suppressing sidelobe levels in array antennas to enhance beam directivity. Additionally, these attenuators must exhibit minimal phase variation to reduce tracking errors, thus simplifying the calibration process. However, existing commercial millimeter-wave amplitude control chips are expensive and may face export restrictions, emphasizing the urgent need for high-performance, low-cost solutions to support the hardware implementation of metasurface antennas.  Methods   A 95 to 105 GHz DSA with 5-bit resolution is proposed. To address the issues of high insertion loss (IL), poor accuracy, and limited bandwidth in large attenuation units at W-band, a reflective structure based on a cross-coupled broadband coupler is proposed. The proposed coupler features a 180° inverter core and quasi-parallel stripline connections on both sides. At millimeter-wave frequencies (e.g., W-band), coupling capacitance introduced by the gaps creates a series resonance condition, achieving lower transmission loss and ensuring wide operational bandwidth. The 4 dB and 8 dB attenuation units, built using this structure, achieve high accuracy and low IL within a compact area. To minimize impedance mismatch during state switching, the attenuation units are cascaded in an order that reduces variations in amplitude and phase. Specifically, the 4 dB and 8 dB units are placed at the two ends of the attenuator to limit mutual interference. Smaller attenuation units (0.5 dB, 1 dB, and 2 dB) adopt a simplified T-type structure. The 0.5 dB unit, being more sensitive to impedance changes, is strategically positioned between the 1 dB and 2 dB units. Furthermore, phase changes during state switching are mitigated through a combination of positive- and negative-slope phase compensation networks. 
A positive-slope network is applied to the 4 dB unit, while negative-slope networks are used for the 0.5 dB, 1 dB, and 8 dB units. This dual compensation approach effectively avoids overcompensation, ensuring consistent amplitude and phase performance across all states and significantly reducing the attenuator’s root mean square (RMS) phase errors.  Results and Discussions   The layout of the proposed wideband DSA is shown in Fig. 10; the whole chip occupies a silicon area of 840 μm × 430 μm including all testing pads, with a small core size of only 610 μm × 200 μm. It has five attenuation cells providing independent control of binary-coded attenuation levels of 0.5, 1, 2, 4, and 8 dB with a total of 32 states. Fig. 11 shows the simulated attenuation levels relative to the reference state for all 31 attenuation states within the desired frequency range of 95~105 GHz. The DSA achieves a dynamic attenuation range of 15.5 dB with a step resolution of 0.5 dB. The frequency response curves of adjacent states are evenly spaced with no overlap, indicating that the attenuator delivers precise amplitude control characteristics. The maximum phase variation relative to the reference state across all 31 attenuation states is less than 4.8°, as plotted in Fig. 12. As shown in Fig. 13, the IL in the reference state is less than 2.5 dB across the entire frequency band of interest. Fig. 14 illustrates the simulated RMS amplitude error and phase error versus frequency. The RMS amplitude errors remain below 0.31 dB over 95~105 GHz, while the RMS phase error is better than 2.2°. Table 3 summarizes the performance of the designed W-band attenuator and compares it with recently reported millimeter-wave DSAs. Compared to other attenuators, the proposed DSA demonstrates superior overall competitiveness, achieving low IL, high attenuation accuracy, and low RMS phase error within a compact chip size. While [6] achieves the lowest RMS phase error, it suffers from a high IL of 11.2 dB. In contrast, [20] offers excellent IL performance but is limited to a small attenuation range of only 4.7 dB.  Conclusions   In conclusion, the 5-bit W-band DSA presented in this paper, implemented in a 0.13 µm SiGe BiCMOS process, offers an efficient and compact solution for wideband attenuation with low IL and minimal phase shift. The design integrates reflective and simplified T-type topologies, along with RC-based positive and negative slope correction networks applied to different attenuation units, enabling precise attenuation steps and optimized phase errors. The attenuator achieves an attenuation range of 0~15.5 dB with 0.5 dB steps over the 95~105 GHz frequency range, occupying a compact area of 0.12 mm². Simulated results show an IL of less than 2.5 dB, RMS amplitude error below 0.25 dB, and RMS phase error under 2.2°. The proposed DSA can serve as a key component empowering the hardware implementation of an integrated T/R metasurface antenna system with simultaneous radiation and scattering control.
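For readers unfamiliar with the RMS figures quoted above, the sketch below shows how RMS amplitude and phase errors are conventionally aggregated over an attenuator's non-reference states at one frequency point. The arrays are random placeholders for illustration only, not simulation data from the paper.

```python
# Conventional RMS amplitude/phase error aggregation across attenuation states.
import numpy as np

ideal_atten_db = np.arange(0.5, 16.0, 0.5)                      # 31 states in 0.5 dB steps up to 15.5 dB
measured_atten_db = ideal_atten_db + np.random.normal(0, 0.1, ideal_atten_db.size)  # placeholder data
phase_dev_deg = np.random.normal(0, 1.5, ideal_atten_db.size)   # phase change vs. the reference state

rms_amp_err = np.sqrt(np.mean((measured_atten_db - ideal_atten_db) ** 2))
rms_phase_err = np.sqrt(np.mean(phase_dev_deg ** 2))
print(f"RMS amplitude error = {rms_amp_err:.2f} dB, RMS phase error = {rms_phase_err:.2f} deg")
```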
Throughput Maximization for Double RIS-Assisted MISO Systems
XIE Wenwu, ZHANG Qinke, LIANG Xitao, LIU Chenyu, YU Chao, WANG Ji
2025, 47(2): 353-362. doi: 10.11999/JEIT240612
Abstract:
  Objective   With the continuous advancement of research on Reconfigurable Intelligent Surfaces (RIS), various application scenarios have emerged. Among these, Active Reconfigurable Intelligent Surfaces (ARIS) attract significant attention from the academic community. While some studies focus on dual Passive RIS (PRIS)-assisted communication systems, others investigate dual RIS-assisted systems incorporating ARIS. Existing literature consistently demonstrates that dual RIS configurations outperform single RIS setups in terms of achievable Signal-to-Noise Ratio (SNR), power gain, and energy transfer efficiency, with dual RIS systems achieving approximately ten times higher energy transfer efficiency. However, most existing studies on RIS focus on optimizing the performance of reflection coefficients in one or more distributed RIS-aided systems, primarily serving users within their respective coverage areas, without sufficiently addressing the benefits of single-reflection links. While dual RIS systems can effectively mitigate the limitations of antenna numbers and improve transmission reliability and efficiency, single-reflection links can still significantly enhance channel capacity, especially under low transmission power conditions. This paper proposes a novel approach wherein dual-reflection links and two single-reflection links jointly serve users. The goal is to maximize the downlink capacity of dual RIS-assisted Multiple-Input Single-Output (MISO) systems by strategically configuring the interaction between the two RISs.  Methods   In this paper, four combinatorial models of RIS are investigated: the Transmitter-PRIS PRIS-Receiver (TPPR), Transmitter-ARIS PRIS-Receiver (TAPR), Transmitter-PRIS ARIS-Receiver (TPAR), and Transmitter-ARIS ARIS-Receiver (TAAR). The optimization objective of all models is to maximize the communication rate by optimizing the antenna beamforming vector of the base station and the phase shift matrix of the RIS. Due to the coupling of the three variables in the objective function, the model is non-convex, making it difficult to obtain an optimal solution. To address the coupling problem, the Alternating Optimization (AO) algorithm is employed, where one phase shift vector is fixed while the other is optimized alternately. To tackle the non-convex problem, Successive Convex Approximation (SCA) is applied to iteratively approximate the optimal solution by solving a series of convex subproblems.  Results and Discussions   Building on the research methods outlined above and employing the SCA and AO algorithms, experimental results are obtained. The system capacity of each combination model increases with rising amplification power (Fig. 2). However, once the amplification factor reaches a certain threshold, the capacity curves of all models begin to flatten due to the constraints imposed by the maximum amplification power. Further demonstration of the system capacity performance of different combination models as transmit power increases is shown in (Fig. 3). Across all dual-RIS combination models, system capacity improves with higher transmit power and outperforms the Single-Active model in all scenarios. In (Fig. 3(a)), under low transmit power conditions, regions of the curves corresponding to higher amplification power overlap due to the constraint of the amplification factor. As transmit power increases, system capacity stabilizes, which can be attributed to the proximity of ARIS to the base station, allowing it to receive stronger signals. 
Under high transmit power, system capacity continues to improve due to the influence of PRIS. Unlike ARIS, PRIS reflects the optimized signal path without being constrained by amplification power. Consequently, as transmit power increases, the signal strength received by PRIS is enhanced. In (Fig. 3(b)), system capacity increases with transmit power, showing trends similar to those in (Fig. 3(a)). In the TPAR combined model, the amplification factor constraint dominates, causing the system capacity curves to exhibit similar behavior across different amplification power levels. Under low transmit power, the signal strength at ARIS does not exceed the maximum amplification power budget. As transmit power increases, the amplification power constraint increasingly affects system capacity, leading to a gradual slowdown in the curve’s upward trend until it flattens. At high transmit power levels, the system capacity curve of the TPAR model levels off due to the low signal strength received by ARIS when it is positioned farther from the base station. This positioning necessitates higher transmit power to overcome the amplification power constraint. Thus, it is recommended that ARIS be deployed as close to the user as possible. In (Fig. 3(c)), the TAAR combined model leverages the characteristics of both ARIS and PRIS in a dual ARIS-assisted scenario. Under low transmit power conditions, significant capacity gains are achieved. However, at high transmit power, the system capacity is constrained by the maximum amplification power of ARIS and eventually levels off. The system capacity trends in (Fig. 3(a)) and (Fig. 3(b)) consistently increase with higher transmit power. This is because both combination models integrate the advantages of PRIS and ARIS, ensuring high performance under both high and low transmit power conditions. In (Fig. 3(d)), where ARIS is positioned on the user side, comparison with (Fig. 3(c)) reveals that, under high transmit power, the system capacity of both combination models is nearly identical, regardless of the amplification power level. This suggests that in strong transmit power scenarios, the additional gains from ARIS are limited.   Conclusions   This paper provides an in-depth analysis of the optimization of dual RIS-assisted MISO communication systems, confirming their superiority over single RIS configurations. However, several potential research directions remain unexplored. Most current studies assume ideal channel models, whereas real-world applications often involve complex channel conditions that significantly affect system performance. Future research could investigate the performance of dual RIS systems under these practical conditions, paving the way for more robust and applicable solutions.
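The flattening of the capacity curves under the maximum amplification power constraint can be reproduced qualitatively with a one-dimensional toy model: the active-RIS gain is capped so that its re-radiated power never exceeds the amplification budget, so beyond a threshold extra transmit power stops helping. All gains, noise levels, and the maximum amplification factor below are invented scalars, not the paper's MISO system parameters.

```python
# Toy single-link capacity with an output-power-limited active RIS.
import numpy as np

def capacity(p_tx_dbm, p_amp_max_dbm=0.0, g_in=1e-6, g_out=1e-6,
             g_max=1e4, noise_w=1e-12):
    p_tx = 10 ** (p_tx_dbm / 10) * 1e-3           # dBm -> W
    p_amp_max = 10 ** (p_amp_max_dbm / 10) * 1e-3
    p_in = p_tx * g_in                            # power arriving at the ARIS
    amp = min(g_max, p_amp_max / p_in)            # gain capped by the amplification budget
    snr = p_in * amp * g_out / noise_w
    return np.log2(1 + snr)

for p in (0, 10, 20, 30, 40):
    print(f"{p:>2} dBm -> {capacity(p):.2f} bit/s/Hz")   # curve flattens once the cap binds
```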
Resource Allocation for RIS-aided Cross-Model Communications
CHEN Mingkai, SUN Zhende, WAN Yafang
2025, 47(2): 363-374. doi: 10.11999/JEIT240619
Abstract:
  Objective  The rapid development of digital and intelligent technologies has driven the increasing demand for cross-modal communication systems to support a wide range of applications, such as high-bandwidth video streaming, ultra-reliable low-latency haptic interactions, and immersive virtual reality experiences. These applications require the concurrent transmission of heterogeneous services, each with distinct and often conflicting resource demands. For instance, video services necessitate high data rates and large bandwidth allocations for smooth playback, while haptic services require ultra-low latency (<0.3 ms) and high reliability (>99.999%) for real-time interaction. Existing resource allocation schemes, typically designed for single-service scenarios or static optimization, do not effectively address the dynamic nature of wireless channels or the stringent requirements of multi-service coexistence. This paper proposes a dynamic resource allocation framework that utilizes Reconfigurable Intelligent Surfaces (RIS) to optimize the transmission efficiency of video services and the reliability of haptic services, thereby enhancing spectrum utilization and improving the Quality of Experience (QoE) in cross-modal communication systems.  Methods  To address the resource competition between video and haptic services, this paper proposes an RIS-aided network slicing architecture. The RIS dynamically adjusts its phase shifts to reshape the wireless propagation environment, improving channel gain and reducing interference. A puncturing-based resource sharing mechanism is introduced, enabling haptic traffic to temporarily use resources allocated to video services during burst arrivals. This mechanism ensures the stringent latency and reliability requirements of haptic services are met without significantly affecting video service performance. The optimization problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) task, with the objective of maximizing the video service rate while satisfying the constraints of haptic services. To tackle the complexity of joint RIS phase optimization and resource allocation, the problem is modeled as a Markov Decision Process (MDP) with continuous state and action spaces. A Deep Deterministic Policy Gradient (DDPG) algorithm is employed, integrating actor-critic networks, experience replay, and target networks to learn optimal policies. The actor network generates decisions regarding resource block allocation, RIS phase shifts, and puncturing ratios, while the critic network evaluates the long-term reward, defined as the weighted sum of video throughput and haptic service satisfaction.  Results and Discussions  Simulation results demonstrate the effectiveness of the proposed scheme. Compared to the HMSA scheme, the proposed method significantly improves the total transmission rate for users, particularly under varying Base Station (BS) power levels (Fig. 4). The RIS phase optimization scheme outperforms both the random phase and no-RIS scenarios, highlighting the importance of dynamically adjusting RIS reflection coefficients to enhance channel gain (Fig. 5). Furthermore, the average delay of haptic data packets decreases as the number of RIS reflection units increases, and higher BS transmit power further reduces latency, confirming the synergy between RIS deployment and power allocation (Fig. 6). The user sum rate declines as the arrival rate of haptic data packets increases, due to intensified resource competition. 
However, deploying additional RIS reflection units mitigates this degradation, demonstrating the robustness of RIS-aided resource allocation (Fig. 7). The convergence behavior of the DDPG algorithm is analyzed, showing faster convergence in low-SNR environments (e.g., P = 0 dBm) compared to high-SNR scenarios (e.g., P = 30 dBm), where reward fluctuations are more pronounced (Fig. 8). Additionally, the learning rate is identified as a key hyperparameter, with a value of 0.001 providing the optimal balance between convergence speed and stability (Fig. 9). These results confirm that the proposed framework enhances video service throughput while ensuring the stringent reliability and low-latency requirements of haptic services, enabling efficient cross-modal resource coexistence.  Conclusions  This work presents an RIS-assisted dynamic resource allocation framework for cross-modal communication systems, effectively addressing the coexistence challenges of video and haptic services. Key innovations include the integration of RIS phase optimization with puncturing-based resource sharing and the application of DDPG to solve high-dimensional MINLP problems. The proposed scheme significantly enhances video throughput and haptic reliability, demonstrating its potential for 6G-enabled immersive applications. Future research will extend this framework to mobile user scenarios, multi-RIS collaborative systems, and multi-service coexistence environments with diverse QoS requirements. Specifically, the study will examine the impact of user mobility on RIS configuration and resource allocation strategies. Additionally, the benefits of deploying multiple RIS units in a coordinated manner will be explored to further enhance system performance and coverage. Finally, the framework will be expanded to support a broader range of services with varying latency, reliability, and bandwidth demands, paving the way for more versatile and efficient cross-modal communication systems.
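As a concrete reading of the MDP reward mentioned above, the snippet below combines video-slice throughput with an indicator-style satisfaction term for the haptic slice. The weights, latency and reliability targets, and rate units are assumptions for illustration only; they are not taken from the paper.

```python
# Toy per-step reward: weighted sum of video throughput and haptic satisfaction.
def step_reward(video_rate_mbps, haptic_delay_ms, haptic_success_prob,
                w_video=1.0, w_haptic=5.0, delay_target_ms=0.3, rel_target=0.99999):
    # The haptic slice is "satisfied" only if both its latency and reliability targets hold.
    haptic_ok = float(haptic_delay_ms <= delay_target_ms and haptic_success_prob >= rel_target)
    return w_video * video_rate_mbps + w_haptic * haptic_ok

print(step_reward(120.0, haptic_delay_ms=0.25, haptic_success_prob=0.999995))  # both slices served
print(step_reward(150.0, haptic_delay_ms=0.60, haptic_success_prob=0.9999))    # haptic targets missed
```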
Resource Allocation Algorithm for Intelligent Reflecting Surface-assisted Integrated Sensing and Covert Communication
ZHOU Xiaobo, RUAN Danyang, ZHOU Xiuying, XIA Guiyang, SHU Feng
2025, 47(2): 375-385. doi: 10.11999/JEIT240643
Abstract:
  Objective  Integrated Sensing and Communication (ISAC) systems are considered key technologies for the upcoming 6G networks, offering a unified platform for wireless communication and environmental sensing. To enhance the security of ISAC systems, an Integrated Sensing and Covert Communication (ISACC) system is proposed. Additionally, an Intelligent Reflecting Surface (IRS)-assisted ISACC scheme is proposed to address the limitation of existing ISACC research, which cannot be applied to scenarios without a Line-of-Sight (LoS) link between the Base Station (BS) and the target. In this context, the average Cramér-Rao Lower Bound (CRLB) is adopted as a metric for sensing performance, aiming to overcome the limitations of traditional beampatterns in quantifying sensing performance directly.  Methods  The detection performance at warden Willie is first analyzed. An analytical expression for the average CRLB is then derived. Based on this, an optimization problem is formulated to minimize the average CRLB, subject to communication rate, covertness, and IRS phase shift constraints. The optimization problem is challenging to solve directly due to the coupling of the sensing covariance matrix, communication beamforming, and IRS reflective beamforming in the objective function, communication rate constraint, and covertness constraint. To tackle this, the optimization problem is decomposed into two subproblems: one for the sensing covariance matrix and communication beamforming optimization, and another for the IRS reflection beamforming optimization. An Alternating Optimization-based Penalty Successive Convex Approximation (AO-PSCA) algorithm is proposed to solve the two subproblems iteratively.  Results and Discussions  The relationship between the average CRLB, the number of IRS reflection elements, and the number of BS antennas is presented (Fig. 2). As observed, the average CRLB obtained by the AO-PSCA algorithm and the IRS random phase algorithm decreases as the number of IRS elements increases. This is because a larger number of IRS elements not only enhances covert communication performance but also improves the quality of the virtual link between the BS and the sensing target. Additionally, the proposed AO-PSCA algorithm outperforms the IRS random phase scheme, highlighting the importance of designing IRS reflection coefficients. Furthermore, as the number of BS antennas increases, the average CRLB decreases, since more antennas simultaneously improve both target sensing and covert communication performance. The relationship between the average CRLB, covertness threshold, and communication rate threshold is shown (Fig. 3). It can be seen that the average CRLB decreases as the covertness parameter ε increases. This indicates that increasing the covertness parameter improves the sensing performance of the ISACC system. The reason is that a larger covertness value of ε makes it easier to satisfy the covertness constraints, thereby allowing more resources for communication and sensing. In contrast, the average CRLB increases with the communication rate requirement, as a larger value of Γ requires more system resources, leaving fewer resources for radar sensing. 
The relationships between the average CRLB, average maximum transmit power, and symbol length, as well as between average maximum transmit power, communication signal power, and sensing signal power, are presented (Fig. 4). It can be observed that the average CRLB decreases as the average maximum transmit power increases. This is due to the increase in both sensing and communication signal powers with higher transmit power. The average CRLB also decreases as the symbol length increases, as a larger symbol length improves target sensing performance. The relationship between the beampattern, angle, and average maximum transmit power is illustrated (Fig. 5). The beampatterns are focused on their main lobe, with the sensing target located at 0°. Due to communication rate and covertness constraints, random fluctuations appear in the side lobe regions of the beampatterns. Moreover, the beampattern values increase with the average maximum transmit power, indicating that increasing transmit power effectively enhances both target sensing and covert communication performance.  Conclusions  The IRS-assisted ISACC system is investigated in this work. An optimization problem is formulated to minimize the average CRLB, subject to constraints on covertness, maximum transmit power, communication rate, and IRS phase shifts. The AO-PSCA algorithm is proposed for the joint design of the sensing covariance matrix, communication beamforming, and IRS phase shifts. Simulation results demonstrate that the proposed ISACC scheme, assisted by IRS, can effectively balance target sensing and covert wireless communication performance.
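The qualitative CRLB trends summarized above (a tighter bound with more transmit power and a longer symbol length) follow from the fact that the CRLB is the inverse of the Fisher information. The toy scalar model below reproduces only that scaling; the constant and the SNR values are arbitrary and unrelated to the paper's derived average CRLB expression.

```python
# Back-of-the-envelope CRLB scaling: bound = 1 / (const * symbol_length * SNR).
def crlb(snr_linear, symbol_length, const=1.0):
    fisher_info = const * symbol_length * snr_linear   # information grows with SNR and dwell
    return 1.0 / fisher_info

for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    for length in (128, 1024):
        print(f"SNR {snr_db:>2} dB, L={length:>4}: CRLB ~ {crlb(snr, length):.2e}")
```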
An Intelligent Reflecting Surface Assisted Covert Communication System with a Cooperative Unmanned Aerial Vehicle
LIU Xuemin, QIAN Yuwen, SONG Yaoliang, SHU Feng, CHEN Kuiyu, ZHU Jiewei
2025, 47(2): 386-396. doi: 10.11999/JEIT240663
Abstract:
  Objective:   Covert communication is a crucial area within network security, facilitating secure data transmission in monitored environments. Nevertheless, practical communication systems face challenges such as complex communication environments and extensive coverage areas. In recent years, Unmanned Aerial Vehicles (UAVs) have gained popularity in both commercial and military applications due to their flexibility, cost-effectiveness, and versatility. Additionally, Intelligent Reflecting Surface (IRS)-assisted wireless communications have attracted significant attention, as IRS can be deployed in hostile communication environments while ensuring reliable transmission. Consequently, the exploration of hybrid IRS and UAV systems for the design of covert wireless communication systems presents a promising research avenue.   Methods:   This paper proposes a wireless covert communication system enhanced by an IRS and a UAV. In this configuration, the IRS functions as a relay node to transmit signals from the transmitter. The UAV serves as a cooperative relay node, facilitating not only the forwarding of covert messages to the intended receiver but also generating artificial noise to impede the detection of covert communication by malicious users. Under conditions of uncertainty regarding the received noise at the receiver, the minimum detection error probability is derived, and the system optimization problem is formulated with the objective of maximizing the covert communication rate while treating interruption probability as a constraint. Subsequently, the Dinkelbach-based approach is utilized to address the optimization problem.   Results and Discussions:   The key contributions of this research are as follows. First, a wireless covert communication system is developed using an IRS and a UAV. In this system, the IRS forwards covert messages from the transmitter to the receiver, while the UAV disrupts potential adversaries attempting to intercept secure communications. The integration of the IRS improves the covert communication rate, and the UAV-assisted design provides flexibility for deployment across diverse environments. The transmitter serves as the coordinator, managing both the UAV and IRS by transmitting control commands and collecting operational parameters. Second, the minimum detection error probability is derived under conditions of receiver uncertainty regarding noise, with the coordinates of the UAV and the transmitter assumed to be known. This derivation includes calculations of the False Alarm Probability (FAP) and the Missed Detection Probability (MDP) associated with the monitoring process. Third, a joint optimization problem is formulated to maximize the covert rate of the communication system. This problem optimizes the UAV’s trajectory, the IRS phase, and the transmit power while satisfying constraints related to the derived minimum detection error probability, maximum transmit power, and UAV mobility. The problem is restructured into a convex formulation by dividing it into two steps: optimization of the transmit power and UAV trajectory. Fourth, an iterative algorithm is developed to address the optimization challenge, employing the Successive Convex Approximation (SCA) and Dinkelbach methods. The Dinkelbach method is used to reformulate the upper bound of the optimization variables into a convex problem. Simulation results demonstrate that the maximum covert rate is achieved when the IRS phase, UAV trajectory, and transmit power are jointly optimized.
Conclusions:   This research establishes an IRS-aided covert communication system with a cooperative UAV, suitable for deployment in complex environments. A closed-form expression for the Detection Error Probability (DEP) at the monitoring device has been derived, taking into account the uncertainty of transmit power. A joint optimization problem has been formulated to optimize the phases of the IRS units, the jamming power of the UAV, and the transmit power of the transmitter, while satisfying constraints related to the optimal DEP of Willie, the transmit power of the transmitter, and the Artificial Noise (AN) power. Simulation results indicate that the system’s covertness and covert rate improve with an increased number of IRS units, extended UAV flight time, and higher interference power. Future research should explore the deployment of this system in complex environments, focusing on the dynamic adjustment of the IRS phase units in conjunction with UAVs.
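The abstract above relies on the Dinkelbach method to handle the fractional (covert-rate) objective. The sketch below illustrates that technique on a toy fractional program, maximizing a rate-like numerator over a power-like denominator with a grid-search inner solver; the objective, channel gain, and parameter values are assumptions for illustration and are not the paper's formulation.

```python
import numpy as np

# Dinkelbach's method for max_x f(x)/g(x): iterate x_k = argmax f(x) - lam_k g(x),
# then update lam_{k+1} = f(x_k)/g(x_k) until the ratio stops changing.
h, p0, Pmax = 2.0, 0.5, 10.0                  # toy channel gain, static power, power budget
x_grid = np.linspace(0.0, Pmax, 10001)
f = np.log2(1.0 + h * x_grid)                 # rate-like numerator
g = p0 + x_grid                               # power-like denominator

lam = 0.0
for _ in range(50):
    idx = np.argmax(f - lam * g)              # inner problem solved by grid search here
    new_lam = f[idx] / g[idx]                 # ratio update
    if abs(new_lam - lam) < 1e-9:
        break
    lam = new_lam
print(f"optimal ratio ~= {lam:.4f} at x ~= {x_grid[idx]:.3f}")
```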
Joint Resource Management for Tunable Optical IRS-aided Cell-Free VLC Networks
JIA Linqiong, FENG Shicheng, LE Shujuan, SHI Wei, SHU Feng
2025, 47(2): 397-408. doi: 10.11999/JEIT240710
Abstract:
  Objective  Visible Light Communication (VLC) is emerging as a key technology for future communication systems, offering advantages such as abundant and license-free spectrum, immunity to electromagnetic interference, and low-cost front-end devices. Light Emitting Diodes (LEDs) serve a dual purpose, providing both communication and illumination in indoor environments. However, VLC links are vulnerable, as the interruption of the Line of Sight (LoS) can disrupt communication. The Optical Intelligent Reconfigurable Surface (IRS) has been proposed to enhance communication performance and robustness by reconfiguring optical channels. Two main types of optical IRS materials, mirror-based and meta-surface-based, are commonly used. Mirror-based IRS units introduce additional Non-LoS (NLoS) links with constant reflectance.A cell-free VLC network with the assistance of a newly proposed tunable IRS is proposed and fully investigated. The reflectance of the optical IRS can be dynamically adjusted, allowing it to function as a transmitter by modulating signals on the reflectance with stable incident light. In this system, at least one LED must operate in illumination mode to emit light with constant intensity when any IRS unit is in modulation mode. The IRS can also function in reflection mode to provide additional reflective links, enhancing signal strength. The tunable IRS increases the number of Access Points (APs), enabling ultra-dense VLC networks that significantly improve throughput and spectral efficiency. The system model for a tunable IRS-assisted cell-free VLC network is derived, and the channel gain is calculated using the Lambertian model. The transmission rate for each user is determined by the work mode of the APs and the IRS’s association with the LEDs and users, represented by binary variables. The primary objective of this study is to maximize the total throughput of the IRS-aided VLC network.  Methods  An optimization problem is formulated to maximize network throughput by jointly optimizing the work mode of the LEDs and IRS units, along with user-IRS associations. Given the non-convex nature of this integer optimization problem, it is decomposed into two sub-problems. (1) Problem P2: With fixed numbers of LEDs and IRS units in modulation mode, a Deep Deterministic Policy Gradient (DDPG)-based Deep Reinforcement Learning (DRL) algorithm is applied to optimize the work mode of each AP and the user-AP associations. The binary variables are relaxed to continuous values in the range [0,1]. The optimization problem is modeled as a Markov Decision Process (MDP), where the state corresponds to the channel gains, the action represents the optimization variables, and the reward is the network throughput. To ensure convergence, the reward is adjusted to reflect the negative of any unsatisfied constraints, and the noise in the DDPG model is dynamically modeled using two random variables. (2) Problem P1: The optimization problem is then solved by considering all possible combinations of the number of LEDs and IRS units in modulation mode.  Results and Discussions  Simulations for the indoor tunable IRS-aided system are performed using Python with PyTorch. The simulation parameters for the indoor scenario and the neural network configurations in the DDPG algorithm are shown (Table 1, Table 2), respectively. 
The results demonstrate the following: (1) The convergence and final reward of the modified DDPG algorithm (denoted as DDPG-O) are compared with the unmodified version (denoted as DDPG-N) in solving Problem P2 (Fig. 4). The results show that the modified DDPG algorithm converges efficiently and achieves an access and association policy that maximizes network throughput. (2) The maximized throughput for various numbers of LEDs in modulation mode, along with varying optical power, is presented when solving Problem P1 (Fig. 5). It is observed that the policy with one lighting LED achieves the maximum throughput with appropriate IRS units in modulation mode. (3) The relationship between maximized throughput and the number of IRS units is analyzed in (Fig. 6). The total throughput increases as the number of IRS units grows, although the increase is not linear. (4) Simulations with the same number of users and LEDs are also considered (Fig. 7). It is observed that the total network throughput with and without IRS APs is nearly identical when the number of users does not exceed the number of LEDs. Thus, the VLC network benefits more when the number of users exceeds the number of LEDs.  Conclusions  A tunable IRS-assisted cell-free VLC network has been proposed, where IRS units either operate in reflection mode to provide additional NLoS channels or in modulation mode to enable wireless access for users. The channel and transmission models are developed, and an optimization problem is formulated to jointly select the working mode of APs and user associations with the objective of maximizing network throughput. A modified DDPG algorithm is applied to solve for the optimal policy. The optimization problem is further tackled by exploring all possible combinations of modulating LEDs and IRS units. Simulation results verify the effectiveness of the proposed algorithm, showing that the network throughput can be significantly improved by incorporating IRS APs, particularly when the number of users is large.
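The abstract above notes that the LoS channel gain is computed with the Lambertian model. The following is a textbook Lambertian LoS gain calculation for a single LED-photodetector link; the symbols and default parameter values (semi-angle, detector area, field of view, filter and concentrator gains) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Textbook Lambertian LoS channel gain for a VLC link.
def lambertian_gain(d, phi, psi, semi_angle_deg=60.0, A_pd=1e-4,
                    psi_fov_deg=70.0, T_s=1.0, g_conc=1.0):
    """d: LED-receiver distance [m]; phi: irradiance angle; psi: incidence angle [rad]."""
    m = -np.log(2.0) / np.log(np.cos(np.deg2rad(semi_angle_deg)))  # Lambertian order
    if psi > np.deg2rad(psi_fov_deg):
        return 0.0                                                 # outside the field of view
    return ((m + 1.0) * A_pd / (2.0 * np.pi * d**2)
            * np.cos(phi)**m * T_s * g_conc * np.cos(psi))

print(lambertian_gain(d=2.5, phi=np.deg2rad(20), psi=np.deg2rad(20)))
```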
Joint Beamforming Design for STAR-RIS Assisted URLLC-NOMA System
ZHU Jianyue, WU Yutong, CHEN Xiao, XIE Yaqin, XU Yao, ZHANG Zhizhong
2025, 47(2): 409-417. doi: 10.11999/JEIT240717
Abstract:
  Objective  This paper addresses the energy efficiency challenge in Ultra-Reliable Low-Latency Communication (URLLC) systems, crucial for mission-critical applications such as industrial automation and remote surgery. The integration of Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surfaces (STAR-RIS) with Non-Orthogonal Multiple Access (NOMA) is proposed to improve spectral efficiency and coverage while meeting URLLC’s stringent reliability and latency requirements. However, the joint optimization of base station beamforming, STAR-RIS transmission, and reflection matrices presents a non-trivial problem due to non-convexity and coupled variables. This work aims to minimize energy consumption under a total power constraint by jointly designing these parameters, advancing STAR-RIS-aided NOMA systems for URLLC.  Methods  To address the non-convex optimization problem, the proposed methodology involves several key steps. First, the user rate function under finite blocklength transmission is analyzed, considering the specific requirements of URLLC. This analysis facilitates the reformulation of the original problem into an equivalent form more amenable to optimization. Specifically, the rate function is approximated using a Taylor series expansion, and the effect of finite blocklength on decoding error probability is incorporated into the optimization framework. Next, an alternating optimization framework is adopted to decouple the joint design problem into subproblems, each focused on optimizing either the base station beamforming, the STAR-RIS transmission matrix, or the reflection matrix. Semidefinite Relaxation (SDR) techniques are then applied to address the non-convexity of these subproblems, ensuring efficient and tractable solutions. The SDR method transforms the original non-convex constraints into convex ones by relaxing certain matrix rank constraints, which are subsequently recovered using randomization techniques. The proposed approach is validated through extensive simulations, comparing its performance with Orthogonal Multiple Access (OMA) and traditional RIS-aided schemes. The simulation setup includes a multi-user scenario with varying channel conditions, blocklengths, and reliability requirements.  Results and Discussions  The main contributions of this paper are summarized as follows: (1) Joint Optimization of Active and Passive Beamforming Vectors: To minimize system transmission power, the paper jointly optimizes the active beamforming vector at the base station and the passive beamforming vector at the reflective surface, presenting an efficient joint beamforming design algorithm (Algorithm 1). (2) Validation and Energy Efficiency Comparison: Experimental results confirm the effectiveness of the proposed joint beamforming design. A comparison of energy consumption performance for STAR-RIS under different modes is provided. Specifically, the proposed STAR-RIS-aided NOMA scheme demonstrates a significant reduction in power consumption compared to OMA and conventional RIS-aided systems (Fig. 2 and Fig. 5). The proposed joint beamforming and STAR-RIS optimization framework effectively addresses the trade-offs between energy consumption, reliability, and latency in URLLC systems.   Conclusions  This paper presents a comprehensive framework for the transmission design of STAR-RIS-aided NOMA systems in URLLC scenarios.
By jointly optimizing beamforming, transmission, and reflection matrices, the proposed method significantly enhances energy efficiency while meeting the stringent requirements of URLLC. The use of alternating optimization and SDR techniques effectively addresses the non-convexity of the problem, providing practical and scalable solutions. The results highlight the potential of STAR-RIS-aided NOMA systems to support next-generation wireless communication applications, laying the foundation for further research in this area. Future work will explore the integration of machine learning techniques to further enhance the performance and adaptability of the proposed framework. Additionally, the impact of hardware impairments and imperfect channel state information on system performance will be investigated to ensure robustness in real-world deployments.
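The finite-blocklength rate analysis mentioned above is commonly captured by the normal approximation of the achievable rate (capacity minus a dispersion penalty that grows as the blocklength shrinks or the reliability target tightens). The helper below evaluates that standard approximation; the paper's exact expression and any Taylor refinement may differ, so treat this as a generic illustration.

```python
import numpy as np
from scipy.stats import norm

# Normal approximation of the achievable rate at finite blocklength n with error probability eps:
# R ~ log2(1 + snr) - sqrt(V/n) * Q^{-1}(eps) * log2(e), with dispersion V = 1 - (1 + snr)^{-2}.
def fbl_rate(snr, blocklength, error_prob):
    capacity = np.log2(1.0 + snr)
    dispersion = 1.0 - 1.0 / (1.0 + snr) ** 2
    penalty = np.sqrt(dispersion / blocklength) * norm.isf(error_prob) * np.log2(np.e)
    return max(capacity - penalty, 0.0)

print(fbl_rate(snr=10.0, blocklength=200, error_prob=1e-5))   # bits per channel use
```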
Task Offloading for Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surface-assisted Mobile Edge Computing
LI Bin, YANG Dongdong
2025, 47(2): 418-426. doi: 10.11999/JEIT240733
Abstract:
  Objective   Mobile Edge Computing (MEC) is a distributed computing paradigm that brings computational resources closer to users, alleviating issues such as high latency and interference found in cloud computing. To enhance the offloading performance of MEC systems and promote green communication, Reconfigurable Intelligent Surface (RIS), a low-cost and easily deployable technology, offers a promising solution. RIS consists of numerous low-cost reflecting elements that can adjust phase shifts to alter the amplitude and phase of incident signals, thereby reconstructing the electromagnetic environment. This transforms traditional passive adaptation into active control. However, the signal reflected by RIS must pass through a two-stage cascaded channel, which is susceptible to multiplicative fading, leading to limited performance gains when direct links are unobstructed. To mitigate this, the concept of active RIS has been proposed, integrating signal amplification circuits into RIS elements, which not only reflect but also amplify signals, effectively overcoming this issue. Additionally, RIS can only transmit or reflect incident signals, limiting coverage to half-space: either the user and base station must be on the same side (reflecting RIS) or on opposite sides (transmitting RIS). This constraint limits deployment flexibility. To address this, Simultaneously Transmitting And Reflecting Reconfigurable Intelligent Surface (STAR-RIS) is proposed, combining both transmission and reflection functions, where part of the signal is reflected to the same side, and the rest is transmitted to the opposite side. To address the challenges in practical RIS-assisted MEC systems, the active Simultaneously Transmitting And Reflecting Reconfigurable Intelligent Surface (aSTAR-RIS) is integrated into the MEC system to overcome geographic deployment constraints and effectively mitigate the effects of multiplicative fading.  Methods   Considering the computational resources available at the MEC server, the energy consumption of the aSTAR-RIS, and the phase shift coupling constraints, the task offloading ratio, computational resource allocation, Multi-User Detection (MUD) matrix, aSTAR-RIS phase shift, and transmission power are jointly optimized, resulting in a multivariable coupled weighted total latency minimization problem. To solve this problem, an iterative algorithm combining Block Coordinate Descent (BCD) and Penalty Dual Decomposition (PDD) algorithms is proposed. In each iteration, the original problem is decomposed into two subproblems: one for optimizing computational resource allocation and task offloading ratio, and the other for designing the aSTAR-RIS phase shift, MUD matrix, and transmission power. For the first subproblem, the Lagrange multiplier method is used to incorporate constraints into the objective function and enable efficient optimization. The optimal Lagrange multiplier and resource allocation are found using the bisection method. The second subproblem involves handling the fractional objective function using the weighted minimum mean square error algorithm. From the first-order conditions, the optimal MUD matrix is derived. For the aSTAR-RIS phase shift optimization, a non-convex phase shift coupling constraint is decoupled using the PDD algorithm.  Results and Discussions   As shown in (Fig. 2), the weighted total latency steadily decreases with increasing iterations and eventually stabilizes, validating the effectiveness of the proposed algorithm.
A comparison with three benchmark schemes reveals that, although the proposed scheme converges more slowly, it achieves the lowest weighted total latency upon convergence, with a 12.66% reduction compared to the passive STAR-RIS scheme. This improvement is mainly due to the power amplification effect, which reduces the impact of multiplicative fading, thereby enhancing the received signal at the base station and reducing latency. As illustrated in (Fig. 3), the weighted total latency decreases as the number of aSTAR-RIS elements increases, allowing for more reflection paths and higher channel gain. For fewer elements, aSTAR-RIS shows a significant performance gain over STAR-RIS, but as the number of elements grows, the performance of both aSTAR-RIS and passive STAR-RIS converges, primarily due to thermal noise and power constraints. Moreover, compared to the benchmark scheme that optimizes for maximum rate, the proposed scheme shows significant advantages in reducing latency. As shown in (Fig. 4), when the aSTAR-RIS power overhead increases, the weighted total latency decreases, further showing the potential of aSTAR-RIS in improving communication performance via active amplification.  Conclusions   This paper investigates a task offloading scheme for an aSTAR-RIS-assisted MEC system, which optimizes the task offloading ratio, computational resource allocation, MUD matrix, aSTAR-RIS phase shift, and transmission power to minimize total user delay. The optimization problem is solved using an iterative approach, decomposing the problem into two subproblems and applying the Lagrange multiplier method, PDD, and BCD algorithms. Simulation results demonstrate that the proposed algorithm significantly outperforms benchmark schemes in terms of weighted total latency. The findings validate the effectiveness of aSTAR-RIS in MEC systems, highlighting its advantages over passive STAR-RIS in task offloading, resource optimization, and communication performance.
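The first subproblem above is handled with the Lagrange multiplier method, with the optimal multiplier found by bisection. The sketch below illustrates that pattern on an assumed weighted-latency objective with a single server-capacity constraint (weights, cycle counts, and the capacity value are made up); the paper's actual objective and constraint set are richer.

```python
import numpy as np

# Bisection on the Lagrange multiplier for: minimize sum_k w_k * C_k / f_k
# subject to sum_k f_k <= F_total, f_k > 0. Stationarity gives f_k = sqrt(w_k*C_k/lmbda).
w = np.array([1.0, 2.0, 0.5, 1.5])          # latency weights
C = np.array([4e8, 2e8, 6e8, 3e8])          # offloaded CPU cycles per user
F_total = 5e9                                # server cycles per second

def used_budget(lmbda):
    return np.sum(np.sqrt(w * C / lmbda))    # decreasing in lmbda

lo, hi = 1e-20, 1e20
for _ in range(200):
    mid = np.sqrt(lo * hi)                   # geometric bisection over a wide range
    if used_budget(mid) > F_total:
        lo = mid                             # allocations too large -> raise the multiplier
    else:
        hi = mid
f = np.sqrt(w * C / hi)
print("allocations:", f, " total used:", f.sum())
```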
Energy Aware Reconfigurable Intelligent Surface Assisted Unmanned Aerial Vehicle Age of Information Enabled Data Collection Policies
ZHANG Tao, ZHANG Qian, ZHU Yingwen, DAI Chen
2025, 47(2): 427-438. doi: 10.11999/JEIT240866
Abstract:
  Objective  This study aims to develop and implement an optimization framework that addresses the critical balance between energy consumption and information freshness in Unmanned Aerial Vehicle (UAV)-assisted Internet of Things (IoT) data collection systems, enhanced by Reconfigurable Intelligent Surfaces (RIS). In complex urban environments, traditional line-of-sight communication between UAVs and ground-based IoT devices is often obstructed by buildings and infrastructure, hindering comprehensive coverage and efficient data collection. While RIS technology offers promising solutions by dynamically adjusting signal reflection directions, optimizing communication signal coverage, and enhancing quality, it introduces additional complexity in system design and resource allocation, requiring sophisticated adaptive optimization techniques. The integration of RIS enables stable communication connections across various UAV flight heights and angles, mitigating disruptions caused by obstacles or signal interference, thus improving data collection efficiency and reliability. However, this integration must account for multiple factors, including UAV energy consumption, communication complexity, and Age of Information (AoI) constraints. These approaches must adapt to the dynamic nature of UAV operations and fluctuating communication conditions, ensuring optimal performance in terms of energy efficiency and data freshness. The research also addresses several key challenges, including real-time adaptation to environmental changes, optimal scheduling of IoT device interactions, dynamic adjustment of RIS phase configurations, efficient trajectory planning, and the maintenance of data freshness under various system constraints. The proposed framework establishes a robust foundation for next-generation IoT data collection systems that can adapt to diverse operational conditions while maintaining high performance standards. This is achieved through the implementation of advanced deep reinforcement learning techniques, specifically designed to manage the complex interplay between UAV mobility, RIS configuration, and IoT device scheduling, ensuring efficient and timely data collection while optimizing system resources.  Methods  A comprehensive data collection optimization strategy is proposed, based on deep reinforcement learning principles, specifically designed to address the complex challenges in UAV-assisted IoT data collection systems enhanced by RIS technology. The methodology employs a Double Deep Q-Network (DDQN) architecture, integrating UAV trajectory planning, IoT device scheduling, and RIS phase adjustment within a three-dimensional grid-based movement space. The system incorporates a channel model that accounts for both direct and RIS-assisted communication paths, including a probabilistic path loss model for direct links and Rician fading for RIS-assisted links. The optimization problem is formulated as a Markov Decision Process (MDP), where the state space includes the UAV position, previous movement information, and average AoI, while the action space involves 3D movement decisions and IoT device scheduling. The reward function is designed to balance multiple performance metrics, including system AoI, UAV flight energy consumption, data collection energy, data upload energy, and penalties for boundary violations. 
The DDQN implementation utilizes two Q-networks—the current and target networks—separating action selection from action evaluation, effectively addressing the issue of Q-value overestimation. The training process incorporates experience replay for sample storage and periodic updates to the target network to enhance learning stability. Additionally, the RIS phase shift optimization is derived through geometric relationships, considering both direct and RIS-assisted communication paths. This comprehensive approach enables the joint optimization of UAV trajectory, IoT device scheduling, and RIS phase adjustment, while ensuring energy efficiency and timely data collection in complex communication environments.  Results and Discussions  The proposed method enables the UAV to dynamically adjust its flight trajectory and communication strategy based on real-time environmental conditions, enhancing data transmission efficiency while reducing energy consumption. Extensive simulation experiments comprehensively evaluate the performance of the DDQN-based optimization framework. Convergence analysis demonstrates that the method achieves faster and more stable convergence compared to traditional DQN approaches. The average reward steadily increases and stabilizes after approximately 200 episodes, while baseline methods exhibit slower convergence and higher performance variance (Fig. 3). The optimized UAV trajectory visualization shows that the method effectively guides the UAV to collect data efficiently from all IoT devices while avoiding unnecessary detours. The trajectory strikes a balance between visiting high-priority devices (those with higher AoI) and maintaining energy-efficient flight paths, clearly illustrating the effectiveness of the joint optimization of movement and device scheduling decisions (Fig. 4). Energy consumption analysis reveals that the proposed method achieves superior energy efficiency, with a 15% reduction in total energy consumption while maintaining comparable data collection performance. This improvement results from the intelligent integration of RIS-assisted communication and optimal trajectory planning, which reduces the need for energy-intensive maneuvers and prolonged hovering periods (Fig. 5) (Fig. 6). The AoI performance evaluation further confirms the method’s effectiveness in maintaining data freshness. The average AoI across all IoT devices remains consistently lower than in baseline methods, with a 20% improvement in worst-case AoI values. This demonstrates the method’s ability to balance the trade-off between visiting different devices and maintaining acceptable AoI levels, even under challenging network conditions. The framework’s adaptive nature is evident in its capacity to prioritize devices with critical AoI values while maintaining overall system efficiency, showing robust performance across varying network densities and device distributions (Fig. 5) (Fig. 6).  Conclusions  The proposed deep reinforcement learning-based optimization policy effectively addresses the complex challenges in UAV-assisted IoT data collection systems enhanced by RIS technology, demonstrating significant improvements in both energy efficiency and information freshness. The integration of advanced learning techniques with RIS-assisted communication provides a robust and adaptive solution for practical deployment in urban IoT environments. The comprehensive evaluation framework and detailed performance analysis offer valuable insights for system designers and practitioners. 
The superior performance in terms of convergence speed, trajectory optimization, energy efficiency, and AoI management confirms the effectiveness of the proposed approach. Future research will focus on extending the framework to multi-UAV coordination scenarios, exploring the impact of dynamic environmental changes, and developing more sophisticated reward mechanisms to address additional operational constraints, such as security and airspace restrictions. The promising results also indicate potential applications in emergency response systems, smart city infrastructure, and environmental monitoring networks.
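The DDQN update described in the Methods above separates action selection (online network) from action evaluation (target network). The snippet below shows that target computation with a toy linear Q-network and random batch so it runs standalone; the state/action dimensions and network are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Double-DQN loss: the online net picks the greedy next action, the target net evaluates it.
def ddqn_loss(online_q, target_q, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch
    q_sa = online_q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_next = online_q(next_states).argmax(dim=1, keepdim=True)     # action selection
        q_next = target_q(next_states).gather(1, best_next).squeeze(1)    # action evaluation
        target = rewards + gamma * q_next * (1.0 - dones)
    return F.mse_loss(q_sa, target)

if __name__ == "__main__":
    online_q, target_q = nn.Linear(4, 3), nn.Linear(4, 3)   # toy Q-networks: 4-dim state, 3 actions
    batch = (torch.randn(8, 4), torch.randint(0, 3, (8,)), torch.randn(8),
             torch.randn(8, 4), torch.zeros(8))
    print(ddqn_loss(online_q, target_q, batch))
```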
Reconfigurable Intelligent Surface-Aided Joint Spatial and Code Index Modulation Communication System
CHEN Pingping, ZHANG Yunxin, DU Weiqing
2025, 47(2): 439-448. doi: 10.11999/JEIT240987
Abstract:
  Objective   The rapid growth of wireless communication traffic is pushing existing networks toward greener, more energy-efficient solutions. Therefore, research into wireless communication systems that balance low complexity with high energy efficiency is of significant importance. Index Modulation (IM) technology, which offers advantages in low complexity and high energy efficiency, has emerged as a promising candidate for future systems. Reconfigurable Intelligent Surfaces (RIS) provide benefits such as reconfigurability, simple hardware, and low energy consumption, presenting new opportunities for wireless communication development. However, traditional RIS-aided Spatial Modulation (RIS-SM) and RIS-aided Code Index Modulation (RIS-CIM) systems use the index of the receiver antenna or code to transmit additional information bits. As a result, the data transmission rate of RIS-SM systems can only be improved at the cost of increasing the number of receiver antennas. To enhance the data transmission rates and energy efficiency of RIS-SM systems, this paper proposes an RIS-Aided Joint Spatial and Code Index Modulation (RIS-JSCIM) communication system.  Methods   The proposed system utilizes M-ary Quadrature Amplitude Modulation (M-QAM) symbols, spatial antenna index, and code index to transmit information bits. The information bits transmitted through the antenna and code indices of RIS-JSCIM do not consume energy, enabling RIS-JSCIM to achieve high energy efficiency. At the receiver, both Maximum Likelihood Detection (MLD) and low-complexity Greedy Detection (GD) algorithms are employed. The MLD algorithm, although high in complexity, provides excellent Bit Error Rate (BER) performance, while the GD algorithm offers a better trade-off between complexity and BER performance. Furthermore, this paper analyzes the energy efficiency and complexity of the proposed RIS-JSCIM system and uses Monte Carlo simulations to evaluate its BER performance. The performance metrics of the RIS-JSCIM system are also compared with those of other systems. The results show that, despite a slight increase in system complexity, the RIS-JSCIM system outperforms others in terms of energy efficiency and BER performance.  Results and Discussions   This paper compares the energy efficiency, system complexity, and BER performance of the RIS-JSCIM system with other systems. The comparison shows that, when the number of receiving antennas $N_R = 4$ and the number of Walsh codes $L = 8$, the energy efficiency of the RIS-JSCIM system improves by 60% and 6.67% compared to the RIS-SM and RIS-CIM systems, respectively (Table 2). The complexity of the RIS-JSCIM system with the GD algorithm is comparable to that of the GCIM-SM system and slightly higher than that of the RIS-CIM system (Table 3). Simulation results indicate that, at BER $= 10^{-5}$, the RIS-JSCIM system achieves a performance gain of over 6 dB compared to the RIS-CIM system (Fig. 5). As the number of RIS units $N$ increases, both the RIS-JSCIM and RIS-CIM systems show significant improvements in BER performance, with the RIS-JSCIM system outperforming the RIS-CIM system at high Signal-to-Noise Ratios (SNR). For example, at BER $= 10^{-5}$ and $N = 128$, the RIS-JSCIM system offers a 5 dB SNR gain over the RIS-CIM system (Fig. 6). Similarly, at high SNR, the BER performance of the RIS-JSCIM system consistently outperforms that of the RIS-SM system (Fig. 7).
Conclusions   The RIS-JSCIM system utilizes M-QAM symbols to transmit information bits and employs the receiver antenna and code indices to convey additional information. Both the MLD and GD algorithms are introduced for recovering the transmitted bits. The MLD algorithm explores all possible combinations of receiver antenna indices, code indices, and M-QAM symbols, offering improved BER performance at the cost of increased complexity. In contrast, the GD algorithm performs separate detection of antenna indices, code indices, and M-QAM symbols, providing a favorable trade-off between complexity and BER performance. The RIS-JSCIM system transmits receiver antenna index and code index bits without consuming energy, resulting in high energy efficiency. When the number of receiving antennas $N_R = 4$ and the number of Walsh codes $L = 8$, the energy efficiency of the RIS-JSCIM system improves by 60% and 6.67% compared to the RIS-SM and RIS-CIM systems, respectively. Moreover, when BER $= 10^{-5}$ and $N = 128$, the RIS-JSCIM system offers a 5 dB SNR gain over the RIS-CIM system.
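As described above, part of the input bit stream is mapped to the receive-antenna index, part to the Walsh code index, and the remainder to an M-QAM symbol, with the index bits carried without additional radiated energy. The helper below simply counts the bits conveyed per transmission under that joint mapping; the QAM order M = 16 is an assumed example, and the paper's energy-efficiency definition is not reproduced here.

```python
import numpy as np

# Bits per channel use for the joint spatial/code index modulation mapping:
# log2(N_R) antenna-index bits + log2(L) code-index bits + log2(M) QAM bits.
def bits_per_use(num_rx_antennas, num_codes, qam_order):
    return (int(np.log2(num_rx_antennas))
            + int(np.log2(num_codes))
            + int(np.log2(qam_order)))

print(bits_per_use(num_rx_antennas=4, num_codes=8, qam_order=16))   # 2 + 3 + 4 = 9 bits
```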
An Unfolded Channel-based Physical Layer Key Generation Method For Reconfigurable Intelligent Surface-Assisted Communication Systems
YANG Lijun, CHEN Zishuo, LU Haitao, GUO Lin
2025, 47(2): 449-457. doi: 10.11999/JEIT240988
Abstract:
  Objective  Physical Layer Key Generation (PLKG) is an emerging technique that leverages the reciprocity, time variability, and spatial decorrelation properties of wireless channels to enable real-time key generation. This method offers potential for one-time-pad encryption and resilience against quantum attacks. PLKG typically includes four key steps: channel probing, preprocessing and quantization, information reconciliation, and privacy amplification. Proper preprocessing can improve channel reciprocity, eliminate redundancy, increase the Key Generation Rate (KGR), and reduce the Key Disagreement Rate (KDR). Reconfigurable Intelligent Surfaces (RIS) present advantages such as low cost, low power consumption, and ease of deployment. By manipulating incident signals in terms of amplitude, phase, and polarization, RIS enables the creation of intelligent communication environments, offering a novel approach to mitigating channel limitations in key generation. However, current preprocessing methods like Principal Component Analysis (PCA), Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD), and nonlinear processing typically treat channel data as a whole for noise reduction and redundancy removal. These methods overlook the key capacity loss induced by channel cascading in RIS-assisted systems, limiting KGR. To address this challenge, this paper proposes a novel PLKG protocol based on unfolded channels, aimed at mitigating key capacity loss due to channel cascading, thereby enhancing KGR.  Methods   This paper first derives the degradation effect of channel cascading on the KGR using entropy theory and validates it through theoretical simulations. A PLKG scheme tailored for RIS-assisted communication scenarios is then proposed, with enhancements in both channel probing and preprocessing. In the channel probing phase, a two-stage channel estimation approach is introduced. The first stage employs the PARAllel FACtor (PARAFAC) method for channel estimation, utilizing the multidimensional information structure inherent in Multiple Input Multiple Output (MIMO) communication systems to construct a tensor. This tensor is used to estimate the baseline unfolded channel via the Alternating Least Squares (ALS) algorithm. In the second stage, the RIS phase shift matrix is randomized, and the Least Squares (LS) method is applied to estimate the cascaded channel, introducing an additional source of randomness for key generation. In the channel preprocessing phase, the baseline unfolded channel derived from the two-stage estimation is used to separate the cascaded channel into the unfolded channel and the RIS phase shift matrix. Conventional methods such as PCA, DCT, and Wavelet Transform (WT) are applied to remove noise and redundancy from the obtained data. By utilizing both the unfolded channel and the RIS phase shift matrix as joint key sources, the proposed scheme mitigates the KGR degradation caused by channel cascading, enhancing KGR while maintaining a low KDR.  Results and Discussions   A Rayleigh channel MIMO communication system model is established for experimentation. The proposed two-stage channel estimation method is used to separate the cascaded channel into the unfolded channel and the RIS phase shift matrix. Three preprocessing methods—PCA, DCT, and WT—are then applied to the cascaded channel, unfolded channel, and RIS phase shift matrix for noise reduction and decorrelation. 
The extracted channel features are quantized, followed by information reconciliation and privacy amplification. The experiment compares two key generation approaches: one using the cascaded channel as the key source and the other using the unfolded channel and RIS phase shift matrix as joint key sources. Simulation results show that the proposed scheme achieves a 72% improvement in KGR at a 2 dB Signal-to-Noise Ratio (SNR) (Fig. 8). Among the preprocessing methods, DCT demonstrates the highest KGR and the lowest KDR (Fig. 9, Fig. 10, Fig. 11). Additionally, experiments on the number of RIS configuration matrices indicate that increasing the number beyond eight yields diminishing returns in KGR improvement. Thus, an optimal range of 8–10 configuration matrices is recommended. Furthermore, the computational complexity of the PARAFAC channel estimation method is analyzed, and the feasibility of real-time key generation is validated by considering channel coherence time, algorithm complexity, and communication protocol frame intervals.  Conclusions   This paper proposes a PLKG scheme that utilizes the PARAFAC channel estimation method to estimate the unfolded channel and the LS method to estimate the cascaded channel. During preprocessing, the cascaded channel is decomposed into the unfolded channel and the RIS phase shift matrix. By using both the unfolded channel and the RIS phase shift matrix as joint key sources, the proposed method mitigates the degradation of KGR caused by channel cascading. Compared with conventional PLKG schemes that use the cascaded channel as the key source, the proposed method achieves a 72% improvement in KGR at a 2 dB SNR, while maintaining a low KDR. However, despite enhancing KGR, the proposed scheme still faces challenges such as excessive pilot overhead and computational limitations. Future work should focus on reducing this overhead to improve the scheme’s practicality.
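The quantization step mentioned above turns the (reciprocal but noisy) channel features observed by the two parties into key bits, and the Key Disagreement Rate (KDR) is the fraction of mismatched bits before reconciliation. The snippet below uses a generic single-threshold quantizer on synthetic features purely to illustrate the step; it is not the quantizer, feature set, or noise model used in the paper.

```python
import numpy as np

# Generic 1-bit mean-threshold quantization of channel features at both parties, plus KDR.
rng = np.random.default_rng(1)
true_features = rng.standard_normal(256)                 # shared randomness (e.g., unfolded channel)
alice = true_features + 0.1 * rng.standard_normal(256)   # independent estimation noise at Alice
bob = true_features + 0.1 * rng.standard_normal(256)     # independent estimation noise at Bob

key_a = (alice > alice.mean()).astype(int)
key_b = (bob > bob.mean()).astype(int)
kdr = np.mean(key_a != key_b)                            # fraction of disagreeing key bits
print(f"key length = {key_a.size} bits, KDR = {kdr:.3f}")
```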
Wireless Communication and Internet of Things
Research on Task Offloading and Resource Allocation Algorithms in Cloud-edge-end Collaborative Computing for the Internet of Things
SHI Jianfeng, CHEN Xinyang, LI Baolong
2025, 47(2): 458-469. doi: 10.11999/JEIT240659
Abstract:
  Objective  With the rapid pace of digital transformation and the smart upgrading of the economy and society, the Internet of Things (IoT) has become a critical element of new infrastructure. Current wide-area IoT networks primarily rely on 5G terrestrial infrastructure. While these networks continue to evolve, challenges persist, particularly in remote or disaster-affected areas. The high cost and vulnerability of base stations hinder deployment and maintenance in these locations. Satellite networks provide seamless coverage, flexibility, and reliability, making them compelling alternatives to terrestrial networks for achieving global connectivity. Satellite-assisted Internet of Things (SIoT) can deliver ubiquitous and reliable connectivity for IoT devices. Typically, IoT devices offload tasks to edge servers or cloud platforms due to their limited power, computing, and caching resources. Mobile Edge Computing (MEC) helps reduce latency by caching content and placing edge servers closer to IoT devices. Low Earth Orbit (LEO) satellites with integrated processing units can also serve as edge computing nodes. Although cloud platforms offer abundant computing resources and a reliable power supply, the long distance between IoT devices and the cloud results in higher communication latency. With the explosive growth of IoT devices and the diversification of application requirements driven by 5G, it is essential to design a collaborative architecture that integrates cloud, edge, and end devices. Recent research has extensively explored MEC-enhanced SIoT systems. However, many studies focus solely on edge or cloud computing, with little emphasis on their integration, satellite mobility, or resource constraints. Furthermore, LEO satellites providing edge services face challenges due to their limited onboard resources and the high mobility of the satellite constellation, complicating resource allocation and task offloading. Single-satellite solutions may not satisfy performance expectations during peak demand. Inter-Satellite Collaboration (ISC) technology, which utilizes visible light communications, can significantly increase system capacity, extend coverage, reduce individual satellite resource consumption, and prolong network operational life. Although some studies address three-tier architectures involving IoT devices, satellites, and clouds, proposing load balancing mechanisms through ISC for optimizing offloading and resource allocation, many rely on static assumptions about network topologies and user associations. In practice, LEO satellites require frequent switching and dynamic adjustments in offloading strategies to maintain service quality due to their high-speed mobility. Therefore, there is a need for a method of task offloading and resource allocation in a dynamic environment that considers satellite mobility and limited resources. To address these research gaps, this paper proposes a dynamic ISC-enhanced cloud-edge-end SIoT network model. By formulating the joint optimization problem of offloading decisions and resource allocation as a Mixed Integer Non-Linear Programming (MINLP) problem, a Model-assisted Adaptive Deep Reinforcement Learning (MADRL) algorithm is developed to achieve minimum system cost in a changing environment.  Methods  The LEO satellite mobility model and the SIoT network model with ISC are constructed to analyze end-to-end latency and system energy consumption. This evaluation considers three modes: local computing, edge computing, and cloud computing. 
A joint optimization MINLP problem is formulated, focusing on task offloading and resource allocation to minimize system costs. A MADRL algorithm is introduced, integrating traditional optimization techniques with deep reinforcement learning. The algorithm operates in two parts. The first part optimizes communication and computational resource allocation using a model-assisted binary search algorithm and gradient descent method. The second part trains a Q-network to adapt offloading decisions based on stochastic task arrivals through an adaptive deep reinforcement learning approach.  Results and Discussions  Simulation experiments were conducted under various dynamic scenarios. The MADRL algorithm exhibits strong convergence properties, as demonstrated in the analysis. Comparisons of different learning rates and exploration decay factors reveal optimal parameter values. Incorporating satellite mobility reduces system costs by 41% compared to static scenarios, enabling dynamic resource allocation and improved efficiency. Integrating ISC reduces system costs by 22.1%. This demonstrates the effectiveness of inter-satellite load balancing in improving resource utilization. Additionally, the MADRL algorithm achieves a 3% reduction in system costs compared to the Deep Q Learning (DQN) algorithm, highlighting its adaptability and efficiency in dynamic environments. System costs decrease as satellite speed increases, with the MADRL algorithm consistently outperforming other methods.  Conclusions  This paper presents an innovative dynamic SIoT model that integrates IoT devices, LEO satellites, and a cloud computing center. The model addresses the latency and energy consumption issues faced by IoT devices in remote and disaster-stricken areas. The task offloading and resource allocation problem that minimizes system cost is constructed by incorporating ISC techniques to enhance satellite edge performance and by taking satellite mobility into account. A MADRL algorithm that combines traditional optimization with deep reinforcement learning is proposed. This approach effectively optimizes task offloading decisions and resource allocation. Simulation results demonstrate that our model and algorithm significantly reduce system costs. Specifically, the incorporation of satellite mobility and ISC technology leads to cost reductions of 41% and 22.1%, respectively. Compared to benchmark algorithms, the MADRL shows superior performance across various test environments, highlighting its significant application advantages.
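The model-assisted part of the MADRL algorithm described above allocates communication and computational resources with classical tools (binary search and gradient descent) once an offloading decision is fixed. The sketch below shows a projected-gradient step for splitting an edge server's CPU budget among offloaded tasks under an assumed weighted-latency objective; the simplex parameterization and all numbers are illustrative, not the paper's model.

```python
import numpy as np

# Projected gradient descent on CPU shares x (on the simplex) to minimize sum_k w_k*C_k/(x_k*F).
def project_to_simplex(v):
    """Euclidean projection of v onto {x >= 0, sum x = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

w = np.array([1.0, 0.5, 2.0])        # task weights
C = np.array([3e8, 1e8, 5e8])        # CPU cycles of each offloaded task
F = 1e10                             # satellite edge server cycles per second
x = np.ones(3) / 3                   # start from an even split
for _ in range(500):
    grad = -w * C / (F * x**2)       # gradient of sum_k w_k*C_k/(x_k*F)
    x = project_to_simplex(x - 1e-2 * grad)
print("CPU shares:", x, " weighted latency:", np.sum(w * C / (x * F)))
```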
Joint Task Scheduling and Computing Resource Allocation Optimization Strategy in Asynchronous Mobile Edge Computing Networks
WANG Ruyan, YANG Anqi, WU Dapeng, TANG Tong, ZHU Zhiyuan
2025, 47(2): 470-479. doi: 10.11999/JEIT240685
Abstract:
  Objective  Mobile Edge Computing (MEC) is a key technology for addressing the limited computing capabilities and energy constraints of wireless devices. MEC improves local computing performance and extends battery life by offloading computationally intensive tasks from sensors to nearby edge servers. However, in dynamic environments such as anomaly detection, environmental monitoring, and vehicle positioning, task heterogeneity becomes a significant factor limiting performance. For example, the asynchrony of task generation times can result in issues such as low communication efficiency and increased latency. Furthermore, traditional latency measurement techniques often fail to accurately assess task timeliness. To address these challenges, this paper proposes a strategy for the joint optimization of task scheduling and computational resource allocation in asynchronous MEC networks. The proposed strategy adaptively optimizes task scheduling and resource allocation, minimizing the average information age and energy consumption, thereby enhancing overall system performance.  Methods  This paper focuses on age-aware asynchronous MEC offloading and resource allocation. Specifically, a mathematical model is formulated based on the First Come First Served (FCFS) queuing principle, considering the order of asynchronous task arrivals. This model optimizes task scheduling and computational resource allocation in asynchronous MEC offloading, with the goal of minimizing the Average Age of Information (AoI) and average energy consumption. In dynamic asynchronous MEC, optimization problems are inherently complex. When the problem involves both binary offloading decisions and continuous resource allocation, the mixed action space further complicates problem-solving, transforming it into a non-convex optimization challenge. Additionally, the actor network of the Advantage Actor-Critic (A2C) algorithm adapts its output layer to either a Categorical or Gaussian distribution, depending on whether the action space is discrete or continuous. This paper proposes a Hybrid Advantage Actor-Critic (HA2C) Deep Reinforcement Learning (DRL) algorithm, which effectively optimizes dynamic task scheduling and computational resource allocation strategies as tasks are generated.  Results and Discussions  In the simulations, the performance of the algorithm is evaluated by comparing four different strategies: random strategy, DRL strategy, delay strategy, and synchronous strategy. The following conclusions are drawn: 1. Average AoI is a more sensitive measure of task timeliness than latency metrics. It not only accounts for the time interval between task generation and reception but also considers the intervals between task generations, offering a better measure of task timeliness. Moreover, the HA2C algorithm effectively balances the timeliness of information and energy consumption, achieving optimal average AoI and energy consumption (Figure 4). 2. The hybrid action space of the HA2C algorithm is better suited for adapting to a growing number of devices. As the number of devices increases, HA2C significantly outperforms multi-agent algorithms and traditional A2C algorithms (Figure 5). This is because the number of actions in a discrete action space grows exponentially with the device count, ultimately leading to the curse of dimensionality, which degrades the performance of discrete DRL algorithms. 3. In the asynchronous MEC model, task generation occurs instantaneously and asynchronously.
This setup allows a large amount of computational resources to be concentrated on tasks that arrive earlier, maximizing the utilization of MEC resources. As a result, asynchronous models outperform synchronous models in terms of both average AoI and average energy consumption (Figure 6). In conclusion, these experiments confirm that, compared to synchronous models, asynchronous models not only significantly improve computational efficiency but also effectively reduce energy consumption. Furthermore, the proposed HA2C algorithm proves to be highly effective in solving the asynchronous edge offloading and resource allocation problems, maintaining efficient performance even as the number of devices increases.  Conclusions  This paper leverages MEC to address the limited computational capacity and energy of wireless devices. Specifically, the paper considers scenarios where Wireless Sensor Network (WSN) edge computing systems continuously collect and process data to monitor real-time changes in the detection environment. In these contexts, the paper focuses on the heterogeneous generation times of sensor tasks deployed at different locations. The optimization goal is to minimize both average information age and energy consumption, achieved through task scheduling and adaptive resource allocation. The HA2C algorithm is designed to handle dynamic and unpredictable system changes while simultaneously managing both continuous and discrete actions. Simulation results demonstrate that the algorithm significantly reduces average information age and energy consumption in asynchronous offloading networks, while meeting the timeliness requirements of tasks in WSNs.
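The hybrid action space described above combines a Categorical head for discrete offloading decisions with a Gaussian head for continuous resource allocation. The sketch below shows such an actor head in PyTorch; the layer sizes, action dimensions, and log-std parameterization are assumptions for illustration, not the paper's network configuration.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

# Minimal hybrid actor head: one branch outputs a Categorical distribution over discrete
# scheduling choices, the other a Gaussian over continuous resource-allocation actions.
class HybridActor(nn.Module):
    def __init__(self, state_dim=8, num_discrete=4, cont_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.logits = nn.Linear(64, num_discrete)        # discrete: which task/server to schedule
        self.mu = nn.Linear(64, cont_dim)                 # continuous: resource fractions
        self.log_std = nn.Parameter(torch.zeros(cont_dim))

    def forward(self, state):
        h = self.backbone(state)
        return Categorical(logits=self.logits(h)), Normal(self.mu(h), self.log_std.exp())

actor = HybridActor()
disc_dist, cont_dist = actor(torch.randn(1, 8))
a_d, a_c = disc_dist.sample(), cont_dist.sample()
log_prob = disc_dist.log_prob(a_d) + cont_dist.log_prob(a_c).sum(dim=-1)   # joint log-probability
print(a_d.item(), a_c.numpy(), log_prob.item())
```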
Cross-Entropy Iteration Aided Time-Hopping Pattern Estimation and Multi-hop Coherent Combining Algorithm
MIAO Xiaqing, WU Rui, YUE Pingyue, ZHANG Rui, WANG Shuai, PAN Gaofeng
2025, 47(2): 480-489. doi: 10.11999/JEIT240677
Abstract:
  Objective:   As a vital component of the global communication network, satellite communication attracts significant attention for its capacity to provide seamless global coverage and establish an integrated space-ground information network. Time-Hopping (TH), a widely used technique in satellite communication, is distinguished by its strong anti-jamming capabilities, flexible spectrum utilization, and high security levels. In an effort to enhance data transmission security, a system utilizing randomly varying TH patterns has been developed. To tackle the challenge of limited transmission power, symbols are distributed across different time slots and repeatedly transmitted according to random TH patterns. At the receiver end, a coherent combining strategy is implemented for signals originating from multiple time slots. To minimize Signal-to-Noise Ratio (SNR) loss during this combining process, precise estimation of TH patterns and multi-hop carrier phases is essential. The randomness of the TH patterns and multi-hop carrier phases further complicates parameter estimation by increasing its dimensionality. Additionally, the low transmission power leads to low-SNR conditions for the received signals in each time slot, complicating parameter estimation even more. Traditional exhaustive search methods are hindered by high computational complexity, highlighting the pressing need for low-complexity multidimensional parameter estimation techniques tailored specifically for TH communication systems.  Methods:   Firstly, a TH communication system featuring randomly varying TH patterns is developed, where the time slot index of the signal in each time frame is determined by the TH code. Both parties involved in the communication agree that this TH code will change randomly within a specified range. Building on this foundation, a mathematical model for estimating TH patterns and multi-hop carrier phases is derived from the perspective of maximum likelihood estimation, framing it as a multidimensional nonlinear optimization problem. Moreover, guided by a coherent combining strategy and constrained by low SNR conditions at the receiver, a Cross-Entropy (CE) iteration aided algorithm is proposed for the joint estimation of TH patterns and multi-hop carrier phases. This algorithm generates multiple sets of TH code and carrier phase estimates randomly based on a predetermined probability distribution. Using the SNR loss of the combined signal as the objective function, the CE method incorporates an adaptive importance sampling strategy to iteratively update the probability distribution of the estimated parameters, facilitating rapid convergence towards optimal solutions. Specifically, in each iteration, samples demonstrating superior performance are selected according to the objective function to calculate the probability distribution for the subsequent iteration, thereby enhancing the likelihood of reaching the optimal solution. Additionally, to account for the randomness inherent in the iterations, a global optimal vector set is established to document the parameter estimates that correspond to the minimum SNR loss throughout the iterative process. Finally, simulation experiments are conducted to assess the performance of the proposed algorithm in terms of iterative convergence speed, parameter estimation error, and the combined demodulation Bit Error Rate (BER).  
Results and Discussions:   The estimation errors for the TH code and carrier phase were simulated to evaluate the parameter estimation performance of the proposed algorithm. With an increase in SNR, the accuracy of TH code estimation approaches unity. When a small phase quantization bit width is applied, the Root Mean Square Error (RMSE) of the carrier phase estimation is primarily constrained by the grid search step size. Conversely, as the phase quantization bit width increases, the RMSE gradually converges to a fixed value. Regarding the influence of phase quantization on combined demodulation, as the phase quantization bit width increases, nearly theoretical BER performance can be achieved. A comparison between the proposed algorithm and the exhaustive search method reveals that the proposed algorithm significantly reduces the number of search trials compared to the grid search method, with minimal loss in BER performance. An increase in the variation range of the TH code necessitates a larger number of candidate groups for the CE method to maintain a low combining SNR loss. However, with a greater TH code variation range, the number of search iterations and its growth rate in the proposed algorithm are significantly lower than those in the exhaustive search method. Regarding transmission power in the designed TH communication method, as the number of hops in the multi-hop combination increases, the required SNR per hop decreases for the same BER performance, indicating that maximum transmission power can be correspondingly reduced.  Conclusions:   A TH communication system with randomly varying TH patterns tailored for satellite communication applications has been designed. This includes the presentation of a multi-hop signal coherent combining technique. To address the multidimensional parameter estimation challenge associated with TH patterns and multi-hop carrier phases under low SNR conditions, a CE iteration-aided algorithm has been proposed. The effectiveness of this algorithm is validated through simulations, and its performance regarding iterative convergence characteristics, parameter estimation error, and BER performance has been thoroughly analyzed. The results indicate that, in comparison to the conventional grid search method, the proposed algorithm achieves near-theoretical optimal BER performance while maintaining lower complexity.
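The cross-entropy iteration described above maintains a probability distribution over candidate parameters, samples candidates, keeps an elite fraction scored by the objective, and refits the distribution to those elites. The toy below applies that loop to a categorical table over per-frame TH codes; the scoring function is a stand-in (agreement with a hidden pattern) rather than the combining-SNR loss, and the carrier-phase dimension is omitted for brevity.

```python
import numpy as np

# Generic cross-entropy iteration over a per-frame categorical distribution of TH codes.
rng = np.random.default_rng(0)
num_frames, num_slots = 8, 16
true_code = rng.integers(0, num_slots, num_frames)        # unknown TH pattern to recover

def score(candidate):                                      # placeholder objective
    return np.sum(candidate == true_code)

P = np.full((num_frames, num_slots), 1.0 / num_slots)      # probability table over TH codes
num_samples, elite_frac, alpha = 200, 0.1, 0.7
for it in range(30):
    samples = np.stack([[rng.choice(num_slots, p=P[f]) for f in range(num_frames)]
                        for _ in range(num_samples)])
    scores = np.array([score(s) for s in samples])
    elites = samples[np.argsort(scores)[-int(elite_frac * num_samples):]]   # best candidates
    emp = np.stack([np.bincount(elites[:, f], minlength=num_slots) for f in range(num_frames)])
    P = (1 - alpha) * P + alpha * emp / emp.sum(axis=1, keepdims=True)      # refit distribution
    if np.all(P.max(axis=1) > 0.99):                       # distribution has concentrated
        break
print("estimated TH code:", P.argmax(axis=1), " true:", true_code)
```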
Pilot Design Method for OTFS System in High-Speed Mobile Scenarios
LI Yibing, TANG Yunhe, JIAN Xin, SUN Qian, CHEN Hao
2025, 47(2): 490-497. doi: 10.11999/JEIT240349
Abstract:
  Objective  Orthogonal Time Frequency Space (OTFS) modulation has attracted significant attention in recent years due to its excellent performance in high-speed mobile communication scenarios characterized by time-frequency double-selective channels. Accurate and efficient channel state information acquisition is critical for these systems. To address this, a channel estimation method based on compressed sensing is employed, using specialized pilot sequences. The performance of such compressed-sensing-based channel estimation algorithms depends on the cross-correlation properties of the dictionary sets generated by the pilot sequences, which vary with the sequence design. This study addresses the pilot design problem in OTFS communication systems, proposing an optimization method to identify pilot sequences that enhance channel estimation accuracy effectively.  Methods  A pilot-assisted channel estimation algorithm based on compressed sensing is employed to estimate the delay and Doppler channel state information in OTFS systems for high-speed mobile scenarios. To improve channel estimation accuracy in the Delay-Doppler domain and achieve better performance than traditional pseudo-random sequences, this study proposes a pilot sequence optimization method using an Improved Genetic Algorithm (IGA). The algorithm takes the cross-correlation among dictionary set columns as the optimization goal, leveraging the GA’s strong integer optimization capabilities to search for optimal pilot sequences. An adaptive adjustment strategy for crossover and mutation probabilities is also introduced to enhance the algorithm’s convergence and efficiency. Additionally, to address the high computational complexity of the fitness function, the study analyzes the expressions for calculating cross-correlation among dictionary set columns and simplifies redundant calculations, thereby improving the overall optimization efficiency.  Results and Discussions  This study investigates the channel estimation performance of OTFS systems using different pilot sequences. The simulation parameters are presented in (Table 1), and the simulation results are shown in (Figure 2), (Figure 3), and (Figure 4). (Figure 2) illustrates the convergence performance of several commonly used population-based heuristic optimization algorithms applied to the pilot optimization problem, including the Particle Swarm Optimization (PSO) algorithm, Discrete Particle Swarm Optimization (DPSO) algorithm, Snake Optimization (SO) algorithm, and Genetic Algorithm (GA). The results indicate that the performance of common continuous optimization algorithms, such as PSO and SO, is comparable, and DPSO slightly outperforms traditional PSO. GA, owing to its crossover and mutation mechanisms, demonstrates significantly faster convergence and better solutions. Furthermore, this study proposes a targeted IGA capable of adaptively adjusting crossover and mutation probabilities, leading to better solutions with fewer iterations. The objective function calculation process is also analyzed and simplified, reducing its computational complexity from $O(\lambda^2 k_p^2 l_p)$ to $O(\lambda k_p l_p)$ without altering the cross-correlation coefficient, which significantly reduces the computational load while maintaining optimization efficiency.
Figs. 3 and 4 depict the Normalized Mean Square Error (NMSE) and Bit Error Rate (BER) performance of OTFS systems using different pilot sequences for channel estimation. Commonly used pseudo-random sequences, including m-sequences, Gold sequences, and Zadoff-Chu sequences, are compared with the optimized sequences generated by the proposed algorithm. The results demonstrate that the optimized pilot sequences achieve superior channel estimation performance compared with the other pilot sequences.  Conclusions  This study analyzes a pilot-assisted channel estimation method for OTFS systems based on compressed sensing and proposes a pilot sequence optimization approach using an IGA to address the pilot optimization challenge. The optimization objective function is constructed from the correlation among dictionary set columns, and an adaptive adjustment strategy for crossover and mutation probabilities is proposed to enhance the algorithm’s convergence speed and optimization capability, outperforming other commonly used population-based heuristic optimization algorithms. To address the high computational complexity of directly calculating the cross-correlation coefficients, the calculation steps are simplified, reducing the complexity from $O(\lambda^2 k_p^2 l_p)$ to $O(\lambda k_p l_p)$ while preserving the cross-correlation properties, thereby improving optimization efficiency. Simulation results demonstrate that the proposed optimized pilot sequences offer better channel estimation performance than traditional pseudo-random pilot sequences, with relatively low optimization complexity.
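As an illustration of the adaptive crossover/mutation idea described above, the following Python sketch shows a minimal improved genetic algorithm for pilot search. It is not the authors' implementation: the delay-Doppler dictionary is replaced by a placeholder built from cyclic shifts of a bipolar pilot, the simplified fitness expression from the paper is not reproduced, and the adaptation rule, sequence length, and rate bounds are all illustrative assumptions.

```python
import numpy as np

def build_dictionary(pilot, num_shifts=16):
    """Placeholder dictionary: columns are cyclic shifts of the bipolar pilot."""
    return np.stack([np.roll(pilot, s) for s in range(num_shifts)], axis=1).astype(float)

def peak_cross_correlation(pilot):
    """Fitness: peak normalized cross-correlation between distinct columns (lower is better)."""
    D = build_dictionary(pilot)
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def improved_ga(seq_len=64, pop_size=40, generations=200,
                pc=(0.6, 0.9), pm=(0.01, 0.1), seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.choice([-1, 1], size=(pop_size, seq_len))        # candidate pilot symbols
    best, best_f = None, np.inf
    for _ in range(generations):
        fit = np.array([peak_cross_correlation(p) for p in pop])
        if fit.min() < best_f:
            best_f, best = fit.min(), pop[fit.argmin()].copy()
        f_min, f_avg = fit.min(), fit.mean()
        # Tournament selection: the candidate with lower peak correlation wins.
        a, b = rng.integers(0, pop_size, size=(2, pop_size))
        win = np.where(fit[a] < fit[b], a, b)
        parents, pfit = pop[win], fit[win]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            f_pair = min(pfit[i], pfit[i + 1])
            # Adaptive crossover probability: pairs better than the population
            # average are crossed less often, preserving good pilot structures.
            if f_pair <= f_avg:
                p_c = pc[0] + (pc[1] - pc[0]) * (f_pair - f_min) / (f_avg - f_min + 1e-12)
            else:
                p_c = pc[1]
            if rng.random() < p_c:
                cut = rng.integers(1, seq_len)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        for i in range(pop_size):
            # Adaptive mutation probability: poor candidates are mutated more aggressively.
            if pfit[i] <= f_avg:
                p_m = pm[0] + (pm[1] - pm[0]) * (pfit[i] - f_min) / (f_avg - f_min + 1e-12)
            else:
                p_m = pm[1]
            flip = rng.random(seq_len) < p_m
            children[i, flip] *= -1
        pop = children
    return best, best_f

if __name__ == "__main__":
    pilot, peak_xcorr = improved_ga()
    print("peak cross-correlation of optimized pilot:", peak_xcorr)
```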
Power Control and Resource Allocation Strategy for Information Freshness Guarantee in Internet of Vehicles
YANG Peng, KANG Yiming, YANG Jing, TANG Tong, ZHU Zhiyuan, WU Dapeng
2025, 47(2): 498-509. doi: 10.11999/JEIT240698
Abstract:
  Objective  In the Internet of Vehicles (IoV), where differentiated services coexist, the system is progressively evolving towards safety and collaborative control applications, such as autonomous driving. Current research primarily focuses on optimizing mechanisms for high reliability and low latency, with Quality of Service (QoS) parameters commonly used as benchmarks, while the timeliness of vehicle status updates receives less attention. Merely optimizing metrics such as transmission delay and throughput is insufficient to ensure that vehicles obtain status information in a timely manner. For example, in safety-critical IoV applications, which require the exchange of state information between vehicles, meeting only delay outage probability or data transmission outage constraints does not fully address the high timeliness requirements of safety services. To tackle this challenge and meet the stringent timeliness demands of safety and collaborative applications, this paper proposes a user power control and resource allocation strategy aimed at ensuring information freshness.  Methods  This paper investigates user power control and resource allocation strategies to ensure information freshness. First, the problem of maximizing the Quality of Experience (QoE) for Vehicle-to-Infrastructure (V2I) users under the constraint of freshness in Vehicle-to-Vehicle (V2V) status updates is formulated based on the system model. Then, by incorporating a queue backlog constraint equivalent to the Age of Information (AoI) violation constraint, extreme value theory is applied to optimize the tail distribution of AoI. Furthermore, using the Lyapunov optimization method, the original problem is transformed into minimizing the Lyapunov drift plus a penalty function, from which the optimal user transmission power is determined. Finally, a resource allocation strategy based on Genetic-Algorithm-improved Particle Swarm Optimization (GA-PSO) is proposed, leveraging a hypergraph structure to determine the optimal user channel reuse mode.  Results and Discussions  Simulation analysis indicates the following: 1. The proposed algorithm employs a channel gain differential partitioning method to cluster V2V links, effectively reducing intra-cluster interference. By integrating GA-PSO, it accelerates the search for the optimal channel reuse pattern in three-dimensional matching, minimizing signaling overhead and avoiding local optima. Compared with benchmark algorithms, the proposed approach increases V2I channel capacity by 7.03% and significantly improves the average QoE for V2I users (Fig. 4). 2. As vehicle speed increases, the distance between vehicles also grows, requiring higher transmission power for V2V communication to maintain link reliability and timeliness. This power increase reduces V2I channel capacity, subsequently lowering the average QoE for V2I users. Simulation results show a nearly linear relationship between vehicle speed and the average QoE for V2I users, suggesting a relatively uniform effect of speed on V2I link capacity (Fig. 5). 3. Under varying Vehicle User Equipment (VUE) densities, the extreme event control framework is used to compare the conditional Complementary Cumulative Distribution Function (CCDF) of AoI and V2V link beacon backlog. The equivalent queue constraint, derived using extreme value theory, effectively controls the occurrence of extreme AoI violations. 
The simulations show an improved AoI tail distribution across different VUE densities (Fig. 6 and Fig. 7). 4. With decreasing vehicle speed, the CCDF tail distribution of AoI improves (Fig. 8). Reduced speed shortens the transmission distance, decreasing V2V link path loss. This lower path loss, combined with less restrictive VUE transmission power limits, increases the V2V link transmission rate. As the beacon transmission rate increases, the beacon backlog is reduced, and the probability of exceeding a fixed AoI threshold decreases, ensuring the freshness of V2V beacon transmissions. 5. A comparison of curves under identical beacon arrival rates (Fig. 9) reveals that the worst-case AoI consistently increases as the beacon arrival rate rises. At low beacon arrival rates, the average AoI is high; however, once the V2V beacon queue accumulates beyond a certain threshold, further increases in the update arrival rate also raise the average AoI. In summary, the proposed scheme optimizes both the AoI tail distribution and the QoE for V2I users.  Conclusions  This paper investigates resource allocation and power control in vehicular network communication scenarios. By jointly considering the transmission reliability and status update timeliness constraints of V2V links, expressed through a Signal-to-Interference-plus-Noise Ratio (SINR) threshold and an AoI outage probability threshold, the proposed strategy ensures both link reliability and information freshness. An extreme event control framework is applied to minimize the probability of extreme AoI outage events in V2V links, ensuring the timeliness of transmitted information and meeting service requirements. The Lyapunov optimization method is then used to transform the original problem, yielding the optimal transmission power for both V2I and V2V links. Additionally, a GA-PSO-based three-dimensional matching algorithm is developed to determine the optimal spectrum sharing scheme among V2I links, V2V links, and subchannels. Numerical results demonstrate that the proposed scheme optimizes the AoI tail distribution while enhancing the QoE for all V2I users.
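To make the drift-plus-penalty step concrete, the sketch below shows how a per-slot V2V transmit power could be selected by trading a beacon-backlog (AoI-related) virtual queue against a V2I penalty. It is only a hedged illustration of the Lyapunov technique named above: the single-link setting, the interference-based penalty proxy standing in for the paper's QoE expression, and all gains, bandwidths, slot lengths, and the weight V are assumed values.

```python
import numpy as np

def drift_plus_penalty_power(queue_bits, arrival_bits, g_v2v, g_interf,
                             noise=1e-13, bw=1e6, slot=1e-3,
                             p_grid=np.linspace(0.0, 0.2, 41), V=1e3):
    """Choose the V2V transmit power for one slot by minimizing drift plus penalty.

    queue_bits   current beacon backlog (the AoI-equivalent virtual queue), in bits
    arrival_bits beacon bits arriving in this slot
    g_v2v        channel gain of the V2V link
    g_interf     gain of the interference path from the V2V transmitter to the V2I receiver
    """
    best_p, best_obj = 0.0, np.inf
    for p in p_grid:
        served = slot * bw * np.log2(1.0 + p * g_v2v / noise)       # bits drained from the queue
        penalty = slot * bw * np.log2(1.0 + p * g_interf / noise)   # illustrative V2I degradation proxy
        # Lyapunov drift term Q*(A - R) pushes the backlog down; the V-weighted
        # penalty discourages powers that hurt the V2I link.
        obj = queue_bits * (arrival_bits - served) + V * penalty
        if obj < best_obj:
            best_obj, best_p = obj, p
    served = slot * bw * np.log2(1.0 + best_p * g_v2v / noise)
    new_queue = max(queue_bits + arrival_bits - served, 0.0)
    return best_p, new_queue

# Example slot: a 200-bit beacon arrives while 1 kbit is already backlogged.
p_opt, q_next = drift_plus_penalty_power(queue_bits=1000.0, arrival_bits=200.0,
                                         g_v2v=1e-7, g_interf=1e-9)
print(f"chosen power {p_opt:.3f} W, remaining backlog {q_next:.0f} bits")
```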
Radar, Navigation and Array Signal Processing
Adaptive Beamforming Based on Dual Convolutional Autoencoder
JIANG Yilin, LI Shuai, ZHENG Pei, TANG Yuanbo
2025, 47(2): 510-518. doi: 10.11999/JEIT240486
Abstract:
  Objective  Most traditional beamforming techniques and adaptive beamforming methods rely on reference signals. These methods require prior knowledge of the signal frequency and Direction of Arrival (DOA) at the array for beamforming. However, in low Signal-to-Noise Ratio (SNR) environments, obtaining the frequency and DOA of the incident signals is extremely challenging. This difficulty leads to significant performance degradation in reference-signal-based beamforming, limiting its applicability in tasks such as electronic reconnaissance and electronic countermeasures in low SNR conditions. This paper addresses the challenge of enabling antenna arrays to perform adaptive beamforming for incident signals with unknown frequencies and DOAs in low-SNR environments.  Methods  This paper proposes a Dual Convolutional AutoEncoder-Adaptive Beamforming (DCAE-ABF) method for blind reception. The approach leverages dual Convolutional Autoencoders (CAEs) to extract features from both the array-received signal and the radiation source signal, utilizing extensive spatial-domain statistical information with joint time-frequency domain constraints. A Deep Neural Network (DNN) connects the feature encodings from the two CAEs to construct the DCAE network. This method enables adaptive beamforming in low SNR environments, even when the incident signal’s frequency and DOA are unknown, facilitating blind reception.  Results and Discussions  Simulation results demonstrate that the proposed DCAE-ABF method can rapidly and accurately adjust the beam direction for incident signals with unknown frequencies and directions of arrival in a low SNR environment, effectively orienting the beam towards the incident signals for optimal reception. This method improves the output signal’s SNR, with the SNR gain significantly exceeding that of traditional beamforming techniques (Fig. 4, Fig. 6). Furthermore, the SNR gain achieved by this method remains stable even when the frequency and angle of the incident signal vary (Fig. 5).  Conclusions  This paper presents an adaptive beamforming method based on dual convolutional autoencoders. The method outperforms the other three approaches discussed in this study when applied to incident signals with unknown directions of arrival in low SNR environments. Even when the DOA is unknown, the method effectively utilizes the spatial information accumulated during autoencoder training. It can extract features from the array signals and adaptively form beams directed at the incident signals, achieving optimal reception. This approach enables blind adaptive beamforming for signals with unknown frequencies and directions of arrival, significantly improving the output SNR for the incident signals.
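A rough sense of the dual-autoencoder structure can be given with a small PyTorch sketch: one CAE encodes the array snapshot, another the source waveform, and a DNN links the two feature encodings so the source decoder can reconstruct the signal. Layer sizes, the 256-sample snapshot length, and the real/imaginary channel packing are illustrative assumptions, and the training losses that tie the two CAEs together are omitted; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """A small 1-D convolutional autoencoder (two stride-2 encoder stages)."""
    def __init__(self, in_ch, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, latent, kernel_size=9, stride=2, padding=4), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(latent, 16, kernel_size=9, stride=2, padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, in_ch, kernel_size=9, stride=2, padding=4, output_padding=1),
        )

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class DCAEBeamformer(nn.Module):
    """Array-side and source-side CAEs linked by a DNN on their latent codes."""
    def __init__(self, n_antennas=8, latent=32, snap_len=256):
        super().__init__()
        code_len = snap_len // 4                        # two stride-2 stages shrink time by 4
        self.array_cae = CAE(in_ch=2 * n_antennas, latent=latent)   # real/imag per antenna
        self.source_cae = CAE(in_ch=2, latent=latent)                # real/imag of the source
        self.link = nn.Sequential(                      # DNN bridging the two feature encodings
            nn.Flatten(),
            nn.Linear(latent * code_len, 256), nn.ReLU(),
            nn.Linear(256, latent * code_len),
        )

    def forward(self, array_snapshot):                  # (batch, 2*n_antennas, snap_len)
        _, z_array = self.array_cae(array_snapshot)     # array-side feature encoding
        z_source = self.link(z_array).view_as(z_array)  # map array code to source code
        return self.source_cae.dec(z_source)            # estimated source waveform (batch, 2, snap_len)

# Example: recover a waveform estimate from one noisy 8-element array snapshot.
model = DCAEBeamformer()
x = torch.randn(1, 16, 256)
print(model(x).shape)   # torch.Size([1, 2, 256])
```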
Cryptography and Network Information Security
Adaptive Clustering Center Selection: A Privacy Utility Balancing Method for Federated Learning
NING Bo, NING Yiming, YANG Chao, ZHOU Xin, LI Guanyu, MA Qian
2025, 47(2): 519-529. doi: 10.11999/JEIT240414
Abstract:
  Objective  Differential privacy, grounded in a rigorous statistical model, is widely applied in federated learning. The common approach integrates privacy protection by perturbing parameters during local model training and global model aggregation to safeguard user privacy while maintaining model performance. A key challenge is minimizing performance degradation while ensuring strong privacy protection. Currently, an issue arises in early-stage training, where data gradient directions are highly dispersed. Directly clustering and processing the initial gradients at this stage can reduce the accuracy of the global model.  Methods  To address this issue, this study introduces a differential privacy mechanism in federated learning to protect individual privacy while clustering gradient information from multiple data owners. During gradient clustering, the number of clustering centers is dynamically adjusted based on training epochs, with the rate of change in clusters aligned with the model training process. In the early stages, higher noise levels are introduced to enhance privacy protection. As the model converges, noise is gradually reduced to improve learning of the true data distribution.  Results and Discussions  The first set of experimental results (Fig. 3) shows that different fixed numbers of cluster centers lead to varying rates of change in training accuracy during the early and late stages of the training cycle. This suggests that reducing the number of cluster centers as training progresses benefits model performance, and the piecewise function used to schedule the number of cluster centers is selected based on these findings. The second set of experiments (Fig. 4) indicates that among four sets of model performance comparisons, the proposed method achieves the highest accuracy in the later stages of training as the number of rounds increases. This demonstrates that adjusting the number of cluster centers during training has a measurable effect. As model training concludes, gradient directions tend to converge, and reducing the number of cluster centers improves accuracy. The performance comparison of the three models (Table 2) further shows that the proposed method outperforms the others in most cases.  Conclusions  Comparative experiments on four publicly available datasets demonstrate that the proposed algorithm outperforms baseline methods in model performance after incorporating adaptive clustering center selection. Additionally, it ensures privacy protection for sensitive data while maintaining a more stable training process. The improved clustering strategy better aligns with the actual training dynamics, validating the effectiveness of this approach.
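The adaptive schedule can be pictured with the sketch below: the number of gradient-cluster centers shrinks in stages as rounds progress, while the Gaussian noise added for differential privacy decays as the model converges. The stage boundaries, noise scales, clipping bound, and the toy k-means are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def num_clusters(round_idx, total_rounds):
    """Piecewise schedule: many centers early (dispersed gradients), few late."""
    frac = round_idx / total_rounds
    if frac < 0.3:
        return 8
    elif frac < 0.7:
        return 4
    return 2

def noise_scale(round_idx, total_rounds, sigma_max=1.0, sigma_min=0.1):
    """Linearly decay the DP noise multiplier from sigma_max to sigma_min."""
    return sigma_max - (sigma_max - sigma_min) * round_idx / total_rounds

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means for illustration only."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def aggregate_round(client_grads, round_idx, total_rounds, clip=1.0, seed=0):
    """Cluster clipped client gradients, perturb the centers, and combine them."""
    rng = np.random.default_rng(seed + round_idx)
    G = np.stack(client_grads)
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    G = G * np.minimum(1.0, clip / np.maximum(norms, 1e-12))       # per-client clipping
    k = min(num_clusters(round_idx, total_rounds), len(G))
    centers, labels = kmeans(G, k, seed=seed)
    sigma = noise_scale(round_idx, total_rounds)
    noisy_centers = centers + rng.normal(0.0, sigma * clip, size=centers.shape)
    weights = np.bincount(labels, minlength=k) / len(G)            # weight centers by cluster size
    return (weights[:, None] * noisy_centers).sum(axis=0)          # aggregated update

# Example: 12 clients with 10-dimensional gradients, round 5 of 100.
grads = [np.random.default_rng(i).standard_normal(10) for i in range(12)]
print(aggregate_round(grads, round_idx=5, total_rounds=100).shape)   # (10,)
```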
Image and Intelligent Information Processing
Light Field Angular Reconstruction Based on Template Alignment and Multi-stage Feature Learning
YU Mei, ZHOU Tao, CHEN Yeyao, JIANG Zhidi, LUO Ting, JIANG Gangyi
2025, 47(2): 530-540. doi: 10.11999/JEIT240481
Abstract:
  Objective  By placing a micro-lens array between the main lens and imaging sensor, a light field camera captures both intensity and directional information of light in a scene. However, due to sensor size, dense spatial sampling results in sparse angular sampling during light field imaging. Consequently, angular super-resolution reconstruction of Light Field Images (LFIs) is essential. Existing deep learning-based LFI angular super-resolution reconstruction typically achieves dense LFIs through two approaches. The direct generation approach models the correlation between spatial and angular information from sparse LFIs and then upsamples along the angular dimension to reconstruct the light field. The indirect approach, on the other hand, generates intermediate outputs, reconstructing LFIs through operations on these outputs and the inputs. LFI coding methods based on sparse sampling generally select partial Sub Aperture Images (SAIs) for compression and transmission, using angular super-resolution to reconstruct the LFI at the decoder. In LFI scalable coding, the SAIs are divided into multiple viewpoint layers, some of which are selectively transmitted based on bit rate allocation, while the remaining layers are reconstructed at the decoder. Although existing deep learning-based angular super-resolution methods yield promising results, they lack flexibility and generalizability across different numbers and positions of reference SAIs. This limits their ability to reconstruct SAIs from arbitrary viewpoints, making them unsuitable for LFI scalable coding. To address this, a Light Field Angular Reconstruction method based on Template Alignment and multi-stage Feature learning (LFAR-TAF) is proposed, capable of handling different angular sparse templates with a single network model.  Methods  The process involves alignment, Micro-Lens Array Image (MLAI) feature extraction, sub-aperture level feature fusion, feature mapping to the target angular position, and SAI synthesis at the target angular position. First, the different viewpoint layers used in LFI scalable encoding are treated as different representations of the MLAI, referred to as light field sparse templates. To minimize discrepancies between these sparse templates and reduce the complexity of fitting the single network model, bilinear interpolation is employed to align the templates and generate corresponding MLAIs. The MLAI Feature Learning (MLAIFL) module then uses a Residual Dense Block (RDB) to extract preliminary features from the MLAIs, thereby mitigating the differences introduced by bilinear interpolation. Since the MLAI feature extraction process may partially disrupt the angular consistency of the LFI, a conversion mechanism from MLAI features to sub-aperture features is devised, incorporating an SAI-level Feature fusion (SAIF) module. In this step, the input MLAI features are reorganized along the channel dimension to align with the SAI dimension. Three 1×1 convolutions and two agent attention mechanisms are then employed for progressive fusion, supported by residual connections to accelerate convergence. In the feature mapping module, the extracted SAI features are mapped and adjusted to the target angular position based on the given target angular coordinates. Specifically, the SAI features are expanded in dimension using a recombination operator to match the spatial dimensions of the input features, and are concatenated with the input features. 
The concatenated target angular information is then fused with the light field features using a 1×1 convolution and RDB. The fused features are subsequently input into two RDBs to generate intermediate convolution kernel weights and bias maps. In the SAI synthesis module for the target angular position, the common viewpoints of different sparse templates serve as reference SAIs for the indirect synthesis method, ensuring the stability of the proposed approach. Using non-shared weight convolution kernels of the same dimension, the reference SAIs are convolved, and the preliminary results are combined with the generated bias map to synthesize the target SAI with enhanced detail.  Results and Discussions  The performance of LFAR-TAF is evaluated using two publicly available natural scene LFI datasets: the STFLytro dataset and Kalantari et al.’s dataset. To ensure non-overlapping training and testing sets, the same partitioning method as in current advanced approaches for natural scene LFIs is adopted. Specifically, 100 natural scene LFIs (100 Scenes) from Kalantari et al.’s dataset are used for training, while the test set consists of 30 LFIs (30 Scenes) from Kalantari et al.’s dataset, as well as 15 reflective scenes and 25 occlusion scenes from the STFLytro dataset. LFAR-TAF is compared with six angular super-resolution reconstruction methods (ShearedEPI, Yeung et al.’s method, LFASR-geo, FS-GAF, DistgASR, and IRVAE) using PSNR and SSIM as objective quality metrics for angular reconstruction from 3×3 to 7×7. Experimental results demonstrate that LFAR-TAF achieves the highest objective quality scores across all three test datasets. Notably, the proposed method is capable of reconstructing SAIs at any viewpoint using either five reference SAIs or 3×3 reference SAIs, after training on angular reconstruction tasks from 3×3 to 7×7. Subjective visual comparisons further show that LFAR-TAF effectively restores color and texture details of the target SAI from the reference SAIs. Ablation experiments reveal that removing either the MLAIFL or SAIF module results in decreased objective quality scores on the three test datasets, with the loss being more pronounced when the MLAIFL module is omitted. This highlights the importance of MLAI feature learning in modeling the spatial and angular correlations of LFIs, while the SAIF module improves the conversion from MLAI features to sub-aperture features. Additionally, coding experiments are conducted to assess the practical performance of the proposed method in LFI coding. Two angular sparse templates (five SAIs and 3×3 SAIs) are tested on four scenarios from the EPFL dataset. The results show that encoding five SAIs achieves high coding efficiency at lower bit rates, while encoding nine SAIs from the 3×3 sparse template provides better performance at higher bit rates. These findings suggest that, to improve LFI scalable coding compression efficiency, different sparse templates can be selected based on the bit rate, and LFAR-TAF demonstrates stable reconstruction capabilities for various sparse templates in a single training process.  Conclusions  The proposed LFAR-TAF effectively handles different sparse templates with a single network model, enabling the flexible reconstruction of SAIs at any viewpoint by referencing SAIs with varying numbers and positions. This flexibility is particularly beneficial for LFI scalable coding. 
Moreover, the designed training approach can be applied to other LFI angular super-resolution methods, enhancing their ability to handle diverse sparse templates.
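The template-alignment step can be illustrated with a small sketch that expands a sparse 3×3 set of reference SAIs to a full 7×7 angular grid by bilinear interpolation over the angular coordinates, producing the dense (if blurry) starting point from which the MLAI features are then learned. The grid size and reference positions are assumptions for illustration, and the subsequent network stages are omitted.

```python
import numpy as np

def align_sparse_template(sais, known_uv=(0, 3, 6), grid=7):
    """sais: dict {(u, v): HxWxC array} of reference SAIs at known angular positions."""
    known = np.array(sorted(known_uv))            # reference coordinates along both angular axes
    h, w, c = next(iter(sais.values())).shape
    dense = np.zeros((grid, grid, h, w, c), dtype=np.float32)
    for u in range(grid):
        for v in range(grid):
            # Locate the surrounding reference views along each angular axis.
            u0 = known[np.searchsorted(known, u, side="right") - 1]
            u1 = known[min(np.searchsorted(known, u), len(known) - 1)]
            v0 = known[np.searchsorted(known, v, side="right") - 1]
            v1 = known[min(np.searchsorted(known, v), len(known) - 1)]
            au = 0.0 if u1 == u0 else (u - u0) / (u1 - u0)
            av = 0.0 if v1 == v0 else (v - v0) / (v1 - v0)
            # Bilinear blend of the four nearest reference SAIs.
            dense[u, v] = ((1 - au) * (1 - av) * sais[(u0, v0)] +
                           (1 - au) * av       * sais[(u0, v1)] +
                           au       * (1 - av) * sais[(u1, v0)] +
                           au       * av       * sais[(u1, v1)])
    return dense   # dense[u, v] approximates the SAI at angular position (u, v)

# Example: expand a 3x3 template of random 32x32 RGB views to a full 7x7 grid.
refs = {(u, v): np.random.rand(32, 32, 3) for u in (0, 3, 6) for v in (0, 3, 6)}
print(align_sparse_template(refs).shape)   # (7, 7, 32, 32, 3)
```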
Optimizing Age of Information in LoRa Networks via Deep Reinforcement Learning
CHENG Kefei, CHEN Caidie, LUO Jia, CHEN Qianbin
2025, 47(2): 541-550. doi: 10.11999/JEIT240404
Abstract:
Age of Information (AoI) quantifies information freshness, which is critical for time-sensitive Internet of Things (IoT) applications. This paper investigates AoI optimization in a LoRa network under the Slotted Aloha protocol in an intelligent transportation environment. A system model is established to characterize transmission collisions and packet waiting times. Analytical results indicate that in LoRa uplink transmission, as the number of packets increases, AoI is primarily influenced by packet collisions. To address the challenge of a large action space hindering effective solutions, this study maps the continuous action space to a discrete action space and employs the Soft Actor-Critic (SAC) algorithm for AoI optimization. Simulation results demonstrate that the SAC algorithm outperforms conventional algorithms and traditional deep reinforcement learning approaches, effectively reducing the network’s average AoI.  Objective  With the rapid development of intelligent transportation systems, ensuring the real-time availability and accuracy of traffic data has become essential, particularly in transmission systems for traffic monitoring cameras and related equipment. Long Range (LoRa) networks have emerged as a key technology for sensor connectivity in intelligent transportation due to their advantages of low power consumption, wide coverage, and long-distance communication. However, in urban environments, LoRa networks are prone to frequent data collisions when multiple devices transmit simultaneously, which affects information timeliness and, consequently, the effectiveness of traffic management decisions. This study focuses on optimizing data packet timeliness in LoRa networks to enhance communication efficiency. Specifically, it aims to improve AoI under the Slotted Aloha protocol by analyzing the effects of packet collisions and over-the-air transmission time. Based on this analysis, an optimization method using deep reinforcement learning is proposed, employing the SAC algorithm to minimize AoI. The goal is to achieve lower latency and a higher data transmission success rate in an intelligent transportation environment with frequent data transmissions, thereby improving overall system performance and ensuring real-time information availability to meet the freshness requirements of intelligent transportation systems.  Methods  To address the requirements for information freshness in intelligent transportation scenarios, this study investigates the optimization of packet AoI in LoRa networks under the Slotted Aloha protocol. A system model is established to analyze packet collisions and over-the-air transmission time, providing theoretical support for enhancing information transmission efficiency. Given the Markovian nature of AoI evolution, the optimization problem is formulated as a Markov Decision Process (MDP) and solved using the SAC algorithm in deep reinforcement learning.  Results and Discussions  The study examines AoI variations during collisions (Fig. 2) and develops a collision model for data packet transmission (Fig. 4). Simulation results indicate that the SAC algorithm outperforms the Temporal Difference (TD) algorithm and conventional methods (Fig. 6). As the number of terminals increases, the system’s average AoI also increases (Fig. 7). Additionally, the variations in average AoI under different time slots for the SAC and TD3 algorithms are analyzed (Fig. 8).  
Conclusions  Given the limited research on AoI in LoRa networks, this study examines the AoI optimization problem in LoRa uplink packet transmission within an intelligent traffic management environment and proposes a packet collision model under the Slotted Aloha protocol. The greedy algorithm and SAC algorithm are employed for AoI optimization. Simulation results demonstrate that the greedy algorithm outperforms conventional deep reinforcement learning algorithms but remains less effective than the SAC algorithm. The SAC algorithm significantly improves AoI optimization in LoRa networks. However, this study focuses solely on AoI optimization without considering energy consumption and packet loss rate. Future research should explore the trade-offs between energy efficiency, packet loss, and AoI optimization to minimize energy consumption and data loss. Additionally, this study does not address heterogeneous network scenarios. In environments where LoRa networks coexist with other communication technologies (e.g., Wi-Fi, Bluetooth, NB-IoT), challenges related to interoperability, data consistency, and network management arise. Investigating AoI optimization in heterogeneous transmission environments could further enhance the performance and reliability of LoRa networks in complex applications such as intelligent traffic management.
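The collision-driven AoI behaviour underlying the model can be reproduced with a short simulation sketch: several devices transmit in each slot with a common probability, a packet is delivered only when exactly one device transmits, and every device's information at the gateway otherwise keeps aging. The transmit-probability policy and parameter values are illustrative; over-the-air time, capture effects, and the SAC policy itself are not modelled.

```python
import numpy as np

def simulate_average_aoi(num_devices=20, tx_prob=0.05, slots=50_000, seed=0):
    """Average per-device AoI (in slots) under Slotted Aloha with a fixed transmit probability."""
    rng = np.random.default_rng(seed)
    aoi = np.ones(num_devices)                     # per-device AoI at the gateway, in slots
    total = 0.0
    for _ in range(slots):
        transmitting = rng.random(num_devices) < tx_prob
        if transmitting.sum() == 1:                # success only when there is no collision
            aoi[np.argmax(transmitting)] = 0.0     # reset: the delivered packet is freshly generated
        aoi += 1.0                                 # every device's information ages by one slot
        total += aoi.mean()
    return total / slots

if __name__ == "__main__":
    # The classical Slotted Aloha optimum is tx_prob near 1/N; probabilities away from it age faster.
    for p in (0.01, 1 / 20, 0.2):
        print(f"p = {p:.3f}: average AoI = {simulate_average_aoi(tx_prob=p):.1f} slots")
```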
A Self-distillation Object Segmentation Method Based on Transformer Feature Pyramid
CHEN Lei, YANG Jibin, CAO Tieyong, ZHENG Yunfei, WANG Yang, ZHANG Bo, LIN Zhenhua, LI Wenbin
2025, 47(2): 551-560. doi: 10.11999/JEIT240735
Abstract:
  Objective  Neural networks that demonstrate superior performance often necessitate complex architectures and substantial computational resources, thereby limiting their practical applications. Enhancing model performance without increasing network parameters has emerged as a significant area of research. Self-distillation has been recognized as an effective approach for simplifying models while simultaneously improving performance. Presently, research on self-distillation predominantly centers on models with Convolutional Neural Network (CNN) architectures, with less emphasis on Transformer-based models. It has been observed that due to their structural differences, different network models frequently extract varied semantic information for the same spatial locations. Consequently, self-distillation methods tailored to specific network architectures may not be directly applicable to other structures; those designed for CNNs are particularly challenging to adapt for Transformers. To address this gap, a self-distillation method for object segmentation is proposed, leveraging a Transformer feature pyramid to improve model performance without increasing network parameters.  Methods  First, a pixel-wise object segmentation model is developed utilizing the Swin Transformer as the backbone network. In this model, the Swin Transformer produces four layers of features. Each layer of mapped features is subjected to Convolution-Batch normalization-ReLU (CBR) processing to ensure that the backbone features maintain a uniform channel size. Subsequently, all backbone features are concatenated along the channel dimension, after which convolution operations are performed to yield pixel-wise feature representations. In the next phase, an auxiliary branch is designed that integrates Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), Adjacent Feature Fusion Modules (AFFM), and a scoring module, facilitating self-distillation to guide the main network. The specific architecture is depicted. The self-distillation learning framework consists of four sub-branches, labeled FZ1 to FZ4, alongside a main branch labeled FZ0. Each auxiliary sub-branch is connected to different layers of the backbone network to extract layer-specific features and produce a Knowledge Representation Header (KRH) that serves as the segmentation result. The main branch is linked to the fully connected layer to extract fused features and optimize the mixed features from various layers of the backbone network. Finally, a top-down learning strategy is employed to guide the model’s training, ensuring consistency in self-distillation. The KRH0 derived from the main branch FZ0 integrates the knowledge KRH1-KRH4 obtained from each sub-branch FZ1–FZ4, steering the overall optimization direction for self-distillation learning. Consequently, the main branch and sub-branches can be regarded as teacher and student entities, respectively, forming four distillation pairs, with FZ0 directing FZ1–FZ4. This top-down distillation strategy leverages the main branch to instruct the sub-branches to learn independently, thereby enabling the sub-branches to acquire more discriminative features from the main branch while maintaining consistency in the optimization direction between the sub-branches and the main branch.  Results and Discussions  The results quantitatively demonstrate the segmentation performance of the proposed method. 
The data indicate that the proposed method consistently achieves superior segmentation results across all four datasets. On average, the metric Fβ of the proposed method exceeds that of the second-best method, Transformer Knowledge Distillation (TKD), by 1.18%. Additionally, the mean Intersection over Union (mIoU) metric of the proposed method is 0.86% higher than that of the second-best method, Target-Aware Transformer (TAT). These results demonstrate that the proposed method effectively addresses the challenge of camouflaged target segmentation. Notably, on the Camouflage Object Detection (COD) dataset, the proposed method improves Fβ by about 2.29% compared with TKD, while achieving an enhancement of 1.72% in mIoU relative to TAT. Among CNN methods, Poolnet+ (POOL+) attains the highest average Fβ, yet it falls short of the proposed method by 5.05%. This difference can be attributed to the Transformer’s capability to overcome the restricted receptive field inherent in CNNs, thereby extracting a greater amount of semantic information from images. The results also show that the self-distillation method is similarly effective within the Transformer framework, significantly enhancing the segmentation performance of the Transformer model. The proposed method outperforms other self-distillation strategies, achieving the best segmentation results across all four datasets. When compared with the baseline model, the average metrics for Fβ and mIoU exhibit increases of 2.42% and 3.54%, respectively.  Conclusions  The proposed self-distillation algorithm enhances object segmentation performance and demonstrates the efficacy of self-distillation within the Transformer architecture.
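The top-down distillation objective can be sketched as follows: the main branch's softened prediction supervises the four auxiliary branches, while every branch is also trained against the ground-truth mask. The temperature, loss weights, and the pixel-wise Bernoulli KL term are illustrative choices, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(main_logits, aux_logits_list, target_mask,
                           temperature=2.0, alpha=0.5, eps=1e-6):
    """main_logits and each aux tensor: (B, 1, H, W); target_mask: (B, 1, H, W) in {0, 1}."""
    # Every branch is supervised by the ground-truth mask.
    seg_loss = F.binary_cross_entropy_with_logits(main_logits, target_mask)
    # The main branch's softened prediction acts as the teacher; detaching it means the
    # distillation term only pulls the sub-branches toward the fused representation.
    teacher = torch.sigmoid(main_logits.detach() / temperature).clamp(eps, 1 - eps)
    distill_loss = main_logits.new_zeros(())
    for aux_logits in aux_logits_list:
        seg_loss = seg_loss + F.binary_cross_entropy_with_logits(aux_logits, target_mask)
        student = torch.sigmoid(aux_logits / temperature).clamp(eps, 1 - eps)
        # Pixel-wise KL divergence between the two Bernoulli prediction maps.
        kl = (teacher * torch.log(teacher / student) +
              (1 - teacher) * torch.log((1 - teacher) / (1 - student)))
        distill_loss = distill_loss + kl.mean()
    return seg_loss + alpha * distill_loss

# Example with dummy tensors: one main output and four auxiliary branch outputs.
B, H, W = 2, 64, 64
main = torch.randn(B, 1, H, W)
auxs = [torch.randn(B, 1, H, W) for _ in range(4)]
mask = (torch.rand(B, 1, H, W) > 0.5).float()
print(self_distillation_loss(main, auxs, mask))
```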
Low-Rank Regularized Joint Sparsity Modeling for Image Denoising
ZHA Zhiyuan, YUAN Xin, ZHANG Jiachao, ZHU Ce
2025, 47(2): 561-572. doi: 10.11999/JEIT240324
Abstract:
  Objective  Image denoising aims to reduce unwanted noise in images, which has been a long-standing issue in imaging science. Noise significantly degrades image quality, affecting their use in applications such as medical imaging, remote sensing, and image reconstruction. Over recent decades, various image prior models have been developed to address this problem, focusing on different image characteristics. These models, utilizing priors like sparsity, Low-Rankness (LR), and Nonlocal Self-Similarity (NSS), have proven highly effective. Nonlocal sparse representation models, including Joint Sparsity (JS), LR, and Group Sparse Representation (GSR), effectively leverage the NSS property of images. They capture the structural similarity of image patches, even when spatially distant. Popular dictionary-based JS algorithms use a relaxed convex penalty to avoid NP-hard sparse coding, leading to an approximately sparse representation. However, these approximations fail to enforce LR on the image data, reducing denoising quality, especially in cases of complex noise patterns or high self-similarity. This paper proposes a novel Low-Rank Regularized Joint Sparsity (LRJS) model for image denoising, integrating the benefits of LR and JS priors. The LRJS model enhances denoising performance, particularly where traditional methods underperform. By exploiting the NSS in images, the LRJS model better preserves fine details and structures, offering a robust solution for real-world applications.  Methods  The proposed LRJS model integrates low-rank and JS priors to enhance image denoising performance. By exploiting the NSS property of images, the LRJS model strengthens the dependency between nonlocal similar patches, improving image structure representation and noise suppression. The low-rank prior reflects the smoothness and regularity inherent in the image, whereas the JS prior captures the sparsity of the image patches. Incorporating these priors ensures a more accurate representation of the underlying clean image, enhancing denoising performance. An alternating minimization algorithm is proposed to solve this optimization problem, alternating between the low-rank and JS terms to simplify the optimization process. Additionally, an adaptive parameter adjustment strategy dynamically tunes the regularization parameters, balancing LR and sparsity throughout the optimization. The LRJS model offers an effective approach for image denoising by combining low-rank and JS priors, solved using an alternating minimization framework with adaptive parameter tuning.  Results and Discussions  Experimental results on two image denoising tasks, Gaussian noise removal (Fig. 4, Fig. 5, Table 1, Table 2) and Poisson denoising (Fig. 6, Table 3), demonstrate that the proposed LRJS method outperforms several popular and state-of-the-art denoising algorithms in both objective metrics and visual perceptual quality, particularly for images with high self-similarity. In Gaussian noise removal, the LRJS method achieves significant improvements, especially with highly self-similar images. This improvement results from LRJS effectively leveraging the NSS prior, which strengthens the dependencies among similar patches, leading to better noise suppression while preserving image details. Compared with other methods, LRJS demonstrates greater robustness, particularly in retaining fine details and structures often lost with traditional denoising techniques. For Poisson denoising, the LRJS method also yields notable performance gains. 
It better manages the complexity of Poisson noise compared with other approaches, highlighting its versatility and robustness across different noise types. The visual quality of the denoised images shows fewer artifacts and more accurate recovery of details. Quantitative results in terms of PSNR and SSIM further validate the effectiveness of LRJS, positioning it as a competitive solution in image denoising. Overall, these experimental findings confirm that LRJS offers a reliable and effective approach, particularly for images with high self-similarity and complex noise models.  Conclusions  The LRJS model proposed in this paper improves image denoising performance by combining LR and JS priors. This dual-prior framework better captures the underlying image structure while suppressing noise, particularly benefiting images with high self-similarity. Experimental results demonstrate that the LRJS method not only outperforms traditional denoising techniques but also exceeds many state-of-the-art algorithms in both objective metrics and visual quality. By leveraging the NSS property of image patches, the LRJS model enhances the dependencies among similar patches, making it particularly effective for tasks requiring the preservation of fine details and structures. The LRJS method significantly enhances the quality of denoised images, especially in complex noise scenarios such as Gaussian and Poisson noise. Its robust alternating minimization algorithm with adaptive parameter adjustment ensures effective optimization, contributing to superior performance. The results further highlight the LRJS model’s ability to preserve image edges, textures, and other fine details often degraded in other denoising algorithms. Compared with existing techniques, the LRJS method demonstrates superior performance in handling high noise levels while maintaining image clarity and detail, making it a promising tool for applications such as medical imaging, remote sensing, and image restoration. Future research could focus on optimizing the model for more complex noise environments, such as mixed noise or real-world noise that is challenging to model. Additionally, exploring more efficient algorithms and integrating advanced techniques, such as deep learning, may further improve the LRJS model’s capability and applicability to diverse denoising tasks.
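The alternating idea can be illustrated on a single group of similar patches: a low-rank proximal step (singular-value soft-thresholding) is alternated with a joint-sparsity proximal step (row-wise L2,1 shrinkage), each applied to a data-consistent estimate. This is only a hedged sketch of combining the two priors; the paper's dictionary-based formulation, adaptive parameter tuning, and exact splitting are not reproduced, and the thresholds below are arbitrary.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def row_shrink(M, tau):
    """Row-wise L2,1 shrinkage: rows shared by all similar patches are kept or suppressed jointly."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def denoise_group(Y, lam_lr=0.8, lam_js=0.4, iters=10):
    """Y: (patch_dim, num_similar_patches) noisy group; returns the denoised group."""
    X, Z = Y.copy(), Y.copy()
    for _ in range(iters):
        # Low-rank step: pull the estimate toward a low-rank version of the averaged data.
        X = svt(0.5 * (Y + Z), lam_lr)
        # Joint-sparsity step: shrink the rows of the data-consistent low-rank estimate.
        Z = row_shrink(0.5 * (Y + X), lam_js)
    return 0.5 * (X + Z)

# Example: denoise a synthetic rank-1 group of 32-dim patches with 40 similar members.
rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 1)) @ rng.standard_normal((1, 40))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = denoise_group(noisy)
print("relative error:", np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
```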