Articles in press have been peer-reviewed and accepted; they have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Low-Power Multi-Node Radiation-Hardened SRAM Design for Aerospace Applications
BAI Na, LI Gang, XU Yaohua, WANG Yi
 doi: 10.11999/JEIT240294
Abstract:
  Objective  As space exploration advances, the requirement for high-density memory in spacecraft escalates. However, SRAMs employed in aerospace applications are susceptible to Single-Event Upsets (SEUs) and Multiple-Node Upsets (MNUs) caused by high-energy particle bombardment, compromising the reliability of spacecraft systems. Hence, it is essential to engineer an SRAM design with superior radiation resistance, reduced power consumption, and enhanced stability to fulfill the rigorous demands of aerospace applications.  Methods  This paper proposes a 16T SRAM cell, designated MNRS16T, featuring three sensitive nodes and a MOS transistor stacking structure. In this configuration, the upper tier of the stack employs a cross-coupling technique to enhance the pull-up drive capability while diminishing that of the pull-down structure, thus balancing the driving abilities of both. The fundamental operations of the MNRS16T include write, read, and hold. For the write operation, word lines WL and WWL are set to VDD, and specific MOS transistors are controlled to input data. During the read operation, bit lines BL and BLB are precharged to VDD, and data is retrieved by sensing the voltage difference across the bit lines. In the hold operation, the word lines are grounded and the bit lines are precharged to VDD to preserve data integrity. To assess the efficacy of MNRS16T, simulations are conducted using a 65 nm CMOS process. Performance metrics, benchmarked against other SRAM cells, include read access time, write access time, Hold Static Noise Margin (HSNM), Read Static Noise Margin (RSNM), Hold Power (Hpwr), and soft error recovery capability.  Results and Discussions  MNRS16T exhibits superior performance across various metrics. In terms of read access time, MNRS16T outperforms cells such as SIS10T, SARP12T, and LWS14T, attributed to its efficient discharge path and optimal cell ratio (Fig. 4(a)). Regarding write access time, MNRS16T outperforms most counterparts. Specifically, its write access time is shorter than that of SARP12T, facilitated by the properties of the S1 node and the elimination of a lengthy feedback path (Fig. 4(b)). Concerning the hold static noise margin, MNRS16T achieves a higher HSNM than cells such as SIS10T and RSP14T, a result of the balanced pull-up and pull-down driving forces provided by the transistor stacking structure and cross-coupling method (Fig. 5). In the RSNM assessment, although MNRS16T's RSNM falls below that of LWS14T at elevated voltages, it remains superior to several others, including RH12T and RSP14T (Fig. 6). Regarding hold power, MNRS16T achieves reductions of 12.4%, 16.9%, 13.1%, and 50.1% relative to SAR14T, RSP14T, EDP12T, and RH12T, respectively, demonstrating significant energy efficiency (Fig. 8). In simulations of soft error recovery capability, MNRS16T consistently returns to its original logic state after an SEU, even when sensitive nodes receive a 120 fC charge. Additionally, 1000 Monte Carlo simulations affirm its resilience against single-node and multi-node flips under Process, Voltage, and Temperature (PVT) variations (Fig. 3, Fig. 7). In terms of physical dimensions, MNRS16T's 16 transistors necessitate a layout area of 3.3 μm × 3.5 μm, which is comparatively large. Finally, in the comprehensive performance index EQM, MNRS16T significantly outstrips the other SRAM cells, indicating its superior overall performance (Fig. 9).
Conclusions  This paper presents the design of an MNRS16T SRAM cell tailored for aerospace applications, effectively addressing SEUs and MNUs. The MNRS16T cell demonstrates reduced read and write delay times, decreased hold power, and enhanced HSNM and RSNM compared to other cells. An extensive evaluation using the EQM performance index reveals that MNRS16T exceeds other radiation-hardened SRAM cells in overall performance. Nevertheless, the relatively large area of MNRS16T represents a drawback that warrants optimization in future studies.
Secure Transmission Scheme for Reconfigurable Intelligent Surface-enabled Cooperative Simultaneous Wireless Information and Power Transfer Non-Orthogonal Multiple Access System
JI Wei, LIU Ziqing, LI Fei, LI Ting, LIANG Yan, SONG Yunchao
 doi: 10.11999/JEIT240822
Abstract:
  Objective  The Reconfigurable Intelligent Surface (RIS) is emerging as a promising technology due to its ability to provide passive beamforming gains, which can be seamlessly integrated into existing wireless networks without altering physical layer standards. The integration of RIS with other advanced technologies offers new opportunities for communication network design. In the context of future large-scale Internet of Things (IoT) systems, users are expected to have diverse requirements. These differences in structure and function lead to two distinct receiver operation modes: Power Splitting (PS) and Time Switching (TS). Furthermore, users' service needs may vary, including energy harvesting and information transmission. In practice, IoT terminals often face energy constraints. Additionally, the network typically operates in an open wireless environment, where the inherent broadcasting nature of wireless channels may introduce security vulnerabilities. To address the diverse service demands in large-scale IoT networks and ensure secure information transmission, this study proposes a RIS-enabled secure transmission scheme for a cooperative Simultaneous Wireless Information and Power Transfer Non-Orthogonal Multiple Access (SWIPT-NOMA) system.  Methods  The RIS is strategically deployed to assist transmission during both the direct and cooperative transmission stages. The goal is to maximize the secrecy rate of the strong NOMA user, subject to the information rate requirements of the weak NOMA user, the energy harvesting needs of the strong NOMA user, and the base station's minimum transmission power. To solve this multivariable-coupled, non-convex optimization problem, an alternating iterative optimization algorithm is applied. The algorithm optimizes the base station's active beamforming, the RIS's passive beam phase shift matrix in the direct transmission stage, the RIS's active beam phase shift matrix in the cooperative transmission stage, and the PS coefficient of the strong user. These parameters are iteratively adjusted until convergence is achieved.  Results and Discussions  The convergence of the algorithm is demonstrated in (Fig. 3). As the number of RIS components increases and the number of iterations grows, the secrecy rate of the strong user (U2) gradually improves until it converges. To evaluate the effectiveness of the proposed scheme, it is compared with several benchmark schemes: (1) The random PS coefficient scheme, where RIS is used in both the direct and cooperative transmission stages, and the PS coefficients for strong user U2 are randomly generated. (2) The random RIS phase shift matrix scheme, where RIS enables both transmission stages, with phase shift matrices for both stages randomly generated. (3) The SDR scheme, in which RIS is used in both transmission stages, and the phase shift matrices are optimized using the SDR method. (4) The RIS-enabled direct transmission scheme, where RIS is used only in the direct transmission stage. The impact of the number of base station antennas on the system's secrecy rate is shown in (Fig. 4), and the effect of the number of RIS components on the secrecy rate is explored in (Fig. 5). Compared to the other baseline schemes, the proposed scheme achieves a higher secrecy rate for the strong user.  
Conclusions  This paper addresses the challenge of diverse service requirements for users in future large-scale IoT networks and the security of information transmission by designing a secure transmission scheme for an RIS-enabled cooperative SWIPT-NOMA communication system. RIS assists communication in both the direct and cooperative transmission stages. The secrecy rate of the strong user is maximized while considering the information rate requirements of weak NOMA users, the energy harvesting needs of strong NOMA users, and the base station's minimum transmission power. The proposed optimization problem is a non-convex, multi-variable problem, which is difficult to solve directly. To address this, the problem is divided into several sub-problems, and the active beamforming of the base station, the passive beam phase shift matrix of the RIS in the direct transmission stage, the active beam phase shift matrix of the RIS in the cooperative transmission stage, and the power splitting coefficient of the strong user are iteratively optimized until convergence. Simulation results demonstrate that the secrecy rate of the proposed scheme outperforms that of the scheme where RIS is enabled only in the direct transmission stage. Compared to other baseline schemes, the proposed scheme further enhances the secrecy rate for strong users.
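As an illustration of the alternating (block-wise) optimization structure described in the abstract above, the following minimal Python sketch shows the iterate-until-convergence skeleton. The sub-problem "solvers", variable names, and surrogate objective are placeholder assumptions for illustration only, not the paper's actual sub-problem solutions.

import numpy as np

def solve_active_beamforming(state):      # placeholder sub-problem solvers
    state["w"] = 0.5 * (state["w"] + 1.0); return state
def solve_ris_phase_stage1(state):        # RIS phase shifts, direct transmission stage
    state["theta1"] = 0.5 * (state["theta1"] + 1.0); return state
def solve_ris_phase_stage2(state):        # RIS phase shifts, cooperative transmission stage
    state["theta2"] = 0.5 * (state["theta2"] + 1.0); return state
def solve_ps_coefficient(state):          # power-splitting coefficient of the strong user
    state["rho"] = 0.5 * (state["rho"] + 0.8); return state
def secrecy_rate(state):                  # toy surrogate objective, not the real rate expression
    return sum(state.values())

state = {"w": 0.0, "theta1": 0.0, "theta2": 0.0, "rho": 0.2}
prev, tol = -np.inf, 1e-6
for it in range(100):
    for step in (solve_active_beamforming, solve_ris_phase_stage1,
                 solve_ris_phase_stage2, solve_ps_coefficient):
        state = step(state)               # optimize one block while holding the others fixed
    cur = secrecy_rate(state)
    if abs(cur - prev) < tol:             # stop once the objective has converged
        break
    prev = cur
print(f"converged after {it + 1} iterations, surrogate objective = {cur:.4f}")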
Joint Secure Transmission and Trajectory Optimization for Reconfigurable Intelligent Surface-aided Non-Terrestrial Networks
XU Kexin, LONG Keping, LU Yang, ZHANG Haijun
 doi: 10.11999/JEIT240981
Abstract:
  Objective  The proliferation of technologies such as the Internet of Things, smart cities, and next-generation mobile communications has made Non-Terrestrial Networks (NTNs) increasingly important for global communication. Future communication systems are expected to rely heavily on NTNs to provide seamless global coverage and efficient data transmission. However, current NTNs face challenges, including limited coverage and link quality in direct satellite-to-ground user connections, as well as eavesdropping threats. To address these challenges, a system integrating Reconfigurable Intelligent Surfaces (RIS) with a twin-layer Deep Reinforcement Learning (DRL) algorithm is proposed. This approach aims to satisfy the system’s requirements for high transmission rates and enhanced security, improving the signal strength for legitimate users while facilitating real-time updates and optimization of channel state information in NTNs.  Methods  First, a RIS-aided downlink NTNs system using an Unmanned Aerial Vehicle (UAV) as a relay is established. To balance the system’s transmission rate and security requirements, the weighted sum of the satellite-to-UAV transmission rate and the secure rate of the legitimate ground user is designed as the system utility, which serves as the optimization objective. A joint optimization method based on the Twin-Twin Delayed Deep Deterministic Policy Gradient (TTD3) algorithm is then proposed. This method jointly optimizes satellite and UAV beamforming, the RIS phase shift matrix, and UAV trajectory. The algorithm divides the optimization problem into two layers for solution. The first-layer DRL optimizes satellite and UAV beamforming, as well as the RIS phase shift matrix. The second-layer DRL optimizes the UAV's trajectory based on its position, user mobility, and channel state information. The twin DRL shares the same reward function, guiding the agents in each layer to adjust their actions and explore optimal strategies, ultimately enhancing the system's utility.  Results and Discussions  (1) Compared to the Deep Deterministic Policy Gradient (DDPG), the proposed TTD3 algorithm exhibits smaller dynamic fluctuations, demonstrating greater stability and robustness (Fig. 2). (2) The UAV trajectory and user secrecy rate performance under four different schemes and algorithms show that the proposed method balances service for legitimate users. The UAV trajectory is smoother compared to that based on DDPG, and the overall user secrecy rate is also higher. This confirms that the proposed method can adapt to dynamically changing NTNs environments while improving user secrecy rates (Fig. 3, Fig. 4). (3) As the number of RIS reflecting elements increases, the degrees of freedom and precision of beamforming improve. Therefore, the overall user secrecy rates of different algorithms increase, resulting in enhanced system performance (Fig. 5).  Conclusions  This paper investigates a RIS-assisted downlink secure transmission system for NTNs, addressing the presence of eavesdropping threats. To meet the requirements of high transmission rates and security across different scenarios, the optimization objective is formulated as the weighted sum of the transmission rate from the satellite to the UAV and the secrecy rate of legitimate ground users. A TTD3-based joint optimization method for satellite and UAV beamforming, RIS phase shift matrix, and UAV trajectory is proposed. 
By adopting a twin-layer DRL structure, the beamforming and trajectory optimization subproblems are decoupled to maximize system utility. Simulation results validate the effectiveness of the proposed algorithm. Additionally, comparisons across different algorithms, RIS element counts, and schemes in high-security-demand scenarios demonstrate that the TTD3 algorithm is well-suited for dynamically changing NTNs environments and can significantly enhance system transmission performance. Future research will explore integrating emerging technologies, such as federated learning and meta-learning, to achieve distributed, low-latency policy optimization, thereby facilitating network resource optimization and interference analysis in large-scale, multi-satellite, and multi-UAV complex scenarios.
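The twin-layer structure with a shared reward described above can be sketched in a few lines of Python. The environment, utility function, and the naive hill-climbing "agents" below are placeholders standing in for the actual TD3 actor-critic updates; they only illustrate how two layers are driven by one reward signal.

import numpy as np

rng = np.random.default_rng(0)

def system_utility(beam_cfg, uav_pos):
    # hypothetical stand-in for the weighted sum of satellite-UAV rate and user secrecy rate
    return -np.sum((beam_cfg - 0.5) ** 2) - np.sum((uav_pos - 1.0) ** 2)

class ToyAgent:
    # placeholder for a TD3 agent: keeps a mean action and nudges it toward higher reward
    def __init__(self, dim):
        self.mean = rng.uniform(0, 1, dim)
    def act(self):
        return self.mean + 0.1 * rng.standard_normal(self.mean.shape)
    def update(self, action, reward, best_reward):
        if reward > best_reward:          # naive hill climbing in place of gradient updates
            self.mean = action

layer1 = ToyAgent(dim=8)   # beamforming + RIS phase shifts (first-layer DRL)
layer2 = ToyAgent(dim=2)   # UAV trajectory waypoint (second-layer DRL)
best = -np.inf
for episode in range(200):
    beams, waypoint = layer1.act(), layer2.act()
    reward = system_utility(beams, waypoint)   # shared reward guides both layers
    layer1.update(beams, reward, best)
    layer2.update(waypoint, reward, best)
    best = max(best, reward)
print("best utility found:", round(best, 4))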
Reconfigurable Intelligent Surface-Aided Joint Spatial and Code Index Modulation Communication System
CHEN Pingping, ZHANG Yunxin, DU Weiqing
 doi: 10.11999/JEIT240987
Abstract:
  Objective  The rapid growth of wireless communication traffic is driving existing communication networks toward greener and more energy-efficient solutions. Consequently, research into wireless communication systems that balance low complexity with high energy efficiency is of significant importance. Index Modulation (IM) technology, with its notable advantages in low complexity and high energy efficiency, has emerged as a promising candidate for future communication systems. Reconfigurable Intelligent Surfaces (RIS) offer benefits such as reconfigurability, simple hardware, and low energy consumption, presenting new opportunities for the development of wireless communication systems. However, traditional RIS-aided Spatial Modulation (RIS-SM) communication systems and RIS-aided Code Index Modulation (RIS-CIM) communication systems utilize the index of the receiver antenna or code to transmit additional information bits. Consequently, the data transmission rates of RIS-SM systems are improved only at the cost of increasing the number of receiver antennas. To improve the data transmission rates and energy efficiencies of RIS-SM systems, a Reconfigurable Intelligent Surface-aided Joint Space and Code Index Modulation (RIS-JSCIM) communication system is proposed in this paper.  Methods  The proposed system leverages M-ary Quadrature Amplitude Modulation (M-QAM) symbols, the spatial antenna index, and the code index to transmit information bits. The information bits transmitted by the antenna index and code index of RIS-JSCIM do not consume energy, and therefore RIS-JSCIM can achieve good energy efficiency. At the receiver, both Maximum Likelihood Detection (MLD) and low-complexity Greedy Detection (GD) algorithms are introduced. The MLD algorithm, while operating at high complexity, delivers excellent Bit Error Rate (BER) performance; conversely, the GD algorithm provides a better trade-off between complexity and BER performance. Furthermore, this paper analyzes the energy efficiency and complexity of the proposed RIS-JSCIM system, and employs Monte Carlo simulations to assess the BER performance of the scheme. Additionally, the performance metrics of the RIS-JSCIM system are compared with those of other systems. The comparative results indicate that, despite a certain increase in system complexity, the RIS-JSCIM system achieves superior energy efficiency and BER performance relative to other systems.  Results and Discussions  This paper compares the energy efficiency, system complexity, and BER performance of the RIS-JSCIM system with other systems. The comparison results indicate that, when the number of receiving antennas $N_R = 4$ and the number of Walsh codes $L = 8$, the energy efficiency of the RIS-JSCIM system is improved by 60% and 6.66% compared to the RIS-SM and RIS-CIM systems, respectively (Table 2). The complexity of the RIS-JSCIM system when employing the GD algorithm is equivalent to that of the GCIM-SM system and slightly higher than that of the RIS-CIM system (Table 3). Simulation results demonstrate that at $\mathrm{BER} = 10^{-5}$, the proposed RIS-JSCIM system achieves a performance gain of over 6 dB compared to the RIS-CIM system (Fig. 5).
As the number of RIS units $N$ increases, both the RIS-JSCIM and RIS-CIM systems exhibit significant improvements in BER performance, with the RIS-JSCIM system outperforming the RIS-CIM system at high Signal-to-Noise Ratios (SNRs). For example, when $\mathrm{BER} = 10^{-5}$ and $N = 128$, the proposed RIS-JSCIM system provides a 5 dB SNR gain over the RIS-CIM system (Fig. 6). Similarly, at high SNR, the BER performance of the RIS-JSCIM system consistently exceeds that of the RIS-SM system (Fig. 7).  Conclusions  This paper proposes a RIS-JSCIM system, which not only uses M-QAM symbols to transmit information bits but also utilizes the indices of the receiver antenna and code to convey additional information bits. Furthermore, we introduce the Maximum Likelihood Detection (MLD) algorithm and the Greedy Detection (GD) algorithm for recovering the transmitted information bits. The MLD algorithm searches through all possible combinations of receiver antenna indices, code indices, and M-QAM symbols, thereby achieving improved BER performance at the expense of increased complexity. In contrast, the GD algorithm recovers the received antenna index bits, code index bits, and M-QAM modulation bits through separate detection of antenna indices, code indices, and M-QAM demodulation, thus achieving a favorable trade-off between complexity and BER performance. The RIS-JSCIM system transmits receiver antenna index bits and code index bits without consuming energy, enabling the system to attain high energy efficiency. When the number of receiving antennas $N_R = 4$ and the number of Walsh codes $L = 8$, the energy efficiency of the RIS-JSCIM system is improved by 60% and 6.66% compared to the RIS-SM and RIS-CIM systems, respectively. Furthermore, when $\mathrm{BER} = 10^{-5}$ and $N = 128$, the proposed RIS-JSCIM system provides a 5 dB SNR gain over the RIS-CIM system.
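The separate-detection idea behind the GD algorithm can be illustrated with a small numpy sketch: the receive-antenna index, Walsh-code index, and QAM symbol are detected one after another rather than jointly. The signal model, noise level, and 4-QAM mapping below are illustrative assumptions rather than the paper's exact system model.

import numpy as np

rng = np.random.default_rng(1)
N_R, L = 4, 8                                  # receive antennas, Walsh codes
walsh = np.array([[(-1) ** bin(i & j).count("1") for j in range(L)] for i in range(L)])  # L x L Walsh matrix
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # 4-QAM alphabet

ant_idx, code_idx, sym_idx = 2, 5, 3           # transmitted indices/symbol (ground truth)
tx = np.zeros((N_R, L), dtype=complex)
tx[ant_idx, :] = qam[sym_idx] * walsh[code_idx]                   # energy only on the indexed antenna
rx = tx + 0.05 * (rng.standard_normal((N_R, L)) + 1j * rng.standard_normal((N_R, L)))

# greedy detection: antenna index by per-antenna energy, code index by correlation,
# then the QAM symbol by nearest constellation point
ant_hat = int(np.argmax(np.sum(np.abs(rx) ** 2, axis=1)))
corr = walsh @ rx[ant_hat].conj()
code_hat = int(np.argmax(np.abs(corr)))
sym_hat = int(np.argmin(np.abs(qam - rx[ant_hat] @ walsh[code_hat] / L)))
print(ant_hat == ant_idx, code_hat == code_idx, sym_hat == sym_idx)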
Skeleton-based Action Recognition with Selective Multi-scale Graph Convolutional Network
CAO Yi, LI Jie, YE Peitao, WANG Yanwen, LÜ Xianhai
 doi: 10.11999/JEIT240702
Abstract:
  Objective  Human action recognition plays a key role in computer vision and has gained significant attention due to its broad range of applications. Skeleton data, derived from human action samples, is particularly robust to variations in camera viewpoint, illumination, and background occlusion, offering advantages over depth image and video data. Recent advancements in skeleton-based action recognition using Graph Convolutional Networks (GCNs) have demonstrated effective extraction of the topological relationships within skeleton data. However, limitations remain in some current approaches employing GCNs: (1) Many methods focus on the discriminative dependencies between pairs of joints, failing to effectively capture the multi-scale discriminative dependencies across the entire skeleton. (2) Some temporal modeling methods use dilated convolutions for simple feature fusion, but do not employ convolutional kernels in a manner suitable for effective temporal modeling. To address these challenges, a selective multi-scale GCN is proposed for action recognition, designed to capture more joint features and learn valuable temporal information.  Methods  The proposed model consists of two key modules: a multi-scale graph convolution module and a selective multi-scale temporal convolution module. First, the multi-scale graph convolution module serves as the primary spatial modeling component. It generates a multi-scale, channel-wise topology refinement adjacency matrix to enhance the model's ability to learn multi-scale discriminative dependencies of skeleton joints, thereby capturing more joint features. Specifically, the pairwise joint adjacency matrix is used to capture the interactive relationships between joint pairs, enabling the extraction of local motion details. Additionally, the multi-joint adjacency matrix emphasizes the overall action feature changes, improving the model's spatial representation of the skeleton data. Second, the selective multi-scale temporal convolution module is designed to capture valuable temporal contextual information. This module comprises three stages: feature extraction, temporal selection, and feature fusion. In the feature extraction stage, convolution and max-pooling operations are applied to obtain temporal features at different scales. Once the multi-scale temporal features are extracted, the temporal selection stage uses global max and average pooling to select salient features while preserving key details. This results in the generation of temporal selection masks without directly fusing temporal features across scales, thus reducing redundancy. In the feature fusion stage, the output temporal feature is obtained by weighted fusion of the temporal features and the selection masks. Finally, by combining the multi-scale graph convolution module with the selective multi-scale temporal convolution module, the proposed model extracts multi-stream data from skeleton data, generating various prediction scores. These scores are then fused through weighted summation to produce the final prediction outcome.  Results and Discussions  Extensive experiments are conducted on two large-scale datasets: NTU-RGB+D and NTU-RGB+D 120, demonstrating the effectiveness and strong generalization performance of the proposed model. When the convolution kernel size in the multi-scale graph convolution module is set to 3, the model performs optimally, capturing more representative joint features (Table 1). 
The results (Table 4) show that the temporal selection stage is critical within the selective multi-scale temporal convolution module, significantly enhancing the model’s ability to extract temporal contextual information. Additionally, ablation studies (Table 5) confirm the effectiveness of each component in the proposed model, highlighting their contributions to improving recognition performance. The results (Tables 6 and 7) demonstrate that the proposed model outperforms state-of-the-art methods, achieving superior recognition accuracy and strong generalization capabilities.  Conclusions  This study presents a selective multi-scale GCN model for skeleton-based action recognition. The multi-scale graph convolution module effectively captures the multi-scale discriminative dependencies of skeleton joints, enabling the model to fully extract more joint features. By selecting appropriate temporal convolution kernels, the selective multi-scale temporal convolution module extracts and fuses temporal contextual information, thereby emphasizing useful temporal features. Experimental results on the NTU-RGB+D and NTU-RGB+D 120 datasets demonstrate that the proposed model achieves excellent accuracy and robust generalization performance.
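A simplified PyTorch sketch of the selective multi-scale temporal convolution idea is given below: several temporal scales are extracted in parallel, pooled statistics produce per-scale selection masks, and the branches are fused by weighted summation. Layer sizes, kernel choices, and the exact mask design are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SelectiveMultiScaleTCN(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(k // 2, 0))
            for k in kernel_sizes
        ])
        # one selection weight per temporal scale, produced from pooled statistics
        self.mask_fc = nn.Linear(2 * channels, len(kernel_sizes))

    def forward(self, x):                      # x: (N, C, T, V) skeleton feature map
        feats = [branch(x) for branch in self.branches]                   # multi-scale temporal features
        pooled = torch.cat([x.amax(dim=(2, 3)), x.mean(dim=(2, 3))], dim=1)  # global max + average pooling
        masks = torch.softmax(self.mask_fc(pooled), dim=1)                # (N, num_scales) selection weights
        out = sum(m.view(-1, 1, 1, 1) * f
                  for m, f in zip(masks.unbind(dim=1), feats))            # weighted fusion of scales
        return out

x = torch.randn(2, 64, 300, 25)                # batch, channels, frames, joints
y = SelectiveMultiScaleTCN(64)(x)
print(y.shape)                                  # torch.Size([2, 64, 300, 25])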
Research Overview of Reconfigurable Intelligent Surface Enabled Semantic Communication Systems
ZHU Zhengyu, LIANG Xinyue, SUN Gangcan, NIU Kai, CHU Zheng, YANG Zhaohui, YANG Guangrui, ZHENG Guhan
 doi: 10.11999/JEIT240984
Abstract:
  Objective  The proliferation of sixth-Generation (6G) wireless network technologies has catalyzed an exponential demand for intelligent devices, such as autonomous transportation, environmental monitoring, and consumer robotics. These applications will generate a staggering amount of data, on the order of zettabytes. Moreover, they must support massive connectivity over limited spectrum resources while requiring lower latency, which poses critical challenges to traditional source-channel coding. Consequently, the 6G architecture is transitioning from a traditional framework characterized by an exclusive emphasis on high transmission rates to a novel paradigm centered on the intelligent interconnection of all things. Semantic Communication (SemCom) is expected to extend the Shannon communication paradigm by extracting the meanings of data and filtering out useless, irrelevant, and unessential information in the semantic domain. As a new core paradigm in 6G, SemCom enhances transmission accuracy and spectral efficiency, thereby delivering optimized service quality to users; nevertheless, substantial challenges remain to be addressed. Reconfigurable Intelligent Surfaces (RIS), recognized as a pivotal enabler for 6G networks, can be dynamically deployed in wireless propagation environments to manipulate electromagnetic wave characteristics (e.g., frequency, phase, and polarization) through programmable reflection and refraction, thereby reshaping wireless channels to amplify signal strength, extend coverage, and optimize system performance. The integration of RIS into SemCom systems addresses critical limitations such as coverage voids while enhancing the precision and efficiency of semantic information delivery. This paper proposes an RIS-enabled SemCom framework, with numerical simulations validating its effectiveness in improving system accuracy and robustness.  Methods  Based on the SemCom system, this paper introduces a RIS into the channel. The transmitted signal reaches the receiver through both the direct link and the RIS-reflected link, thereby mitigating communication interruptions caused by obstructions. Furthermore, the Bilingual Evaluation Understudy (BLEU) metric is adopted as the performance evaluation criterion. Simulation comparisons are conducted between RIS-enhanced channels and conventional channels (e.g., AWGN and Rayleigh channels), validating the performance gain of RIS in SemCom systems.  Results and Discussions  A positive correlation is observed between Signal-to-Noise Ratio (SNR) increments and BLEU score improvements, where elevated BLEU scores signify enhanced text reconstruction fidelity to the source content, thereby indicating superior semantic accuracy and communication quality (Fig. 4). Under RIS-enhanced channel conditions, SemCom systems demonstrate not only higher BLEU values but also greater stability, exhibiting reduced sensitivity to SNR fluctuations, which validates the exceptional advantages of RIS channels in semantic information recovery. Notably, the performance gap in BLEU values between RIS channels and conventional channels widens significantly under low SNR regimes, suggesting that RIS-enabled systems maintain robust communication quality and semantic fidelity under signal degradation, thereby demonstrating stronger practical competitiveness.
Furthermore, the comparative analysis in Figures 4(a) and (b) highlights performance divergences across N-gram models. Consequently, practical implementations necessitate model selection based on computational constraints and task requirements, with potential exploration of higher-order N-gram architectures.  Conclusions  This paper systematically investigates the evolutionary trajectory of SemCom and the foundational theoretical framework of RIS. SemCom, which aims to transcend the bandwidth limitations of conventional systems by enabling natural human-machine interactions, has demonstrated transformative potential across diverse domains. Concurrently, this paper analyzes RIS's inherent advantages in enhancing wireless system performance and its prospective integration with semantic communication paradigms. A novel RIS-enabled SemCom architecture is proposed, with experimental validation confirming its effectiveness in enhancing information recovery accuracy. Furthermore, this paper delineates prospective research directions for RIS-enhanced SemCom, calling for concerted efforts from the research community to address these emerging challenges.  Prospects  Current research on RIS-enabled SemCom remains nascent, primarily focusing on resource allocation, performance enhancement, and architectural design, while facing fundamental limitations including the absence of Shannon-like theoretical foundations and vulnerabilities in knowledge base synchronization and updating. Three critical challenges emerge: (1) cross-modal semantic fusion architectures requiring adaptive frameworks to support diversified 6G services beyond single-modality paradigms; (2) dynamic knowledge base optimization demanding efficient update mechanisms to balance semantic consistency with computational/communication overhead; (3) semantic-aware security protocols needing hybrid defenses against AI-specific attacks (e.g., adversarial perturbations) and RIS-enabled channel manipulation threats.
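Since BLEU is the evaluation criterion used above, a minimal sentence-level BLEU sketch in Python is given for reference. It uses crude smoothing and a simple brevity penalty; it is not a substitute for standard implementations such as nltk or sacrebleu.

import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, candidate, max_n=4):
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_c, cand_c = ngram_counts(ref, n), ngram_counts(cand, n)
        overlap = sum(min(cnt, ref_c[g]) for g, cnt in cand_c.items())   # clipped n-gram matches
        total = max(sum(cand_c.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)                    # crude smoothing avoids log(0)
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))            # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on a mat"), 3))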
An Unfolded Channel-based Physical Layer Key Generation Method For Reconfigurable Intelligent Surface-Assisted Communication Systems
YANG Lijun, CHEN Zishuo, LU Haitao, GUO Lin
 doi: 10.11999/JEIT240988
Abstract:
  Objective  Physical Layer Key Generation (PLKG) is an emerging key generation technique that exploits the reciprocity, time variability, and spatial decorrelation properties of wireless channels to enable real-time key generation. This technique has the potential to achieve one-time-pad encryption and demonstrates resilience against quantum attacks. PLKG typically consists of four key steps: channel probing, preprocessing and quantization, information reconciliation, and privacy amplification. Proper preprocessing can enhance channel reciprocity, remove redundancy, improve the Key Generation Rate (KGR), and reduce the Key Disagreement Rate (KDR). Reconfigurable Intelligent Surfaces (RIS) offer advantages such as low cost, low power consumption, and ease of deployment. They enable the manipulation of incident signals in terms of amplitude, phase, and polarization, thus constructing an intelligent communication environment. This provides a novel approach to mitigating the limitations of key generation imposed by the channel environment. However, existing preprocessing methods, such as Principal Component Analysis (PCA), Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD), and nonlinear processing, treat channel data as a whole for noise reduction and redundancy elimination. These approaches do not account for the key capacity loss caused by channel cascading in RIS-assisted communication systems, thereby limiting KGR. To address this issue, this paper proposes a novel PLKG protocol based on unfolded channels, aiming to mitigate the key capacity loss induced by channel cascading and thereby enhance KGR.  Methods  This paper first derives the degradation effect of channel cascading on the key generation rate through entropy theory and validates it via theoretical simulations. Next, a PLKG scheme designed for RIS-assisted communication scenarios is proposed, with improvements in both channel probing and preprocessing. In the channel probing phase, a two-stage channel estimation approach is designed. In the first stage, the PARAllel FACtor (PARAFAC) channel estimation method is employed, utilizing the inherent multidimensional information structure in Multiple Input Multiple Output (MIMO) communication systems to construct a tensor, which is then used to estimate the baseline unfolded channel via the Alternating Least Squares (ALS) algorithm. In the second stage, the RIS phase shift matrix is randomized, and the Least Squares (LS) method is used to estimate the cascaded channel, thereby introducing an additional source of randomness for key generation. In the channel preprocessing phase, the baseline unfolded channel obtained from the two-stage channel estimation is used to separate the cascaded channel into the unfolded channel and the RIS phase shift matrix. Conventional methods such as PCA, DCT, and Wavelet Transform (WT) are then applied to remove noise and redundancy from the obtained data. By utilizing both the unfolded channel and the RIS phase shift matrix as joint key sources, the proposed scheme mitigates the KGR degradation caused by channel cascading, thereby improving KGR while ensuring a low KDR.  Results and Discussions  A Rayleigh channel MIMO communication system model is established for experimentation. The proposed two-stage channel estimation method is used to separate the cascaded channel into the unfolded channel and the RIS phase shift matrix. 
Subsequently, three preprocessing methods (PCA, DCT, and WT) are applied to the cascaded channel, the unfolded channel, and the RIS phase shift matrix for noise reduction and decorrelation. The extracted channel features are then quantized, followed by information reconciliation and privacy amplification. The experiment compares two key generation approaches: one using the cascaded channel as the key source and the other using the unfolded channel and the RIS phase shift matrix as joint key sources. Simulation results show that the proposed scheme achieves a 72% KGR improvement at a 2 dB Signal-to-Noise Ratio (SNR) (Fig. 8). Among the preprocessing methods, DCT demonstrates the highest KGR and the lowest KDR (Fig. 9, Fig. 10). Additionally, experiments on the number of RIS configuration matrices indicate that increasing the number beyond eight yields diminishing returns in KGR improvement; therefore, an optimal range of 8–10 configuration matrices is recommended. Furthermore, the computational complexity of the PARAFAC channel estimation method is analyzed. The feasibility of real-time key generation is validated by considering channel coherence time, algorithm complexity, and communication protocol frame intervals.  Conclusions  This paper proposes a PLKG scheme that employs the PARAFAC channel estimation method to estimate the unfolded channel and the LS method to estimate the cascaded channel. Based on these estimations, the cascaded channel is decomposed into the unfolded channel and the RIS phase shift matrix during preprocessing. By using the unfolded channel and the RIS phase shift matrix as joint key sources, the proposed method mitigates the degradation of KGR caused by channel cascading. Compared with conventional PLKG schemes that use the cascaded channel as the key source, the proposed method achieves a 72% KGR improvement at a 2 dB SNR while maintaining a low KDR. However, despite its ability to enhance KGR, the proposed scheme still faces challenges such as excessive pilot overhead and computational limitations. Future work should focus on reducing this overhead to further enhance its practicality.
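The quantization step common to PLKG pipelines like the one above can be illustrated with a toy numpy sketch: Alice and Bob observe noisy, reciprocal channel estimates, quantize them with a mean threshold, and compare the resulting key bits. The channel statistics, SNR, and 1-bit quantizer are illustrative assumptions, not the scheme's actual preprocessing or quantization design.

import numpy as np

rng = np.random.default_rng(7)
n_samples, snr_db = 256, 10
noise_std = 10 ** (-snr_db / 20)

h = rng.standard_normal(n_samples)                         # common reciprocal channel realizations
h_alice = h + noise_std * rng.standard_normal(n_samples)   # independent estimation noise at Alice
h_bob = h + noise_std * rng.standard_normal(n_samples)     # independent estimation noise at Bob

key_alice = (h_alice > h_alice.mean()).astype(int)         # 1-bit mean-threshold quantization
key_bob = (h_bob > h_bob.mean()).astype(int)

kdr = np.mean(key_alice != key_bob)                        # key disagreement rate before reconciliation
print(f"raw key length = {n_samples} bits, KDR = {kdr:.3f}")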
Joint Beamforming Design for STAR-RIS Assisted URLLC-NOMA System
ZHU Jianyue, WU Yutong, CHEN Xiao, XIE Yaqin, XU Yao, ZHANG Zhizhong
 doi: 10.11999/JEIT240717
Abstract:
  Objective  This paper addresses the energy efficiency challenge in Ultra-Reliable Low-Latency Communication (URLLC) systems, which is crucial for mission-critical applications such as industrial automation and remote surgery. The integration of Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surfaces (STAR-RIS) with Non-Orthogonal Multiple Access (NOMA) is proposed to improve spectral efficiency and coverage while meeting URLLC's stringent reliability and latency requirements. However, the joint optimization of base station beamforming and the STAR-RIS transmission and reflection matrices is non-trivial due to non-convexity and coupled variables. This work aims to minimize energy consumption under a total power constraint by jointly designing these parameters, advancing STAR-RIS-aided NOMA systems for URLLC.  Methods  To address the non-convex optimization problem, the proposed methodology involves several key steps. First, the user rate function under finite blocklength transmission is analyzed, considering the specific requirements of URLLC. This analysis facilitates the reformulation of the original problem into an equivalent form more amenable to optimization. Specifically, the rate function is approximated using a Taylor series expansion, and the effect of finite blocklength on decoding error probability is incorporated into the optimization framework. Next, an alternating optimization framework is adopted to decouple the joint design problem into subproblems, each focused on optimizing either the base station beamforming, the STAR-RIS transmission matrix, or the reflection matrix. Semidefinite Relaxation (SDR) techniques are then applied to address the non-convexity of these subproblems, ensuring efficient and tractable solutions. The SDR method transforms the original non-convex constraints into convex ones by relaxing certain matrix rank constraints, which are subsequently recovered using randomization techniques. The proposed approach is validated through extensive simulations, comparing its performance with Orthogonal Multiple Access (OMA) and traditional RIS-aided schemes. The simulation setup includes a multi-user scenario with varying channel conditions, blocklengths, and reliability requirements.  Results and Discussions  The main contributions of this paper are summarized as follows: (1) Joint Optimization of Active and Passive Beamforming Vectors: To minimize system transmission power, the paper jointly optimizes the active beamforming vector at the base station and the passive beamforming vector at the reflective surface, presenting an efficient joint beamforming design algorithm (Table 1). (2) Validation and Energy Efficiency Comparison: Experimental results confirm the effectiveness of the proposed joint beamforming design. A comparison of energy consumption performance for STAR-RIS under different modes is provided. Specifically, the proposed STAR-RIS-aided NOMA scheme demonstrates a significant reduction in power consumption compared to OMA and conventional RIS-aided systems (Fig. 2 and Fig. 5). The proposed joint beamforming and STAR-RIS optimization framework effectively addresses the trade-offs between energy consumption, reliability, and latency in URLLC systems.  Conclusions  This paper presents a comprehensive framework for the transmission design of STAR-RIS-aided NOMA systems in URLLC scenarios.
By jointly optimizing the beamforming, transmission, and reflection matrices, the proposed method significantly enhances energy efficiency while meeting the stringent requirements of URLLC. The use of alternating optimization and SDR techniques effectively addresses the non-convexity of the problem, providing practical and scalable solutions. The results highlight the potential of STAR-RIS-aided NOMA systems to support next-generation wireless communication applications, laying the foundation for further research in this area. Future work will explore the integration of machine learning techniques to further enhance the performance and adaptability of the proposed framework. Additionally, the impact of hardware impairments and imperfect channel state information on system performance will be investigated to ensure robustness in real-world deployments.
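For reference, finite-blocklength rate analyses of this kind commonly rely on the normal approximation $R(\gamma,n,\varepsilon) \approx \log_2(1+\gamma) - \sqrt{V(\gamma)/n}\,Q^{-1}(\varepsilon)\log_2 e$, with channel dispersion $V(\gamma) = 1 - (1+\gamma)^{-2}$, where $\gamma$ is the received SINR, $n$ the blocklength, $\varepsilon$ the decoding error probability, and $Q^{-1}(\cdot)$ the inverse Gaussian Q-function; the exact rate expression and Taylor approximation used in the paper may differ in detail.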
Consistent Generative Adversarial-Based Building Change Detection Data Generation Technology for Multi-temporal Remote Sensing Imagery
HAO Chen, GUANGYAO Zhou, QIANTONG Wang, BIN Gao, WENZHI Wang, HAO Tang
 doi: 10.11999/JEIT240720
Abstract:
  Objective  Building change detection is an essential task in urban planning, disaster management, environmental monitoring, and other critical applications. Advances in multi-temporal remote sensing technology have provided vast amounts of data, enabling the monitoring of changes over large geographic areas and extended time frames. Despite this, significant challenges persist, particularly in acquiring sufficient labeled data pairs for training deep learning models. Building changes are typically characterized by long temporal cycles, leading to a scarcity of annotated data that is critical for training data-driven deep learning models. This scarcity severely limits the models' capacity to generalize and achieve high accuracy, particularly in complex and diverse scenarios. The performance of existing methods often suffers from poor generalization due to insufficient training data, reducing their applicability to practical tasks. To address these challenges, this study proposes a novel solution: the development of a multi-temporal building change detection data pair generation network, referred to as BAG-GAN. This network leverages a consistency adversarial generation mechanism to create diverse and semantically consistent data pairs. The aim is to enrich training datasets, thereby enhancing the learning capacity of deep learning models for detecting building changes. By addressing the bottleneck of insufficient labeled data, BAG-GAN provides a new pathway for improving the accuracy and robustness of multi-temporal building change detection.  Methods  BAG-GAN integrates Generative Adversarial Networks (GANs) with a specially designed consistency constraint mechanism, tailored for the generation of data pairs in multi-temporal building change detection tasks. The core innovation of this network lies in its adversarial consistency loss function. This loss function ensures that the generated images maintain semantic consistency with the corresponding input images while reflecting realistic and diverse changes. The consistency constraint is crucial for preserving the integrity of the generated data and ensuring its relevance to real-world scenarios. The network is composed of two main components: a generator and a discriminator, which work in tandem through an adversarial learning process. The generator aims to produce realistic and semantically consistent multi-temporal image pairs, while the discriminator evaluates the quality of the generated data, guiding the generator to improve iteratively. Additionally, BAG-GAN is equipped with multimodal output capabilities, enabling the generation of diverse building change data pairs. This diversity enhances the robustness of deep learning models by exposing them to a wider range of scenarios during training. To address the issue of limited training data, the study incorporates a data augmentation strategy. Original datasets, such as LEVIR-CD and WHU-CD, were reorganized by combining change labels with multi-temporal remote sensing images to create new synthetic datasets. These augmented datasets, along with the data generated by BAG-GAN, were used to train and evaluate several widely recognized deep learning models, including FC-EF, FC-Siam-Conc, and others. Comparative experiments were conducted to assess the effectiveness of BAG-GAN and its contribution to improving model performance in multi-temporal building change detection.  
Results and Discussions  The experimental results demonstrate that BAG-GAN effectively addresses the challenges of insufficient labeled data in building change detection tasks. Models trained on the augmented datasets, which included BAG-GAN-generated data, achieved significant improvements in detection accuracy and robustness. For instance, classic models like FC-EF and FC-Siam-Conc showed substantial performance gains when trained on augmented datasets compared to their performance on the original datasets. These improvements validate the effectiveness of BAG-GAN in generating high-quality training data. BAG-GAN also excelled in producing diverse and multimodal building change data pairs. Visual comparisons between the generated data and the original datasets highlighted the network's ability to create realistic and varied data, effectively enhancing the diversity of training datasets. This diversity is critical for addressing the imbalance in existing datasets, where effective building change information is underrepresented. By increasing the proportion of relevant change information in the training data, BAG-GAN improves the learning conditions for deep learning models, enabling them to better generalize across different scenarios. Further analysis revealed that BAG-GAN significantly enhances the ability of detection models to localize changes and recover fine-grained details of building modifications. This is particularly evident in complex scenarios involving subtle or small-scale changes. The adversarial consistency loss function played a pivotal role in ensuring the semantic relevance of the generated data, making BAG-GAN a reliable tool for data augmentation in remote sensing applications. Moreover, the network's ability to generate data pairs with high-quality and multimodal characteristics ensures its applicability to a wide range of remote sensing tasks beyond building change detection.  Conclusions  This study introduces BAG-GAN, a novel multi-temporal building change detection data pair generation network designed to overcome the limitations of insufficient labeled data in remote sensing. The network incorporates an adversarial consistency loss function, which ensures that the generated data is both semantically consistent and diverse. By leveraging a consistency adversarial generation mechanism, BAG-GAN enhances the quality and diversity of training datasets, addressing key bottlenecks in multi-temporal building change detection tasks. Through experiments on the LEVIR-CD and WHU-CD datasets, BAG-GAN demonstrated its ability to significantly improve the performance of classic remote sensing change detection models, such as FC-EF and FC-Siam-Conc. The results highlight the network's effectiveness in generating high-quality data pairs that enhance model training and detection accuracy. This research not only provides a robust methodological framework for improving multi-temporal building change detection but also offers a foundational tool for broader applications in remote sensing. The findings pave the way for future advancements in change detection techniques, offering valuable insights for researchers and practitioners in the field.
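A minimal PyTorch sketch of a generator objective combining an adversarial term with a masked consistency term is shown below, purely to illustrate the "adversarial consistency loss" concept discussed above. The networks, loss weighting, and data layout are illustrative assumptions, not BAG-GAN's actual architecture.

import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()
consistency_criterion = nn.L1Loss()
lambda_consistency = 10.0                     # assumed weighting of the consistency term

def generator_loss(discriminator, fake_t2, real_t1, change_mask):
    # fake_t2: generated post-change image; real_t1: input pre-change image;
    # change_mask: binary label of changed pixels (1 = building change)
    d_out = discriminator(fake_t2)
    adv = adv_criterion(d_out, torch.ones_like(d_out))        # generator tries to look "real"
    unchanged = 1.0 - change_mask
    cons = consistency_criterion(fake_t2 * unchanged, real_t1 * unchanged)  # unchanged regions must match input
    return adv + lambda_consistency * cons

# toy usage with a trivial discriminator
disc = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
fake = torch.rand(2, 3, 64, 64, requires_grad=True)
real = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = generator_loss(disc, fake, real, mask)
loss.backward()
print(float(loss))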
Covert Communication of UAV Aided by Time Modulated Array Perception
MIAO Chen, QIN Yuxuan, MA Ruiqian, LIN Zhi, MA Yue, ZHANG Wentao, WU Wen
 doi: 10.11999/JEIT240606
Abstract:
  Objective  With the widespread application of Unmanned Aerial Vehicle (UAV) communication technology in military and civilian domains, ensuring secure information transmission within UAV networks has received increasing attention. Covert communication is an effective approach to conceal information transmission. However, existing methods, such as digital beamforming, improve covert communication performance but increase system size and power consumption. This study proposes a UAV short-packet covert communication method based on Time Modulated Planar Array (TMPA) sensing. A TMPA-UAV covert communication system architecture is introduced, along with a two-dimensional Direction of Arrival (DOA) estimation method. A covert communication model is established, and a closed-form expression for the covert constraint is derived using Kullback-Leibler (KL) divergence. Based on the estimated angle of Willie, the TMPA switching sequence is optimized to maximize signal gain in the target direction while minimizing gain in non-target directions. Covert throughput is selected as the optimization objective, and a one-dimensional search method determines the optimal data packet length and transmission power.  Results and Discussions  Simulations show that the Root Mean Square Error (RMSE) for DOA estimation in both directions approaches 0°, with RMSE decreasing significantly as the Signal-to-Noise Ratio (SNR) increases (Fig. 4). With a fixed elevation angle and azimuth angles varying between 0° and 60°, a comparison between the proposed method and the traditional DOA estimation method for time-modulated arrays indicates that the proposed method reduces DOA estimation error to the order of 0.1°, significantly improving accuracy. Beamforming simulations based on the estimation results (Fig. 6) show a SideLobe Level (SLL) below –30 dB and a beamwidth of 5°, meeting design requirements. Covert communication simulations reveal the existence of an optimal data packet length that maximizes covert throughput (Fig. 7). A stricter covert tolerance imposes tighter constraints on covert communication (Fig. 8), requiring Alice to use lower transmission power and shorter block lengths to communicate covertly with Bob. When the beamforming error angle is small, the system maintains high covert throughput (Fig. 9). Within a UAV flight height range of 50 m to 90 m, covert throughput remains low; however, when the height exceeds 90 m, throughput increases rapidly. Beyond 130 m, UAV height has little effect on maximum covert throughput, and performance reaches its optimal state. Therefore, controlling UAV flight height appropriately is crucial for effective communication between legitimate links.  Conclusions  This study proposes a TMPA-based multi-antenna UAV sensing-assisted covert communication system for short packets. A TMPA-based DOA estimation method is introduced to determine the relative position of non-cooperative nodes. The Compressed Sensing (CS) algorithm optimizes the beam radiation pattern, maximizing gain at the legitimate destination node while creating nulls at the non-cooperative node's location. A closed-form expression for covert constraints is derived using KL divergence, and covert throughput is maximized through the joint optimization of packet length and transmission power. Simulations analyze the relationships between the number of array elements, covert tolerance, beam direction error angles, UAV height, and covert throughput. 
Results indicate that an optimal packet length maximizes covert throughput. Additionally, increasing the number of array elements and relaxing covert constraints can improve covert throughput. Practical system design should comprehensively optimize these factors.
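For context, KL-divergence-based covert constraints of this kind typically follow from Pinsker's inequality: Willie's total detection error satisfies $P_{FA} + P_{MD} \ge 1 - \sqrt{D(\mathbb{P}_1 \| \mathbb{P}_0)/2}$, so imposing $D(\mathbb{P}_1 \| \mathbb{P}_0) \le 2\varepsilon^2$ guarantees a covertness level of $1 - \varepsilon$; the closed-form constraint derived in the paper may take a related but more specific form.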
A High-Throughput Hardware Design for AV1 Rough Mode Decision
SHENG Qinghua, TAO Zehao, HUANG Xiaofang, LAI Changcai, HUANG Xiaofeng, YIN Haibin, DONG Zhekang
 doi: 10.11999/JEIT240823
Abstract:
  Objective  As demand for 4K and 8K Ultra High Definition (UHD) videos increases, the latest generation of video coding standards has been developed to meet the growing need for UHD video transmission. UHD video coding requires processing more pixels and details, resulting in significant increases in computational complexity and resource consumption. Optimizing algorithms and implementing hardware acceleration are essential for achieving real-time encoding and decoding of UHD videos. In Alliance for Open Media Video 1 (AV1), richer intra-prediction modes have been introduced, expanding the number of modes from 10 in VP9 to 61, thereby increasing computational complexity. To address the added complexity of these modes and enhance hardware processing throughput, a hardware design for AV1 Rough Mode Decision (RMD) based on a fully pipelined architecture is proposed.  Methods  At the algorithm level, a 4×4 block is used as the minimum processing unit. RMD is applied to various sizes of Prediction Units (PUs) within a 64×64 Coding Tree Unit (CTU) following Z-order scanning. This approach allows for efficient processing of large blocks by dividing them into smaller, manageable units. To reduce computational complexity, the SATD cost calculations for different PU sizes (e.g., 1:2, 1:4, 2:1, and 4:1) are performed using a cost accumulation approximation method based on the 1:1 PU. This method minimizes the need to recalculate costs for every possible configuration, thus improving efficiency and reducing computational load. At the hardware level, the architecture supports RMD for PUs of various sizes (4×4 to 32×32) within a 64×64 CTU. This architecture differs from traditional designs, which use separate circuits for each PU size. It optimizes logical resource use and minimizes downtime. The design incorporates a 28-stage pipeline that enables parallel processing of intra-prediction modes, ensuring RMD for at least 16 pixels per clock cycle and significantly enhancing throughput and encoding efficiency. Additionally, the design emphasizes circuit compatibility and reusability across various PU sizes, reducing redundancy and maximizing hardware resource utilization.  Results and Discussions  Software analysis shows that the proposed AV1 rough mode decision algorithm reduces processing time by an average of 45.78% compared to the standard AV1 algorithm under the All-Intra (AI) configuration, at the cost of a 1.94% BD-Rate increase. The testing platform is an Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz with 16.0 GB of DRAM. Compared to existing methods, the algorithm significantly reduces processing time while maintaining encoding efficiency. It offers an optimized trade-off, with a slight BD-Rate loss in exchange for substantial reductions in encoding time. Hardware analysis reveals that the proposed hardware architecture has a total circuit area of 0.556 mm² after synthesis, with a maximum operating frequency of 432.7 MHz, enabling real-time encoding of 8K@50.6 fps video. Although the circuit area is slightly larger than in existing designs, the architecture demonstrates significant improvements in processing speed and video resolution capability, providing a balanced trade-off between hardware resource usage and throughput/area efficiency. These results further confirm the design's superiority in terms of hardware resource efficiency and processing performance.
Conclusions  This paper presents a high-throughput hardware design for AV1 RMD, capable of processing all PU sizes with 56 directional and 5 non-directional prediction modes. The design employs a 28-stage pipeline for parallel intra-frame prediction mode processing, enabling RMD for at least 16 pixels per clock cycle and significantly improving encoding efficiency. Techniques such as false-reconstructed reference pixels, Z-order scanning, PMCM circuit structures, and circuit reuse address the increased hardware resource demands of parallel processing. Experimental results show that the proposed algorithm reduces processing time by an average of 45.78% with a 1.94% BD-Rate increase compared to the AV1 standard, maintaining a favorable balance between speed and encoding quality. Circuit synthesis confirms the architecture's capability for real-time 8K@50.6 fps video processing, meeting the demands of future UHD video encoding with exceptional performance and efficiency.
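The per-4×4 SATD cost underlying the RMD flow described above can be sketched in a few lines of numpy: it is the sum of absolute values of the 2-D Hadamard transform of the prediction residual. The random block contents below are purely illustrative, and this sketch does not reproduce the cost accumulation approximation used for non-square PU sizes.

import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd_4x4(orig, pred):
    residual = orig.astype(int) - pred.astype(int)
    transformed = H4 @ residual @ H4.T          # 2-D Hadamard transform of the residual
    return int(np.abs(transformed).sum())       # sum of absolute transformed differences

rng = np.random.default_rng(3)
orig = rng.integers(0, 256, (4, 4))             # illustrative original 4x4 luma block
pred = rng.integers(0, 256, (4, 4))             # illustrative intra-predicted block
print("SATD cost:", satd_4x4(orig, pred))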
Performance and Optimal Placement Analysis of Intelligent Reflecting Surface-assisted Wireless Networks
SHU Feng, LAI Sihao, LIU Chuan, GAO Wei, DONG Rongen, WANG Yan
 doi: 10.11999/JEIT240488
Abstract:
  Objective:   Previous studies have extensively examined the performance of Intelligent Reflecting Surface (IRS)-assisted wireless communications by varying the location of the IRS. However, relocating the IRS alters the sum of the distances between the IRS and the base station, as well as the distances to users, leading to discrepancies in reflective channel transmission distances, which introduces a degree of unfairness. Additionally, the assumption that the path loss indices for the base station-to-IRS and IRS-to-user channels are equal is overly idealistic. In practical scenarios, the user's height is typically much lower than that of the base station, and the IRS may be positioned closer to either the base station or the user. This disparity results in significantly different path loss indices for the two channels. Consequently, this paper focuses on identifying the optimal deployment location of the IRS while keeping the total distance fixed. The IRS is modeled to move along an ellipsoid or ellipsoidal plane defined by the base station and the user as focal points. The analysis provides insights into the optimal deployment of the IRS while taking into account a broader range of application scenarios, specifically addressing different path loss indices for the base station-to-IRS and IRS-to-user channels given a predetermined sum of the transmitting powers.  Methods:   Utilizing concepts of phase alignment and the law of large numbers, closed-form expressions for the reachability rate of both passive and active IRS-assisted wireless networks are initially derived for two scenarios: the line-of-sight channel and the Rayleigh channel. Following this, the study analyzes how the path loss exponents from the base station to the IRS and from the IRS to the user impact the optimal deployment location of the IRS.  Results and Discussions:   The reachability rate of a passive IRS-assisted wireless network, considering IRS locations under both line-of-sight and Rayleigh channels, is illustrated. It is evident that the optimal deployment location of the IRS is nearest to either the base station or the user when β1=β2. When β1>β2, the optimal deployment location of the IRS is obtained solely at the base station, while the least effective deployment location shifts progressively closer to the user. Conversely, a contrasting result is obtained when β1<β2. The above results verify the correctness of the theoretical derivation in Section 3.1.3. The reachability rate of an active IRS-assisted wireless network as a function of IRS location under line-of-sight and Rayleigh channels is depicted. The figure indicates that when β1=β2, the system’s reachability rate under the line-of-sight channel exceeds that of the Rayleigh channel, with the optimal deployment location of the active IRS positioned in proximity to the user. When β1>β2 (fixed β2, increasing β1), the optimal deployment location of the active IRS progressively approaches the base station. And when β1<β2, the optimal deployment location shifts closer to the user. The optimal deployment location of the IRS for IRS-assisted wireless networks operating under a Rayleigh channel, reflecting variations in the path loss index β, is portrayed. Notably, for passive IRS systems, regardless of the path loss index variations, the optimal deployment locations across three different cases yield consistent conclusions with those derived. 
For the active IRS, when β1=β2=β, the optimal deployment location gradually moves away from the user as β varies, ultimately approaching the location m (directly above the midpoint of the line connecting the base station and the user). Conversely, when β1>β2, the optimal deployment position of the IRS moves along the elliptical trajectory toward the base station; when β1<β2, it shifts toward the user. The optimal deployment location of the active IRS under both line-of-sight and Rayleigh channels as a function of the IRS reflected power PI is also displayed. The analysis indicates that, under both channel conditions, as PI increases the optimal deployment location of the active IRS moves progressively closer to the base station along the elliptical trajectory. When β1=β2 and PI=PB, the optimal deployment location of the active IRS is equidistant from the base station and the user. The system's achievable rate as a function of the distance r from the base station to the active IRS, for different user noise powers $\sigma_{\mathrm{u}}^2$ and active-IRS amplification noise powers $\sigma_{\mathrm{i}}^2$, is presented. When $\sigma_{\mathrm{i}}^2$ is fixed and $\sigma_{\mathrm{u}}^2$ gradually increases, the optimal deployment location of the active IRS moves closer to the user; conversely, when $\sigma_{\mathrm{u}}^2$ is fixed and $\sigma_{\mathrm{i}}^2$ gradually increases, the optimal deployment location approaches the base station. In addition, the system's achievable rate declines as either noise level increases.  Conclusions:   This paper examines the maximization of the system achievable rate by varying the deployment location of passive and active IRSs under line-of-sight and Rayleigh channel transmission. In the analysis, the positions of the base station and the user are fixed, and the sum of the base station-to-IRS and IRS-to-user distances is kept constant. Phase alignment and the law of large numbers are employed to derive closed-form expressions for the achievable rate. Theoretical analysis and simulation results provide several key insights. When β1<β2, the optimal deployment locations of both the passive and the active IRS are close to the user, while the least favorable deployment location of the passive IRS moves progressively closer to the base station as the difference between β1 and β2 increases. When β1=β2, the optimal deployment location of the active IRS remains near the user, while the passive IRS can be effectively placed near either the base station or the user. When β1>β2, the optimal deployment location of the passive IRS remains close to the base station, and as the difference between β1 and β2 increases, the optimal deployment location of the active IRS gradually shifts closer to the base station. Additionally, as the amplification noise of the active IRS increases, its optimal deployment location moves closer to the base station; conversely, when the noise at the user increases, the optimal deployment location of the active IRS stays closer to the user.
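A minimal numerical sketch of the deployment question discussed above: it sweeps a passive IRS along the line between the base station and the user with the total reflective-path distance held fixed, and reports where the rate peaks for several pairs of path loss exponents. All constants (element count, powers, path gains, distances) are illustrative assumptions, and the ideal phase-alignment gain of N² stands in for the paper's closed-form expressions.

```python
import numpy as np

# Toy sketch (not the paper's closed-form expressions): achievable rate of a
# passive IRS with N elements under ideal phase alignment, as the IRS moves
# between base station (BS) and user while the total reflective path length
# d1 + d2 is held fixed. All constants below are illustrative assumptions.
N      = 256          # number of IRS elements (assumption)
P_tx   = 1.0          # BS transmit power, W (assumption)
sigma2 = 1e-12        # receiver noise power, W (assumption)
c0     = 1e-4         # reference path gain at 1 m (assumption)
d_sum  = 200.0        # fixed BS-IRS-user total distance, m (assumption)

def achievable_rate(d1, beta1, beta2):
    """Rate of the BS -> IRS -> user link with coherent (phase-aligned) combining."""
    d2 = d_sum - d1
    # Double-fading cascaded path loss with possibly different exponents.
    gain = (N ** 2) * c0 / (d1 ** beta1 * d2 ** beta2)
    return np.log2(1.0 + P_tx * gain / sigma2)

d1 = np.linspace(1.0, d_sum - 1.0, 1000)
for (b1, b2) in [(2.0, 2.0), (2.5, 2.0), (2.0, 2.5)]:
    rates = achievable_rate(d1, b1, b2)
    print(f"beta1={b1}, beta2={b2}: best d1 = {d1[np.argmax(rates)]:6.1f} m, "
          f"rate = {rates.max():.2f} bit/s/Hz")
```

With equal exponents the peak sits at an endpoint, and making β1 larger than β2 pulls it toward the base station, consistent with the qualitative behaviour described above for the passive IRS.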
A Joint Beamforming Method Based on Cooperative Co-evolution in Reconfigurable Intelligent Surface-Assisted Unmanned Aerial Vehicle Communication System
ZHONG Weizhi, WAN Shiqing, DUAN Hongtao, FAN Zhenxiong, LIN Zhipeng, HUANG Yang, MAO Kai
 doi: 10.11999/JEIT240561
[Abstract](263) [FullText HTML](79) [PDF 2075KB](40)
Abstract:
  Objective:   High-quality wireless communication enabled by Unmanned Aerial Vehicles (UAVs) is set to play a crucial role in the future. In light of the limitations posed by traditional terrestrial communication networks, the deployment of UAVs as nodes within aerial access networks has become a vital component of emerging technologies in Beyond Fifth Generation (B5G) and Sixth Generation (6G) communication systems. However, the presence of infrastructure obstructions, such as trees and buildings, in complex urban environments can hinder the Line-of-Sight (LoS) link between UAVs and ground users, leading to a significant degradation in channel quality. To address this challenge, researchers have proposed the integration of Reconfigurable Intelligent Surfaces (RIS) into UAV communication systems, providing an energy-efficient and flexible passive beamforming solution. RIS consists of numerous adjustable electromagnetic units, with each element capable of independently configuring various phase shifts. By adjusting both the amplitude and phase of incoming signals, RIS can intelligently reflect signals from multiple transmission paths, thereby achieving directional signal enhancement or nulling through beamforming. Given the limitations of conventional joint beamforming methods—such as their exclusive focus on optimizing the RIS phase shift matrix and lack of universality—a novel joint beamforming approach based on a Cooperative Co-Evolutionary Algorithm (CCEA) is proposed. This method aims to enhance Spectrum Efficiency (SE) in multi-user scenarios involving RIS-assisted UAV communications.  Methods:   The proposed approach begins by optimizing the RIS phase shift matrix, followed by the design of the beam shape for RIS-reflected waves. This process modifies the spatial energy distribution of RIS reflections to improve the Signal-to-Interference-plus-Noise Ratio (SINR) at the receiver. To address challenges in existing optimization algorithms, an Evolutionary Algorithm (EA) is introduced for the first time, and a cooperative co-evolutionary structure based on EA is developed to decouple joint beamforming subproblems. The central concept of CCEA revolves around decomposing complex problems into several subproblems, which are then solved through distributed parallel evolution among subpopulations. The evaluation of individuals within each subpopulation, representing solutions to their respective subproblems, relies on collaboration among different populations. Specifically, this involves merging individuals from one subpopulation with representative individuals from others to create composite solutions. Subsequently, the overall fitness of these composite solutions is assessed to evaluate individual performance within each subpopulation.   Results and Discussions:   The simulation results demonstrate that, in comparison to joint beamforming, which focuses solely on designing the RIS phase shift matrix, further optimizing the shape of the reflected beam from the RIS significantly enhances the accuracy and effectiveness of the main lobe coverage over the user's position, resulting in improved SE. Although Maximum Ratio Transmission (MRT) precoding can maximize the output SINR of the desired signal, it may also lead to considerable inter-user interference, which subsequently diminishes the SE. Therefore, the implementation of joint beamforming is essential. 
The optimization algorithms proposed in this paper are effective for both the actual RIS amplitude-phase shift model and the ideal RIS amplitude-phase shift model. However, factors such as the dielectric loss associated with the actual circuit structure of the RIS attenuate the strength of the reflected wave reaching the user, thereby reducing the SINR at the receiving end and ultimately lowering the SE. Additionally, the increase in SE achievable through Deep Reinforcement Learning (DRL) and Alternating Optimization (AO) is limited compared to CCEA. Unlike the optimization of individual action strategies employed in DRL, the CCEA algorithm produces a greater variety of solutions through crossover and mutation among individuals within the population, thereby reducing the risk of converging to local optima. Moreover, CCEA can optimize the spatial distribution of the reflected waves through a more refined design of the RIS reflected beam shape. This results in a higher signal intensity at the receiving end, allowing a higher SE than AO and DRL, which focus primarily on optimizing the RIS phase shift matrix.  Conclusions:   In light of the limitations observed in previous joint beamforming optimization methods, this paper introduces a novel joint beamforming optimization approach based on CCEA. The method decomposes the joint beam optimization problem into two sub-problems: the design of the RIS reflected beam shape and the beamforming design at the transmitter. These sub-problems are addressed through independent parallel evolution using two separate sub-populations. Notably, for RIS passive beamforming, this approach for the first time optimizes the RIS phase shift matrix jointly with the design of the RIS reflected beam shape. Numerical simulation results indicate that, compared to joint beamforming strategies that focus solely on optimizing the RIS phase shift matrix, a more careful design of the RIS reflected waveform can significantly alter the intensity distribution of the reflected waves in 3D space. This enables the reflected beam to converge on the user's location while mitigating interference, thereby enhancing the system's SE. Furthermore, the CCEA algorithm achieves effective coverage of users by the RIS reflected beams regardless of the base station and user locations. The optimization reduces the Peak Side Lobe Level (PSLL) and improves the SE by at least 5 dB, demonstrating its applicability across diverse spatial scenarios. Future research will further investigate the application of evolutionary algorithms and swarm intelligence optimization techniques in joint beamforming optimization, and explore the potential of RIS beam waveform design for optimizing communication systems, adapting to increasingly complex and diversified communication requirements.
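The cooperative co-evolutionary pattern described above can be illustrated with a small toy, shown below: one subpopulation holds candidate RIS phase vectors, the other candidate transmit beamformers, and each individual is scored by pairing it with the best representative of the other subpopulation. The channel model, the fitness (received power toward a single user), the population sizes, and the elitist mutation scheme are all simplifying assumptions; this is not the paper's CCEA or its beam-shape design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cooperative co-evolution sketch: two subpopulations evolve in parallel,
# one holding RIS phase vectors and one holding transmit beamformers; each
# individual is evaluated together with the other subpopulation's current best.
M, N = 4, 32                              # BS antennas, RIS elements (assumptions)
H_br = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_ru = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def fitness(theta, w):
    """Received power of the BS -> RIS -> user link for phases theta and beamformer w."""
    h_eff = h_ru.conj() @ (np.exp(1j * theta)[:, None] * H_br) @ w
    return np.abs(h_eff) ** 2

def normalize(w):
    return w / np.linalg.norm(w)

POP, GEN, SIGMA = 30, 200, 0.2
pop_theta = rng.uniform(0, 2 * np.pi, (POP, N))
pop_w = np.array([normalize(rng.standard_normal(M) + 1j * rng.standard_normal(M))
                  for _ in range(POP)])
best_theta, best_w = pop_theta[0], pop_w[0]

for g in range(GEN):
    # Score each phase individual against the current best beamformer.
    f_theta = np.array([fitness(t, best_w) for t in pop_theta])
    best_theta = pop_theta[np.argmax(f_theta)]
    # Score each beamformer individual against the current best phase vector.
    f_w = np.array([fitness(best_theta, w) for w in pop_w])
    best_w = pop_w[np.argmax(f_w)]
    # Elitist variation: keep the better half of each subpopulation, add mutated copies.
    elite_t = pop_theta[np.argsort(f_theta)[-POP // 2:]]
    elite_w = pop_w[np.argsort(f_w)[-POP // 2:]]
    pop_theta = np.vstack([elite_t, elite_t + SIGMA * rng.standard_normal(elite_t.shape)])
    pop_w = np.vstack([elite_w,
                       np.array([normalize(w + SIGMA * (rng.standard_normal(M)
                                 + 1j * rng.standard_normal(M))) for w in elite_w])])

print(f"final received power: {fitness(best_theta, best_w):.2f}")
```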
Tradeoff between Age of Information and Energy Efficiency for Intelligent Reflecting Surface Assisted Short Packet Communications
ZHANG Yangyi, GUAN Xinrong, WANG Quan, DENG Cheng, ZHU Zeyuan, CAI Yueming
 doi: 10.11999/JEIT240666
[Abstract](177) [FullText HTML](56) [PDF 2048KB](62)
Abstract:
  Objective:   In monitoring Internet of Things (IoT) systems, it is essential for sensor devices to transmit collected data to the Access Point (AP) promptly. The timeliness of information transmission can be enhanced by increasing transmission power, as higher power levels tend to improve the reliability of data transfer. However, sensor devices typically have limited transmission power, and beyond a certain threshold, further increases in power yield diminishing returns in terms of transmission timeliness. Therefore, effectively managing transmission power to balance timeliness and Energy Efficiency (EE) is crucial for sensor devices. This paper investigates the trade-off between the Age of Information (AoI) and EE in multi-device monitoring systems, where sensor devices communicate monitoring data to the AP using short packets with the support of an Intelligent Reflecting Surface (IRS). To address packet collisions that occur when multiple devices access the same resource block, an access control protocol is developed, and closed-form expressions are derived for both the average AoI and the EE. Based on these expressions, the ratio of average AoI to EE is introduced as a metric that can be minimized through transmission power optimization to achieve a balance between AoI and EE.  Methods:   Deriving the closed-form expression for the average AoI is challenging for two reasons. First, obtaining the exact distribution of the composite channel gain is difficult. Second, in short-packet communications the packet error rate expression involves a complementary cumulative distribution function with a complex structure, complicating the averaging process. The Moment Matching (MM) technique is therefore used to approximate the probability distribution of the composite channel gain as a gamma distribution. To address the second challenge, a linear function is used to approximate the packet error rate, yielding an approximate expression for the average packet error rate. Additionally, to examine how the ratio of average AoI to EE varies with the transmission power, the second derivative of this ratio is calculated and analyzed. Finally, the optimal transmission power is determined using a binary search algorithm.  Results and Discussions:   Firstly, the paper examines the division of a time slot into varying numbers of resource blocks and analyzes the corresponding AoI performance. The findings indicate that AoI performance does not improve monotonically with an increasing number of resource blocks. Specifically, while a greater number of resource blocks increases the probability of device access, it also reduces the size of each resource block, leading to higher packet error rates during information transmission. Therefore, the number of resource blocks allocated to each time slot must be planned carefully. Additionally, the results demonstrate that the AoI performance of the proposed access control scheme exceeds that of traditional random access and periodic sampling schemes. In the random access scheme, devices occupy resource blocks at random, which may lead to multiple devices occupying the same block, resulting in transmission collisions that compromise the reliability of information transmission. Conversely, while devices in the periodic sampling scheme can reliably access resource blocks within each cycle, one cycle includes multiple time slots, thus necessitating a prolonged wait before information transmission.
Moreover, it is noted that at lower information transmission power levels, the periodic sampling scheme can achieve higher EE. This is attributed to the low transmission power resulting in substantially higher packet error rates across all schemes; however, the periodic sampling scheme manages to secure larger resource blocks, leading to lower packet error rates and a reduced likelihood of energy waste during signal transmission. As information transmission power increases, the advantages of the periodic sampling scheme begin to diminish, and the EE of the proposed access control scheme ultimately exceeds that of the periodic sampling scheme. Finally, the paper investigates the relationship between the ratio of average AoI and EE with the information transmission power. The analysis reveals that this ratio is a convex function that initially decreases and subsequently increases with rising transmission power, indicating the existence of an optimal power level that minimizes the ratio.  Conclusions:   This study examines the trade-off between timeliness and EE in IRS-assisted short-packet communication systems. An access control protocol is proposed to mitigate packet collisions, and both timeliness and EE are analyzed. The ratio of average AoI to EE is introduced as a metric to balance AoI and EE, with optimization of transmission power shown to minimize this ratio. Simulation results validate the theoretical analysis and demonstrate that the proposed access control protocol achieves an improved AoI-EE trade-off. Future research will focus on optimizing the deployment location of the IRS to further enhance the balance between timeliness and EE.
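The final optimization step, finding the power that minimizes the AoI-to-EE ratio by searching on its derivative, can be sketched as follows. The ratio used here is a toy stand-in (a made-up exponential packet error rate), not the paper's closed-form expression; only the bisection logic over a convex, U-shaped objective is the point.

```python
import numpy as np

# Illustrative sketch of the power optimization step described above: bisection
# on the derivative of a convex AoI-to-EE ratio. eps(p) below is a toy packet
# error rate that falls with power; constants are illustrative assumptions.
n_bits, blocklen, p0 = 160.0, 256.0, 1.0

def aoi_ee_ratio(p):
    eps = np.exp(-p / p0)                      # toy packet error rate (assumption)
    avg_aoi = 1.0 / (1.0 - eps)                # errors force retries -> older data
    ee = n_bits * (1.0 - eps) / (blocklen * p) # delivered bits per unit of energy
    return avg_aoi / ee

def minimize_by_bisection(f, lo, hi, tol=1e-8, h=1e-6):
    """Bisection on the numerical derivative; valid for a convex (unimodal) f."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        grad = (f(mid + h) - f(mid - h)) / (2.0 * h)
        lo, hi = (mid, hi) if grad < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = minimize_by_bisection(aoi_ee_ratio, 1e-3, 20.0)
print(f"toy optimal transmit power: {p_star:.4f}, ratio: {aoi_ee_ratio(p_star):.4f}")
```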
Overviews
Status and Prospect of Hardware Design on Integrated Sensing and Communication
LIN Yuewei, ZHANG Qixun, WEI Zhiqing, LI Xingwang, LIU Fan, FAN Shaoshuai, WANG Yi
2025, 47(1): 1-21.   doi: 10.11999/JEIT240012
[Abstract](1312) [FullText HTML](984) [PDF 9034KB](282)
Abstract:
  Objective:   The field of cellular mobile communication is advancing toward post-5G (5.5G, Beyond 5G, 5G Advanced) and 6th Generation (6G) standards. This evolution involves a shift from traditional sub-6 GHz operating frequency bands to higher frequency ranges, including millimeter wave (mmWave), terahertz (THz), and even visible light frequencies, which overlap with radar operating bands. Technologies such as Orthogonal Frequency Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO) are now widely used in both wireless communication and radar. Given the shared characteristics and commonalities in signal processing and operating frequency bands between these two fields, Integrated Sensing And Communication (ISAC) has emerged as a significant research focus in wireless technologies such as 5G Advanced (5G-A), 6G, Wireless Fidelity (WiFi), and radar, pointing toward a network paradigm that combines communication, sensing, and computing. The ISAC concept aims to unify wireless communication systems (including cellular and WiFi), wireless sensing technologies (such as radar), and even network Artificial Intelligence (AI) computing capabilities into a cohesive framework. By integrating these elements, the physical layer can share spectrum and Radio Frequency (RF) hardware resources, leading to several advantages: spectrum conservation, cost reduction, reduced hardware size and weight, and mutual enhancement of communication and sensing. In this article, the discussion of ISAC focuses primarily on the integration of radar sensing with communication. ISAC requires communication and sensing to share the same radio frequency band and hardware resources. The diverse characteristics of multiple frequency bands, along with the differing hardware requirements of communication and sensing, make ISAC hardware design particularly challenging. Effective hardware design for ISAC systems demands a well-considered architecture and device design for RF transceivers. Key considerations include the receiver's continuous signal sensing, link budget, and noise figure, all of which are sensitive to system size, weight, power consumption, and cost. A comprehensive review of the literature reveals that, while studies on overall architecture, waveform design, signal processing, and THz technology exist within the ISAC domain, they often center on theoretical models and software simulation. Hardware design and technical verification methodologies are addressed only sporadically across different studies. Although some literature details specific hardware designs and validation approaches, these are few compared with the rich body of theoretical and algorithmic research, indicating a need for more comprehensive and systematic reviews focused specifically on ISAC hardware design.  Methods:   This paper summarizes the hardware designs, verification technologies, and systemic hardware verification platforms pertinent to beyond-5G, 6G, and WiFi ISAC systems. Recent research on related hardware design and verification, both in China and abroad, is also reviewed. The analysis addresses the challenges in hardware design, including the conflicting requirements between communication and sensing systems, In-Band Full Duplex (IBFD) Self-Interference Cancellation (SIC), Power Amplifier (PA) efficiency, and the need for more accurate circuit performance modeling.
Results and Discussions:   First, the ISAC transceiver architectures proposed in existing research are summarized and compared. Next, an overview and analysis of current ISAC IBFD self-interference suppression strategies, low Peak-to-Average Power Ratio (PAPR) waveforms, high-performance PA designs, precise device modeling techniques, and systemic hardware verification platforms are presented; the performance metrics of ISAC IBFD architectures are compared, and representative hardware verification platforms for ISAC systems are described. Finally, the paper summarizes the findings and discusses future challenges in ISAC hardware design, including the effects of hardware impairments on sensing accuracy, ultra-large-scale MIMO systems, high-frequency IBFD, and ISAC hardware designs for Unmanned Aerial Vehicle (UAV) applications.  Conclusions:   In recent years, preliminary research has been conducted, both in China and abroad, on integrated air interface architecture, transceiver hardware design, and systematic hardware verification and demonstration for sensing-capable technologies such as 5G-A, 6G, and WiFi. However, certain limitations persist. Existing post-5G and 6G ISAC hardware verification platforms primarily operate at the link level rather than at the network system level; they focus on ISAC without integrating computing functions, which increases volume and power consumption costs and leads to a reliance on commercial instruments and Software-Defined Radio (SDR) platforms. Furthermore, IBFD self-interference suppression technology does not yet fully satisfy the demands of future ultra-large-scale MIMO systems and requires further integration with large-scale artificial intelligence model technologies. In light of these impending technological challenges and issues of openness, it is crucial for academia and industry to collaborate in addressing them and researching viable solutions. To expedite testing, optimization, and industrial implementation, practical hardware design transition solutions are required that balance advances in high-frequency support, receiver architecture, and networking architecture, facilitating the efficient realization of the ISAC vision.
Research Progress of Inverse Lithography Technology
AI Fei, SU Xiaojing, WEI Yayi
2025, 47(1): 22-34.   doi: 10.11999/JEIT240308
[Abstract](230) [FullText HTML](106) [PDF 10068KB](28)
Abstract:
  Objective  Inverse Lithography Technology (ILT) provides improved imaging effects and a larger process window compared to traditional Optical Proximity Correction (OPC). As chip manufacturing continually reduces process dimensions, ILT has become the leading lithography mask correction technology. This paper first introduces the basic principles and several common implementation methods of the inverse lithography algorithm. It then reviews current research on using inverse lithography technology to optimize lithography masks, and analyzes the advantages and existing challenges of this technology.  Methods  The general process of generating mask patterns in ILT is exemplified using the level set method. First, the target patterns, light sources, and other inputs are identified. Then, the initial mask pattern is created and a pixelated model is constructed. A photolithography model is then established to calculate the aerial image. The general photoresist threshold model is represented by a sigmoid function, which helps derive the imaging pattern on the photoresist. The key element of the ILT algorithm is the cost function, which measures the difference between the wafer image and the target image. The optimization direction is determined based on the chosen cost function. For instance, a continuous cost function allows gradients to be computed, enabling the use of gradient descent to find the optimal solution. Finally, when the cost function reaches its minimum, the output mask is generated.  Results and Discussions  This paper systematically introduces several primary methods for implementing ILT. The level set method's main concept is to convert a two-dimensional closed curve into a three-dimensional surface. Here, the closed curve is viewed as the set of intersection lines between the surface and the zero plane. During the ILT optimization process, the three-dimensional surface shape remains continuous. This continuity allows the ILT problem to be transformed into a multivariate optimization problem, solvable using gradient algorithms, machine learning, and other methods. Examples of the level set method's application can be found in both mask optimization and light source optimization. The level set mathematical framework effectively addresses two-dimensional curve evolution. When designing the ILT algorithm, a lithography model determines the optimization direction and velocity for each mask point, employing the level set for mask evolution. Intel has proposed an algorithm that utilizes a pixelated model to optimize the entire chip. However, this approach incurs significant computational costs, necessitating larger mask pixel sizes. Notably, the pixelated model is consistently used throughout the process, with a defined pixelated cost function applicable to multi-color masks. The frequency domain method for calculating the ILT curve involves transforming the mask from the spatial domain into the frequency domain, followed by lithography model calculations. This approach generates a mask with continuous pixel values, which is then gradually converted into a binary mask through multiple steps. When modifying the cost function in frequency domain optimization, all symmetric and repetitive patterns are altered uniformly, preserving symmetry. The transition of complex convolution calculations into multiplication operations within the frequency domain significantly reduces computational complexity and can be accelerated using GPU technology.
Due to the prevalent issue of high computational complexity in various lithography mask optimization algorithms, scholars have long pursued machine learning solutions for mask optimization. Early research often overlooked the physical model of photolithography technology, training neural networks solely based on optimized mask features. This oversight led to challenges such as narrow process windows. Recent studies have, however, integrated machine learning with other techniques, enabling the physical model of lithography technology to influence neural network training, resulting in improved optimization results. While the ILT-optimized mask lithography process window is relatively large, its high computational complexity limits widespread application. Therefore, combining machine learning with the ILT method represents a promising research direction.  Conclusions  Three primary techniques exist for optimizing masks using ILT: the Level Set Method, Intel Pixelated ILT Method, and Frequency Domain Calculation of Curve ILT. The Level Set Method reformulates the ILT challenge into a multivariate optimization issue, utilizing a continuous cost function. This approach allows for the application of established methods like gradient descent, which has attracted significant attention and is well-documented in the literature. In contrast, the Intel method relies entirely on pixelated models and pixelated cost functions, though relevant literature on this method is limited. Techniques in the frequency domain can leverage GPU operations to substantially enhance computational speed, and advanced algorithms also exist for converting curve masks into Manhattan masks. The integration of ILT with machine learning technologies shows considerable potential for development. Further research is necessary to effectively reduce computational complexity while ensuring optimal results. Currently, ILT technology faces challenges such as high computational demands and obstacles in full layout optimization. Collaboration among experts and scholars in integrated circuit design and related fields is essential to improve ILT computational speed and to integrate it with other technologies. We believe that as research on ILT-related technologies advances, it will play a crucial role in helping China’s chip industry overcome technological bottlenecks in the future.
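To make the pixel-based optimization loop described above concrete, the sketch below runs gradient descent on a pixelated mask with a sigmoid resist threshold. A Gaussian blur stands in for the rigorous optical (aerial image) model, and all constants (resist steepness, threshold, blur width, step size, grid size) are illustrative assumptions rather than values from the reviewed work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal pixel-based ILT-style sketch. A real flow uses a rigorous optical
# model (e.g. Hopkins/SOCS) and a level-set or frequency-domain mask
# representation; here a Gaussian blur stands in for the aerial-image model so
# the gradient-descent loop and the sigmoid resist threshold can be shown compactly.
a, th, sigma, lr = 25.0, 0.5, 2.0, 0.1   # resist steepness, threshold, blur, step (assumptions)

def aerial(mask):                         # stand-in optical model (linear in the mask)
    return gaussian_filter(mask, sigma)

def resist(image):                        # sigmoid photoresist threshold model
    return 1.0 / (1.0 + np.exp(-a * (image - th)))

# Target: a small rectangular feature on a 64x64 pixel grid.
target = np.zeros((64, 64))
target[24:40, 20:44] = 1.0
mask = target.copy()                      # initialize the mask with the target pattern

for it in range(200):
    z = resist(aerial(mask))
    err = z - target                      # cost = sum(err**2)
    # Chain rule: dcost/dmask = G^T [ 2*err * a * z * (1-z) ]; the Gaussian
    # kernel is (approximately) self-adjoint, so G^T is another gaussian_filter call.
    grad = gaussian_filter(2.0 * err * a * z * (1.0 - z), sigma)
    mask = np.clip(mask - lr * grad, 0.0, 1.0)   # keep pixel values physical

print("final image error:", float(np.sum((resist(aerial(mask)) - target) ** 2)))
```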
Wireless Communication and Internet of Things
Efficient Power Allocation Algorithm for Throughput Optimization of Multi-User Massive MIMO Systems in Finite Blocklength Regime
HU Yulin, XIAO Zhicheng, XU Hao
2025, 47(1): 35-47.   doi: 10.11999/JEIT240241
[Abstract](137) [FullText HTML](42) [PDF 3790KB](17)
Abstract:
  Objective   The 6th Generation (6G) mobile communication network aims to provide Ultra-Reliable and Low Latency Communication (URLLC) services to a large number of nodes. To support URLLC for massive users, Multiple-In-Multiple-Out (MIMO) technology has become a key enabler for improving system performance in 6G. However, URLLC systems typically operate with Finite BlockLength (FBL) codes, which pose unique challenges for resource allocation design due to their deviation from traditional methods in the infinite blocklength regime. Although prior studies have explored resource allocation strategies for MIMO-assisted URLLC, power allocation design that considers user fairness remains unresolved. This paper proposes an efficient power allocation algorithm for multi-user MIMO systems in the FBL regime, addressing the issue of user fairness.  Methods   This study investigates the MIMO-assisted URLLC downlink communication scenario. The system performance is first characterized based on FBL theory, revealing the achievable rate of MIMO downlink users, which introduces significant nonconvexity compared to the infinite blocklength regime. Given the base station's limited power resources, setting system throughput as the optimization objective fails to ensure user fairness. To address this, a Maximum Minimum Rate (MMR) optimization problem is formulated, with power allocation factors as the optimization variables, subject to a total power constraint. The formulated problem is highly nonconvex due to the nonconvex terms in the objective function. To develop an efficient power allocation design for the MMR problem, a low-complexity precoding strategy is first proposed to mitigate both inter-user and intra-user interference. This precoding strategy, based on the local Singular Value Decomposition (SVD) method, reduces complexity compared with the traditional global SVD precoding strategy and effectively suppresses interference. To address the nonconvexity introduced by the Shannon capacity and channel dispersion terms in the objective function, convex relaxation and approximation techniques are introduced. The convex relaxation involves auxiliary variables and piecewise McCormick envelopes to manage the Shannon capacity term, transforming the MMR problem into an optimization problem where only the channel dispersion term remains nonconvex. For the relaxed problem, the channel dispersion term is then approximated by an upper bound function, rigorously shown to be convex through analytical findings. Based on this convex relaxation and approximation, the original MMR problem can be approximated by a convex subproblem at a given feasible point and efficiently solved using the Successive Convex Approximation (SCA) algorithm. Convergence and optimality analyses of the proposed algorithm are provided.  Results and Discussions   The proposed power allocation design is evaluated and validated through numerical simulations. To demonstrate the superiority of the proposed design, several benchmarks are introduced for comparison. For the proposed precoding design based on local SVD, the global SVD precoding method is included to validate its advantage in terms of complexity. The Shannon-rate-oriented rate maximization methods are also introduced to verify the accuracy of the proposed convex relaxation design. Moreover, the suboptimality of the SCA-based algorithm is validated through comparisons with other benchmarks, including the exhaustive search method. 
First, the low-complexity advantage of the proposed precoding design is demonstrated (Fig. 2). The proposed precoding strategy exhibits low complexity in both small-scale and large-scale user scenarios. The performance of the suboptimal solutions obtained using the proposed SCA-based algorithm is compared with the globally optimal solutions from the exhaustive search method (Fig. 3). The results confirm the accuracy of the proposed algorithm, with a performance loss of less than 3%. The effect of the base station's antenna number on both the MMR problem and the system throughput maximization problem is illustrated (Fig. 4), further validating the effectiveness and applicability of the local SVD precoding strategy. The tightness of the convex relaxation is examined (Fig. 5), confirming that the proposed design becomes more advantageous as the user antenna number increases. The effect of blocklength on MMR performance is explored (Fig. 6), while the variation trend of MMR with respect to average transmit power is shown (Fig. 7). The influence of antenna number at both base station and user sides on MMR performance is investigated (Fig. 8), while the effect of user number on MMR performance is demonstrated (Fig. 9).  Conclusions   This paper investigates the optimization of user rate fairness in downlink MIMO-assisted URLLC scenarios. The system performance in the FBL regime is characterized, and based on this modeling, an MMR optimization problem is formulated under a sum power constraint, which is inherently nonconvex. To address this nonconvexity, a local SVD-based precoding design is proposed to reduce precoding complexity while ensuring fairness. Furthermore, convex relaxation is applied by introducing auxiliary variables and piecewise McCormick envelopes. The relaxed objective function is then approximated by an upper bound function, whose convexity is rigorously proven. Building on this relaxation and approximation, an SCA-based algorithm is developed to effectively solve the MMR problem. The proposed design is validated through numerical simulations, where its validity and parameter influence on system performance are discussed. The approach can be extended to other URLLC scenarios, such as multi-cell MIMO, and provides valuable insights for solving nonconvex optimization problems in related fields.
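For reference, the finite-blocklength rate underlying the analysis above is the normal approximation R ≈ C(γ) − √(V(γ)/n)·Q⁻¹(ε), where C is the Shannon capacity and V the channel dispersion; the dispersion term is the source of the nonconvexity discussed in the paper. The snippet below evaluates the single-stream AWGN form under illustrative parameters (no MIMO precoding), as a hedged sketch rather than the paper's exact system model.

```python
import numpy as np
from scipy.stats import norm

# Normal approximation of the achievable rate in the finite blocklength regime:
# R ~= log2(1+snr) - sqrt(V/n) * Qinv(eps), with dispersion V for an AWGN channel.
def fbl_rate(snr, blocklength, error_prob):
    C = np.log2(1.0 + snr)
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * (np.log2(np.e) ** 2)
    return C - np.sqrt(V / blocklength) * norm.isf(error_prob)   # Qinv = isf

snr = 10 ** (10 / 10)                       # 10 dB, illustrative
for n in (100, 200, 500, 10_000):
    r = fbl_rate(snr, n, 1e-5)
    print(f"n = {n:6d}: FBL rate {r:.3f} bit/s/Hz (Shannon limit {np.log2(1 + snr):.3f})")
```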
Optical Intelligent Reflecting Surfaces-Assisted Distributed OMC for UAV Clusters
WANG Haibo, ZHANG Zaichen, GE Yingmeng, ZENG Han
2025, 47(1): 48-56.   doi: 10.11999/JEIT240302
[Abstract](295) [FullText HTML](118) [PDF 2785KB](51)
Abstract:
  Objective   The development of Unmanned Aerial Vehicle (UAV) technology has led to new applications, such as UAV-based high-altitude base stations and UAV three-dimensional mapping. These applications demand higher communication rates and wider bandwidths. Optical Mobile Communications (OMC), a wireless communication method with high energy efficiency, wide bandwidth, and high speed, has become a crucial direction for UAV communication. However, traditional UAV-OMC systems primarily focus on point-to-point transmission. As the number of UAVs increases, these systems struggle to meet the real-time, high-speed communication requirements of multi-UAV networks. Therefore, a technology is needed that can preserve the energy efficiency and speed of the UAV-OMC system while supporting multi-UAV communication. This study proposes a solution where Optical Intelligent Reflecting Surfaces (OIRS) are deployed on specific UAVs to spread the optical signal from a single UAV node to multiple UAV nodes, thus maintaining the high energy efficiency and speed of the UAV-OMC system, while enabling stable and energy-efficient communication for distributed UAV clusters.  Methods   OIRS is a new type of programmable passive optical device capable of deflecting, splitting, and reconstructing light beams. Due to its small size and light weight, OIRS is suitable for installation on drones. Based on this technology, this study proposes a distributed OMC system for UAV clusters. OIRS is installed on select UAVs and is used to diffuse optical signals from a single UAV to multiple UAVs. Since each OIRS has limited beam splitting capabilities and coverage, the UAV cluster is divided into several regions, with each OIRS handling communication and power allocation for UAVs in its designated region. The OIRS not only forwards the optical signal but also performs beam alignment and focusing to ensure that each UAV receives a focused and aligned beam. A mathematical model of the OIRS-assisted distributed OMC system for UAV clusters is developed, considering factors such as OIRS beam control, UAV relative motion, UAV jitter, and strong turbulence at high altitudes. The model’s validity is assessed by analyzing the communication performance of the backend UAV nodes in the cluster. Additionally, closed-form expressions for the average Bit Error Rate (BER) and asymptotic outage probability are derived, and key parameters influencing system performance are discussed.  Results and Discussions   (1) Using the derived OIRS unit control algorithm, the OIRS can be controlled to align beams with the slave UAVs. The diffuse beam initially reaching the master UAV is refocused onto the slave UAVs. For multi-UAV beam splitting, regional control of the OIRS selects specific mirror units to direct beams to particular slave UAVs, thus enabling beam splitting. This method effectively exploits the adjustable nature of OIRS and dynamically adjusts the direction of each mirror unit to serve multiple targets, thereby extending the communication coverage (Fig. 2). This regional control strategy allows the system to adapt to environmental changes and the dynamic configuration of the UAV group, optimizing communication efficiency and network quality. The technology maximizes power allocation and utilization while ensuring sufficient signal strength for each slave UAV. (2) The performance degradation caused by the inherent parameters of OIRS, such as jitter and turbulence coefficient, in the OMC channel is substantial. 
However, OIRS beam alignment has an even more significant effect on system performance: without beam alignment, system performance degrades markedly, with a larger impact on communication performance than severe weather conditions (Fig. 3). (3) Simulation results show that system performance deteriorates significantly with each additional slave UAV. It is therefore essential to determine the optimal number of slave UAVs that each OIRS should serve; under the simulation conditions, it is most effective for each master UAV to manage three slave UAVs, beyond which maintaining stable communication becomes increasingly difficult.   Conclusions   This paper proposes an OIRS-assisted distributed OMC system for UAV clusters, utilizing the beam reflection and deflection capabilities of OIRS to extend the OMC link from a single UAV to multiple UAVs. The refocusing capability of OIRS ensures that UAVs at the rear of the cluster can still receive a concentrated beam. Performance analysis and simulations show that slave UAVs maintain strong communication performance when using OIRS for signal transmission, and that the beam alignment function of OIRS further enhances system performance. However, due to power constraints, adding slave UAVs results in significant performance degradation. Future work will focus on optimizing the OIRS-assisted UAV OMC network architecture.
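Closed-form average BER and outage expressions of the kind derived above are typically cross-checked by Monte Carlo simulation; the sketch below averages an instantaneous OOK bit error rate over Gamma-Gamma turbulence fading. The turbulence parameters, the OOK BER convention, and the omission of OIRS gain, pointing jitter, and UAV motion are all simplifying assumptions, so this only illustrates the averaging procedure, not the paper's channel model.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

# Monte Carlo sketch: average BER of an intensity-modulated OOK optical link whose
# normalized irradiance follows Gamma-Gamma turbulence (product of two unit-mean
# Gamma factors). All parameters are illustrative assumptions.
def gamma_gamma_samples(alpha, beta, size):
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    return x * y

def avg_ber_ook(avg_snr_db, alpha=4.0, beta=2.0, n=1_000_000):
    snr = 10 ** (avg_snr_db / 10)
    h = gamma_gamma_samples(alpha, beta, n)            # irradiance, E[h] = 1
    # One common IM/DD OOK convention: instantaneous BER = Q(h*sqrt(snr)/2),
    # with Q(x) = 0.5*erfc(x/sqrt(2)).
    return np.mean(0.5 * erfc(h * np.sqrt(snr) / (2.0 * np.sqrt(2.0))))

for s in (10, 20, 30):
    print(f"average SNR {s} dB -> average BER ~ {avg_ber_ook(s):.3e}")
```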
Spatial Deployment and Beamforming Design for Multi-Unmanned Aerial Vehicle Integrated Sensing and Communication Based on Transmission Fairness
SHI Tongzhi, LI Bo, YANG Hongjuan, ZHANG Tong, WANG Gang
2025, 47(1): 57-65.   doi: 10.11999/JEIT240590
[Abstract](444) [FullText HTML](97) [PDF 2348KB](73)
Abstract:
  Objective:   As economic and social development rapidly progresses, the demand for applications across various sectors is increasing. The use of higher frequency bands for future 6G communication is advancing to facilitate enhanced perception. Additionally, due to the inherent similarities in signal processing and hardware configurations between sensing and communication, Integrated Sensing And Communication (ISAC) is becoming a vital area of research for future technological advancements. However, during sudden emergencies, communication coverage and target detection in rural and remote areas with limited infrastructure face considerable challenges. The integration of communication and sensing in Unmanned Aerial Vehicles (UAVs) presents a unique opportunity for equipment flexibility and substantial research potential. Despite this, current academic research primarily focuses on single UAV systems, often prioritizing communication rates while neglecting fairness in multi-user environments. Furthermore, existing literature on multiple UAV systems has not sufficiently addressed the variations in user or target numbers and their random distributions, which impedes the system’s capability to adaptively allocate resources based on actual demands and improve overall efficiency. Therefore, exploring the application of integrated communication and sensing technologies within multi-UAV systems to provide essential services to ground-based random terminals holds significant practical importance.  Methods:   This paper addresses the scenario in which ground users and sensing targets are randomly distributed within clusters. The primary focus is on the spatial deployment of UAVs and their beamforming techniques tailored for ground-based equipment. The system seeks to enhance the lower bound of user transmission achievable rates by optimizing the communication and sensing beamforming variables of the UAVs, while also adhering to essential communication and sensing requirements. Key constraints considered include the aerial-ground coverage correlation strategy, UAV transmission power, collision avoidance distances, and the spatial deployment locations. To effectively address the non-convex optimization problem, the study divides it into two sub-problems: the joint optimization of aerial-ground correlation and planar position deployment, and the joint optimization of communication and sensing beamforming. The first sub-problem is solved using an improved Mean Shift algorithm (MS), which focuses on optimizing aerial-ground correlation variables alongside UAV planar coordinate variables (Algorithm 1). The second sub-problem employs a quadratic transformation technique to optimize communication beamforming variables (Algorithm 2), further utilizing a successive convex approximation strategy to tackle the optimization challenges associated with sensing beamforming (Algorithm 3). Ultimately, a Block Coordinate Descent algorithm is implemented to alternately optimize the two sets of variables (Algorithm 4), leading to a relatively optimal solution for the system.  Results and Discussions:   Algorithm 1 focuses on establishing aerial-ground correlations and determining the planar deployment of UAVs. During the clustering phase, users and targets are treated as equivalent sample entities, with ground sample points generated through a Poisson clustering random process. These points are subsequently categorized into nine optimal clusters using an enhanced mean shift algorithm. 
Samples within the same Voronoi region are assigned to a single UAV, which is positioned at the corresponding mean-shift center for optimal service coverage. Algorithm 4 addresses the beamforming design for UAVs serving ground users and targets. Notably, the deployment of the multiple UAVs converges within seven iterations in each region. The dynamic interplay between communication and sensing resources is highlighted by varying the number of sensing targets and the UAV deployment altitude. The fairness-first approach proposed in this paper, in contrast to a rate-centric strategy, maximizes the worst individual transmission quality while maintaining balanced system performance. Furthermore, the overall scheme, referred to as MS+BCD, is compared with two benchmark algorithms: Block Coordinate Descent beamforming optimization with Central-point Sensing Deployment (CSD+BCD) and Random Sensing Beamforming with Mean Shift deployment (MS+RSB). The proposed optimization strategy demonstrates clear advantages in system effectiveness, irrespective of changes in beam pattern gain or increases in the number of UAV antennas.  Conclusions:   This paper addresses the multi-UAV coverage challenge within the framework of Integrated Sensing and Communication. With a focus on equitable user transmission rates, the study incorporates constraints related to communication and sensing power, beam pattern gain, and aerial-ground correlation. By employing an enhanced Mean Shift algorithm together with the Block Coordinate Descent method, it optimizes the aerial-ground correlation strategy, the UAV planar deployment, and the communication and sensing beamforming. The objective is to maximize the minimum transmission rate while ensuring high-quality user transmission and fair resource allocation, thereby providing a novel approach for multi-UAV systems enhanced by integrated sensing and communication. Future research will extend these findings to altitude optimization and to equitable resource distribution across different UAV coverage zones.
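The clustering step of the deployment procedure can be illustrated with standard mean-shift clustering, as in the sketch below: ground users and targets are drawn from a Poisson cluster process, grouped by mean shift, and one UAV is placed above each cluster center. The scikit-learn estimator, the bandwidth heuristic, and all scene dimensions are assumptions for illustration; the paper uses an improved Mean Shift algorithm (Algorithm 1) rather than the library routine.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(2)

# Sketch of the air-ground association step: ground nodes from a Poisson cluster
# process are grouped by mean shift and one UAV hovers above each cluster center.
n_parents = 9
parents = rng.uniform(0, 1000, size=(n_parents, 2))              # cluster centres, m (assumption)
points = np.vstack([p + rng.normal(0, 40, size=(rng.poisson(12) + 1, 2))
                    for p in parents])                            # users + sensing targets

bandwidth = estimate_bandwidth(points, quantile=0.1)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(points)

uav_xy = ms.cluster_centers_                                      # planar UAV positions
labels = ms.labels_                                               # air-ground association
print(f"{len(points)} ground nodes grouped into {len(uav_xy)} UAV service regions")
for k, c in enumerate(uav_xy):
    print(f"UAV {k}: hover above ({c[0]:7.1f}, {c[1]:7.1f}) m, "
          f"serving {np.sum(labels == k)} nodes")
```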
Resource Allocation Algorithm for Intelligent Reflecting Surface-assisted Secure Integrated Sensing And Communications System
ZHU Zhengyu, YANG Chenyi, LI Zheng, HAO Wanming, YANG Jing, SUN Gangcan
2025, 47(1): 66-74.   doi: 10.11999/JEIT240083
[Abstract](633) [FullText HTML](256) [PDF 2468KB](105)
Abstract:
  Objective   In the 6G era, the rapid increase in wireless devices coupled with a scarcity of spectrum resources necessitates the enhancement of system capacity, data rates, and latency. To meet these demands, Integrated Sensing And Communications (ISAC) technology has been proposed. Unlike traditional methods where communication and radar functionalities operate separately, ISAC merges wireless communication with radar sensing, utilizing a shared infrastructure and spectrum. This innovative approach maximizes the efficiency of compact wireless hardware and improves spectral efficiency. However, the integration of communication and radar signals into transmitted beams introduces vulnerabilities, as these signals can be intercepted by potential eavesdroppers, increasing the risk of data leakage. As a result, Physical Layer Security (PLS) becomes essential for ISAC systems. PLS capitalizes on the randomness and diversity inherent in wireless channels to create transmission schemes that mitigate eavesdropping risks and bolster system security. Nevertheless, PLS’s effectiveness is contingent on the quality of wireless channels, and the inherently fluctuating nature of these channels leads to inconsistent security performance, posing significant challenges for system adaptability and optimization. Moreover, Intelligent Reflecting Surfaces (IRS) emerge as a pivotal technology in 6G networks, offering the capability to control wireless propagation and the environment by adjusting reflection phase shifts. This advancement facilitates the establishment of reliable communication and sensing links, thereby enhancing the ISAC system’s sensing coverage, accuracy, wireless communication performance, and overall security. Consequently, IRS presents a vital solution for addressing PLS challenges in ISAC systems. In light of this, the paper proposes a design study focused on IRS-assisted ISAC systems incorporating cooperative jamming to effectively tackle security concerns.  Methods   This paper examines the impact of eavesdroppers on the security performance of ISAC systems and proposes the secure IRS-ISAC system model. The proposed model features a dual-functional base station equipped with antennas, an IRS with reflective elements, single-antenna legitimate users, and an eavesdropping device. To enhance system security, a jammer equipped with antennas is integrated into the system, transmitting interference signals to mitigate the effects of eavesdroppers. Given the constraints on maximum transmit power for both the base station and the jammer, as well as the IRS reflection phase shifts and radar Signal-to-Interference-plus-Noise Ratio (SINR), a joint optimization problem is formulated to maximize the system’s secrecy rate. This optimization involves adjusting base station transmission beamforming, jammer precoding, and IRS phase shifts. The problem, characterized by multiple coupled variables, exhibits non-convexity, complicating direct solutions. To address this non-convex challenge, Alternating Optimization (AO) methods are first employed to decompose the original problem into two sub-problems. Semi-Definite Relaxation (SDR) algorithms, along with auxiliary variable introductions, are then applied to transform the non-convex optimization issue into a convex form, enabling a definitive solution. Finally, a resource allocation algorithm based on alternating iterations is proposed to ensure secure operational efficiency.  
Results and Discussions   The simulation results substantiate the security and efficacy of the proposed algorithm, as well as the superiority of the IRS-ISAC system. Specifically, the system secrecy rate in relation to the number of iterations is illustrated, demonstrating the convergence of the proposed algorithm across varying numbers of base station transmit antennas. The findings indicate that the algorithm reaches the maximum system secrecy rate and stabilizes at the fifth iteration, which shows its excellent convergence characteristics. Furthermore, an increase in the number of transmit antennas correlates with a notable enhancement in the system secrecy rate. This improvement can be attributed to the additional spatial degrees of freedom afforded by the base station’s antennas, which enable the projection of legitimate information into the null space of all eavesdropper channels—effectively reducing the information received by eavesdroppers and boosting the overall system secrecy rate. The system secrecy rate is presented as a function of the transmit power of the base station. The results indicate that an increase in the base station’s maximum transmit power corresponds with an increase in the system secrecy rate. This enhancement occurs because higher transmit power effectively mitigates path loss, thereby improving the quality of the signal. The IRS-assisted ISAC system significantly outperforms scenarios without IRS, thanks to the introduction of additional non-line-of-sight links. Additionally, the proposed scheme demonstrates superior performance compared to the random scheme in the joint design of transmit beamforming and reflection coefficients, validating the effectiveness of the algorithm. The system secrecy rate is illustrated in relation to the number of IRS reflection elements. The results reveal that the system secrecy rates for both the proposed and random methods increase as the number of IRS elements rises. This can be attributed to the incorporation of additional reflective elements, which facilitate enhanced passive beamforming gain and expand the spatial freedom available for optimizing the propagation environment, thereby strengthening anti-eavesdropping capabilities. In contrast, the system secrecy rate for the scheme without IRS remains constant. Notably, as the number of IRS elements increases, the gap in secrecy rates between the proposed scheme and the random scheme expands, highlighting the significant advantage of optimizing the IRS phase shift in improving system performance. The radar SINR is depicted concerning the transmit power of the base station. The results indicate that as the maximum transmit power of the base station increases, the SINR of the radar likewise improves. The proposed scheme outperforms the two benchmark schemes in this respect, attributable to the optimization of the IRS phase shift matrix, which not only enhances system security but also effectively conserves energy resources within the communication system. This enables a more efficient allocation of resources to improve sensing performance. By incorporating IRS into the ISAC system, performance in the sensing direction is markedly enhanced while simultaneously bolstering system security.   Conclusions   This paper addresses the potential for eavesdropping by proposing a secure resource allocation algorithm for ISAC systems with the support of IRS. 
A secrecy rate maximization problem is formulated, subject to constraints on the transmit power of the base station and jammer, the IRS reflection phase shifts, and the radar SINR. This formulation involves the joint design of transmit beamforming, jammer precoding, and IRS reflection beamforming. The interdependencies among these variables create significant challenges for direct solution methods. To overcome these complexities, the AO algorithm is employed to decompose the non-convex problem into two sub-problems. SDR techniques are then applied to transform these sub-problems into convex forms, enabling their resolution with convex optimization tools. Our simulation results indicate that the proposed method considerably outperforms two benchmark schemes, confirming the algorithm’s effectiveness. These findings highlight the considerable potential of IRS in bolstering the security performance of ISAC systems.
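The alternating structure of the algorithm, fixing one block of variables while optimizing the other, can be illustrated with a deliberately simplified toy: a single user, a single eavesdropper, no jammer, and no radar SINR constraint. The beamformer update below is a null-steering heuristic and the IRS update a per-element grid search, not the SDR-based subproblem solutions described above; all dimensions and powers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy alternating optimization for an IRS-aided secrecy link (illustration only).
M, N, P, noise = 4, 32, 1.0, 1e-3                 # BS antennas, IRS elements (assumptions)
H_bi = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_iu = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # IRS -> user
h_ie = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # IRS -> eavesdropper

def eff(h_ix, theta):
    """Effective BS -> IRS -> receiver channel for reflection phases theta."""
    return (h_ix.conj() * np.exp(1j * theta)) @ H_bi

def secrecy_rate(theta, w):
    snr_u = P * np.abs(eff(h_iu, theta) @ w) ** 2 / noise
    snr_e = P * np.abs(eff(h_ie, theta) @ w) ** 2 / noise
    return max(0.0, np.log2(1.0 + snr_u) - np.log2(1.0 + snr_e))

theta = rng.uniform(0.0, 2.0 * np.pi, N)
w = np.ones(M, dtype=complex) / np.sqrt(M)
phase_grid = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

for it in range(5):
    # Beamforming step: MRT toward the user, projected into the eavesdropper's null space.
    hu, he = eff(h_iu, theta), eff(h_ie, theta)
    proj = np.eye(M) - np.outer(he.conj(), he) / (np.linalg.norm(he) ** 2)
    w = proj @ hu.conj()
    w /= np.linalg.norm(w)
    # IRS step: one-dimensional search over each element's phase with the others fixed.
    for n in range(N):
        rates = []
        for phi in phase_grid:
            theta[n] = phi
            rates.append(secrecy_rate(theta, w))
        theta[n] = phase_grid[int(np.argmax(rates))]
    print(f"iteration {it + 1}: secrecy rate = {secrecy_rate(theta, w):.3f} bit/s/Hz")
```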
Channel Estimation for Intelligent Reflecting Surface Assisted Ambient Backscatter Communication Systems
XU Yongjun, QIU Youjing, ZHANG Haibo
2025, 47(1): 75-83.   doi: 10.11999/JEIT240395
[Abstract](183) [FullText HTML](36) [PDF 1504KB](29)
Abstract:
  Objective   Ambient Backscatter Communication (AmBC) is an emerging, low-power, low-cost communication technology that utilizes ambient Radio Frequency (RF) signals for passive information transmission. It has demonstrated significant potential for various wireless applications. However, in AmBC systems, the reflected signals are often severely weakened due to double fading effects and signal obstruction from environmental obstacles. This results in a substantial reduction in signal strength, limiting both communication range and overall system performance. To address these challenges, Intelligent Reflecting Surface (IRS) technology has been integrated into AmBC systems. IRS can enhance reflection link gain by precisely controlling reflected signals, thereby improving system performance. However, the passive nature of both the IRS and tags makes accurate channel estimation a critical challenge. This study proposes an efficient channel estimation algorithm for IRS-assisted AmBC systems, aiming to provide theoretical support for optimizing system performance and explore the feasibility of achieving high-precision channel estimation in complex environments—key to the practical implementation of this technology.  Methods   This study develops a general IRS-assisted AmBC system model, where the system channel is divided into multiple subchannels, each corresponding to a specific IRS reflection element. To minimize the Mean Squared Error (MSE) in channel estimation, the Least Squares (LS) method is used as the estimation criterion. The joint optimization problem for channel estimation is explored by integrating various IRS reflection modes, including ON/OFF, Discrete Fourier Transform (DFT), and Hadamard modes. The communication channel is assumed to follow a Rayleigh fading distribution, with noise modeled as zero-mean Gaussian. Pilot signals are modulated using Quadrature Phase Shift Keying (QPSK). To thoroughly evaluate the performance of channel estimation, 1000 Monte Carlo simulations are conducted, with MSE and the Cramer-Rao Lower Bound (CRLB) serving as performance metrics. Simulation experiments conducted on the Matlab platform provide a comprehensive comparison and analysis of the performance of different algorithms, ultimately validating the effectiveness and accuracy of the proposed algorithm.  Results and Discussions   The simulation results demonstrate that the IRS-assisted channel estimation algorithm significantly improves performance. Under varying Signal-to-Noise Ratio (SNR) conditions, the MSE of methods based on DFT and Hadamard matrices consistently outperforms the ON/OFF method, aligning with the CRLB, thereby confirming the optimal performance of the proposed algorithms (Fig. 2, Fig. 3). Additionally, the MSE for direct and cascaded channels is identical when using the DFT and Hadamard methods, while the cascaded channel MSE for the ON/OFF method is twice that of the direct channel, highlighting the superior performance of the DFT and Hadamard approaches. As the number of IRS reflection elements increases, the MSE for the DFT and Hadamard methods decreases significantly, whereas the MSE for the ON/OFF method remains unchanged (Fig. 4, Fig. 5). This illustrates the ability of the DFT and Hadamard methods to effectively exploit the scalability of IRS, demonstrating better adaptability and estimation performance in large-scale IRS systems. 
Furthermore, increasing the number of pilot signals leads to a further reduction in MSE for the DFT and Hadamard methods, as more pilot signals provide higher-quality observations, thereby enhancing channel estimation accuracy (Fig. 6, Fig. 7). Although additional pilot signals consume more resources, their substantial impact on reducing MSE highlights their importance in improving estimation precision. Moreover, under high-SNR conditions, the MSE for all algorithms is lower than that under low-SNR conditions, with the DFT and Hadamard methods showing more pronounced reductions (Fig. 4, Fig. 5). This indicates that the proposed methods enable more efficient channel estimation under better signal quality, further enhancing system performance. In conclusion, the channel estimation algorithms based on DFT and Hadamard matrices offer significant advantages in large-scale IRS systems and high-SNR scenarios, providing robust support for optimizing low-power, low-cost communication systems.  Conclusions   This paper presents a low-complexity channel estimation algorithm for IRS-assisted AmBC systems based on the LS criterion. The channel is decomposed into multiple subchannels, and the optimization of IRS phase shifts is designed to significantly enhance both channel estimation and transmission performance. Simulation results demonstrate that the proposed algorithm, utilizing the DFT and Hadamard matrices, achieves excellent performance across various SNR and system scale conditions. Specifically, the algorithm effectively reduces the MSE of channel estimation, exhibits higher estimation accuracy under high-SNR conditions, and shows low computational complexity and strong robustness in large-scale IRS systems. These results provide valuable insights for the theoretical modeling and practical application of IRS-assisted AmBC systems. The findings are particularly relevant for the development of low-power, large-scale communication systems, offering guidance on the design and optimization of IRS-assisted AmBC systems. Additionally, this work lays a solid theoretical foundation for the advancement of next-generation Internet of Things applications, with potential implications for future research on IRS technology and their integration with AmBC systems.
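As a rough illustration of the estimation principle described above, the following Python sketch (a toy single-antenna model with illustrative parameters, not the paper's exact system) estimates the direct and cascaded reflection channels by the LS criterion under a DFT reflection pattern and compares the resulting MSE with an ON/OFF pattern; under these assumptions the DFT pattern gives a markedly lower MSE, consistent with the trend reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-antenna model: N + 1 unknowns (direct channel plus one cascaded channel per
# IRS element) estimated from T pilot slots by least squares. All values are illustrative.
N = 16                  # IRS reflecting elements
T = N + 1               # pilot slots
snr_db = 10
noise_var = 10 ** (-snr_db / 10)

h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # direct channel
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # cascaded channels
theta_true = np.concatenate(([h_d], g))

pilots = np.exp(1j * np.pi / 4) * rng.choice(np.array([1, 1j, -1, -1j]), size=T)  # QPSK pilots

def observation_matrix(phase_pattern):
    """One row per pilot slot: pilot * [1, phi_1, ..., phi_N]."""
    A = np.hstack([np.ones((T, 1)), phase_pattern])
    return pilots[:, None] * A

def ls_mse(phase_pattern, trials=1000):
    """Average LS estimation error over Monte Carlo noise realisations."""
    A = observation_matrix(phase_pattern)
    A_pinv = np.linalg.pinv(A)
    err = 0.0
    for _ in range(trials):
        noise = np.sqrt(noise_var / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
        theta_hat = A_pinv @ (A @ theta_true + noise)     # least-squares estimate
        err += np.mean(np.abs(theta_hat - theta_true) ** 2)
    return err / trials

# DFT pattern: together with the all-ones direct-link column it forms an orthogonal matrix.
dft_pattern = np.exp(-2j * np.pi * np.outer(np.arange(T), np.arange(1, N + 1)) / T)
# ON/OFF pattern: all elements off in the first slot, then one element on per slot.
onoff_pattern = np.zeros((T, N))
onoff_pattern[1:, :] = np.eye(N)

print("LS MSE, DFT pattern   :", ls_mse(dft_pattern))
print("LS MSE, ON/OFF pattern:", ls_mse(onoff_pattern))
```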
A Joint Parameter Estimation Method Based on 3D Matrix Pencil for Integration of Sensing and Communication
YANG Xiaolong, ZHANG Bingrui, ZHOU Mu, ZHANG Wen
2025, 47(1): 84-92.   doi: 10.11999/JEIT240003
[Abstract](283) [FullText HTML](90) [PDF 2124KB](35)
Abstract:
  Objective   Integration of Sensing and Communication (ISAC) is an emerging technology that leverages the sharing of software and hardware resources, as well as information exchange, to integrate wireless sensing into Wi-Fi platforms, providing a cost-effective solution for indoor positioning. Existing Wi-Fi-based Channel State Information (CSI) positioning technologies are advantageous in resolving multipath signals in indoor environments, offering finer sensing granularity and higher detection accuracy. These features make them suitable for high-precision target detection and positioning in complex indoor environments, enabling the estimation of parameters such as Angle of Arrival (AoA), Time of Flight (ToF), and Doppler Frequency Shift (DFS). However, CSI-based indoor positioning faces significant challenges. On one hand, the complexity of indoor environments, including reflections from walls and pedestrian movement, reduces the Signal-to-Noise Ratio (SNR), leading to difficulties in effectively estimating signal parameters using traditional algorithms. On the other hand, indoor positioning requires high real-time performance, but most algorithms suffer from high computational complexity, resulting in low efficiency and poor real-time performance. To address these issues, this paper proposes a positioning method based on the three-dimensional (3D) Matrix Pencil (MP) algorithm, which improves the real-time performance and accuracy of existing indoor positioning parameter estimation techniques.  Methods   To address the real-time and accuracy issues in indoor positioning parameter estimation, a joint parameter estimation algorithm based on the 3D MP algorithm is proposed. First, the CSI data is analyzed, and Doppler parameters are integrated into the two-dimensional (2D) MP algorithm to construct a 3D matrix that includes AoA, ToF, and DFS. The 3D matrix is then smoothened, and the 3D MP algorithm is applied for parameter estimation. Clustering methods are used to obtain the AoA of the direct path, and a weighted least squares method is applied for final target position estimation, while also achieving AoA, ToF, and DFS estimation. This approach effectively improves the resolution and accuracy of parameter estimation. A two-angle positioning method is used for localization to validate the proposed algorithm. By using multiple CSI packets to construct the 3D Hankel Matrix (HM), parameter estimation accuracy is improved compared to using a single CSI packet. Compared to the 3D Multiple Signal Classification (MUSIC) algorithm, the proposed method reduces computational complexity. Incorporating the DFS parameter enhances path resolution, leading to improved AoA parameter estimation accuracy compared to the 2D MP algorithm.  Results and Discussions   Experiments are conducted in two different scenarios (Fig. 1), with the detailed experimental parameters provided in the table. The two scenarios tested 21 and 13 target positions, respectively. The receiver and transmitter were positioned at the same height, and their geometric relationship was confirmed using a laser rangefinder to determine positioning and direction on the ground. The results indicate that in the conference room scenario, the AoA accuracy and positioning accuracy of the 3D MP algorithm are comparable to those of the MUSIC algorithm, with the 3D MP algorithm showing a significant improvement over the 2D MP algorithm. 
This is because the 3D MP algorithm introduces an additional dimension to parameter estimation, improving signal resolution and making it easier to identify the direct path of the target (Fig. 3). In the classroom scenario, cumulative distribution functions are used to represent overall AoA and positioning errors. At an estimation error ratio of 0.667, the positioning accuracies of the 2D MP, MUSIC, and 3D MP algorithms are 0.73 m, 0.44 m, and 0.48 m, respectively. To assess real-time performance, each algorithm is run ten times under identical conditions on the same computer, and the average runtime (Fig. 5) is recorded. The 2D MP algorithm has the shortest runtime, while the MUSIC algorithm has the longest; the runtime of the 3D MP algorithm is approximately 90% shorter than that of the MUSIC algorithm.  Conclusions   This paper presents a localization method based on a 3D MP parameter estimation algorithm. A data model for the receiver is first established, and the 3D MP algorithm is introduced. Using a clustering method, the AoA of the direct path is estimated, and multiple Access Points (APs) are combined for target localization. Experimental results show that the proposed algorithm achieves an average localization accuracy of 0.56 m at an estimation error ratio of 0.667, while reducing computational complexity by 90% compared to the MUSIC algorithm, which makes it highly practical for real-time localization. Although the 3D MP algorithm introduces some computational overhead compared to the 2D MP algorithm, it improves localization accuracy. Parameter estimation and localization experiments in two typical environments confirm that the proposed algorithm outperforms current systems, extending the application of Wi-Fi sensing technology within ISAC.
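As a loose one-dimensional illustration of the pencil machinery (not the paper's 3D construction, and using the signal-subspace form of the pencil rather than the full Hankel pencil), the sketch below estimates the frequencies of a few complex exponentials from a Hankel data matrix; all parameter values are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the normalised frequencies of a sum of complex exponentials (one CSI
# dimension, conceptually) via a Hankel matrix and a shift-invariant signal subspace.
true_freqs = np.array([0.12, 0.31, 0.47])         # assumed normalised frequencies
n_samples, snr_db = 64, 20
n = np.arange(n_samples)
signal = sum(np.exp(2j * np.pi * f * n) for f in true_freqs)
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
x = signal + noise_std * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))

L = n_samples // 2                                 # pencil parameter
H = np.array([x[i:i + n_samples - L] for i in range(L + 1)]).T   # Hankel data matrix

p = len(true_freqs)                                # model order, assumed known here
U, _, _ = np.linalg.svd(H, full_matrices=False)
Us = U[:, :p]                                      # signal subspace
U1, U2 = Us[:-1, :], Us[1:, :]                     # shifted sub-matrices forming the pencil

# The eigenvalues of pinv(U1) @ U2 approximate exp(j*2*pi*f) for each component.
eigvals = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)
est = np.sort(np.mod(np.angle(eigvals) / (2 * np.pi), 1.0))
print("true frequencies     :", true_freqs)
print("estimated frequencies:", np.round(est, 4))
```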
Research on Beam Optimization Design Technology for Capacity Enhancement of Satellite Internet of Things
LIU Ziwei, XU Yuanyuan, BIAN Dongming, ZHANG Gengxin
2025, 47(1): 93-101.   doi: 10.11999/JEIT231120
[Abstract](90) [FullText HTML](26) [PDF 2637KB](18)
Abstract:
  Objective   Under the hundreds of kilometers of transmission distance in low-orbit satellite communication, both power consumption and latency are significantly higher than that in ground-based networks. Additionally, many data collection services exhibit short burst characteristics. Conventional resource reservation-based access methods have extremely low resource utilization, whereas dynamic application-based access methods incur large signaling overhead and fail to meet the latency and power consumption requirements for satellite Internet of Things (IoT). Random access technology, which involves competition for resources, can better accommodate the short burst data packet services typical of satellite IoT. However, as the load increases, data packet collisions at satellite access points lead to a sharp decline in actual throughput under medium and high loads. In terrestrial wireless networks, technologies such as near-far effect management and power control are commonly employed to create differences in packet reception power. However, due to the large number of terminals covered and the long distance between the satellite and the Earth, these techniques are unsuitable for satellite IoT, preventing the establishment of an adequate carrier-to-noise ratio. Developing separation conditions suitable for satellite IoT access scenarios is a key research focus. Considering the future development of spaceborne digital phased array technology, this paper leverages the data-driven beamforming capability of the on-board phased array and introduces the concept of spatial auxiliary channels. By employing a sum-and-difference beam design method, it expands the dimensions for separating collision signals beyond the time, frequency, and energy domains. This approach imposes no additional processing burdens on the terminal and aligns with the low power consumption and minimal control design principles for satellite IoT.  Methods   To address packet collision issues in hotspot areas of satellite IoT services, this study extends the conventional time-slot ALOHA access framework by introducing an auxiliary receiving beam alongside the random access of conventional receiving beams. The main and auxiliary beams simultaneously receive signals from the same terminal. By optimizing the main lobe gain of the auxiliary beam, a difference in the Signal-to-Noise Ratio (SNR) between the signals received by the main and auxiliary beams is established. This difference is then separated using Successive Interference Cancellation (SIC) technology, leveraging the correlation between the received signals of the auxiliary and main beams to support the separation of collision signals and ensure reliable reception of satellite IoT signals.  Results and Discussions   Firstly, the system throughput of the proposed scheme is simulated (Fig. 4). The theoretical throughput derived in the previous section is consistent with the simulation results. When the normalized load reached 1.8392, the maximum system throughput is 0.81085 packet/slot. Compared with existing methods such as SA, CRDSA, and IRSA, the proposed scheme demonstrated improved system throughput and packet loss rate performance in both peak and high-load regions, with a peak throughput increase of approximately 120%. Secondly, the influence of amplitude, phase, and angle measurement errors on system performance is evaluated. The angle measurement error had a greater effect on throughput performance than amplitude and phase errors. 
Amplitude and phase errors have a smaller effect on the main lobe gain but a larger effect on the sidelobe gain (Tables 3–5); consequently, angle measurement errors dominate the achievable throughput improvement. Regarding beamwidth, as the main lobe widens, the roll-off of the corresponding difference beam with 10 array elements is gentler than that with 32 array elements, but the peak gain of the auxiliary beam decreases, leading to reduced system throughput for configurations with larger main lobe widths.  Conclusions   This paper presents an auxiliary beam design strategy for power-domain signal separation in satellite IoT scenarios, aiming to improve system throughput and packet loss rate performance. The approach incorporates spatial-domain processing and proposes the concept of auxiliary receiving beams. By generating a difference beam derived from the main beam and using it as the auxiliary beam, the scheme constructs the SNR difference required for power-domain signal separation, enhancing the probability of successfully receiving collided signals. Simulation results indicate that, compared with SA, the peak system throughput increases by approximately 120%. Furthermore, the scheme tolerates moderate system and measurement errors, demonstrating the robustness needed for large-capacity random access by satellite IoT terminals.
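To make the separation mechanism concrete, the following toy Python sketch applies power-domain successive interference cancellation to two colliding BPSK packets received through a main beam and an auxiliary beam with different per-packet gains; the gains, modulation, and noise level are hypothetical stand-ins for the SNR difference that the difference-beam design creates, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two packets collide in one slot. The main beam sees them with nearly equal gain,
# while the auxiliary (difference) beam suppresses one of them, enabling SIC.
n_bits = 200
bits = rng.integers(0, 2, size=(2, n_bits))
sym = 1 - 2 * bits                                 # BPSK symbols for packets 0 and 1

g_main = np.array([1.0, 0.9])                      # main-beam gains (collision)
g_aux = np.array([1.0, 0.3])                       # auxiliary beam suppresses packet 1
noise_std = 0.05

y_main = g_main @ sym + noise_std * rng.standard_normal(n_bits)
y_aux = g_aux @ sym + noise_std * rng.standard_normal(n_bits)

# Step 1: packet 0 dominates the auxiliary beam, so decode it there first.
pkt0_hat = (y_aux < 0).astype(int)

# Step 2: reconstruct packet 0, cancel it from the main-beam observation,
# then decode packet 1 from the residual.
residual = y_main - g_main[0] * (1 - 2 * pkt0_hat)
pkt1_hat = (residual < 0).astype(int)

print("packet 0 bit errors:", int(np.sum(pkt0_hat != bits[0])))
print("packet 1 bit errors:", int(np.sum(pkt1_hat != bits[1])))
```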
Collaborative Air-Ground Computation Offloading and Resource Optimization in Dynamic Vehicular Network Scenarios
WANG Junhua, LUO Fei, GAO Guangxin, LI Bin
2025, 47(1): 102-115.   doi: 10.11999/JEIT240464
[Abstract](139) [FullText HTML](37) [PDF 3825KB](24)
Abstract:
  Objective  In response to the rapid growth of mobile users and the limited distribution of ground infrastructure, this research addresses the challenges faced by vehicular networks. It emphasizes the need for efficient computation offloading and resource optimization, highlighting the role of Unmanned Aerial Vehicles (UAVs), RoadSide Units (RSUs), and Base Stations (BSs) in enhancing overall system performance.  Methods  This paper presents an innovative research methodology that proposes an energy harvesting-assisted air-ground cooperative computation offloading architecture. This architecture integrates UAVs, RSUs, and BSs to effectively manage the dynamic task queues generated by vehicles. By incorporating Energy Harvesting (EH) technology, UAVs can capture and convert ambient renewable energy, ensuring a continuous power supply and stable computing capabilities. To address the challenges associated with time-varying channel conditions and high mobility of nodes, a Mixed Integer Programming (MIP) problem is formulated. An iterative process is used to adjust offloading decisions and computing resource allocations at low cost, aiming to optimize overall system performance. The approach is outlined as follows: Firstly, an innovative framework for energy harvesting-assisted air-ground cooperative computation offloading is introduced. This framework enables the collaborative management of dynamic task queues generated by vehicles through the integration of UAVs, RSUs, and BSs. The inclusion of EH technology ensures that UAVs maintain a continuous power supply and stable computing capabilities, addressing limitations due to finite energy resources. Secondly, to address system complexities—such as time-varying channel conditions, high node mobility, and dynamic task arrivals—an MIP problem is formulated. The objective is to optimize system performance by determining effective joint offloading decisions and resource allocation strategies, minimizing global service delays while meeting various dynamic and long-term energy constraints. Thirdly, an Improved Actor-Critic Algorithm (IACA), based on reinforcement learning principles, is introduced to solve the formulated MIP problem. This algorithm utilizes Lyapunov optimization to decompose the problem into frame-level deterministic optimizations, thereby enhancing its manageability. Additionally, a genetic algorithm is employed to compute target Q-values, which guides the reinforcement learning process and enhances both solution efficiency and global optimality. The IACA algorithm is implemented to iteratively refine offloading decisions and resource allocations, striving for optimized system performance. Through the integration of these research methodologies, this paper makes significant contributions to the field of air-ground cooperative computation offloading by providing a novel framework and algorithm designed to address the challenges posed by limited energy resources, fluctuating channel conditions, and high node mobility.   Results and Discussions  The effectiveness and efficiency of the proposed framework and algorithm are evaluated through extensive simulations. The results illustrate the capability of the proposed approach to achieve dynamic and efficient offloading and resource optimization within vehicular networks. The performance of the IACA algorithm is illustrated, emphasizing its efficient convergence. 
Over the course of 4 000 training episodes, the agent continuously interacted with the environment, refining its decision-making strategy and updating network parameters. As shown, the loss function values for both the Actor and Critic networks progressively decreased, indicating improvements in their ability to model the real-world environment. Meanwhile, a rising trend in reward values is observed as training episodes increase, ultimately stabilizing, which signifies that the agent has discovered a more effective decision-making strategy. The average system delay and energy consumption relative to time slots are presented. As the number of slots increases, the average delay decreases for all algorithms except for RA, which remains the highest due to random offloading. RLA2C demonstrates superior performance over RLASD due to its advantage function. IACA, trained repeatedly in dynamic environments, achieves an average service delay that closely approximates CPLEX’s optimal performance. Additionally, it significantly reduces average energy consumption by minimizing Lyapunov drift and penalties, outperforming both RA and RLASD. The impact of task input data size on system performance is examined. As the data size increases from 750 kbit to 1 000 kbit, both average delay and energy consumption rise. The IACA algorithm, with its effective interaction with the environment and enhanced genetic algorithm, consistently produces near-optimal solutions, demonstrating strong performance in both energy efficiency and delay management. In contrast, the performance gap between RLASD and RLA2C widens compared to CPLEX due to unstable training environments for larger tasks. RA leads to significant fluctuations in average delay and energy consumption. The effect of the Lyapunov parameter V on average delay and energy consumption at T=200 is illustrated. With V, performance can be finely tuned; as V increases, average delay decreases while energy consumption rises, eventually stabilizing. The IACA algorithm, with its enhanced Q-values, effectively optimizes both delay and energy. Furthermore, the impact of UAV energy thresholds and counts on average system delay is demonstrated. IACA avoids local optima and adapts effectively to thresholds, outperforming RLA2C, RLASD, and RA. An increase in the number of UAVs initially reduces delay; however, an excess can lead to increased delay due to limited computing power.  Conclusions  The proposed EH-assisted collaborative air-ground computing offloading framework and IACA algorithm significantly improve the performance of vehicular networks by optimizing offloading decisions and resource allocations. Simulation results validate the effectiveness of the proposed methodology in reducing average delay, enhancing energy efficiency, and increasing system throughput. Future research could focus on integrating more advanced energy harvesting technologies and further refining the proposed algorithm to better address the complexities associated with large-scale vehicular networks. (While specific figures or tables are not referenced in this summary due to format constraints, the simulations conducted within the paper provide comprehensive quantitative results to support the findings discussed.)
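The per-frame decision rule produced by the Lyapunov decomposition mentioned above can be illustrated with a minimal drift-plus-penalty loop; the candidate actions, cost ranges, and energy budget below are hypothetical placeholders for the paper's offloading decisions and long-term energy constraint, not its actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

V = 50.0                 # delay/energy trade-off parameter of the drift-plus-penalty rule
e_budget = 0.5           # assumed average per-slot energy budget
Q = 0.0                  # virtual energy-deficit queue
T = 1000

total_delay, total_energy = 0.0, 0.0
for t in range(T):
    # Each slot offers a few candidate offloading choices with (delay, energy) costs,
    # e.g. local computing, offload to UAV, offload to RSU/BS (values are illustrative).
    candidates = rng.uniform([0.2, 0.1], [2.0, 1.5], size=(3, 2))   # rows: (delay, energy)
    # Drift-plus-penalty rule: minimise V*delay + Q*energy in every frame.
    scores = V * candidates[:, 0] + Q * candidates[:, 1]
    delay, energy = candidates[np.argmin(scores)]
    # The virtual queue absorbs energy over-spending; keeping it stable enforces the budget.
    Q = max(Q + energy - e_budget, 0.0)
    total_delay += delay
    total_energy += energy

print(f"avg delay {total_delay / T:.3f}, avg energy {total_energy / T:.3f} (budget {e_budget})")
```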
Task Offloading Algorithm for Large-scale Multi-access Edge Computing Scenarios
LU Xianling, LI Dekang
2025, 47(1): 116-127.   doi: 10.11999/JEIT240624
[Abstract](230) [FullText HTML](44) [PDF 3433KB](53)
Abstract:
  Objective   Recently, task offloading techniques based on reinforcement learning in Multi-access Edge Computing (MEC) have attracted considerable attention and are increasingly being utilized in industrial applications. Algorithms for task offloading that rely on single-agent reinforcement learning are typically developed within a decentralized framework, which is preferred due to its relatively low computational complexity. However, in large-scale MEC environments, such task offloading policies are formed solely based on local observations, often resulting in partial observability challenges. Consequently, this can lead to interference among agents and a degradation of the offloading policies. In contrast, traditional multi-agent reinforcement learning algorithms, such as the Multi-Agent Deep Deterministic Policy Gradient (MADDPG), consolidate the observation and action vectors of all agents, thereby effectively addressing the partial observability issue. Optimal joint offloading policies are subsequently derived through online training. Nonetheless, the centralized training and decentralized execution model inherent in MADDPG causes computational complexity to increase linearly with the number of mobile devices (MDs). This scalability issue restricts the ability of MEC systems to accommodate additional devices, ultimately undermining the system’s overall scalability.  Methods   First, a task offloading queue model for large-scale MEC systems is developed to handle delay-sensitive tasks with deadlines. This model incorporates both the transmission process, where tasks are offloaded via wireless channels to the edge server, and the computation process, where tasks are processed on the edge server. Second, the offloading process is defined as a Partially Observable Markov Decision Process (POMDP) with specified observation space, action space, and reward function for the agents. The Mean-Field Multi-Agent Task Offloading (MF-MATO) algorithm is subsequently proposed. Long Short-Term Memory (LSTM) networks are utilized to predict the current state vector of the MEC system by analyzing historical observation vectors. The predicted state vector is then input into fully connected networks to determine the task offloading policy. The incorporation of LSTM networks addresses the partial observability issue faced by agents during offloading decisions. Moreover, mean field theory is employed to approximate the Q-value function of MADDPG through linear decomposition, resulting in an approximate Q-value function and a mean-field-based action approximation for the MF-MATO algorithm. This mean-field approximation replaces the joint action of agents. Consequently, the MF-MATO algorithm interacts with the MEC environment to gather experience over one episode, which is stored in an experience replay buffer. After each episode, experiences are sampled from the buffer to train both the policy network and the Q-value network.  Results and Discussions   The simulation results indicate that the average cumulative rewards of the MF-MATO algorithm are comparable to those of the MADDPG algorithm, outperforming the other comparison algorithms during the training phase. (1) The task offloading delay curves for MD using the MF-MATO and MADDPG algorithms show a synchronous decline throughout the training process. Upon reaching training convergence, the delays consistently remain lower than those of the single-agent task offloading algorithm. 
In contrast, the average delay curve for the single-agent algorithm exhibits significant variation across different MD scenarios. This inconsistency is attributed to the single-agent algorithm’s inability to address mutual interference among agents, resulting in policy degradation for certain agents due to the influence of others. (2) As the number of MD increases, the MF-MATO algorithm’s performance regarding delay and task drop rate increasingly aligns with that of MADDPG, while exceeding all other comparison algorithms. This enhancement is attributed to the improved accuracy of the mean-field approximation as the number of MD rises. (3) A rise in task arrival probability leads to a gradual increase in the average delay and task drop rate curves for both the MF-MATO and MADDPG algorithms. When the task arrival probability reaches its maximum value, a significant rise in both the average delay and task drop rate is observed across all algorithms, due to the high volume of tasks fully utilizing the available computational resources. (4) As the number of edge servers increases, the average delay and task drop rate curves for the MF-MATO and MADDPG algorithms show a gradual decline, whereas the performance of the other comparison algorithms experiences a marked improvement with only a slight increase in computational resources. This suggests that the MF-MATO and MADDPG algorithms effectively optimize computational resource utilization through cooperative decision-making among agents. The simulation results substantiate that, by reducing computational complexity, the MF-MATO algorithm achieves performance in terms of delay and task drop rate that is consistent with that of the MADDPG algorithm.  Conclusions   The task offloading algorithm proposed in this paper, which is based on LSTM networks and mean field approximation theory, effectively addresses the challenges associated with task offloading in large-scale MEC scenarios. By utilizing LSTM networks, the algorithm alleviates the partially observable issues encountered by single-agent approaches, while also enhancing the efficiency of experience utilization in multi-agent systems and accelerating algorithm convergence. Additionally, mean field approximation theory reduces the dimensionality of the action space for multiple agents, thereby mitigating the computational complexity that traditional MADDPG algorithms face, which increases linearly with the number of mobile devices. As a result, the computational complexity of the MF-MATO algorithm remains independent of the number of mobile devices, significantly improving the scalability of large-scale MEC systems.
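The scalability argument above rests on the mean-field replacement of the joint action. The short sketch below (with an assumed observation size and action set, not the paper's network architecture) shows the core idea: each agent's Q-network input combines its own action with the mean action of the other agents, so the input dimension stays fixed no matter how many mobile devices participate.

```python
import numpy as np

rng = np.random.default_rng(4)

n_agents = 200                      # number of mobile devices (illustrative)
n_actions = 4                       # e.g. local execution or one of three edge servers (assumed)

actions = rng.integers(0, n_actions, size=n_agents)
one_hot = np.eye(n_actions)[actions]              # (n_agents, n_actions)

def mean_field_input(obs, agent_idx):
    """Q-network input for one agent: local observation, own action, others' mean action."""
    own = one_hot[agent_idx]
    others = np.delete(one_hot, agent_idx, axis=0)
    mean_action = others.mean(axis=0)             # mean-field approximation of the joint action
    return np.concatenate([obs, own, mean_action])

obs_dim = 8
obs = rng.standard_normal(obs_dim)
x = mean_field_input(obs, agent_idx=0)
print("Q-network input dimension:", x.size)       # 8 + 4 + 4, independent of n_agents
```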
General Low-complexity Beamforming Designs for Reconfigurable Intelligent Surface-aided Multi-user Systems
CHEN Xiao, SHI Jianfeng, ZHU Jianyue, PAN Cunhua
2025, 47(1): 128-137.   doi: 10.11999/JEIT240051
[Abstract](445) [FullText HTML](117) [PDF 1541KB](63)
Abstract:
  Objective  Reconfigurable Intelligent Surface (RIS), an innovative technology for 6G communication, can effectively reduce hardware costs and energy consumption. Most researchers examine the joint BeamForming (BF) design problem in RIS-assisted Multiple-Input Single-Output (MISO) systems or single-user Multiple-Input Multiple-Output (MIMO) systems. However, few investigate the non-convex joint BF optimization problem for RIS-assisted multi-user MISO systems. The existing joint BF design approaches for these systems primarily rely on iterative algorithms that are complex, and some methods have a limited application range.   Methods  To address the issue, general low-complexity joint BF designs for RIS-assisted multi-user systems are considered. The communication system consists of a Base Station (BS) with an M-antenna configuration utilizing a Uniform Rectangular Array (URA), a RIS with N reflecting elements also arranged in a URA, and K single-antenna User Equipment (UEs). It is assumed that the transmission channel between the BS and UEs experiences blocking due to fading and potential obstacles in a dynamic wireless environment. The non-convex optimization challenge of joint BF design is analyzed, with the goal of maximizing the sum data rate for RIS-aided multi-user systems. The design process involves three main steps: First, the RIS reflection matrix Θ is designed based on the perfect channel state information obtained from both the BS-RIS and RIS-UE links. This design exploits the approximate orthogonality of the beam steering vectors for all transmitters and receivers using the URA (as detailed in Lemma 1). Second, the transmit BF matrix W at the BS is derived using the zero-forcing method. Third, the power allocation at the BS for multiple users is optimized using the Water-Filling (WF) algorithm. The proposed scheme is applicable to both single-user and multi-user scenarios, accommodating Line-of-Sight (LoS) paths, Rician channels with LoS paths, as well as Non-LoS (NLoS) paths. The computational complexity of the proposed joint BF design is quantified at a total complexity of O(N + K²M + K³). Compared with existing schemes, the computational complexity of the proposed design is reduced by at least an order of magnitude.  Results and Discussions  To verify the performance of the proposed joint BF scheme, simulation tests were conducted using the MATLAB platform. Five different schemes were considered for comparison: Scheme 1: BF design and Water-Filling Power Allocation (WFPA) proposed in this paper, utilizing Continuous Phase Shift (CPS) design without accounting for the limitations of the RIS phase shifter’s accuracy. Scheme 2: Proposed Beamforming (PBF) and WFPA with 2-bit Phase Shift (2PS) design, taking phase shift accuracy limitations into consideration. Scheme 3: 1-bit Phase Shift (1PS) design under PBF and WFPA. Scheme 4: 2PS design under Random BeamForming (RBF) and WFPA. Scheme 5: Equal Power Allocation (EPA) design under PBF and CPS. Initial numerical results demonstrate that the proposed BF design can achieve a high sum data rate, which can be further enhanced by employing optimal power allocation. Furthermore, under identical simulation conditions, the LoS scenario exhibited superior sum data rate performance compared to the Rician channel scenario, with a performance advantage of approximately 6 bit/(s∙Hz). 
This difference can be attributed to the presence of multiple paths in the Rician channel, which increases interference and decreases the signal-to-noise ratio, thereby reducing the sum data rate. Additionally, when the distance between BS and UEs is fixed, and the RIS is positioned on the straight line between the BS and the UEs, the system sum data rate initially decreases and then increases as the distance between the RIS and UEs increases due to path loss. The simulation results confirm that when the RIS is situated near the UEs (i.e., further from the BS), improved data rate performance can be achieved. This improvement arises because the path loss of the RIS-UE link is greater than that of the BS-RIS link. Therefore, optimal data rate performance is attained when the RIS is closer to the UEs. Moreover, both the simulation results and theoretical analysis indicate that the sum data rate is influenced by the RIS location, offering valuable insights for the selection of RIS positioning.  Conclusions  This paper proposes a general low-complexity BF design for RIS-assisted multi-user communication systems. Closed-form solutions for transmit BF, power distribution of the BS, and the reflection matrix of the RIS are provided to maximize the system’s sum data rate. Simulation results indicate that the proposed BF design achieves higher data rates than alternative schemes. Additionally, both the simulation findings and theoretical analysis demonstrate that the sum data rate varies with the RIS’s location, providing a reference criterion for optimizing RIS placement.
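For intuition about the second and third steps of the design described above (zero-forcing precoding followed by water-filling power allocation), the following sketch applies both to a randomly drawn toy effective channel; the URA-based RIS phase design of the first step is outside the scope of this snippet, and all dimensions and power values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

M, K = 8, 3                       # BS antennas and single-antenna users (illustrative)
P_total, noise_var = 1.0, 0.1

H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of the effective channel, unit-norm columns.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# Effective per-user gains after ZF (multi-user interference is nulled).
gains = np.abs(np.diag(H @ W)) ** 2 / noise_var

def water_filling(gains, P_total):
    """Classic water-filling: drop users whose allocation would be negative and refill."""
    active = np.ones(len(gains), dtype=bool)
    while True:
        mu = (P_total + np.sum(1.0 / gains[active])) / np.count_nonzero(active)
        p = np.where(active, mu - 1.0 / gains, 0.0)
        if np.all(p[active] > 0):
            return np.maximum(p, 0.0)
        active &= p > 0

p = water_filling(gains, P_total)
rate = np.sum(np.log2(1.0 + p * gains))
print("power allocation:", np.round(p, 3), " sum rate:", round(rate, 3), "bit/(s*Hz)")
```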
Performance Analysis of Discrete-Phase-Shifter IRS-aided Amplify-and-Forward Relay Network
DONG Rongen, XIE Zhongyi, MA Haibo, ZHAO Feilong, SHU Feng
2025, 47(1): 138-146.   doi: 10.11999/JEIT240236
[Abstract](195) [FullText HTML](44) [PDF 1827KB](37)
Abstract:
  Objective   Most existing research assumes that the Intelligent Reflecting Surface (IRS) is equipped with continuous phase shifters, which neglects the phase quantization error. However, in practice, IRS devices are typically equipped with discrete phase shifters due to hardware and cost constraints. Similar to the performance degradation caused by finite quantization bit shifters in directional modulation networks, discrete phase shifters in IRS systems introduce phase quantization errors, potentially affecting system performance. This paper analyzes the performance loss and approximate performance loss in a double IRS-aided amplify-and-forward relay network, focusing on Signal-to-Noise Ratio (SNR) and achievable rate under Rayleigh fading channels. The findings provide valuable guidance on selecting the appropriate number of quantization bit for IRS in practical applications.  Methods   Based on the weak law of large numbers, Euler’s formula, and Rayleigh distribution, closed-form expressions for the SNR performance loss and achievable rate of the discrete phase shifter IRS-aided amplify-and-forward relay network are derived. Additionally, corresponding approximate expressions for the performance loss are derived using the first-order Taylor series expansion.  Results and Discussions   The SNR performance loss at the destination is evaluated as a function of the number of IRS-1 elements (N), assuming that the number of IRS-2 elements (M) equals N (Fig. 2). It is evident that, regardless of whether the scenario involves actual or approximate performance loss, the SNR performance loss decreases as the number of quantization bit (k) increases but increases as N grows. When k = 1, the gap between the actual performance loss and the approximate performance loss widens with increasing N. This gap becomes negligible when k is greater than or equal to 2. Notably, when k = 4, the SNR performance loss is less than 0.06 dB. Furthermore, both the SNR performance loss and approximate performance loss gradually decelerate as N increases towards a larger scale. The achievable rate at the destination is evaluated as a function of the N, where M equals N (Fig. 3). It can be observed that, in all scenarios—whether there is no performance loss, with performance loss, or approximate performance loss—the achievable rate increases gradually as N increases. This is because both IRS-1 and IRS-2 provide greater performance gains as N grows. When k = 1, the difference in achievable rate between the performance loss and approximate performance loss scenarios increases with N. As k increases, the achievable rate with performance loss and approximate performance loss converge towards the no-performance-loss scenario. For example, when N = 1 024, the performance loss in achievable rate is about 0.15 bit/(s·Hz) at k = 2 and only 0.03 bit/(s·Hz) at k = 3. The achievable rate is evaluated as a function of k (Fig. 4). The performance loss in achievable rate increases with N and M. When k = 3, the achievable rate with performance loss and approximate performance loss decrease by 0.04 bit/(s·Hz) compared to the no performance loss scenario. When k = 1, the differences in achievable rate between the no performance loss, performance loss, and approximate performance loss scenarios grow with increasing N and M. Remarkably, the achievable rate for the system with N = 1 024 and M = 128 outperforms that of N = 128 and M = 1 024. 
This suggests that increasing N provides a more significant improvement in rate performance than increasing M.  Conclusions   This paper investigates a double IRS-assisted amplify-and-forward relay network and analyzes the system performance loss caused by phase quantization errors in IRSs equipped with discrete phase shifters under Rayleigh fading channels. Using the weak law of large numbers, Euler’s formula, and the Rayleigh distribution, closed-form expressions for the SNR performance loss and achievable rate are derived. Approximate performance loss expressions are also derived based on a first-order Taylor series expansion. Simulation results show that the performance losses in SNR and achievable rate decrease as the number of quantization bits increases, but grow with the number of IRS elements. When the number of quantization bits is 4, the performance losses in SNR and achievable rate are less than 0.06 dB and 0.03 bit/(s·Hz), respectively, suggesting that the system performance loss is negligible when 4-bit discrete phase shifters are used.
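The quantization-loss trend analysed above can be reproduced with a few lines of Python: quantize a set of ideal co-phasing values to k-bit levels and measure the loss in coherent combining gain. The element count, channel-free model, and trial count below are illustrative assumptions, but under this simplification the k = 4 loss comes out near the 0.06 dB figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(6)

N = 1024                              # IRS elements (illustrative)

def mean_snr_loss_db(k, trials=200):
    """Average SNR loss of k-bit phase quantisation relative to continuous phases."""
    step = 2 * np.pi / (2 ** k)
    loss = 0.0
    for _ in range(trials):
        ideal = rng.uniform(0, 2 * np.pi, N)
        quantised = np.round(ideal / step) * step          # nearest discrete phase level
        err = ideal - quantised                            # residual phase error per element
        gain_ratio = np.abs(np.mean(np.exp(1j * err))) ** 2
        loss += -10 * np.log10(gain_ratio)
    return loss / trials

for k in (1, 2, 3, 4):
    print(f"k = {k} bit(s): average SNR loss ~ {mean_snr_loss_db(k):.3f} dB")
```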
Large-Scale STAR-RIS Assisted Near-Field ISAC Transmission Method
WANG Xiaoming, LI Jiaqi, LIU Ting, JIANG Rui, XU Youyun
2025, 47(1): 147-155.   doi: 10.11999/JEIT240018
[Abstract](291) [FullText HTML](93) [PDF 9757KB](59)
Abstract:
  Objective   The growing demand for advanced service applications and the stringent performance requirements envisioned in future 6G networks have driven the development of Integrated Sensing and Communication (ISAC). By combining sensing and communication capabilities, ISAC enhances spectral efficiency and has attracted significant research attention. However, real-world signal propagation environments are often suboptimal, making it difficult to achieve optimal transmission and sensing performance under harsh or dynamic conditions. To address this, Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surfaces (STAR-RIS) enable a full-space programmable wireless environment, offering an effective solution to enhance wireless system capabilities. In large-scale 6G industrial scenarios, STAR-RIS panels could be deployed on rooftops and walls for comprehensive coverage. As the number of reflecting elements increases, near-field effects become significant, rendering the conventional far-field assumption invalid. This paper explores the application of large-scale STAR-RIS in near-field ISAC systems, highlighting the role of near-field effects in enhancing sensing and communication performance and emphasizing the importance of incorporating near-field phenomena into system design to exploit the additional degrees of freedom provided by large-scale STAR-RIS for improved localization accuracy and communication quality.  Methods   First, a near-field ISAC system is formulated, where a large-scale STAR-RIS assists both sensing and communication processes. The theoretical framework of near-field steering vectors is applied to derive the steering vectors for each link, including those from the Base Station (BS) to the STAR-RIS, from the STAR-RIS to communication users, from the STAR-RIS to sensing targets, and from sensing targets to sensors. Based on these vectors, a system model is constructed to characterize the relationships among transmitted signals, signals reflected or transmitted via the STAR-RIS, and received signals for both communication and sensing. Next, the Cramér-Rao Bound (CRB) is derived by calculating the Fisher Information Matrix (FIM) for three-dimensional (3D) parameter estimation of the sensing target, specifically its azimuth angle, elevation angle, and distance. The CRB serves as a theoretical benchmark for estimation accuracy. To optimize sensing performance, the CRB is minimized subject to communication requirements defined by a Signal-to-Interference-plus-Noise Ratio (SINR) constraint. The optimization involves jointly designing the BS precoding matrices, the transmit signal covariance matrices, and the STAR-RIS transmission and reflection coefficients to balance accurate sensing with reliable communication. Since the joint design problem is inherently nonconvex, an augmented Lagrangian formulation is employed. The original problem is decomposed into two subproblems using alternating optimization. Schur complement decomposition is first applied to transform the target function, and semidefinite relaxation is then used to convert each nonconvex subproblem into a convex one. These subproblems are alternately solved, and the resulting solutions are combined to achieve a globally optimized system configuration. This two-stage approach effectively reduces the computational complexity associated with high-dimensional, nonconvex optimization typical of large-scale STAR-RIS setups.  
Results and Discussions   Simulation results under varying SINR thresholds indicate that the proposed STAR-RIS coefficient design achieves a lower CRB root than random coefficient settings (Fig. 2), demonstrating that optimizing the transmission and reflection coefficients of the STAR-RIS improves sensing precision. Additionally, the CRB root decreases as the number of Transmitting-Reflecting (T-R) elements increases in both the proposed and random designs, indicating that a larger number of T-R elements provides additional degrees of freedom. These degrees of freedom enable the system to generate more targeted beams for both sensing and communication, enhancing overall system performance. The influence of sensor elements on sensing accuracy is further analyzed by varying the number of sensing elements (Fig. 3). As the number of sensing elements increases, the CRB root declines, indicating that a larger sensing array improves the capture and processing of backscattered echoes, thereby enhancing the overall sensing capability. This finding highlights the importance of sufficient sensing resources to fully exploit the benefits of near-field ISAC systems. The study also examines three-dimensional localization of the sensing target under different SINR thresholds (Fig. 4, Fig. 5). Using Maximum Likelihood Estimation (MLE), the proposed method demonstrates highly accurate target positioning, validating the effectiveness of the joint design of precoding matrices, signal covariance, and STAR-RIS coefficients. Notably, near-field effects introduce distance as an additional dimension in the sensing process, absent in conventional far-field models. This additional dimension expands the parameter space, enhancing range estimation and contributing to more precise target localization. These results emphasize the potential of near-field ISAC for meeting the demanding localization requirements of future 6G systems. More broadly, the findings highlight the significant advantages of employing large-scale STAR-RIS in near-field settings for ISAC tasks. The improved localization accuracy demonstrates the synergy between near-field physics and advanced beam management techniques facilitated by STAR-RIS. These insights also suggest promising applications, such as industrial automation and precise positioning in smart factories, where reliable and accurate sensing is essential.  Conclusions   A large-scale STAR-RIS-assisted near-field ISAC system is proposed and investigated in this study. The near-field steering vectors for the links among the BS, STAR-RIS, communication users, sensing targets, and sensors are derived to construct an accurate near-field system model. The CRB for the 3D estimation of target location parameters is formulated and minimized by jointly designing the BS transmit beamforming matrices, the transmit signal covariance, and the STAR-RIS transmission and reflection coefficients, while ensuring the required communication quality. The nonconvex optimization problem is divided into two subproblems and addressed iteratively using semidefinite relaxation and alternating optimization techniques. Simulation results confirm that the proposed optimization scheme effectively reduces the CRB, enhancing sensing accuracy and demonstrating that near-field propagation provides an additional distance domain beneficial for both sensing and communication tasks. 
These findings suggest that near-field ISAC, enhanced by large-scale STAR-RIS, is a promising research direction for future 6G networks, combining increased degrees of freedom with high-performance integrated services.
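The distance dimension emphasized above can be illustrated with a simplified one-dimensional near-field steering vector (the paper's model uses uniform rectangular arrays and a STAR-RIS, so the carrier frequency, array size, and geometry below are assumed purely for illustration): the exact spherical-wave phase depends on both angle and range, and its mismatch with the planar-wave model grows as the source moves inside the Rayleigh distance.

```python
import numpy as np

# Simplified 1-D near-field steering-vector sketch (assumed carrier and geometry).
c, fc = 3e8, 28e9
lam = c / fc
N, d = 256, lam / 2
positions = (np.arange(N) - (N - 1) / 2) * d      # element x-coordinates

def near_field_steering(r, theta):
    """Exact spherical-wave phases for a source at range r and azimuth theta."""
    src_x, src_y = r * np.sin(theta), r * np.cos(theta)
    dists = np.hypot(src_x - positions, src_y)
    return np.exp(-2j * np.pi * (dists - r) / lam)

def far_field_steering(theta):
    """Planar-wave approximation: the phase depends on the angle only."""
    return np.exp(2j * np.pi * positions * np.sin(theta) / lam)

theta = np.deg2rad(20)
for r in (5.0, 50.0, 500.0):                      # ranges inside and beyond the Rayleigh distance
    corr = abs(np.vdot(far_field_steering(theta), near_field_steering(r, theta))) / N
    print(f"range {r:6.1f} m: correlation with the far-field model = {corr:.3f}")
```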
Radar, Navigation and Array Signal Processing
An Adaptive Target Tracking Method Utilizing Marginalized Cubature Kalman Filter with Uncompensated Biases
DENG Honggao, YU Runhua, JI Yuanfa, WU Sunyong, SUN Shaoshuai
2025, 47(1): 156-166.   doi: 10.11999/JEIT240469
[Abstract](135) [FullText HTML](31) [PDF 1958KB](31)
Abstract:
  Objective   In radar target tracking, tracking accuracy is often influenced by sensor measurement biases and measurement noise. This is particularly true when measurement biases change abruptly and measurement noise is unknown and time-varying. Ensuring effective target tracking under these conditions poses a significant challenge. An adaptive target tracking method is proposed, utilizing a marginalized cubature Kalman filter to address this issue.  Methods   (1) Initially, measurements taken at adjacent time points are differentiated to formulate the differential measurement equation, thereby effectively mitigating the influence of measurement biases that are either constant or change gradually between adjacent observations. Concurrently, the target states at these moments are expanded to create an extended state vector facilitating real-time filtering. (2) Following the differentiation of measurements, sudden changes in measurement biases may cause the differential measurement at the current moment to be classified as outliers. To identify the occurrence of these abrupt bias changes, a Beta-Bernoulli indicator variable is established. If such a change is detected, the differential measurement for that moment is disregarded, and the predicted state is adopted as the updated state. In the absence of any abrupt changes, standard filtering procedures are conducted. The Gaussian measurement noise, despite having unknown covariance, continues to follow a Gaussian distribution after differentiation, allowing its covariance matrix to be modeled using the inverse Wishart distribution. (3) A joint distribution is formulated for the target state, indicator variables, and the covariance matrix of the measurement noise. The approximate posteriors of each parameter are derived using variational Bayesian inference. (4) To mitigate the increased filtering burden arising from the high-dimensional extended state vector, the extended target state is marginalized, and a marginalized cubature Kalman filter for target tracking is implemented in conjunction with the cubature Kalman filtering method.  Results and Discussions   The target tracking performance is clearly illustrated, indicating that the proposed method accurately identifies abrupt measurement biases while effectively managing unknown time-varying measurement noise. This leads to a tracking performance that significantly exceeds that of the comparative methods. The findings further support the conclusions by examining the Root Mean Square Error (RMSE). Additionally, the stability of the proposed method is demonstrated. The results reveal that the computational load associated with the proposed method is greatly reduced through marginalization processing. This reduction occurs because, during the variational Bayesian iteration process, cubature sampling and integration are performed multiple times. Once the target state is marginalized, the dimensionality of the cubature sampling is halved, and the number of sampling points for each variational iteration is also reduced by half. As a result, the computational load during the nonlinear propagation of the sampling points decreases, with the amount of computation reduction increasing with the number of variational iterations. Furthermore, the results demonstrate that marginalization does not compromise tracking accuracy, thereby further validating the effectiveness of marginalization processing. 
This finding also confirms that marginalization processing can be extended to other nonlinear variational Bayesian filters based on deterministic sampling, providing a means to reduce computational complexity.  Conclusions   This paper proposes an adaptive marginalized cubature Kalman filter to improve target tracking in scenarios with measurement biases and unknown time-varying measurement noise. The approach incorporates measurement differencing to eliminate constant biases, constructs indicator variables to detect abrupt biases, and models the unknown measurement noise covariance matrix using the inverse Wishart distribution. A joint posterior distribution of the parameters is established, and the approximate posteriors are solved through variational Bayesian inference. Additionally, marginalization of the target state is performed before implementing tracking within the CKF framework, reducing the filtering burden. The results of our simulation experiments yield the following conclusions: (1) The proposed method demonstrates superior target tracking performance compared to existing techniques in scenarios involving abrupt measurement biases and unknown measurement noise; (2) The marginalization processing strategy significantly alleviates the filtering burden of the proposed filter, making it applicable to more complex nonlinear variational Bayesian filters, such as robust nonlinear random finite set filters, to reduce filtering complexity; (3) This filtering methodology can be extended to target tracking scenarios in higher dimensions.
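The cubature machinery underlying the proposed filter can be sketched in a few lines. The snippet below performs a single cubature measurement update for a toy two-dimensional position state with a range-bearing observation; it illustrates only the standard cubature-point propagation, and the paper's extended/marginalized state, bias indicator variables, and variational Bayesian steps are not modelled here (all numbers are illustrative).

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n equally weighted sigma points."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return mean[:, None] + S @ xi                           # shape (n, 2n)

def measurement(x):
    """Nonlinear range and bearing of a 2-D position."""
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

# Prior, measurement noise, and a noisy observation (toy values).
x_pred = np.array([100.0, 50.0])
P_pred = np.diag([25.0, 25.0])
R = np.diag([1.0, (0.5 * np.pi / 180) ** 2])
z = np.array([113.0, 0.47])

pts = cubature_points(x_pred, P_pred)                       # (2, 4)
Z = np.apply_along_axis(measurement, 0, pts)                # propagate each point, (2, 4)
z_pred = Z.mean(axis=1)
Pzz = (Z - z_pred[:, None]) @ (Z - z_pred[:, None]).T / pts.shape[1] + R
Pxz = (pts - x_pred[:, None]) @ (Z - z_pred[:, None]).T / pts.shape[1]

K = Pxz @ np.linalg.inv(Pzz)                                # cubature Kalman gain
x_upd = x_pred + K @ (z - z_pred)
P_upd = P_pred - K @ Pzz @ K.T
print("updated state:", np.round(x_upd, 2))
```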
Improved Extended Kalman Filter Tracking Method Based On Active Waveguide Invariant Distribution
SUN Tongjing, ZHU Qingyu, WANG Zhizhuan
2025, 47(1): 167-177.   doi: 10.11999/JEIT240595
[Abstract](156) [FullText HTML](31) [PDF 7140KB](16)
Abstract:
  Objective   The complex and variable nature of the underwater environment presents significant challenges for the field of underwater target tracking. Factors such as environmental noise, interference, and reverberation can severely distort and obscure the signals emitted or reflected by underwater targets, complicating accurate detection and tracking efforts. Additionally, advancements in weak target signal technology add further complexity, as they necessitate sophisticated methods to enhance and interpret signals that may be lost amidst background noise. The challenges associated with underwater target tracking are multifaceted. One major issue is the interference that can compromise the integrity and reliability of target information. Another critical challenge lies in the difficulty of extracting useful feature information from the often chaotic underwater environment. Traditional tracking methods, which typically rely on basic signal processing techniques, frequently fall short in addressing these complexities. Underwater tracking technology serves as a cornerstone in several key fields, including marine science, military strategy, and marine resource development. Notably, effective underwater tracking is essential for monitoring, sonar detection, and the deployment of underwater weapons within the military sector. Considering the significance of underwater tracking technology, there is an urgent need for innovative methods to address the existing challenges. Therefore, this paper proposes a unified approach that views the target and environment as an integrated system, extracting coupled features—specifically, active waveguide invariants—and fusing these features into the motion model to enhance underwater tracking capabilities.  Methods:   This paper presents an enhanced extended Kalman filter tracking method, which is built upon the active waveguide invariant distribution. The mathematical model for active waveguide invariant representation is derived based on the foundational theory of target scattering characteristics in shallow water waveguides, with specific consideration of transmitter-receiver separation. This derivation establishes the constraint relationships among distance, frequency, and the active waveguide invariant distribution. These constraints are subsequently incorporated into the state vector of the extended Kalman filter to enhance the alignment between the target motion model and the actual trajectory, thereby improving tracking accuracy. The method includes image preprocessing steps such as filtering, edge detection, and edge smoothing, followed by the application of the Radon transform to extract essential parameters, including distance and frequency. The Radon transform is refined using threshold filtering to improve parameter extraction. The active waveguide invariant distribution is then computed, and the tracking performance of the method is validated through simulation experiments and real measurement data. The simulation setup involves a rigid ball as the target in a shallow water environment, modeled using a constant velocity assumption. Real measurement data is collected under similar conditions at the Xin’An River experimental site. For both simulations and real measurements, a steel ball model target and constant velocity model are employed, with equipment deployed on the same horizontal plane.   
Results and Discussions:   First, the distribution of the constant of propagation within the active waveguide was obtained through simulation experiments. A comparison was made between the Invariant Distribution-Extended Kalman Filter (ID-EKF), the Extended Kalman Filter (EKF), and the Invariance Extended Kalman Filter (IEKF). In trajectory tracking, the ID-EKF demonstrated closer alignment to the true trajectory compared to both the EKF and IEKF. Additionally, in terms of the mean square error of the predicted posterior position, the ID-EKF exhibited a lower error rate. As indicated by the overall estimation accuracy, the ID-EKF achieved approximately 50% greater accuracy than the EKF and about 30% higher accuracy than the IEKF. Subsequently, the ID-EKF algorithm was validated in a real-world scenario using actual measurement data. The acoustic field interference stripes were obtained through the processing of received echo signals, and the distribution of the constant of propagation was extracted by manually selecting points and conducting a maximum search, followed by curve fitting using the joint edge curve fitting method. Results from Monte Carlo simulation experiments demonstrated a decreasing order of tracking accuracy for the ID-EKF, IEKF, and EKF, consistent with the simulation results. The overall estimation accuracy of the ID-EKF was approximately 60% higher than that of the EKF and about 40% superior to that of the IEKF.  Conclusions:  This paper presents a novel tracking method based on the extended Kalman filter, informed by the interference characteristics of target and environmental coupling in shallow water waveguides. The effectiveness of this method is substantiated through both theoretical simulation data and empirical lake measurement data. The active waveguide invariant distribution was derived using the Radon transform, which facilitated the implementation of the ID-EKF tracking. Results from both simulations and experiments reveal that the extracted active invariant value distribution manifests in two scenarios: either coinciding with 1 or exhibiting significant deviation from 1. When the extracted invariant value is markedly different from 1, the ID-EKF demonstrates a reduced tracking error and a more pronounced convergence relative to other tracking algorithms, highlighting the importance of precisely extracting this value to enhance the ID-EKF’s performance. Conversely, when the extracted value is close to 1, the tracking error of the ID-EKF aligns more closely with that of the IEKF algorithm. In both cases, it is evident that the extracted invariant value is pivotal in enhancing the accuracy of the tracking algorithm. Future research will prioritize the extraction of more accurate invariant values to facilitate the development of higher-precision tracking algorithms.
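For intuition about the quantity being fused into the filter, note that along a single interference striation the classical waveguide-invariant relation gives f proportional to r raised to the power beta, so beta can be recovered by a least-squares fit in log-log coordinates. The snippet below is only a synthetic-data sketch of that relation with assumed values; it is not the paper's image-based Radon-transform extraction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic points along one interference striation: f = f0 * (r / r0) ** beta.
beta_true = 1.0
r = np.linspace(300.0, 800.0, 60)                 # ranges along the striation (m), assumed
f = 900.0 * (r / r[0]) ** beta_true               # corresponding striation frequencies (Hz)
f_noisy = f * (1 + 0.01 * rng.standard_normal(r.size))

# Linear fit of log f against log r: the slope is the waveguide invariant.
slope, _ = np.polyfit(np.log(r), np.log(f_noisy), 1)
print(f"estimated waveguide invariant beta ~ {slope:.3f} (true value {beta_true})")
```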
Sparse Array Design Methods via Redundancy Analysis of Coprime Array
ZHANG Yule, ZHOU Hao, HU Guoping, SHI Junpeng, ZHENG Guimei, SONG Yuwei
2025, 47(1): 178-187.   doi: 10.11999/JEIT240348
[Abstract](136) [FullText HTML](54) [PDF 2533KB](15)
Abstract:
  Objective   Sensor arrays are widely used to capture the spatio-temporal information of incident signal sources, with their configurations significantly affecting the accuracy of Direction Of Arrival (DOA) estimation. The Degrees Of Freedom (DOF) of conventional Uniform Linear Array (ULA) are limited by the number of physical sensors, and dense array deployments lead to severe mutual coupling effects. Emerging sparse arrays offer clear advantages by reducing hardware requirements, increasing DOF, mitigating mutual coupling, and minimizing system redundancy through flexible sensor deployment, making them a viable solution for high-precision DOA estimation. Among various sparse array designs, the Coprime Array (CA)—consisting of two sparse ULAs with coprime inter-element spacing and sensor counts—has attracted considerable attention due to its reduced mutual coupling effects. However, the alternately deployed subarrays result in a much lower number of Continuous Degrees Of Freedom (cDOF) than anticipated, which degrades the performance of subspace-based DOA estimation algorithms that rely on spatial smoothing techniques. Although many studies have explored array configuration optimization and algorithm design, real-time application demands indicate that optimizing array configurations is the most efficient approach to improve DOA estimation performance.  Methods   This study examines the weight functions of CA and identifies a significant number of redundant virtual array elements in the difference coarray. Specifically, all virtual array elements in the difference coarray exhibit weight functions of two or more, a key factor reducing the available cDOF and DOF. To address this deficiency, the conditions for generating redundant virtual array elements in the cross-difference sets of subarrays are analyzed, and two types of coprime arrays with translated subarrays, namely, CATrS-I and CATrS-II are devised. These designs aim to increase available cDOF and DOF and enhance DOA estimation performance. Firstly, without altering the number of physical sensors, the conditions for generating redundant virtual array elements in the cross-difference sets are modified by translating any subarray of CA to an appropriate position. Then, the precise range of translation distances is determined, and the closed-form expressions for cDOF and DOF, the hole positions in the difference coarray, and weight functions of CATrS-I and CATrS-II are derived. Finally, the optimal configurations of CATrS-I and CATrS-II are obtained by solving an optimization problem that maximizes cDOF and DOF while maintaining a fixed number of physical sensors.  Results and Discussions   Theoretical analysis shows that the proposed CATrS-I and CATrS-II can reduce the weight functions of most virtual array elements in the difference coarray to 1, thus increasing the available cDOF and DOF while maintaining the same number of physical sensors. Comparisons with several previously developed sparse arrays highlight the advantages of CATrS-I and CATrS-II. Specifically, the Augmented Coprime Array (ACA), which doubles the number of sensors in one subarray, and the Reference Sensor Relocated Coprime Array (RSRCA), which repositions the reference sensor, achieve only a limited reduction in redundant virtual array elements, particularly those associated with small virtual array elements. As a result, their mutual coupling effects are similar to those of the original CA. 
In contrast, the proposed CATrS-I and CATrS-II significantly reduce both the number of redundant virtual array elements and the weight functions corresponding to small virtual array elements by translating one subarray to an optimal position. This adjustment effectively mitigates mutual coupling effects among physical sensors. Numerical simulations further validate the superior DOA estimation performance of CATrS-I and CATrS-II in the presence of mutual coupling, demonstrating their superiority in spatial spectrum and DOA estimation accuracy compared to existing designs.  Conclusions   Two types of CATrS are proposed for DOA estimation by translating the subarrays of CA to appropriate distances. This design effectively reduces the number of redundant virtual array elements in the cross-difference sets, leading to a significant increase in cDOF and DOF, while mitigating mutual coupling effects among physical sensors. The translation distance of the subarray is analyzed, and the closed-form expressions for cDOF and DOF, the hole positions in the difference coarray, and the weight functions of virtual array elements are derived. Theoretical analysis and simulation results demonstrate that the proposed CATrS-I and CATrS-II offer superior performance in terms of cDOF, DOF, mutual coupling suppression, and DOA estimation accuracy. Future research will focus on further reducing redundant virtual array elements in the self-difference sets by disrupting the uniform deployment of subarrays and extending these ideas to more generalized and complex sparse array designs to further enhance array performance.
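To make the coarray notions above concrete, the following sketch computes the difference coarray, weight function, and continuous DOF of a prototype coprime array in Python. The subarray spacings, sensor counts, and the illustrative translation offset are placeholder choices for demonstration; they do not reproduce the CATrS-I/CATrS-II configurations or the closed-form translation distances derived in the paper.
```python
import numpy as np
from collections import Counter

def coprime_array(M, N, shift=0):
    """Sensor positions (in unit spacings) of a coprime array: subarray A has N
    sensors at spacing M, subarray B has M sensors at spacing N; `shift`
    translates subarray B by an illustrative number of units (not the paper's
    derived translation distance)."""
    sub_a = {M * n for n in range(N)}
    sub_b = {N * m + shift for m in range(M)}
    return sorted(sub_a | sub_b)

def difference_coarray(positions):
    """All pairwise differences and their weight function w(l): the number of
    sensor pairs generating lag l."""
    diffs = [p - q for p in positions for q in positions]
    return Counter(diffs)

def continuous_dof(weights):
    """Length of the maximal contiguous segment of lags centered at zero."""
    lags = set(weights)
    L = 0
    while L + 1 in lags:
        L += 1
    return 2 * L + 1

if __name__ == "__main__":
    M, N = 3, 5
    for shift in (0, 1):   # 0: original CA; 1: a translated variant (illustrative only)
        w = difference_coarray(coprime_array(M, N, shift))
        redundant = sum(1 for lag, c in w.items() if lag > 0 and c > 1)
        print(f"shift={shift}: cDOF={continuous_dof(w)}, "
              f"unique lags={len(w)}, redundant positive lags={redundant}")
```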
Compound Active Jamming Recognition for Zero-memory Incremental Learning
WU Zhenhua, CUI Jinxin, CAO Yice, ZHANG Qiang, ZHANG Lei, YANG Lixia
2025, 47(1): 188-200.   doi: 10.11999/JEIT240521
[Abstract](180) [FullText HTML](78) [PDF 4263KB](25)
Abstract:
  Objective:   In contemporary warfare, radar systems serve a crucial role as vital instruments for detection and tracking. Their performance is essential, often directly impacting the progression and outcome of military engagements. As these systems operate in complex and hostile environments, their susceptibility to adversarial interference becomes a significant concern. Recent advancements in active jamming techniques, particularly compound active jamming, present considerable threats to radar systems. These jamming methods are remarkably adaptable, employing a range of signal types, parameter variations, and combination techniques that complicate countermeasures. Not only do these jamming signals severely impair the radar’s ability to detect and track targets, but they also exhibit rapid adaptability in high-dynamic combat scenarios. This swift evolution of jamming techniques renders traditional radar jamming recognition models ineffective, as they struggle to address the fast-changing nature of these threats. To counter these challenges, this paper proposes a novel incremental learning method designed for recognizing compound active jamming in radar systems. This innovative approach seeks to bridge the gaps of existing methods when confronted with incomplete and dynamic jamming conditions typical of adversarial combat situations. Specifically, it tackles the challenge of swiftly updating models to identify novel out-of-database compound jamming while mitigating the performance degradation caused by imbalanced sample distributions. The primary objective is to enhance the adaptability and reliability of radar systems within complex electronic warfare environments, ensuring robust performance against increasingly sophisticated and unpredictable jamming techniques.  Methods:   The proposed method commences with prototypical learning within a meta-learning framework to achieve efficient feature extraction. Initially, a feature extractor is trained utilizing in-database single jamming signals. This extractor is thoroughly designed to proficiently capture the features of out-of-database compound jamming signals. Subsequently, a Zero-Memory Incremental Learning Network (ZMILN) is developed, which incorporates hyperdimensional space and cosine similarity techniques. This network facilitates the mapping and storage of prototype vectors for compound jamming signals, thereby enabling the dynamic updating of the recognition model. To address the challenges associated with imbalanced test sample distributions, a Transductive Information Maximization (TIM) testing module is introduced. This module integrates divergence constraints into the mutual information loss function, refining the recognition model to optimize its performance across imbalanced datasets. The implementation begins with a comprehensive modeling of radar active jamming signals. Linear Frequency Modulation (LFM) signals, frequently utilized in contemporary radar systems, are chosen as the foundation for the transmitted radar signals. The received signals are modeled as a blend of target echo signals, jamming signals, and noise. Various categories of radar active jamming, including suppression jamming and deceptive jamming, are classified, and their composite forms are examined. For feature extraction, a five-layer Convolutional Neural Network (CNN) is employed. 
This CNN is specifically designed to transform input radar jamming time-frequency image samples into a hyperdimensional feature space, generating 512-dimensional prototype vectors. These vectors are then stored within the prototype space, with each jamming category corresponding to a distinct prototype vector. To enhance classification accuracy and efficiency, a quasi-orthogonal optimization strategy is utilized to improve the spatial arrangement of these prototype vectors, thereby minimizing overlap and confusion between different categories and increasing the precision of jamming signal recognition. The ZMILN framework addresses two primary challenges in recognizing compound jamming signals: the scarcity of new-category samples and the limitations inherent in existing models when it comes to identifying novel categories. By integrating prototypical learning with hyperdimensional space techniques, the ZMILN enables generalized recognition from in-database single jamming signals to out-of-database compound jamming. To further enhance model performance in the face of imbalanced sample conditions, the TIM module maximizes information gain by partitioning the test set into supervised support and unsupervised query sets. The ZMILN model is subsequently fine-tuned using the support set, followed by unsupervised testing on the query set. During the testing phase, the model computes the cosine similarity between the test samples and the prototype vectors, ultimately yielding the final recognition results.  Results and Discussions:   The proposed method exhibits notable effectiveness in the recognition of radar compound active jamming signals. Experimental results indicate an average recognition accuracy of 93.62% across four single jamming signals and seven compound jamming signals under imbalanced test conditions. This performance significantly exceeds various baseline incremental learning methods, highlighting the superior capabilities of the proposed approach in the radar jamming recognition task. Additionally, t-distributed Stochastic Neighbor Embedding (t-SNE) visualization experiments present the distribution of jamming features at different stages of incremental learning, further confirming the method’s effectiveness and robustness. The experiments simulate a realistic radar jamming recognition scenario by categorizing “in-database” jamming as single types included in the base training set, and “out-of-database” jamming as novel compound types that emerge during the incremental training phase. This configuration closely resembles real-world operational conditions, where radar systems routinely encounter new and evolving jamming techniques. Quantitative performance metrics, including accuracy and performance degradation rates, are utilized to assess the model’s capacity to retain knowledge of previously learned categories while adapting to new jamming types. Accuracy is computed at each incremental learning stage to evaluate the model’s performance on both old and new categories. Furthermore, the performance degradation rate is calculated to measure the extent of knowledge retention, with lower degradation rates indicative of stronger retention of prior knowledge throughout the learning process.  Conclusions:   In conclusion, the proposed Zero-Memory Incremental Learning method for recognizing radar compound active jamming is highly effective in addressing the challenges posed by rapidly evolving and complex radar jamming techniques. 
By leveraging a comprehensive understanding of individual jamming signals, this method facilitates swift and dynamic recognition of out-of-database compound jamming across diverse and high-dynamic conditions. This approach not only enhances the radar system’s capabilities in recognizing novel compound jamming but also effectively mitigates performance degradation resulting from imbalanced sample distributions. Such advancements are essential for improving the adaptability and reliability of radar systems in complex electronic warfare environments, where the nature of jamming signals is in constant flux. Additionally, the proposed method holds significant implications for other fields facing incremental learning challenges, particularly those involving imbalanced data and rapidly emerging categories. Future research will focus on exploring open-set recognition models, further enhancing the cognitive recognition capabilities of radar systems in fully open and highly dynamic adversarial environments. This work lays the groundwork for developing more agile cognitive closed-loop recognition systems, ultimately contributing to more resilient and adaptable radar systems capable of effectively managing complex electronic warfare scenarios.
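As a rough illustration of the prototype-based recognition step described above, the sketch below stores one normalized prototype vector per jamming class and assigns test samples by cosine similarity. The feature dimension, class names, and random embeddings are hypothetical stand-ins; the actual ZMILN feature extractor, quasi-orthogonal prototype optimization, and TIM fine-tuning are not reproduced here.
```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def register_prototype(prototypes, labels, features, label):
    """Add (or update) the prototype of a jamming class as the mean of its
    normalized feature vectors -- no stored exemplars, hence 'zero-memory'."""
    proto = l2_normalize(features.mean(axis=0))
    if label in labels:
        prototypes[labels.index(label)] = proto
    else:
        prototypes.append(proto)
        labels.append(label)
    return prototypes, labels

def classify(prototypes, labels, query_features):
    """Assign each query to the class whose prototype has the highest cosine
    similarity with the query embedding."""
    P = l2_normalize(np.stack(prototypes))   # (C, d) prototype matrix
    Q = l2_normalize(query_features)         # (n, d) query embeddings
    sims = Q @ P.T                           # cosine similarities
    return [labels[i] for i in sims.argmax(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protos, labs = [], []
    # hypothetical class names and placeholder 512-d embeddings
    for name in ["noise_AM", "range_deception", "noise_AM+range_deception"]:
        feats = rng.normal(size=(20, 512))
        protos, labs = register_prototype(protos, labs, feats, name)
    queries = rng.normal(size=(5, 512))
    print(classify(protos, labs, queries))
```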
A Non-interference Multi-Carrier Complementary Coded Division Multiple Access Dual-Functional Radar-Communication Scheme
SHEN Bingsheng, ZHOU Zhengchun, YANG Yang, FAN Pingzhi
2025, 47(1): 201-210.   doi: 10.11999/JEIT240297
[Abstract](205) [FullText HTML](102) [PDF 6364KB](44)
Abstract:
  Objective   As the digital landscape evolves, the rise of innovative applications has led to unprecedented levels of spectrum congestion. This congestion poses significant challenges for the seamless operation and expansion of wireless networks. Among the various solutions being explored, Dual-Functional Radar-Communication (DFRC) emerges as a key technology. It offers a promising pathway to alleviate the growing spectrum crunch. DFRC systems are designed to harmonize radar sensing and communication within the same spectral resources, maximizing efficiency and minimizing waste. However, implementing DFRC systems presents significant challenges, particularly in mitigating mutual interference between communication and radar functions. If this interference is not addressed, it can severely degrade the performance of both systems, undermining the dual-purpose design of DFRC. Additionally, achieving high communication rates under these constraints adds complexity that must be carefully managed. Therefore, tackling interference mitigation while ensuring robust and high-speed communication capabilities is a fundamental challenge the research community must address urgently within DFRC systems. Successfully resolving these issues will pave the way for widespread DFRC adoption and drive advancements across various fields, from autonomous driving to smart cities, fundamentally transforming our interactions with the world.  Methods   Multi-carrier Complementary-Coded Division Multiple Access (MC-CDMA) is a sophisticated spread spectrum communication technology that utilizes the unique properties of complementary codes to enhance system performance. A key advantage of MC-CDMA is the ideal correlation characteristics of these codes. Theoretically, they can eliminate interference between communication users and radar systems. However, this requires a data block length of 1. Since a guard interval must be added after the data block, a length of 1 results in many guard intervals during transmission, lowering the communication user’s transmission rate. To address this issue, this paper expands the spread spectrum codes used by both communication users and radars. The communication code is expanded by repetition, while the radar code is extended using Kronecker products and Golay complementary pairs, matching the data block length. This approach ensures that even if the data block length exceeds 1, the radar signal remains unaffected by the communication users.  Results and Discussions   The proposed scheme effectively addresses interference between radar and communication, while also improving the data rate for communication users. Experimental simulation results demonstrate that the proposed scheme performs well in terms of bit error rate, anti-Doppler frequency shift capability, and target detection.  Conclusions   Waveform design is crucial in DFRC systems. This paper presents a new DFRC waveform based on MC-CDMA technology. The scheme generates an integrated waveform through code division, enhancing user data rates and preventing random communication data from interfering with the radar waveform. To achieve this, the communication and radar codes are both extended. The communication code uses repetition for extension, while the radar code employs Golay complementary pairs. 
Theoretical analysis and simulation results suggest that, compared to traditional spread spectrum schemes, the proposed approach allows for interference-free transmission for both communication and radar, achieves a low bit error rate, and provides excellent data rates. On the radar side, the proposed waveform exhibits a low peak sidelobe ratio and excellent Doppler tolerance, allowing for accurate target detection. Additionally, the approach facilitates rapid generation and strong online design capabilities through the direct design of complementary codes.
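The complementary-code machinery underlying the scheme can be sketched as follows: a Golay complementary pair is generated by the standard concatenation recursion (so the two aperiodic autocorrelations sum to a delta), and spreading codes are extended either by repetition or by a Kronecker product. The specific codes, block length, and the way the Kronecker extension is applied here are illustrative assumptions, not the paper's exact waveform construction.
```python
import numpy as np

def golay_pair(m):
    """Length-2^m binary Golay complementary pair built by the standard
    concatenation recursion: a' = (a, b), b' = (a, -b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_corr(x, y):
    return np.correlate(x, y, mode="full")

if __name__ == "__main__":
    a, b = golay_pair(5)                          # length-32 pair
    sum_acf = aperiodic_corr(a, a) + aperiodic_corr(b, b)
    # complementary property: all sidelobes of the summed autocorrelation vanish
    print("sidelobes all zero:", np.allclose(sum_acf[:31], 0.0), "peak:", sum_acf[31])

    # Illustrative code extension (assumed form, not the paper's exact design):
    # repeat a communication spreading code over a length-L data block, and
    # extend a radar code by a Kronecker product with a Golay sequence.
    L = 4
    comm_code = np.array([1, -1, 1, 1], dtype=float)   # placeholder spreading code
    comm_extended = np.tile(comm_code, L)              # extension by repetition
    radar_extended = np.kron(a[:L], comm_code)         # Kronecker-product extension
    print(comm_extended.shape, radar_extended.shape)
```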
A Code-phase Shift Key-Linear Frequency Modulated Low Earth Orbit Navigation Signal and Acquisition Performance Analysis
LIN Honglei, GENG Minyan, FU Dong, OU Gang, XIAO Wei, MA Ming
2025, 47(1): 211-222.   doi: 10.11999/JEIT240650
[Abstract](90) [FullText HTML](35) [PDF 6499KB](23)
Abstract:
  Objective   The provision of satellite navigation services through Low Earth Orbit (LEO) constellations has become a prominent topic in the Position, Navigation and Timing (PNT) system. Although LEO satellites offer low spatial propagation loss and high signal power at ground level, their high-speed movement results in significant dynamics in the signal, leading to considerable Doppler frequency shifts that affect signal reception on the ground. This dynamic environment increases the frequency search space required by receivers. Furthermore, LEO constellations typically comprise hundreds or even thousands of satellites to achieve global coverage, further expanding the search space for satellite signals at terminals. Consequently, during cold start conditions, the LEO satellite navigation system faces a substantial increase in the search range for navigation signals, presenting significant challenges for signal acquisition. Existing GPS, BDS, GALILEO, and other navigation signals primarily utilize BPSK-CDMA modulation, relying on spread spectrum sequences to differentiate various satellite signals. However, these existing signals exhibit limited resistance to Doppler frequency offsets. Therefore, research into signal waveforms that are more suitable for LEO satellite navigation systems is crucial. Such research aims to enhance the anti-Doppler frequency offset capability and multi-access performance under conditions involving numerous satellites, thereby improving the signal acquisition performance of LEO navigation terminals and enhancing the overall availability of LEO navigation systems.  Methods   This paper adopts a multi-faceted research approach including theoretical analysis, simulation experiments, and comparative analysis. Since the performance of the correlation function directly impacts signal acquisition performance, an initial theoretical analysis of the correlation function and the multiple access capabilities of the proposed signal is conducted. Following this, the corresponding acquisition detection metrics and decision-making methods are proposed based on the principles of signal acquisition. The investigation continues with a focus on optimizing acquisition parameters, followed by verification of the signal’s acquisition performance through simulations and experiments. Additionally, the performance of the proposed signal is compared to that of traditional navigation signals using both theoretical and simulation analyses.  Results and Discussions   The theoretical analysis outcomes reveal that the proposed Code-phase Shift Key-Linear Frequency Modulated (CSK-LFM) signal exhibits lower Doppler loss, delay loss, and multiple access loss when compared to the traditional Binary Phase Shift Keying–Code Division Multiple Access (BPSK-CDMA) signal. To minimize the loss of signal detection capacity, it is advisable to expand the signal bandwidth and reduce the spread spectrum ratio during the signal design phase. A satellite parallel search method is developed for the acquisition of the CSK-LFM signal, employing a Partial Matched Filter-Fast Fourier Transform (PMF-FFT) approach. A parameter optimization model has also been developed to enhance the acquisition performance of the CSK-LFM signal. Furthermore, the acquisition performance of CSK-LFM and BPSK-CDMA signals is compared. Under the same conditions, the acquisition search space required for the BPSK-CDMA signal is larger than that of the CSK-LFM signal.
It is noteworthy that, under equivalent dynamic conditions, the acquisition performance of the CSK-LFM signal is approximately 1 dB superior to that of the BPSK-CDMA signal. Lastly, experimental results confirm that the proposed satellite parallel search method based on the PMF-FFT acquisition algorithm is effective for the acquisition of CSK-LFM signals.  Conclusions   To address the challenge of achieving rapid signal acquisition in low-orbit satellite navigation systems, a hybrid modulation scheme, CSK-LFM is designed. The LFM modulation improves the signal’s Doppler tolerance, while the use of diverse pseudo-code phases enables multiple access broadcasts from different satellites. This design compresses the three-dimensional search space involving satellite count, time delay, and Doppler shift. Additionally, a satellite parallel search method is implemented based on a PMF-FFT acquisition algorithm for the CSK-LFM signal. An optimization model for acquisition parameters is also developed to enhance performance. Our comparative analysis of the acquisition performance between CSK-LFM and BPSK-CDMA signals demonstrates that at a signal intensity of 40 dBHz, the navigation signal using CSK-LFM modulation achieves an acquisition performance approximately 1 dB superior to that of the BPSK-CDMA modulation signal under identical conditions; furthermore, the signal search space can be reduced to one-tenth that of the BPSK-CDMA modulation signal.
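A minimal sketch of the PMF-FFT idea used for acquisition is given below: the despread samples of one code period are split into partial sums, and an FFT across the partial sums scans the residual Doppler in a single pass. The chip rate, code, partial-filter count, and FFT length are placeholder values, and the sketch operates on a generic direct-sequence signal rather than the CSK-LFM waveform itself.
```python
import numpy as np

def pmf_fft_acquire(rx, code, fs, n_partial=8, fft_len=64):
    """Partial Matched Filter + FFT acquisition sketch: split the code-period
    correlation into `n_partial` partial sums, then FFT across the partial
    sums to estimate the residual Doppler frequency."""
    n = len(code)
    seg = n // n_partial
    used = seg * n_partial
    partial = (rx[:used] * code[:used]).reshape(n_partial, seg).sum(axis=1)
    spectrum = np.fft.fft(partial, fft_len)
    k = int(np.argmax(np.abs(spectrum)))
    # FFT bin spacing equals (partial-sum rate fs/seg) divided by fft_len
    doppler_hz = np.fft.fftfreq(fft_len, d=seg / fs)[k]
    return doppler_hz, float(np.abs(spectrum[k]))

if __name__ == "__main__":
    fs, n = 1.023e6, 1023                  # placeholder chip rate and code length
    rng = np.random.default_rng(1)
    code = rng.choice([-1.0, 1.0], size=n)
    t = np.arange(n) / fs
    f_dopp = 1500.0                        # simulated Doppler offset (Hz)
    rx = code * np.exp(1j * 2 * np.pi * f_dopp * t) \
         + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    print(pmf_fft_acquire(rx, code, fs))   # estimated Doppler near 1500 Hz
```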
Image and Intelligent Information Processing
Image Enhancement under Transformer Oil Based on Multi-Scale Weighted Retinex
QIANG Hu, ZHONG Yuzhong, DIAN Songyi
2025, 47(1): 223-232.   doi: 10.11999/JEIT240645
[Abstract](101) [FullText HTML](42) [PDF 9622KB](18)
Abstract:
  Objective:   Large oil-immersed transformers are critical in power systems, with their operational status essential for maintaining grid stability and reliability. Periodic inspections are necessary to identify and resolve transformer faults and ensure normal operation. However, manual inspections require significant human and material resources. Moreover, conventional inspection methods often fail to promptly detect or accurately locate internal faults, which may ultimately affect transformer lifespan. Robots equipped with visual systems can replace manual inspections for fault identification inside oil-immersed transformers, enabling timely fault detection and expanding the inspection range compared to manual methods. However, high-definition visual imaging is crucial for effective fault detection using robots. Transformer oil degrades and discolors under high-temperature, high-pressure conditions, with these effects varying over time. The oil color typically shifts from pale yellow to reddish-brown, and the types and forms of suspended particles evolve dynamically. These factors cause complex light attenuation and scattering, leading to color distortion and detail loss in captured images. Additionally, the sealed metallic structure of oil-immersed transformers requires robots to rely on onboard artificial light sources during inspections. The limited illumination from these sources further reduces image brightness, hindering clarity and impacting fault detection accuracy. To address issues such as color distortion, low brightness, and detail loss in images captured under transformer oil, this paper proposes a multi-scale weighted Retinex algorithm for image enhancement.  Methods:   This paper proposes a multi-scale weighted Retinex algorithm for image enhancement under transformer oil. To mitigate color distortion, a hybrid dynamic color channel compensation algorithm is proposed, which dynamically adjusts compensation based on the attenuation of each channel in the captured image. To address detail loss, a sharpening weight strategy is applied. Finally, a pyramid multi-scale fusion strategy integrates Retinex reflection components from multiple scales with their corresponding weight maps, producing clearer images under transformer oil.   Results and Discussions:   Qualitative experimental results (Fig. 5, Fig. 6, Fig. 7) indicate that the UCM algorithm, based on non-physical models, achieves color correction by assuming minimal attenuation in the blue channel. However, the dynamic changes in transformer oil result in varying channels with the least attenuation, reducing the algorithm’s generalization capability. Enhancement results from physical-model algorithms, including UDCP, IBLA, and ULAP, exhibited low brightness, often leading to the loss of critical image details. Furthermore, these physical-model methods not only fail to resolve color distortion but frequently intensify it. Deep learning-based algorithms, such as Water-Net, Shallow-uwnet, and UDnet, demonstrated effectiveness in mitigating mild color distortion. However, their enhancement results still suffer from low brightness and blurred details. In contrast, the algorithm proposed in this paper fully accounts for the dynamic characteristics of transformer oil, effectively addressing color distortion, blurring, and detail loss in images captured under transformer oil. 
Quantitative experiments (Table 1) show that the UIQM value of images enhanced by the proposed algorithm increased by an average of 121.206% compared with the original images, the FDUM value increased by an average of 105.978%, and the NIQE value decreased by an average of 6.772%. Both qualitative and quantitative results demonstrate that the proposed algorithm effectively resolves image degradation issues under transformer oil and outperforms the comparison methods. Additionally, applicability tests reveal that the algorithm not only performs well for transformer oil images but also demonstrates strong enhancement capabilities in underwater imaging.  Conclusions:   Experimental results demonstrate that the algorithm proposed in this paper effectively addresses the complex degradation issues in images captured under transformer oil. Although the proposed algorithm achieves superior enhancement performance, processing a 1 280×720 resolution image requires an average of 2.16 s, which does not meet the demands for embedded real-time applications, such as robotic inspections. Future research will focus on optimizing the algorithm to improve its real-time performance.
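For readers unfamiliar with the Retinex backbone of the method, the sketch below shows a plain multi-scale Retinex: each channel's reflection component is estimated as log(I) minus the log of a Gaussian-blurred illumination at several scales, and the results are combined with fixed weights. The scales, weights, and linear stretching are illustrative defaults; the paper's dynamic channel compensation, sharpening weights, and pyramid fusion are not modeled here.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma):
    """log(I) - log(Gaussian(I)): the Retinex reflection estimate at one scale."""
    channel = channel.astype(np.float64) + 1.0
    return np.log(channel) - np.log(gaussian_filter(channel, sigma) + 1.0)

def multi_scale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """Weighted sum of single-scale reflection components, applied per channel,
    then stretched back to [0, 255]. A simplified stand-in for the paper's
    pyramid fusion with sharpening weights."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        r = sum(w * single_scale_retinex(img[..., c], s)
                for w, s in zip(weights, sigmas))
        r_min, r_max = r.min(), r.max()
        out[..., c] = 255.0 * (r - r_min) / (r_max - r_min + 1e-12)
    return out.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dummy = (rng.random((120, 160, 3)) * 60).astype(np.uint8)   # dark test image
    enhanced = multi_scale_retinex(dummy)
    print(enhanced.shape, enhanced.dtype, enhanced.mean())
```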
A Context-Aware Multiple Receptive Field Fusion Network for Oriented Object Detection in Remote Sensing Images
YAO Tingting, ZHAO Hengxin, FENG Zihao, HU Qing
2025, 47(1): 233-243.   doi: 10.11999/JEIT240560
[Abstract](170) [FullText HTML](41) [PDF 3454KB](31)
Abstract:
  Objective  Recent advances in remote sensing imaging technology have made oriented object detection in remote sensing images a prominent research area in computer vision. Unlike traditional object detection tasks, remote sensing images, captured from a wide-range bird’s-eye view, often contain a variety of objects with diverse scales and complex backgrounds, posing significant challenges for oriented object detection. Although current approaches have made substantial progress, existing networks do not fully exploit the contextual information across multi-scale features, resulting in classification and localization errors during detection. To address this, a context-aware multiple receptive field fusion network is proposed, which leverages the contextual correlation in multi-scale features. By enhancing the feature representation capabilities of deep networks, the accuracy of oriented object detection in remote sensing images can be improved.  Methods  For input remote sensing images, ResNet-50 and a feature pyramid network are first employed to extract features at different scales. The features from the first four layers are then enhanced using a receptive field expansion module. The resulting features are processed through a high-level feature aggregation module to effectively fuse multi-scale contextual information. After obtaining enhanced features at different scales, a feature refinement region proposal network is designed to revise object detection proposals using refined feature representations, resulting in more accurate candidate proposals. These multi-scale features and candidate proposals are then input into the Oriented R-CNN detection head to obtain the final object detection results. The receptive field expansion module consists of two submodules: a large selective kernel convolution attention submodule and a shift window self-attention enhancement submodule, which operate in parallel. The large selective kernel convolution submodule introduces multiple convolution operations with different kernel sizes to capture contextual information under various receptive fields, thereby improving the network’s ability to perceive multi-scale objects. The shift window self-attention enhancement submodule divides the feature map into patches according to predefined window and step sizes and calculates the self-attention-enhanced feature representation of each patch, extracting more global information from the image. The high-level feature aggregation module integrates rich semantic information from the feature pyramid network with low-level features, improving detection accuracy for multi-scale objects. Finally, a feature refinement region proposal network is designed to reduce location deviation between generated region proposals and actual rotating objects in remote sensing images. The deformable convolution is employed to capture geometric and contextual information, refining the initial proposals and producing the final oriented object detection results through a two-stage region-of-interest alignment network.  Results and Discussions  The effectiveness and robustness of the proposed network are demonstrated on two public datasets: DIOR-R and HRSC2016. For DIOR-R dataset, the AP50, AP75 and AP50:95 metrics are used for evaluation. Quantitative and qualitative comparisons (Fig. 
7, Table 1) demonstrate that the proposed network significantly enhances feature representation for different remote sensing objects, distinguishing objects with similar appearances and localizing objects at various scales more accurately. For the HRSC2016 dataset, the mean Average Precision (mAP) is used, and both mAP(07) and mAP(12) are computed for quantitative comparison. The results (Fig. 7, Table 2) further highlight the network’s effectiveness in improving ship detection accuracy in remote sensing images. Additionally, ablation studies (Table 3) demonstrate that each module in the proposed network contributes to improved detection performance for oriented objects in remote sensing images.  Conclusions  This paper proposes a context-aware multi-receptive field fusion network for oriented object detection in remote sensing images. The network includes a receptive field expansion module that enhances the perception ability for remote sensing objects of different sizes. The high-level feature aggregation module fully utilizes high-level semantic information, further improving localization and classification accuracy. The feature refinement region proposal network refines the first-stage proposals, resulting in more accurate detection. The qualitative and quantitative results on the DIOR-R and HRSC2016 datasets demonstrate that the proposed network outperforms existing approaches, providing superior detection results for remote sensing objects of varying scales.
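A simplified sketch of the receptive-field expansion idea, in the spirit of large selective kernel attention, is given below: parallel convolutions with different kernel sizes are fused by learned channel-wise attention weights. The module structure, kernel sizes, and residual fusion are assumptions for illustration and do not reproduce the paper's receptive field expansion module or its shift window self-attention branch.
```python
import torch
import torch.nn as nn

class SelectiveKernelFusion(nn.Module):
    """Parallel depthwise convolutions with different receptive fields, fused by
    channel-wise attention weights (a simplified large-selective-kernel idea)."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels * len(kernel_sizes), 1),
        )
        self.n = len(kernel_sizes)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)   # (B, n, C, H, W)
        b, _, c, h, w = feats.shape
        weights = self.attn(x).view(b, self.n, c, 1, 1).softmax(dim=1)
        return (weights * feats).sum(dim=1) + x                     # residual fusion

if __name__ == "__main__":
    m = SelectiveKernelFusion(64)
    y = m(torch.randn(2, 64, 32, 32))
    print(y.shape)   # torch.Size([2, 64, 32, 32])
```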
A Modal Fusion Deep Clustering Method for Multi-sensor Fault Diagnosis of Rotating Machinery
WU Zhangjun, XU Renli, FANG Gang, SHAO Haidong
2025, 47(1): 244-259.   doi: 10.11999/JEIT240648
[Abstract](258) [FullText HTML](69) [PDF 4188KB](48)
Abstract:
  Objective  Rotating machinery is essential across various industrial sectors, including energy, aerospace, and manufacturing. However, these machines operate under complex and variable conditions, making timely and accurate fault detection a significant challenge. Traditional diagnostic methods, which use a single sensor and modality, often miss critical features, particularly subtle fault signatures. This can result in reduced reliability, increased downtime, and higher maintenance costs. To address these issues, this study proposes a novel modal fusion deep clustering approach for multi-sensor fault diagnosis in rotating machinery. The main objectives are to: (1) improve feature extraction through time-frequency transformations that reveal important temporal-spectral patterns, (2) implement an attention-based modality fusion strategy that integrates complementary information from various sensors, and (3) use a deep clustering framework to identify fault types without needing labeled training data.  Methods  The proposed approach utilizes a multi-stage pipeline for thorough feature extraction and analysis. First, raw multi-sensor signals, such as vibration data collected under different load and speed conditions, are preprocessed and transformed with the Short-Time Fourier Transform (STFT). This converts time-domain signals into time-frequency representations, highlighting distinct frequency components related to various fault conditions. Next, Gated Recurrent Units (GRUs) model temporal dependencies and capture long-range correlations, while Convolutional AutoEncoders (CAEs) learn hierarchical spatial features from the transformed data. By combining GRUs and CAEs, the framework encodes both temporal and structural patterns, creating richer and more robust representations than traditional methods that rely solely on either technique or handcrafted features. A key innovation is the modality fusion attention mechanism. In multi-sensor environments, individual sensors typically capture complementary aspects of system behavior. Simply concatenating their outputs can lead to suboptimal results due to noise and irrelevant information. The proposed attention-based fusion calculates modality-specific affinity matrices to assess the relationship and importance of each sensor modality. With learnable attention weights, the framework prioritizes the most informative modalities while diminishing the impact of less relevant ones. This ensures the fused representation captures complementary information, resulting in improved discriminative power. Finally, an unsupervised clustering module is integrated into the deep learning pipeline. Rather than depending on labeled data, the model assigns samples to clusters by refining cluster assignments iteratively using a Kullback-Leibler (KL) divergence-based objective. Initially, a soft cluster distribution is created from the learned features. A target distribution is then computed to sharpen and define cluster boundaries. By continuously minimizing the KL divergence between these distributions, the model self-optimizes over time, producing well-separated clusters corresponding to distinct fault types without supervision.  Results and Discussions  The proposed approach’s effectiveness is illustrated using multi-sensor bearing and gearbox datasets. 
Compared to conventional unsupervised methods—like traditional clustering algorithms or single-domain feature extraction techniques—this framework significantly enhances clustering accuracy and fault recognition rates. Experimental results show recognition accuracies of approximately 99.16% on gearbox data and 98.63% on bearing data, representing a notable advancement over existing state-of-the-art techniques. These impressive results stem from the synergistic effects of advanced feature extraction, modality fusion, and iterative clustering refinement. By extracting time-frequency features through STFT, the method captures a richer representation than relying solely on raw time-domain signals. The use of GRUs incorporates temporal information, enabling the capture of dynamic signal changes that may indicate evolving fault patterns. Additionally, CAEs reveal meaningful spatial structures from time-frequency data, resulting in low-dimensional yet highly informative embeddings. The modality fusion attention mechanism further enhances these benefits by emphasizing relevant modalities, such as vibration data from various sensor placements or distinct physical principles, thus leveraging their complementary strengths. Through the iterative minimization of KL divergence, the clustering process becomes more discriminative. Initially broad and overlapping cluster boundaries are progressively refined, allowing the model to converge toward stable and well-defined fault groupings. This unsupervised approach is particularly valuable in practical scenarios, where obtaining labeled data is costly and time-consuming. The model’s ability to learn directly from unlabeled signals enables continuous monitoring and adaptation, facilitating timely interventions and reducing the risk of unexpected machine failures. The discussion emphasizes the adaptability of the proposed method. Industrial systems continuously evolve, and fault patterns can change over time due to aging, maintenance, or shifts in operational conditions. The unsupervised method can be periodically retrained or updated with new unlabeled data. This allows it to monitor changes in machinery health and quickly detect new fault conditions without the need for manual annotation. Additionally, the attention-based modality fusion is flexible enough to support the inclusion of new sensor types or measurement channels, potentially enhancing diagnostic performance as richer data sources become available.  Conclusions  This study presents a modal fusion deep clustering framework designed for the multi-sensor fault diagnosis of rotating machinery. By combining time-frequency transformations with GRU- and CAE-based deep feature encoders, attention-driven modality fusion, and KL divergence-based unsupervised clustering, this approach outperforms traditional methods in accuracy, robustness, and scalability. Key contributions include a comprehensive multi-domain feature extraction pipeline, an adaptive modality fusion strategy for heterogeneous sensor data integration, and a refined deep clustering mechanism that achieves high diagnostic accuracy without relying on labeled training samples. Looking ahead, there are several promising directions. Adding more modalities—like acoustic emissions, temperature signals, or electrical measurements—could lead to richer feature sets. Exploring semi-supervised or few-shot extensions may further enhance performance by utilizing minimal labeled guidance when available. 
Implementing the proposed model in an industrial setting, potentially for real-time use, would also validate its practical benefits for maintenance decision-making, helping to reduce operational costs and extend equipment life.
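The KL-divergence-based cluster refinement described above follows the familiar deep embedded clustering pattern, sketched below: a Student's t soft assignment between embeddings and cluster centers, a sharpened target distribution, and a KL loss between the two. The embedding dimension, cluster count, and random features are placeholders; the GRU/CAE encoders and the attention-based modality fusion are outside this sketch.
```python
import torch
import torch.nn.functional as F

def soft_assignment(z, centers, alpha=1.0):
    """Student's t-kernel similarity between embeddings z (N, d) and cluster
    centers (K, d), normalized over clusters to form soft assignments q."""
    dist_sq = torch.cdist(z, centers).pow(2)
    q = (1.0 + dist_sq / alpha).pow(-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened target p: emphasizes confident assignments and normalizes
    per-cluster frequencies."""
    p = q.pow(2) / q.sum(dim=0, keepdim=True)
    return p / p.sum(dim=1, keepdim=True)

def clustering_loss(q):
    """KL(p || q), minimized iteratively to tighten cluster boundaries."""
    p = target_distribution(q).detach()
    return F.kl_div(q.log(), p, reduction="batchmean")

if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(128, 32)                       # placeholder fused embeddings
    centers = torch.randn(5, 32, requires_grad=True)
    loss = clustering_loss(soft_assignment(z, centers))
    loss.backward()
    print(float(loss), centers.grad.shape)
```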
Saliency Object Detection Utilizing Adaptive Convolutional Attention and Mask Structure
ZHU Lei, YUAN Jinyao, WANG Wenwu, CAI Xiaoman
2025, 47(1): 260-270.   doi: 10.11999/JEIT240431
[Abstract](260) [FullText HTML](83) [PDF 2990KB](55)
Abstract:
  Objective   Salient Object Detection (SOD) aims to replicate the human visual system’s attentional processes by identifying visually prominent objects within a scene. Recent advancements in Convolutional Neural Networks (CNNs) and Transformer-based models have improved performance; however, several limitations remain: (1) Most existing models depend on pixel-wise dense predictions, diverging from the human visual system’s focus on region-level analysis, which can result in inconsistent saliency distribution within semantic regions. (2) The common application of Transformers to capture global dependencies may not be ideal for SOD, as the task prioritizes center-surround contrasts in local areas rather than global long-range correlations. This study proposes an innovative SOD model that integrates CNN-style adaptive attention and mask-aware mechanisms to enhance contextual feature representation and overall performance.  Methods   The proposed model architecture comprises a feature extraction backbone, contextual enhancement modules, and a mask-aware decoding structure. A CNN backbone, specifically Res2Net, is employed for extracting multi-scale features from input images. These features are processed hierarchically to preserve both spatial detail and semantic richness. Additionally, this framework utilizes a top-down pathway with feature pyramids to enhance multi-scale representations. High-level features are further refined through specialized modules to improve saliency prediction. Central to this architecture is the ConvoluTional attention-based contextual Feature Enhancement (CTFE) module. By using adaptive convolutional attention, this module effectively captures meaningful contextual associations without relying on global dependencies, as seen in Transformer-based methods. The CTFE focuses on modeling center-surround contrasts within relevant regions, avoiding unnecessary computational overhead. Features refined by the CTFE module are integrated with lower-level features through the Feature Fusion Module (FFM). Two fusion strategies, Attention-Fusion and Simple-Fusion, were evaluated to identify the most effective method for merging hierarchical features. The decoding process is managed by the Mask-Aware Transformer (MAT) module, which predicts salient regions by restricting attention to mask-defined areas. This strategy ensures that the decoding process prioritizes regions relevant to saliency, enhancing semantic consistency while reducing noise from irrelevant background information. The MAT module’s ability to generate both masks and object confidence scores makes it particularly suited for complex scenes. Multiple loss functions guide the training process: Mask loss, computed using Dice loss, ensures that predicted masks closely align with ground truth. Ranking loss prioritizes the significance of salient regions, while edge loss sharpens boundaries to clearly distinguish salient objects from their background. These objectives are optimized jointly using the Adam optimizer with a dynamically adjusted learning rate.  Results and Discussions   Experiments were conducted using the PyTorch framework on an RTX 3090 GPU, with training configurations optimized for SOD datasets. The input resolution was set to 384×384 pixels, and data augmentation techniques, such as horizontal flipping and random cropping, were applied. The learning rate was initialized at 6e-6 and adjusted dynamically, with the Adam optimizer employed to minimize the combined loss functions.
Experimental evaluations were performed on four widely used datasets: SOD, DUTS-TE, DUT-OMRON, and ECSSD. The proposed model demonstrated exceptional performance across all datasets, showing significant improvements in Mean Absolute Error (MAE) and maximum F-measure metrics. For instance, on the DUTS-TE dataset, the model achieved an MAE of 0.023 and a maximum F-measure of 0.9508, exceeding competing methods such as MENet and VSCode. Visual comparisons indicate that the proposed method generates saliency maps that closely align with the ground truth, effectively addressing challenging scenarios including fine structures, multiple objects, and complex backgrounds. In contrast, other methods often incorporate irrelevant regions or fail to accurately capture object details. Ablation experiments validated the effectiveness of crucial components. For example, the incorporation of the CTFE module resulted in a reduction of MAE from 0.109 to 0.102. Additionally, the Simple-Fusion strategy outperformed the Attention-Fusion approach, yielding a lower MAE and a higher maximum F-measure score. The integration of IOU and BCE-based edge loss further enhanced boundary sharpness, demonstrating superior performance compared to Canny-based edge loss. Heatmaps illustrate the contributions of the CTFE and MAT modules in emphasizing salient regions while preserving semantic consistency. The CTFE effectively accentuates center-surround contrasts, while the MAT captures global object-level semantics. These visualizations highlight the model’s ability to focus on critical areas while minimizing background noise.  Conclusions   This study presents a novel SOD framework that integrates CNN-style adaptive attention with mask-aware decoding mechanisms. The proposed model addresses the limitations of existing approaches by enhancing semantic consistency and contextual representation while avoiding excessive dependence on global variables. Comprehensive evaluations demonstrate its robustness, generalization capability, and significant performance enhancements across multiple benchmarks. Future research will investigate further optimization of the architecture and its application to multimodal SOD tasks, including RGB-D and RGB-T saliency detection.
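Since the mask loss is stated to be a Dice loss, a minimal differentiable version is sketched below; the tensor shapes and smoothing constant are assumed for illustration, and the ranking and edge losses of the full objective are not included.
```python
import torch

def dice_loss(pred_logits, target, eps=1.0):
    """Soft Dice loss between predicted mask logits and a binary ground-truth
    mask: 1 - 2|P∩G| / (|P| + |G|), computed in differentiable form."""
    pred = torch.sigmoid(pred_logits).flatten(1)   # (B, H*W)
    target = target.flatten(1).float()
    inter = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 1, 96, 96, requires_grad=True)     # predicted masks
    gt = (torch.rand(4, 1, 96, 96) > 0.5).float()               # placeholder ground truth
    loss = dice_loss(logits, gt)
    loss.backward()
    print(float(loss))
```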
Cryptography and Network Information Security
The Small-state Stream Cipher Algorithm Draco-F Based on State-bit Indexing Method
ZHANG Runlian, FAN Xin, ZHAO Hao, WU Xiaonian, WEI Yongzhuang
2025, 47(1): 271-278.   doi: 10.11999/JEIT240524
[Abstract](152) [FullText HTML](46) [PDF 1054KB](20)
Abstract:
  Objective   The Draco algorithm is a stream cipher based on the scheme Consisting of the Initial Value and Key-prefix (CIVK). It claims to provide security against Time Memory Data TradeOff (TMDTO) attacks. However, its selection function has structural flaws that attackers can exploit. These weaknesses can compromise its security. To address these vulnerabilities and lower the hardware costs associated with the Draco algorithm, this paper proposes an improved version called Draco-F. This new algorithm utilizes state bit indexing and dynamic initialization.  Methods   Firstly, to address the small cycle problems of the selection function and the high hardware costs in the Draco algorithm, the Draco-F algorithm introduces a new selection function. This function employs state bit indexing to extend the selection function’s period and reduce hardware costs. Specifically, the algorithm generates three index values based on 17 state bits from two Nonlinear Feedback Shift Registers (NFSRs). These index values serve as subscripts to select three bits of data stored in non-volatile memory. The output bit of the selection function is produced through specified nonlinear operations on these three bits. Secondly, while ensuring uniform usage of NFSR state bits, the Draco-F algorithm further minimizes hardware costs by simplifying the output function. Finally, Draco-F incorporates dynamic initialization techniques to prevent key backtracking.  Results and Discussions   Security analysis of the Draco-F algorithm, including evaluations against universal TMDTO attacks, zero stream attacks, chosen-IV attacks, guess-and-determine attacks, key recovery attacks, and randomness testing, demonstrates that Draco-F effectively avoids the security vulnerabilities encountered by the original Draco algorithm, thereby offering enhanced security. Software testing results indicate that the Draco-F algorithm achieves a 128-bit security level with an actual 128-bit internal state and higher key stream throughput compared to the Draco algorithm. Additionally, hardware testing results reveal that the circuit area of the Draco-F algorithm is smaller than that of the Draco algorithm.  Conclusions   In comparison to the Draco algorithm, the Draco-F algorithm significantly enhances security by addressing its vulnerabilities. It also offers higher key stream throughput and a reduced circuit area.
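A toy model of the state-bit indexing idea is sketched below: index values are formed from groups of NFSR state bits and used to select bits from a stored table, which are then combined nonlinearly into the selection-function output. The tap positions, index widths, table size, and combining map are illustrative only and do not correspond to the Draco-F specification.
```python
def bits_to_int(bits):
    """Interpret a list of bits (MSB first) as an integer index."""
    value = 0
    for b in bits:
        value = (value << 1) | (b & 1)
    return value

def selection_function(state, table):
    """Toy state-bit-indexing selection: three index values are formed from
    disjoint groups of state bits and used to pick three bits of the stored
    table, which are then combined nonlinearly. Tap positions, index widths,
    and the combining map are illustrative, not the Draco-F specification."""
    taps = [(0, 3, 7, 11, 15, 20), (1, 5, 9, 13, 18, 22), (2, 6, 10, 14, 19)]
    idx = [bits_to_int([state[t] for t in group]) % len(table) for group in taps]
    x, y, z = (table[i] for i in idx)
    return (x & y) ^ z                       # simple nonlinear combination

if __name__ == "__main__":
    import random
    random.seed(7)
    table = [random.getrandbits(1) for _ in range(64)]    # stored key-dependent bits
    state = [random.getrandbits(1) for _ in range(128)]   # joint NFSR state (toy size)
    print(selection_function(state, table))
```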
Information of National Natural Science Foundation
Overview on Application and Funding Statistics of the National Natural Science Foundation of China in the Electronics and Technology Area for 2024
JIA Renxu, WEN Jun, SUN Ling
2025, 47(1): 279-286.   doi: 10.11999/JEIT250000
[Abstract](530) [FullText HTML](92) [PDF 1376KB](187)
Abstract:
In this report, the application and funding statistics of several project types in the electronics and technology area under Division I of the Information Science Department of the National Natural Science Foundation of China in 2024 are summarized. These projects include the Key Program, General Program, Young Scientists Fund, Fund for Less Developed Regions, Excellent Young Scientists Fund, and National Science Fund for Distinguished Young Scholars. Their distribution characteristics and hot topics are analyzed in terms of application codes, applicant age, and changes over the past five or ten years. This analysis is intended to provide a reference for researchers, helping them understand the research directions that need to be strengthened and the impact of recent reform measures on project application and funding in this field.