Latest Articles
Articles in press have been peer-reviewed and accepted. They have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Available online, doi: 10.11999/JEIT240698
Abstract:
Objective In the Internet of Vehicles (IoV), where differentiated services coexist, the system is progressively evolving towards safety and collaborative control applications, such as autonomous driving. Current research primarily focuses on optimizing mechanisms for high reliability and low latency, with Quality of Service (QoS) parameters commonly used as benchmarks, while the timeliness of vehicle status updates receives less attention. Merely optimizing metrics like transmission delay and throughput is insufficient for ensuring that vehicles obtain status information in a timely manner. For example, safety-critical IoV applications that require the exchange of state information between vehicles cannot meet their stringent timeliness requirements by satisfying delay outage probability or data transmission outage constraints alone. To tackle this challenge and meet the stringent timeliness demands of safety and collaborative applications, this paper proposes a user power control and resource allocation strategy aimed at ensuring information freshness. Methods This paper investigates user power control and resource allocation strategies to ensure information freshness. First, the problem of maximizing the Quality of Experience (QoE) for Vehicle-to-Infrastructure (V2I) users under the constraint of freshness in Vehicle-to-Vehicle (V2V) status updates is formulated based on the system model. Then, by incorporating the queue backlog constraint, equivalent to the Age of Information (AoI) violation constraint, extreme value theory is applied to optimize the tail distribution of AoI. Furthermore, using the Lyapunov optimization method, the original problem is transformed into minimizing the Lyapunov drift plus a penalty function, based on which the optimal user transmission power is determined.
Finally, a resource allocation strategy based on Genetic Algorithm Improved Particle Swarm Optimization (GA-PSO) is proposed, leveraging a hypergraph structure to determine the optimal user channel reuse mode. Results and Discussions Simulation analysis indicates the following: 1. The proposed algorithm employs a channel gain differential partitioning method to cluster V2V links, effectively reducing intra-cluster interference. By integrating GA-PSO, it accelerates the search for the optimal channel reuse pattern in three-dimensional matching, minimizing signaling overhead and avoiding local optima. Compared with benchmark algorithms, the proposed approach increases V2I channel capacity by 7.03% and significantly improves the average QoE for V2I users (Fig. 4). 2. As vehicle speed increases, the distance between vehicles also grows, leading to higher transmission power for V2V communication to maintain link reliability and timeliness. This power increase results in reduced V2I channel capacity, subsequently lowering the average QoE for V2I users. Simulation results show a nearly linear relationship between vehicle speed and average QoE for V2I users, suggesting a relatively uniform effect of speed on V2I link capacity (Fig. 5). 3. Under varying Vehicle User Equipment (VUE) densities, the extreme event control framework is used to compare the conditional Complementary Cumulative Distribution Function (CCDF) of AoI and V2V link beacon backlog. The equivalent queue constraint, derived using extreme value theory, effectively controls the occurrence of extreme AoI violations. The simulations show improved AoI tail distribution across different VUE densities (Fig. 6 and Fig. 7). 4. With decreasing vehicle speed, the CCDF tail distribution of AoI improves (Fig. 8). Reduced speed shortens the transmission distance, decreasing V2V link path loss. This lower path loss, combined with less restrictive VUE transmission power limits, increases the V2V link transmission rate.
As beacon transmission rates increase, beacon backlog is reduced, and the probability of exceeding a fixed AoI threshold decreases, ensuring the freshness of V2V beacon transmissions. 5. A comparison of curves across different beacon arrival rates (Fig. 9) reveals that the worst-case AoI consistently increases as the beacon arrival rate rises. At low beacon arrival rates, the average AoI is high. However, once the V2V beacon queue accumulates beyond a certain threshold, further increases in the update arrival rate also raise the average AoI. In summary, the proposed scheme optimizes both the AoI tail distribution and the QoE for V2I users. Conclusions This paper investigates resource allocation and power control in vehicular network communication scenarios. By simultaneously considering the constraints of transmission reliability and status update timeliness in V2V links, restricted by the Signal-to-Interference-plus-Noise Ratio (SINR) threshold and the AoI outage probability threshold, the proposed strategy ensures both link reliability and information freshness. An extreme event control framework is applied to minimize the probability of extreme AoI outage events in V2V links, ensuring the timeliness of transmitted information and meeting service requirements. The Lyapunov optimization method is then used to transform the original problem, yielding the optimal transmission power for both V2I and V2V links. Additionally, a GA-PSO-based three-dimensional matching algorithm is developed to determine the optimal spectrum sharing scheme among V2I, V2V, and subchannels. Numerical results demonstrate that the proposed scheme optimizes the AoI tail distribution while enhancing the QoE for all V2I users.
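The drift-plus-penalty power control described above can be illustrated with a minimal sketch. Everything here is a toy model assumed for illustration, not the paper's system: a single V2V link, unit bandwidth and channel gain, a small discrete power set, and a virtual queue standing in for the AoI-violation backlog constraint.

```python
import numpy as np

def drift_plus_penalty_power(Q, powers, arrival, V=10.0, gain=1.0, noise=1.0):
    """Choose the transmit power minimizing a Lyapunov drift-plus-penalty bound.

    Q       : virtual-queue backlog (proxy for the AoI/backlog constraint)
    powers  : hypothetical discrete set of candidate power levels
    arrival : beacon bits arriving in this slot
    V       : trade-off weight; larger V favors low power over a short queue
    """
    best_p, best_cost = powers[0], float("inf")
    for p in powers:
        service = np.log2(1.0 + gain * p / noise)   # Shannon-type rate
        cost = Q * (arrival - service) + V * p      # drift term + power penalty
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p

# Simulate the virtual queue under this policy.
rng = np.random.default_rng(0)
powers, Q, backlogs = [0.1, 0.5, 1.0, 2.0], 0.0, []
for _ in range(500):
    a = rng.poisson(1.0)                      # random beacon arrivals
    p = drift_plus_penalty_power(Q, powers, a)
    Q = max(Q + a - np.log2(1.0 + p), 0.0)    # backlog (virtual queue) update
    backlogs.append(Q)
```

With these toy numbers the policy stays at low power while the backlog is short and switches to the highest power once the backlog grows, which keeps the queue, and hence the AoI proxy, bounded.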
Available online, doi: 10.11999/JEIT240847
Abstract:
Objective Unmanned Aerial Vehicle-Assisted Federated Edge Learning (UAV-Assisted FEL) communication addresses the data isolation problem and mitigates data leakage risks in terminal devices. However, eavesdroppers may exploit model updates in FEL to recover original private data, significantly threatening the system’s privacy and security. Methods To address this issue, this study proposes a secure aggregation and resource optimization scheme for UAV-Assisted FEL communication systems. Terminal devices train local models using local data and update parameters, which are transmitted to a global UAV. The UAV aggregates these parameters to generate new global model parameters. Eavesdroppers attempt to intercept the transmitted parameters to reconstruct the original data. To enhance security-privacy energy efficiency, the transmission bandwidth, CPU frequency, and transmit power of terminal devices, along with the CPU frequency of the UAV, are jointly optimized. An evolutionary Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to solve this optimization problem. The algorithm intelligently interacts with the system to achieve secure aggregation and resource optimization while meeting latency and energy consumption requirements. Results and Discussions The simulation results validate the effectiveness of the proposed scheme. The experiments evaluate the effects of the scheme on key performance metrics, including system cost, secure transmission rate, and secure privacy energy efficiency, from multiple perspectives. As shown in (Fig. 2), with an increasing number of terminal devices, system cost, secure transmission rate, and secure privacy energy efficiency all increase. These results indicate that the proposed scheme ensures system security and enhances energy efficiency, even in multi-device scenarios. As shown in (Fig.
3), under varying global iteration counts, the system balances latency and energy consumption by either extending the duration to lower energy consumption or increasing energy consumption to reduce latency. The secure transmission rate rises with the number of global iterations, as fewer iterations allow the system to tolerate higher energy consumption and latency per iteration, leading to reduced transmission power from terminal devices to meet system constraints. Additionally, secure privacy energy efficiency improves with increasing global iterations, further demonstrating the scheme’s capacity to ensure system security and reduce system cost as global iterations increase. As shown in (Fig. 4), during UAV flight, secure privacy energy efficiency fluctuates, with higher secure transmission rates observed when the communication environment between terminal devices and the UAV is more favorable. As shown in (Fig. 5), the proposed scheme is compared with two baseline schemes: Scheme 1, which minimizes system latency, and Scheme 2, which minimizes system energy consumption. The proposed scheme significantly outperforms both baselines in cost overhead. Scheme 1 achieves a slightly higher secure transmission rate than the proposed scheme due to its focus on minimizing latency at the expense of higher energy consumption. Conversely, Scheme 2 shows a considerably lower secure transmission rate as it prioritizes minimizing energy consumption, resulting in lower transmission power and compromised secure transmission rates. The results indicate that the secure privacy energy efficiency of the proposed scheme significantly exceeds that of the baseline schemes, further demonstrating its effectiveness. Conclusions To enhance data transmission security and reduce system costs, this paper proposes a secure aggregation and resource optimization scheme for UAV-Assisted FEL.
Under constraints of limited computational and communication resources, the scheme jointly optimizes the transmission bandwidth, CPU frequency, and transmission power of terminal devices, along with the CPU frequency of the UAV, to maximize the secure privacy energy efficiency of the UAV-Assisted FEL system. Given the complexity of the time-varying system and the strong coupling of multiple optimization variables, an advanced DDPG algorithm is developed to solve the optimization problem. The problem is first modeled as a Markov Decision Process, followed by the construction of a reward function positively correlated with the secure privacy energy efficiency objective. The proposed DDPG network then intelligently generates joint optimization variables to obtain the optimal solution for secure privacy energy efficiency. Simulation experiments evaluate the effects of the proposed scheme on key system performance metrics from multiple perspectives. The results demonstrate that the proposed scheme significantly outperforms other benchmark schemes in improving secure privacy energy efficiency, thereby validating its effectiveness.
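The reward design sketched above, positively correlated with secure privacy energy efficiency and penalized when latency or energy constraints are violated, can be illustrated with a small sketch. The secrecy-rate formula, channel gains, and penalty weight below are assumptions for illustration, not values or formulas from the paper.

```python
import math

def secure_privacy_energy_efficiency(p_tx, g_legit, g_eave, bandwidth,
                                     e_compute, noise=1e-9):
    """Toy secrecy rate per unit energy: (legitimate rate - eavesdropper
    rate)+ divided by transmission-plus-computation energy. All parameter
    values are illustrative placeholders."""
    r_legit = bandwidth * math.log2(1.0 + p_tx * g_legit / noise)
    r_eave = bandwidth * math.log2(1.0 + p_tx * g_eave / noise)
    secrecy_rate = max(r_legit - r_eave, 0.0)
    return secrecy_rate / (p_tx + e_compute)

def reward(spee, latency, energy, t_max, e_max, penalty=10.0):
    """Reward grows with the efficiency objective; latency or energy
    constraint violations are penalized, as in the Markov Decision
    Process formulation."""
    r = spee
    if latency > t_max:
        r -= penalty
    if energy > e_max:
        r -= penalty
    return r
```

In a DDPG loop, the actor would output the joint variables (bandwidth, CPU frequencies, transmit power) as actions and receive this reward after each global iteration.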
Available online, doi: 10.11999/JEIT240560
Abstract:
Objective Recent advances in remote sensing imaging technology have made oriented object detection in remote sensing images a prominent research area in computer vision. Unlike traditional object detection tasks, remote sensing images, captured from a wide-range bird's-eye view, often contain a variety of objects with diverse scales and complex backgrounds, posing significant challenges for oriented object detection. Although current approaches have made substantial progress, existing networks do not fully exploit the contextual information across multi-scale features, resulting in classification and localization errors during detection. To address this, a context-aware multiple receptive field fusion network is proposed, which leverages the contextual correlation in multi-scale features. By enhancing the feature representation capabilities of deep networks, the accuracy of oriented object detection in remote sensing images can be improved. Methods For input remote sensing images, ResNet-50 and a feature pyramid network are first employed to extract features at different scales. The features from the first four layers are then enhanced using a receptive field expansion module. The resulting features are processed through a high-level feature aggregation module to effectively fuse multi-scale contextual information. After obtaining enhanced features at different scales, a feature refinement region proposal network is designed to revise object detection proposals using refined feature representations, resulting in more accurate candidate proposals. These multi-scale features and candidate proposals are then input into the Oriented R-CNN detection head to obtain the final object detection results. The receptive field expansion module consists of two submodules: a large selective kernel convolution attention submodule and a shift window self-attention enhancement submodule, which operate in parallel. 
The large selective kernel convolution submodule introduces multiple convolution operations with different kernel sizes to capture contextual information under various receptive fields, thereby improving the network’s ability to perceive multi-scale objects. The shift window self-attention enhancement submodule divides the feature map into patches according to predefined window and step sizes and calculates the self-attention-enhanced feature representation of each patch, extracting more global information from the image. The high-level feature aggregation module integrates rich semantic information from the feature pyramid network with low-level features, improving detection accuracy for multi-scale objects. Finally, a feature refinement region proposal network is designed to reduce location deviation between generated region proposals and actual rotating objects in remote sensing images. Deformable convolution is employed to capture geometric and contextual information, refining the initial proposals and producing the final oriented object detection results through a two-stage region-of-interest alignment network. Results and Discussions The effectiveness and robustness of the proposed network are demonstrated on two public datasets: DIOR-R and HRSC2016. For the DIOR-R dataset, the AP50, AP75, and AP50:95 metrics are used for evaluation. Quantitative and qualitative comparisons (Fig. 7) demonstrate that the proposed network significantly enhances feature representation for different remote sensing objects, distinguishing objects with similar appearances and localizing objects at various scales more accurately. For the HRSC2016 dataset, the mean Average Precision (mAP) is used, and both mAP(07) and mAP(12) are computed for quantitative comparison. The results (Fig. 7, Table 2) further highlight the network’s effectiveness in improving ship detection accuracy in remote sensing images.
Additionally, ablation studies (Table 3) demonstrate that each module in the proposed network contributes to improved detection performance for oriented objects in remote sensing images. Conclusions This paper proposes a context-aware multi-receptive field fusion network for oriented object detection in remote sensing images. The network includes a receptive field expansion module that enhances the perception ability for remote sensing objects of different sizes. The high-level feature aggregation module fully utilizes high-level semantic information, further improving localization and classification accuracy. The feature refinement region proposal network refines the first-stage proposals, resulting in more accurate detection. The qualitative and quantitative results on the DIOR-R and HRSC2016 datasets demonstrate that the proposed network outperforms existing approaches, providing superior detection results for remote sensing objects of varying scales.
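The core idea of fusing contexts from several receptive fields can be sketched in one dimension. Box filters of different widths stand in for the parallel large-kernel convolutions; the actual module uses learned 2-D kernels, attention, and shift-window self-attention, all omitted here.

```python
import numpy as np

def multi_receptive_field(x, kernel_sizes=(3, 5, 7)):
    """Fuse features seen under several receptive fields (1-D toy version).

    Each branch is a normalized box filter of a different width, a crude
    stand-in for parallel convolutions with different kernel sizes; the
    branch outputs are averaged, mimicking multi-scale context fusion.
    """
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # box filter of width k
        branches.append(np.convolve(x, kernel, mode="same"))
    return np.mean(branches, axis=0)

signal = np.zeros(32)
signal[16] = 1.0                                      # a single "object"
fused = multi_receptive_field(signal)
```

The impulse is spread over the union of the three receptive fields while its total mass is preserved, which is the qualitative effect the parallel branches provide before fusion.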
Available online, doi: 10.11999/JEIT231120
Abstract:
Objective Over transmission distances of hundreds of kilometers in low-orbit satellite communication, both power consumption and latency are significantly higher than in ground-based networks. Additionally, many data collection services exhibit short burst characteristics. Conventional resource reservation-based access methods have extremely low resource utilization, whereas dynamic application-based access methods incur large signaling overhead and fail to meet the latency and power consumption requirements for the satellite Internet of Things (IoT). Random access technology, which involves competition for resources, can better accommodate the short burst data packet services typical of satellite IoT. However, as the load increases, data packet collisions at satellite access points lead to a sharp decline in actual throughput under medium and high loads. In terrestrial wireless networks, technologies such as near-far effect management and power control are commonly employed to create differences in packet reception power. However, due to the large number of terminals covered and the long distance between the satellite and the Earth, these techniques are unsuitable for satellite IoT, as an adequate carrier-to-noise ratio difference cannot be established. Developing separation conditions suitable for satellite IoT access scenarios is therefore a key research focus. Considering the future development of spaceborne digital phased array technology, this paper leverages the data-driven beamforming capability of the on-board phased array and introduces the concept of spatial auxiliary channels. By employing a sum-and-difference beam design method, it expands the dimensions for separating collision signals beyond the time, frequency, and energy domains. This approach imposes no additional processing burden on the terminal and aligns with the low power consumption and minimal control design principles of satellite IoT.
Methods To address packet collision issues in hotspot areas of satellite IoT services, this study extends the conventional time-slot ALOHA access framework by introducing an auxiliary receiving beam alongside the random access of conventional receiving beams. The main and auxiliary beams simultaneously receive signals from the same terminal. By optimizing the main lobe gain of the auxiliary beam, a difference in the Signal-to-Noise Ratio (SNR) between the signals received by the main and auxiliary beams is established. This difference is then exploited using Successive Interference Cancellation (SIC) technology, leveraging the correlation between the received signals of the auxiliary and main beams to support the separation of collision signals and ensure reliable reception of satellite IoT signals. Results and Discussions Firstly, the system throughput of the proposed scheme is simulated (Fig. 4). The theoretical throughput derived in the previous section is consistent with the simulation results. When the normalized load reaches 1.8392, the maximum system throughput is 0.81085 packets/slot. Compared with existing methods such as SA, CRDSA, and IRSA, the proposed scheme demonstrates improved system throughput and packet loss rate performance in both the peak and high-load regions, with a peak throughput increase of approximately 120%. Secondly, the influence of amplitude, phase, and angle measurement errors on system performance is evaluated. The angle measurement error has a greater effect on throughput performance than amplitude and phase errors. Amplitude and phase errors have a smaller effect on the main lobe gain but a larger effect on the sidelobe gain (Tables 3–5). Therefore, angle measurement errors have a considerable effect on throughput improvement. Regarding beamwidth, as the beamwidth increases, the roll-off of the corresponding difference beam with 10 array elements is gentler than that with 32 array elements.
However, the peak gain of the auxiliary beam decreases, leading to reduced system throughput for configurations with larger main lobe widths. Conclusions This paper presents an auxiliary beam design strategy for power-domain signal separation in satellite IoT scenarios, aiming to improve system throughput and packet loss rate performance. The approach incorporates spatial domain processing and proposes the concept of auxiliary receiving beams. By generating a difference beam derived from the main beam and using it as the auxiliary beam, the scheme constructs the SNR difference required for power-domain signal separation, enhancing the probability of successfully receiving collided signals. Simulation results indicate that, compared with SA, the peak system throughput increases by approximately 120%. Furthermore, the scheme demonstrates robustness by tolerating moderate system and measurement errors, facilitating large-capacity random access for satellite IoT terminals.
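The throughput gain from resolving two-packet collisions can be checked with a small Monte Carlo sketch. This is a strong simplification of the scheme above: collisions of three or more packets are counted as lost, and the whole main/auxiliary-beam SNR mechanism is abstracted into a fixed separation probability `p_sic`, an assumption rather than the paper's model.

```python
import numpy as np

def throughput(load, p_sic, n_slots=200_000, seed=1):
    """Slotted-ALOHA throughput with power-domain separation of pair collisions.

    load  : offered traffic G in packets/slot (Poisson arrivals)
    p_sic : probability that a two-packet collision is resolved via the
            main/auxiliary-beam SNR difference and SIC (modeling assumption)
    """
    rng = np.random.default_rng(seed)
    k = rng.poisson(load, n_slots)        # packets contending in each slot
    singles = int(np.sum(k == 1))         # received collision-free
    pairs = int(np.sum(k == 2))           # candidates for SIC separation
    separated = rng.binomial(pairs, p_sic)
    return (singles + 2 * separated) / n_slots

plain = throughput(1.0, 0.0)   # classical SA peak, near 1/e packets/slot
aided = throughput(1.5, 0.8)   # with auxiliary-beam separation of pairs
```

Even this crude model reproduces the qualitative effect reported in the simulations: resolving pair collisions roughly doubles the usable peak throughput relative to plain slotted ALOHA.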
Available online, doi: 10.11999/JEIT240733
Abstract:
Objective Mobile Edge Computing (MEC) is a distributed computing paradigm that brings computational resources closer to users, alleviating issues such as high latency and interference found in cloud computing. To enhance the offloading performance of MEC systems and promote green communication, Reconfigurable Intelligent Surface (RIS), a low-cost and easily deployable technology, offers a promising solution. RIS consists of numerous low-cost reflecting elements that can adjust phase shifts to alter the amplitude and phase of incident signals, thereby reconstructing the electromagnetic environment. This transforms traditional passive adaptation into active control. However, the signal reflected by RIS must pass through a two-stage cascaded channel, which is susceptible to multiplicative fading, leading to limited performance gains when direct links are unobstructed. To address this, the concept of active RIS has been proposed, integrating signal amplification circuits into the RIS elements so that they not only reflect but also amplify signals, effectively overcoming this limitation. Additionally, RIS can only transmit or reflect incident signals, limiting coverage to half-space: either the user and base station must be on the same side (reflecting RIS) or on opposite sides (transmitting RIS). This constraint limits deployment flexibility. To address this, the Simultaneously Transmitting And Reflecting Reconfigurable Intelligent Surface (STAR-RIS) is proposed, combining both transmission and reflection functions, where part of the signal is reflected to the same side, and the rest is transmitted to the opposite side. To address these challenges in practical RIS-assisted MEC systems, the active STAR-RIS (aSTAR-RIS) is integrated into the MEC system to overcome geographic deployment constraints and effectively mitigate the effects of multiplicative fading.
Methods Considering the computational resources available at the MEC server, the energy consumption of the aSTAR-RIS, and the phase shift coupling constraints, the task offloading ratio, computational resource allocation, Multi-User Detection (MUD) matrix, aSTAR-RIS phase shift, and transmission power are jointly optimized, resulting in a multivariable coupled weighted total latency minimization problem. To solve this problem, an iterative algorithm combining the Block Coordinate Descent (BCD) and Penalty Dual Decomposition (PDD) algorithms is proposed. In each iteration, the original problem is decomposed into two subproblems: one for optimizing computational resource allocation and the task offloading ratio, and the other for designing the aSTAR-RIS phase shift, MUD matrix, and transmission power. For the first subproblem, the Lagrange multiplier method is used to incorporate constraints into the objective function and enable efficient optimization. The optimal Lagrange multiplier and resource allocation are found using the bisection method. The second subproblem involves handling the fractional objective function using the weighted minimum mean square error algorithm. From the first-order conditions, the optimal MUD matrix is derived. For the aSTAR-RIS phase shift optimization, the non-convex phase shift coupling constraint is decoupled using the PDD algorithm. Results and Discussions As shown in (Fig. 2), with increasing iterations, the weighted total latency steadily decreases and stabilizes, validating the effectiveness of the proposed algorithm. A comparison with three benchmark schemes reveals that, although the proposed scheme converges more slowly, it achieves the lowest weighted total latency upon convergence, with a 12.66% reduction compared to the passive STAR-RIS scheme.
This improvement is mainly due to the power amplification effect, which reduces the impact of multiplicative fading, thereby enhancing the received signal at the base station and reducing latency. As illustrated in (Fig. 3), the weighted total latency decreases as the number of aSTAR-RIS elements increases, allowing for more reflection paths and higher channel gain. For fewer elements, aSTAR-RIS shows a significant performance gain over STAR-RIS, but as the number of elements grows, the performance of aSTAR-RIS and passive STAR-RIS converges, primarily due to thermal noise and power constraints. Moreover, compared to the benchmark scheme that optimizes for maximum rate, the proposed scheme shows significant advantages in reducing latency. As shown in (Fig. 4), when the aSTAR-RIS power overhead increases, the weighted total latency decreases, further showing the potential of aSTAR-RIS in improving communication performance via active amplification. Conclusions This paper investigates a task offloading scheme for an aSTAR-RIS-assisted MEC system, which optimizes the task offloading ratio, computational resource allocation, MUD matrix, aSTAR-RIS phase shift, and transmission power to minimize the weighted total latency of users. The optimization problem is solved using an iterative approach, decomposing the problem into two subproblems and applying the Lagrange multiplier method, PDD, and BCD algorithms. Simulation results demonstrate that the proposed algorithm significantly outperforms benchmark schemes in terms of weighted total latency. The findings validate the effectiveness of aSTAR-RIS in MEC systems, highlighting its advantages over passive STAR-RIS in task offloading, resource optimization, and communication performance.
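The alternating structure of a BCD solver, fixing one block of variables, solving the resulting subproblem in closed form, then switching blocks, can be shown on a two-variable toy objective. The quadratic function below is purely illustrative and unrelated to the latency objective or the PDD penalty handling in the paper.

```python
def f(x, y, a=0.1):
    """Toy coupled objective: a coupling term plus one regularizer per block."""
    return (x - y) ** 2 + a * (x - 3) ** 2 + a * (y + 1) ** 2

def bcd(x=0.0, y=0.0, a=0.1, iters=100):
    """Block coordinate descent with exact per-block minimization.

    Each update is the closed-form argmin of f over one variable with the
    other held fixed (set the partial derivative to zero and solve).
    """
    history = [f(x, y, a)]
    for _ in range(iters):
        x = (y + 3 * a) / (1 + a)   # argmin_x f(x, y)
        y = (x - a) / (1 + a)       # argmin_y f(x, y)
        history.append(f(x, y, a))
    return x, y, history

x_opt, y_opt, hist = bcd()
```

Because each block update is an exact minimization, the objective sequence is monotonically nonincreasing, which mirrors the convergence behavior reported in (Fig. 2); for this toy function the iterates converge to the global minimizer (23/21, 19/21).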
Available online, doi: 10.11999/JEIT240640
Abstract:
Objective Breathing rate is a vital physiological indicator of human health. Abnormal changes in this rate can signify diseases like chronic obstructive pulmonary disease, sleep apnea syndrome, and nocturnal hypoventilation syndrome. Timely and accurate detection of these changes can help identify health risks early, enable professional medical intervention, and optimize treatment timing, thereby improving overall health. However, current detection methods often face limitations due to noise interference and “blind spot” issues, which impact accuracy and robustness. To address these challenges, this paper employs Wi-Fi devices to measure indoor human breathing rates using Integrated Sensing And Communication (ISAC) technology. By combining Variational Modal Decomposition (VMD) and Hilbert-Huang Transform (HHT), a new breathing rate sensing algorithm is proposed. This approach aims to enhance detection accuracy and robustness, resolve the “blind spot” problem in existing technologies, and offer an efficient and reliable solution for health monitoring. Methods Wi-Fi links with high environmental sensitivity were selected to construct the Channel State Information (CSI) ratio model. Subcarriers of the filtered CSI ratio time series were projected, and amplitude and phase information were combined to generate a candidate set of breathing mode signals. For each subcarrier, the sequence with the highest short-term breath noise ratio, determined by periodicity, was identified as the final breath pattern. A threshold was then applied to select relevant subcarriers. Time-frequency analysis using VMD and HHT eliminated modal components unrelated to the human breath rate, and the remaining components were reconstructed. Principal Component Analysis (PCA) was applied for dimensionality reduction, selecting components accounting for over 99% of the variance. 
The ReliefF algorithm was subsequently used to reconstruct the breath signal into a fused signal, from which the breathing rate was calculated using a peak detection algorithm. Results and Discussions Experiments were conducted in two scenarios: a conference office and a corridor. In both setups, a pair of transceivers was deployed, with a 2-meter distance maintained between the transmitter and receiver. The transmitter used one omnidirectional antenna, and the receiver had three antennas positioned perpendicular to the ground. Participants were seated on the vertical bisector of the Line Of Sight (LOS) path, synchronizing their breathing with a metronome as CSI data were recorded. Each test lasted 1 minute, with a confirmed breathing rate of 16 bpm. System parameters used in the experiments are detailed in Table 1. In the conference office scenario, data were collected at various distances from the participant to the transceiver. As illustrated in Figure 9, the Mean Estimation Accuracy (MEA) of the proposed algorithm remains above 97%, even when the participant is 5 meters away. In contrast, the MEA of the other two methods drops by 4% and 5%, respectively. As the sensing distance increases, the multipath effect intensifies, leading to a gradual weakening of the reflected signal and greater noise interference, which significantly challenges the breathing detection accuracy of the other methods. The algorithm presented in this paper incorporates a VMD-HHT time-frequency analysis step. This enhancement allows for effective signal decomposition and feature extraction, markedly improving the accuracy of detecting the target breathing signal. Moreover, the method exhibits strong adaptability and robustness, effectively addressing noise interference and multipath effects in complex environments, thus demonstrating more stable performance. In the corridor scenario, the algorithm's performance was evaluated at varying distances.
The average absolute error of the algorithm was measured with distances ranging from 2 meters to 5 meters. At 2 meters, the Mean Absolute Error (MAE) recorded was 0.37 bpm, and even at 5 meters, the MAE only increased to 0.45 bpm, remaining below 0.5 bpm. As the distance between the target and transceiver increased from 3 to 5 meters, the MAE gradually rose. This trend is attributed to the further attenuation of the signal reflected from the human target, along with the escalating multipath and signal attenuation effects in the environment. Conclusions The experimental results indicate that the MEA of this sensing method exceeds 97% in both the conference office and corridor scenarios. This effectively addresses the “blind spot” issue present in current technologies. The enhanced accuracy and robustness of the algorithm outperform existing sensing schemes. Moreover, this method broadens the application of ISAC in breathing detection and opens new avenues for developing intelligent health management systems in the future.
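The final peak-detection step of the pipeline can be sketched on a synthetic 16 bpm signal matching the 1-minute test setup. Only the counting stage is shown; the CSI-ratio construction, subcarrier selection, VMD-HHT decomposition, PCA, and ReliefF fusion that precede it are omitted, and the sampling rate and noise level are illustrative assumptions.

```python
import numpy as np

def breathing_rate_bpm(signal, fs, min_gap_s=2.0):
    """Estimate breathing rate by counting positive peaks.

    min_gap_s enforces a refractory gap between detected peaks (capping the
    detectable rate at about 30 bpm), which suppresses noise-induced
    double-counting within one breathing cycle.
    """
    gap = int(min_gap_s * fs)
    peaks, last = [], -gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
                and signal[i] > 0 and i - last >= gap):
            peaks.append(i)
            last = i
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

fs, bpm = 20, 16                        # assumed 20 Hz sampling, 16 breaths/min
t = np.arange(0, 60, 1 / fs)            # one 1-minute test, as in the setup
clean = np.sin(2 * np.pi * bpm / 60 * t)
rng = np.random.default_rng(3)
noisy = clean + 0.1 * rng.standard_normal(t.size)
```

On the clean sinusoid the estimator recovers 16 bpm exactly; with additive noise the refractory gap keeps the count at one peak per breathing cycle.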
Available online, doi: 10.11999/JEIT240469
Abstract:
Objective In radar target tracking, tracking accuracy is often influenced by sensor measurement biases and measurement noise, particularly when measurement biases change abruptly and the measurement noise is unknown and time-varying. Ensuring effective target tracking under these conditions poses a significant challenge. An adaptive target tracking method based on a marginalized cubature Kalman filter is proposed to address this issue. Methods (1) Initially, measurements taken at adjacent time points are differenced to formulate the differential measurement equation, thereby effectively mitigating the influence of measurement biases that are either constant or change gradually between adjacent observations. Concurrently, the target states at these moments are expanded to create an extended state vector facilitating real-time filtering. (2) After measurement differencing, sudden changes in measurement biases may cause the differential measurement at the current moment to be classified as an outlier. To identify the occurrence of these abrupt bias changes, a Beta-Bernoulli indicator variable is established. If such a change is detected, the differential measurement for that moment is disregarded, and the predicted state is adopted as the updated state. In the absence of any abrupt changes, standard filtering procedures are conducted. The Gaussian measurement noise, despite having unknown covariance, continues to follow a Gaussian distribution after differencing, allowing its covariance matrix to be modeled using the inverse Wishart distribution. (3) A joint distribution is formulated for the target state, the indicator variables, and the covariance matrix of the measurement noise. The approximate posteriors of each parameter are derived using variational Bayesian inference.
(4) To mitigate the increased filtering burden arising from the high-dimensional extended state vector, the extended target state is marginalized, and a marginalized cubature Kalman filter for target tracking is implemented within the cubature Kalman filtering framework. Results and Discussions The tracking results show that the proposed method accurately identifies abrupt measurement biases while effectively handling unknown time-varying measurement noise, yielding tracking performance that significantly exceeds that of the comparative methods. The Root Mean Square Error (RMSE) results further support these conclusions, and the stability of the proposed method is also demonstrated. In addition, the results show that marginalization greatly reduces the computational load: during the variational Bayesian iterations, cubature sampling and integration are performed repeatedly, and once the target state is marginalized, the dimensionality of the cubature sampling is halved and the number of sampling points per variational iteration is likewise halved. The computation required for the nonlinear propagation of the sampling points therefore decreases, with the savings growing as the number of variational iterations increases. Furthermore, the results demonstrate that marginalization does not compromise tracking accuracy, further validating the marginalization processing. This finding also confirms that marginalization can be extended to other nonlinear variational Bayesian filters based on deterministic sampling as a means of reducing computational complexity. 
Conclusions This paper proposes an adaptive marginalized cubature Kalman filter to improve target tracking in scenarios with measurement biases and unknown time-varying measurement noise. The approach incorporates measurement differencing to eliminate constant biases, constructs indicator variables to detect abrupt biases, and models the unknown measurement noise covariance matrix using the inverse Wishart distribution. A joint posterior distribution of the parameters is established, and the approximate posteriors are solved through variational Bayesian inference. Additionally, marginalization of the target state is performed before implementing tracking within the CKF framework, reducing the filtering burden. The results of our simulation experiments yield the following conclusions: (1) The proposed method demonstrates superior target tracking performance compared to existing techniques in scenarios involving abrupt measurement biases and unknown measurement noise; (2) The marginalization processing strategy significantly alleviates the filtering burden of the proposed filter, making it applicable to more complex nonlinear variational Bayesian filters, such as robust nonlinear random finite set filters, to reduce filtering complexity; (3) This filtering methodology can be extended to target tracking scenarios in higher dimensions.
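The computational saving from marginalization can be made concrete with a small sketch: a third-degree spherical-radial cubature rule uses 2n points for an n-dimensional state, so halving the state dimension halves the number of points propagated in every variational iteration. The dimensions below are illustrative, not those of the paper's tracking model.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: 2n points for an n-dim state."""
    n = mean.size
    S = np.linalg.cholesky(cov)                            # square root of covariance
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # +/- scaled unit directions
    return mean[:, None] + S @ xi                          # shape (n, 2n)

n = 4                                        # illustrative single-step state dimension
extended = cubature_points(np.zeros(2 * n), np.eye(2 * n))   # extended two-step state
marginal = cubature_points(np.zeros(n), np.eye(n))           # after marginalization
print(extended.shape[1], marginal.shape[1])  # 16 points vs 8 points per iteration
```

Since each variational iteration propagates every point through the nonlinear measurement function, the per-iteration cost scales directly with this point count, matching the savings described above.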
Available online , doi: 10.11999/JEIT240741
Abstract:
Objective Federated Learning (FL) is a distributed learning framework with significant potential, allowing users to collaboratively train a shared model while retaining data on their devices. However, the substantial differences in computing, storage, and communication capacities across FL devices in complex networks lead to notable disparities in model training and transmission latency. As communication rounds accumulate, a growing number of heterogeneous devices become stragglers due to constraints such as limited energy and computing power, changes in user intentions, and dynamic channel fluctuations, adversely affecting system convergence. This study addresses these challenges by jointly incorporating assistance mechanisms and reducing device overhead to mitigate the impact of stragglers on model accuracy and training latency. Methods This paper designs an FL architecture that integrates edge-assisted training with adaptive sparsity and proposes an adaptively sparse FL optimization algorithm based on edge-assisted training. First, an edge server is introduced to provide auxiliary training for devices with limited computing power or energy. This reduces the training delay of the FL system, enables stragglers to continue participating in the training process, and helps maintain model accuracy. Specifically, an optimization model for auxiliary training, communication, and computing resource allocation is constructed, and deep reinforcement learning methods are applied to obtain the optimized auxiliary-training decision. Second, based on this decision, unstructured pruning is adaptively applied to the global model in each communication round to further reduce device delay and energy consumption. Results and Discussions The proposed framework and algorithm are evaluated through extensive simulations. 
The results demonstrate the effectiveness and efficiency of the proposed method in terms of model accuracy and training delay. The proposed algorithm achieves an accuracy rate approximately 5% higher than that of the FL algorithm on both the MNIST and CIFAR-10 datasets. This gap arises because, under baseline FL, low-computing-power and low-energy devices fail to transmit their local models to the central server during multiple communication rounds, reducing the global model's accuracy (Table 3). The proposed algorithm achieves an accuracy rate 18% higher than that of the FL algorithm on the CIFAR-10 dataset when the data on each device follow a non-IID distribution. Statistical heterogeneity exacerbates the model degradation caused by stragglers, whereas the proposed algorithm significantly improves model accuracy under such conditions (Table 4). The reward curves of the different algorithms are presented (Fig. 7). The reward of FL remains constant, while the reward of EAFL_RANDOM fluctuates randomly. ASEAFL_DDPG shows a more stable reward curve once training episodes exceed 120, owing to the strong learning and decision-making capability of DDPG. In contrast, EAFL_DQN converges more slowly and maintains a lower reward than the proposed algorithm, which benefits from more precise decision-making in the continuous action space and an exploration mechanism that broadens action selection (Fig. 7). When the computing power of the edge server increases, the training delay of the FL algorithm remains constant since it does not involve auxiliary training. The training delay of EAFL_RANDOM fluctuates randomly, while the delays of ASEAFL_DDPG and EAFL_DQN decrease. However, ASEAFL_DDPG consistently achieves a lower system training delay than EAFL_DQN under the same MEC computing power conditions (Fig. 9). When the communication bandwidth between the edge server and devices increases, the training delay of the FL algorithm remains unchanged as it does not involve auxiliary training. 
The training delay of EAFL_RANDOM fluctuates randomly, while the delays of ASEAFL_DDPG and EAFL_DQN decrease. ASEAFL_DDPG consistently achieves a lower system training delay than EAFL_DQN under the same bandwidth conditions (Fig. 10). Conclusions The proposed sparsity-adaptive FL architecture based on an edge-assisted server mitigates the straggler problem caused by system heterogeneity from two directions. By reducing the number of stragglers, the proposed algorithm achieves higher model accuracy than the traditional FL algorithm, effectively decreases system training delay, and improves training efficiency. The framework holds practical value, particularly for FL deployments in which aggregation devices are selected by statistical characteristics such as model contribution rates. Straggler issues are common in such FL scenarios, and the proposed architecture effectively reduces their occurrence. At the same time, devices with high model contribution rates can continue participating in multiple rounds of federated training, reducing the central server's overhead from frequent device selection. Additionally, in resource-constrained FL environments, edge servers can take on more diverse and flexible tasks, such as partial auxiliary training and partitioned model training.
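The adaptive unstructured pruning step can be sketched as magnitude-based masking, where the sparsity level assigned to a device would come from the auxiliary-training decision. The threshold rule below is a common generic choice, not the paper's exact criterion.

```python
import numpy as np

def unstructured_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep strictly larger entries
    return weights * mask

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))                      # stand-in for one model layer
# A straggler-prone device might be assigned a higher sparsity level
w_sparse = unstructured_prune(w, sparsity=0.7)
print(np.mean(w_sparse == 0))                      # roughly 0.7 of weights removed
```

Because the surviving entries keep their original positions, this is unstructured pruning: the layer shape is unchanged, but both the gradient computation and the uplink payload shrink with the sparsity level.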
Available online , doi: 10.11999/JEIT240782
Abstract:
Objective In 2017, the PFP algorithm was introduced as an ultra-lightweight block cipher to address the demand for efficient cryptographic solutions in constrained environments, such as the Internet of Things (IoT). With a hardware footprint of approximately 1355 GE and low power consumption, PFP has attracted attention for its ability to deliver high-speed encryption with minimal resource usage. Its encryption and decryption speeds outperform those of the internationally recognized PRESENT cipher by a factor of 1.5, making it highly suitable for real-time applications in embedded systems. While the original design documentation asserts that PFP resists various traditional cryptographic attacks, including differential, linear, and impossible differential attacks, the possibility of undiscovered vulnerabilities remains unexplored. This study evaluates the algorithm’s resistance to related-key differential attacks, a critical cryptanalysis method for lightweight ciphers, to determine the actual security level of the PFP algorithm using formal cryptanalysis techniques. Methods To evaluate the security of the PFP algorithm, Satisfiability Modulo Theories (SMT) is used to model the cipher’s round function and automate the search for distinguishers indicating potential design weaknesses. SMT, a formal method increasingly applied in cryptanalysis, facilitates automated attack generation and the detection of cryptographic flaws. The methodology involved constructing mathematical models of the cipher’s rounds, which are tested for differential characteristics under various key assumptions. Two distinguisher models are developed: one based on single-key differentials and the other on related-key differentials, the latter being the focus of this analysis. These models automated the search for weak key differentials that could enable efficient key recovery attacks. 
The analysis leverages the nonlinear substitution-permutation structure of the PFP round function to systematically identify vulnerabilities. The results are examined to estimate the probability of key recovery under different attack scenarios and to assess the effectiveness of related-key differential cryptanalysis against the full-round PFP cipher. Results and Discussions The SMT-based analysis reveals a critical vulnerability in the PFP algorithm. A related-key differential characteristic with a probability of 2^-62 is identified, persisting through 32 encryption rounds. This characteristic indicates a predictable pattern in the cipher's behavior under related-key conditions that can be exploited to recover the secret key. Such differentials are particularly concerning because they expose a significant weakness in the cipher's resistance to related-key attacks, a critical threat in IoT applications where keys may be reused or related across multiple devices or sessions. Based on this finding, a key recovery attack is developed, requiring only 2^63 chosen plaintexts and 2^48 full-round encryptions to retrieve the 80-bit master key. The efficiency of this attack demonstrates the vulnerability of the PFP cipher to practical cryptanalysis, even with limited computational resources. The attack's relatively low complexity suggests that PFP may be unsuitable for applications demanding high security, particularly in environments where adversaries can exploit related-key differential characteristics. Moreover, these results indicate that the existing resistance claims for the PFP cipher are insufficient, as they do not account for related-key differential cryptanalysis. This challenges the assertion that the PFP algorithm is secure against all known cryptographic attacks and emphasizes the need for thorough cryptanalysis before lightweight ciphers are deployed in real-world scenarios. (Fig. 1: Related-key differential characteristic with probability 2^-62 over 32 rounds; Table 1: Attack complexity and resource requirements for related-key recovery.) Conclusions This paper presents a cryptographic analysis of the PFP lightweight block cipher, revealing its vulnerability to related-key differential attacks. The proposed key recovery attack demonstrates that, despite its efficiency in hardware and speed, PFP fails to resist attacks exploiting related-key differential characteristics. This weakness is particularly concerning for IoT applications, where key reuse or related keys across devices are common. These findings highlight the need for further refinement in lightweight cipher design to ensure robust resistance against advanced cryptanalysis techniques. As lightweight ciphers continue to be deployed in security-critical systems, designers should consider all potential attack vectors, including related-key differentials, to strengthen security guarantees. Future work should focus on enhancing the cipher's security by exploring alternative key-schedule designs or increasing the number of rounds to mitigate the identified vulnerabilities. Additionally, this study demonstrates the effectiveness of SMT-based formal methods in cryptographic analysis, providing a systematic approach for identifying previously overlooked weaknesses in cipher designs.
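A single-round building block of differential cryptanalysis, which an SMT model chains across all 32 rounds, is the Differential Distribution Table (DDT) of the S-box: the probability of a characteristic is the product of the per-S-box transition probabilities read from this table. The sketch below computes a DDT using the PRESENT 4-bit S-box purely for illustration; it is not claimed to be PFP's S-box.

```python
def ddt(sbox):
    """Differential distribution table: counts of output differences per input difference."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

# PRESENT 4-bit S-box, used here only as a familiar example
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
T = ddt(SBOX)
best = max(T[dx][dy] for dx in range(1, 16) for dy in range(16))
print(best, best / 16)   # best nonzero transition: count 4, probability 2^-2
```

An automated search assigns a difference pattern to every round, sums the log-probabilities of the active S-boxes, and asks the solver whether a full-round characteristic above a target probability (here 2^-62) exists.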
Available online , doi: 10.11999/JEIT240866
Abstract:
Objective: To address the balance between efficient energy utilization and information freshness in Unmanned Aerial Vehicle (UAV)-assisted data collection for the Internet of Things (IoT) using Reconfigurable Intelligent Surfaces (RIS). Methods: A data collection optimization policy based on deep reinforcement learning is proposed. Considering flight energy consumption, communication complexity, and Age of Information (AoI) constraints, a joint optimization scheme is designed using a Double Deep Q-Network (DDQN). The scheme integrates UAV trajectory planning, IoT device scheduling, and RIS phase adjustment, mitigating the Q-value overestimation observed in traditional Q-learning methods. Results and Discussions: The proposed method enables the UAV to dynamically adjust its trajectory and communication strategy based on real-time environmental conditions, enhancing data transmission efficiency and reducing energy consumption. Simulation results demonstrate superior convergence compared with traditional methods (Fig. 3). The UAV trajectory shows that the proposed method effectively accomplishes the data collection task (Fig. 4). Furthermore, rational allocation of energy and communication resources allows dynamic adaptation to varying communication environment parameters, ensuring an optimal balance between energy consumption and AoI (Fig. 5, Fig. 6). Conclusions: The deep reinforcement learning-based optimization policy for UAV-assisted IoT data collection with RIS effectively resolves the trade-off between energy utilization and information freshness. This robust solution improves data collection efficiency in dynamic communication environments.
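The DDQN update that mitigates Q-value overestimation decouples action selection from action evaluation: the online network picks the greedy next action, and the target network scores it. A minimal sketch (the array shapes and values are illustrative, not from the paper):

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, done=None):
    """Double-DQN target: online net selects the action, target net evaluates it,
    reducing the max-operator overestimation of vanilla Q-learning/DQN."""
    best_actions = np.argmax(next_q_online, axis=1)          # selection (online net)
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]  # evaluation
    if done is not None:
        evaluated = evaluated * (1.0 - done)                 # no bootstrap at episode end
    return rewards + gamma * evaluated

r = np.array([1.0, 0.0])
q_online = np.array([[0.2, 0.9], [0.5, 0.1]])   # hypothetical Q estimates
q_target = np.array([[0.3, 0.4], [0.6, 0.2]])
print(ddqn_targets(r, q_online, q_target))       # [1 + 0.99*0.4, 0 + 0.99*0.6]
```

In vanilla DQN the same (target) network would both select and evaluate, taking a hard max over `q_target` and systematically overestimating noisy Q-values; the decoupling above is what the abstract's DDQN refers to.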
Available online , doi: 10.11999/JEIT240719
Abstract:
Objective Vehicular networks, as key components of intelligent transportation systems, face increasing spectrum resource limitations within their dedicated 25 MHz communication band, as well as electromagnetic interference in typical communication environments. To address these issues, this paper integrates cognitive radio technology with radar sensing and introduces Hybrid Reconfigurable Intelligent Surfaces (H-RIS) to jointly optimize radar sensing, data transmission, and computation, aiming to enhance spectrum utilization and the Joint Throughput Capacity (JTC) of vehicular networks. Methods A phased optimization approach is adopted in which power allocation, time allocation, and reflection coefficients are alternately optimized to reach the best solution. The data transmission capability of secondary users is characterized by defining a JTC performance index. First, a joint optimization problem for sensing, communication, and computation is formulated: by jointly optimizing time allocation, H-RIS reflection element coefficients, and power allocation, the goal is to maximize the JTC. The block coordinate descent method then decomposes the problem into three sub-problems. In the optimization of the reflection element coefficients, a stepwise approach is employed in which the passive reflection elements are fixed while the active elements are optimized, and vice versa. Results and Discussions The relationship between joint throughput and the number of iterations for the proposed Alternating Optimization Iterative Algorithm (AOIA) is shown (Figure 4). The results indicate that both algorithms converge after a finite number of iterations. 
The relationship between the target user's joint throughput and radar power is presented (Figure 5). In the H-RIS-assisted Integrated Sensing Communication and Computation Vehicle-to-Everything (ISCC-V2X) scenario, the joint throughput of the Aimed Secondary User (ASU) is maximized through optimal power configuration (Figure 5). The comparison of the target user's joint throughput versus radar system power for the proposed algorithm and baseline schemes (Figure 6) demonstrates that the proposed method significantly outperforms random Reconfigurable Intelligent Surface (RIS) and No-RIS schemes under the same parameter settings. Furthermore, the proposed H-RIS optimization scheme outperforms both Random H-RIS and traditional passive-only RIS optimization in terms of joint throughput. The relationship between the target user's joint throughput and the number of H-RIS reflection elements is illustrated (Figure 7); the proposed scheme again provides a significant performance improvement over both Random RIS and No-RIS under the same parameter settings. The relationship between the target secondary user's joint throughput and the transmit power of the ASU is depicted (Figure 9), highlighting that joint throughput increases with transmit power in all scenarios. The relationship between joint throughput and the number of active reflection elements for the proposed algorithm and the benchmark schemes (Figure 10) shows that joint throughput grows with the number of active reflection elements in H-RIS scenarios, with the proposed scheme exhibiting a faster growth rate than Random H-RIS. Finally, the relationship between ASU joint throughput, radar sensing time, and radar power (Figure 11) reveals that an optimal joint time and power allocation strategy exists, maximizing ASU joint throughput while ensuring H-RIS presence and sufficient protection for the primary user. 
Conclusions To address spectrum resource scarcity and low data transmission efficiency in vehicular networks, this paper focuses on improving the joint throughput of intelligent vehicle users, enhancing spectrum utilization, and achieving efficient data transmission in the H-RIS-assisted ISCC-V2X scenario. A joint optimization method for vehicular network sensing, communication, and computation based on H-RIS is explored; the introduction of H-RIS enhances data transmission efficiency while accounting for the interests of both primary and secondary users. First, the joint allocation scenario for the H-RIS-assisted ISCC-V2X system is constructed, comprising the signal, radar sensing, communication, and computation models, from which the joint optimization problem is formulated. Through alternating optimization, the optimal H-RIS reflection element coefficients, time allocation vector, and power allocation vector are derived to maximize the joint throughput. Simulation results demonstrate that incorporating H-RIS significantly improves the joint throughput of the target secondary user and that an optimal power allocation scheme exists; when time allocation and power allocation are considered jointly, an optimal scheme that maximizes the target secondary user's joint throughput is likewise shown to exist.
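The alternating (block coordinate descent) structure described above can be illustrated with a toy two-block objective, where each block has a closed-form update with the other held fixed, mirroring how power, time, and reflection coefficients are cycled. The quadratic objective below is purely illustrative.

```python
def f(x, y):
    # Toy coupled objective standing in for the joint throughput problem
    return (x - y) ** 2 + (x - 3) ** 2 + (y + 1) ** 2

x, y = 0.0, 0.0
for _ in range(100):
    x = (y + 3) / 2          # closed-form minimizer of f over x, with y fixed
    y = (x - 1) / 2          # closed-form minimizer of f over y, with x fixed
print(round(x, 4), round(y, 4))   # converges to (5/3, 1/3)
```

Each sweep can only decrease the objective, which is why such alternating schemes converge monotonically to a (block-wise) optimum; the vehicular-network problem follows the same pattern with three blocks and non-convex sub-problems handled individually.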
Available online , doi: 10.11999/JEIT240590
Abstract:
Objective: As economic and social development progresses rapidly, application demands across various sectors continue to grow, and future 6G communication is moving toward higher frequency bands that also enable enhanced sensing. Given the inherent similarities in signal processing and hardware configurations between sensing and communication, Integrated Sensing And Communication (ISAC) is becoming a vital area of research for future technological advancements. However, during sudden emergencies, communication coverage and target detection in rural and remote areas with limited infrastructure face considerable challenges. Integrating communication and sensing on Unmanned Aerial Vehicles (UAVs) offers unique deployment flexibility and substantial research potential. Despite this, current academic research primarily focuses on single-UAV systems, often prioritizing communication rates while neglecting fairness in multi-user environments. Furthermore, existing literature on multi-UAV systems has not sufficiently addressed variations in user or target numbers and their random distributions, which impedes the system's capability to adaptively allocate resources based on actual demands and improve overall efficiency. Therefore, exploring integrated communication and sensing technologies within multi-UAV systems to provide essential services to randomly distributed ground terminals holds significant practical importance. Methods: This paper addresses the scenario in which ground users and sensing targets are randomly distributed within clusters. The primary focus is on the spatial deployment of UAVs and their beamforming techniques tailored for ground-based equipment. The system seeks to enhance the lower bound of users' achievable transmission rates by optimizing the communication and sensing beamforming variables of the UAVs, while adhering to essential communication and sensing requirements. 
Key constraints include the aerial-ground coverage correlation strategy, UAV transmission power, collision avoidance distances, and the spatial deployment locations. To address the non-convex optimization problem effectively, the study divides it into two sub-problems: the joint optimization of aerial-ground correlation and planar position deployment, and the joint optimization of communication and sensing beamforming. The first sub-problem is solved with an improved Mean Shift (MS) algorithm that jointly optimizes the aerial-ground correlation variables and the UAV planar coordinates (Algorithm 1). The second sub-problem employs a quadratic transformation technique to optimize the communication beamforming variables (Algorithm 2) and a successive convex approximation strategy for the sensing beamforming (Algorithm 3). Finally, a Block Coordinate Descent algorithm alternately optimizes the two sets of variables (Algorithm 4), yielding a relatively optimal solution for the system. Results and Discussions: Algorithm 1 establishes the aerial-ground correlations and determines the planar deployment of the UAVs. During the clustering phase, users and targets are treated as equivalent sample entities, with ground sample points generated through a Poisson cluster process. These points are grouped into nine optimal clusters using the enhanced mean shift algorithm, and samples within the same Voronoi cell are assigned to a single UAV positioned at the mean shift center for optimal service coverage. Algorithm 4 addresses the beamforming for UAVs serving ground users or targets and converges within seven iterations. 
The dynamic interplay between communication and sensing resources is highlighted by variations in the number of sensing targets and the altitude of UAV deployment. The fairness-first approach proposed in this paper, in contrast to a rate-centric strategy, ensures maximum individual transmission quality while maintaining balanced system performance. Furthermore, the overall scheme, referred to as MS+BCD, is compared with two benchmark algorithms: Block Coordinate Descent beamforming optimization with Central point Sensing Deployment (CSD+BCD) and Random Sensing Beamforming with Mean Shift deployment (MS+RSB). The proposed optimization strategy clearly demonstrates advantages in system effectiveness, irrespective of changes in beam pattern gain or increases in UAV antenna numbers. Conclusions: This paper addresses the multi-UAV coverage challenge within the framework of Integrated Sensing and Communication. With a focus on equitable user transmission rates, this study incorporates constraints related to communication and sensing power, beam pattern gain, and aerial-ground correlation. By employing an enhanced Mean Shift algorithm along with the Block Coordinate Descent method, this research optimizes a variety of parameters, including aerial-ground correlation strategies, UAV planar deployment, and communication-sensing beamforming. The objective is to maximize the system’s transmission rate while ensuring high-quality user transmission and fair resource allocation, thereby providing a novel approach for multi-UAV systems enhanced by integrated communication and sensing. Future research will extend these findings to tackle additional altitude optimization challenges and to ensure equitable resource distribution across different UAV coverage zones.
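The first stage of the pipeline, clustering randomly scattered ground samples and placing each UAV at a cluster mode, can be sketched with a flat-kernel mean shift. The bandwidth and two-cluster geometry below are illustrative assumptions, not the paper's nine-cluster Poisson setup.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Flat-kernel mean shift: each mode moves to the mean of nearby sample points."""
    modes = points.copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)       # shift toward the local density peak
    return modes

rng = np.random.default_rng(2)
# Two clusters of ground samples; each UAV would hover above a recovered mode
pts = np.vstack([rng.normal([0, 0], 0.3, (30, 2)),
                 rng.normal([5, 5], 0.3, (30, 2))])
modes = mean_shift(pts, bandwidth=1.5)
centers = np.unique(np.round(modes, 1), axis=0)
print(len(centers))   # 2 recovered cluster centers
```

Unlike k-means, mean shift does not fix the number of clusters in advance, which is what lets the system adapt to varying user and target counts.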
Available online , doi: 10.11999/JEIT240524
Abstract:
Objective The Draco algorithm is a stream cipher based on the CIVK scheme, whose internal state consists of the Initial Value and a Key prefix. It claims to provide security against Time-Memory-Data TradeOff (TMDTO) attacks. However, its selection function has structural flaws that attackers can exploit, compromising its security. To address these vulnerabilities and lower the hardware cost of the Draco algorithm, this paper proposes an improved version called Draco-F, which utilizes state bit indexing and dynamic initialization. Methods First, to address the short-cycle problems of the selection function and the high hardware cost of the Draco algorithm, the Draco-F algorithm introduces a new selection function that employs state bit indexing to extend the selection function's period and reduce hardware cost. Specifically, the algorithm generates three index values from 17 state bits of two Nonlinear Feedback Shift Registers (NFSRs); these index values serve as subscripts to select three bits of data stored in non-volatile memory, and the output bit of the selection function is produced through specified nonlinear operations on these three bits. Second, while ensuring uniform usage of the NFSR state bits, the Draco-F algorithm further reduces hardware cost by simplifying the output function. Finally, Draco-F incorporates dynamic initialization to prevent key backtracking. Results and Discussions Security analysis of the Draco-F algorithm, covering universal TMDTO attacks, zero-stream attacks, chosen-IV attacks, guess-and-determine attacks, key recovery attacks, and randomness testing, demonstrates that Draco-F avoids the security vulnerabilities of the original Draco algorithm and thereby offers enhanced security. 
Software testing results indicate that the Draco-F algorithm achieves a 128-bit security level with an actual 128-bit internal state and higher key stream throughput compared to the Draco algorithm. Additionally, hardware testing results reveal that the circuit area of the Draco-F algorithm is smaller than that of the Draco algorithm. Conclusions In comparison to the Draco algorithm, the Draco-F algorithm significantly enhances security by addressing its vulnerabilities. It also offers higher key stream throughput and a reduced circuit area.
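The state-bit-indexing mechanism can be sketched as follows. The tap positions, memory contents, and combining function here are hypothetical placeholders chosen only to show the mechanism of NFSR bits forming indices into stored data; they are not Draco-F's actual specification.

```python
def select_bit(nfsr_a, nfsr_b, memory):
    """Toy selection function: NFSR state bits form three indices into stored memory.

    nfsr_a, nfsr_b: lists of 0/1 state bits; memory: list of 0/1 stored bits.
    All tap positions below are hypothetical, for illustration only.
    """
    idx1 = (nfsr_a[0] << 2) | (nfsr_a[3] << 1) | nfsr_b[1]
    idx2 = (nfsr_b[0] << 2) | (nfsr_a[5] << 1) | nfsr_b[4]
    idx3 = (nfsr_a[2] << 2) | (nfsr_b[3] << 1) | nfsr_a[7]
    b1, b2, b3 = memory[idx1], memory[idx2], memory[idx3]
    return b1 ^ (b2 & b3)        # a simple nonlinear combination (placeholder)

memory = [1, 0, 1, 1, 0, 0, 1, 0]            # assumed non-volatile memory contents
a = [1, 0, 1, 1, 0, 1, 0, 1]                 # assumed NFSR A state bits
b = [0, 1, 1, 0, 1, 0, 1, 1]                 # assumed NFSR B state bits
print(select_bit(a, b, memory))
```

Because the indices depend on many state bits rather than a few fixed taps, short cycles in the selection output become harder to induce, which is the design intent described above.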
Available online , doi: 10.11999/JEIT240083
Abstract:
Objective In the 6G era, the rapid increase in wireless devices coupled with a scarcity of spectrum resources necessitates the enhancement of system capacity, data rates, and latency. To meet these demands, Integrated Sensing And Communications (ISAC) technology has been proposed. Unlike traditional methods where communication and radar functionalities operate separately, ISAC merges wireless communication with radar sensing, utilizing a shared infrastructure and spectrum. This innovative approach maximizes the efficiency of compact wireless hardware and improves spectral efficiency. However, the integration of communication and radar signals into transmitted beams introduces vulnerabilities, as these signals can be intercepted by potential eavesdroppers, increasing the risk of data leakage. As a result, Physical Layer Security (PLS) becomes essential for ISAC systems. PLS capitalizes on the randomness and diversity inherent in wireless channels to create transmission schemes that mitigate eavesdropping risks and bolster system security. Nevertheless, PLS's effectiveness is contingent on the quality of wireless channels, and the inherently fluctuating nature of these channels leads to inconsistent security performance, posing significant challenges for system adaptability and optimization. Moreover, Intelligent Reflecting Surfaces (IRS) emerge as a pivotal technology in 6G networks, offering the capability to control wireless propagation and the environment by adjusting reflection phase shifts. This advancement facilitates the establishment of reliable communication and sensing links, thereby enhancing the ISAC system's sensing coverage, accuracy, wireless communication performance, and overall security. Consequently, IRS presents a vital solution for addressing PLS challenges in ISAC systems. In light of this, the paper proposes a design study focused on IRS-assisted ISAC systems incorporating cooperative jamming to effectively tackle security concerns. 
Methods This paper examines the impact of eavesdroppers on the security performance of ISAC systems and proposes the secure IRS-ISAC system model. The proposed model features a dual-functional base station equipped with antennas, an IRS with reflective elements, single-antenna legitimate users, and an eavesdropping device. To enhance system security, a jammer equipped with antennas is integrated into the system, transmitting interference signals to mitigate the effects of eavesdroppers. Given the constraints on maximum transmit power for both the base station and the jammer, as well as the IRS reflection phase shifts and radar Signal-to-Interference-plus-Noise Ratio (SINR), a joint optimization problem is formulated to maximize the system's secrecy rate. This optimization involves adjusting base station transmission beamforming, jammer precoding, and IRS phase shifts. The problem, characterized by multiple coupled variables, exhibits non-convexity, complicating direct solutions. To address this non-convex challenge, Alternating Optimization (AO) methods are first employed to decompose the original problem into two sub-problems. Semi-Definite Relaxation (SDR) algorithms, along with auxiliary variable introductions, are then applied to transform the non-convex optimization issue into a convex form, enabling a definitive solution. Finally, a resource allocation algorithm based on alternating iterations is proposed to ensure secure operational efficiency. Results and Discussions The simulation results substantiate the security and efficacy of the proposed algorithm, as well as the superiority of the IRS-ISAC system. Specifically, the system secrecy rate in relation to the number of iterations is illustrated, demonstrating the convergence of the proposed algorithm across varying numbers of base station transmit antennas. 
The findings indicate that the algorithm reaches the maximum system secrecy rate and stabilizes at the fifth iteration, which shows its excellent convergence characteristics. Furthermore, an increase in the number of transmit antennas correlates with a notable enhancement in the system secrecy rate. This improvement can be attributed to the additional spatial degrees of freedom afforded by the base station's antennas, which enable the projection of legitimate information into the null space of all eavesdropper channels—effectively reducing the information received by eavesdroppers and boosting the overall system secrecy rate. The system secrecy rate is presented as a function of the transmit power of the base station. The results indicate that an increase in the base station's maximum transmit power corresponds with an increase in the system secrecy rate. This enhancement occurs because higher transmit power effectively mitigates path loss, thereby improving the quality of the signal. The IRS-assisted ISAC system significantly outperforms scenarios without IRS, thanks to the introduction of additional non-line-of-sight links. Additionally, the proposed scheme demonstrates superior performance compared to the random scheme in the joint design of transmit beamforming and reflection coefficients, validating the effectiveness of the algorithm. The system secrecy rate is illustrated in relation to the number of IRS reflection elements. The results reveal that the system secrecy rates for both the proposed and random methods increase as the number of IRS elements rises. This can be attributed to the incorporation of additional reflective elements, which facilitate enhanced passive beamforming gain and expand the spatial freedom available for optimizing the propagation environment, thereby strengthening anti-eavesdropping capabilities. In contrast, the system secrecy rate for the scheme without IRS remains constant. 
Notably, as the number of IRS elements increases, the gap in secrecy rates between the proposed scheme and the random scheme expands, highlighting the significant advantage of optimizing the IRS phase shift in improving system performance. The radar SINR is depicted concerning the transmit power of the base station. The results indicate that as the maximum transmit power of the base station increases, the SINR of the radar likewise improves. The proposed scheme outperforms the two benchmark schemes in this respect, attributable to the optimization of the IRS phase shift matrix, which not only enhances system security but also effectively conserves energy resources within the communication system. This enables a more efficient allocation of resources to improve sensing performance. By incorporating IRS into the ISAC system, performance in the sensing direction is markedly enhanced while simultaneously bolstering system security. Conclusions This paper addresses the potential for eavesdropping by proposing a secure resource allocation algorithm for ISAC systems with the support of IRS. A secrecy rate maximization problem is formulated, subject to constraints on the transmit power of the base station and jammer, the IRS reflection phase shifts, and the radar SINR. This formulation involves the joint design of transmit beamforming, jammer precoding, and IRS reflection beamforming. The interdependencies among these variables create significant challenges for direct solution methods. To overcome these complexities, the AO algorithm is employed to decompose the non-convex problem into two sub-problems. SDR techniques are then applied to transform these sub-problems into convex forms, enabling their resolution with convex optimization tools. Our simulation results indicate that the proposed method considerably outperforms two benchmark schemes, confirming the algorithm’s effectiveness. 
These findings highlight the considerable potential of IRS in bolstering the security performance of ISAC systems.
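The alternating structure described above (optimize the transmit beamformer for fixed IRS phases, then the phases for a fixed beamformer) can be sketched in a few lines of Python. This is a deliberately simplified stand-in, not the paper's method: it uses maximum-ratio transmission and an element-wise grid search in place of the SDR sub-problem solvers, omits the jammer and the radar SINR constraint, and draws illustrative Rayleigh channels.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 16  # BS antennas, IRS elements (illustrative sizes)

# Random Rayleigh channels: BS->IRS, IRS->user, IRS->eavesdropper
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_e = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 1.0

def secrecy_rate(w, theta):
    """Secrecy rate [log2(1+SNR_user) - log2(1+SNR_eve)]^+ over cascaded IRS links."""
    Phi = np.diag(np.exp(1j * theta))
    snr_u = abs(h_u.conj() @ Phi @ G @ w) ** 2 / noise
    snr_e = abs(h_e.conj() @ Phi @ G @ w) ** 2 / noise
    return max(np.log2(1 + snr_u) - np.log2(1 + snr_e), 0.0)

theta = np.zeros(N)
for _ in range(5):  # alternating optimization: beamformer step, then phase step
    # Step 1: MRT beamformer toward the legitimate cascaded channel (theta fixed)
    h_eff = (h_u.conj() @ np.diag(np.exp(1j * theta)) @ G).conj()
    w = h_eff / np.linalg.norm(h_eff)
    # Step 2: element-wise greedy phase search on a coarse grid (w fixed)
    grid = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    for n in range(N):
        rates = []
        for phi in grid:
            theta[n] = phi
            rates.append(secrecy_rate(w, theta))
        theta[n] = grid[int(np.argmax(rates))]

print(f"secrecy rate after AO: {secrecy_rate(w, theta):.2f} bit/s/Hz")
```

Each phase sweep cannot decrease the objective for the current beamformer (the grid contains the incumbent phase), which mirrors the monotone behavior the paper reports across iterations.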
Available online, doi: 10.11999/JEIT240012
Abstract:
Objective: The field of cellular mobile communication is advancing toward post-5G (5.5G, Beyond 5G, 5G Advanced) and 6th Generation (6G) standards. This evolution involves a shift from traditional sub-6 GHz operating frequency bands to higher frequency ranges, including millimeter wave (mmWave), terahertz (THz), and even visible light frequencies, which intersect with radar operating bands. Technologies such as Orthogonal Frequency Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO) have gained widespread application in both wireless communication and radar domains. Given the shared characteristics and commonalities in signal processing and operating frequency bands between these two fields, “Integrated Sensing And Communication (ISAC)” has emerged as a significant research focus in wireless technologies like 5G Advanced (5G-A), 6G, Wireless Fidelity (WiFi), and radar. This development points toward a network perception paradigm that combines communication, sensing, and computing. The "ISAC" concept aims to unify wireless communication systems (including cellular and WiFi) with wireless sensing technologies (such as radar) and even network Artificial Intelligence (AI) computing capabilities into a cohesive framework. By integrating these elements, the physical layer can share frequencies and Radio Frequency (RF) hardware resources, leading to several advantages: spectrum conservation, cost reduction, minimized hardware size and weight, and enhanced communication perception. In this article, the focus of communication perception integration is primarily on radar communication. ISAC necessitates that both communication and sensing utilize the same radio frequency band and hardware resources. The diverse characteristics of multiple frequency bands, along with the varying hardware requirements for communication and sensing, present increased challenges for ISAC hardware design. 
Effective hardware design for ISAC systems demands a well-considered architecture and device design for RF transceivers. Key considerations include the receiver’s continuous signal sensing, link budget, and noise figure, all of which are sensitive to factors such as system size, weight, power consumption, and cost. A comprehensive review of relevant literature reveals that while studies on overall architecture, waveform design, signal processing, and THz technology exist within the ISAC domain, they often center on theoretical models and software simulation. Hardware design and technical verification methodologies are sporadically addressed across different studies. Although some literature details specific hardware designs and validation approaches, these are limited in number compared to the rich body of theoretical and algorithmic research, indicating a need for more comprehensive and systematic reviews focused specifically on ISAC hardware design. Methods: This paper summarizes the hardware designs, verification technologies, and systemic hardware verification platforms pertinent to beyond 5G, 6G, and WiFi ISAC systems. Additionally, recent research on related hardware designs and verification both domestically and internationally is reviewed. The analysis addresses the challenges in hardware design, including the conflicting requirements between communication and sensing systems, In Band Full Duplex (IBFD) Self-Interference Cancellation (SIC), Power Amplifier (PA) efficiency, and the need for more accurate circuit performance modeling. Results and Discussions: Initially, the design of ISAC transceiver architectures from existing research is summarized and compared. Subsequently, an overview and analysis of current ISAC IBFD self-interference suppression strategies, low Peak to Average Power Ratio (PAPR) waveforms, high-performance PA designs, precise device modeling techniques, and systemic hardware verification platforms are presented. 
Finally, the paper provides a summary of the findings. Future challenges in ISAC hardware design are discussed, including the effects of hardware defects on sensing accuracy, ultra-large scale MIMO systems, high-frequency IBFD, and ISAC hardware designs for Unmanned Aerial Vehicle (UAV) applications. The performance metrics of ISAC IBFD architectures are compared, while the various ISAC transceiver architectures are outlined. Representative hardware verification platforms for ISAC systems are presented. The different ISAC transceiver architectures summarized in this paper are illustrated. Conclusions: In recent years, preliminary research has been conducted on integrated air interface architecture, transceiver hardware design, systematic hardware verification, and demonstration of sensing technologies such as 5G-A, 6G, and WiFi, both domestically and internationally. However, certain limitations persist. Beyond 5G networks, post-5G and 6G ISAC hardware verification platforms primarily operate at the link level rather than at the network system level. This focus on ISAC without the integration of computing functions leads to increased volume and power consumption costs and a reliance on commercial instruments and SDR platforms. Furthermore, the IBFD self-interference suppression technology has yet to fully satisfy the demands of future ultra-large-scale MIMO systems, necessitating further integration with large-scale artificial intelligence model technologies. In light of impending technological challenges and issues of openness, it is crucial for academia and industry to collaborate in addressing these challenges and researching viable solutions. To expedite testing optimization and industrial implementation, practical hardware design transition solutions are required that balance advancements in high-frequency support, receiver architecture, and networking architecture, facilitating the efficient realization of the “ideal” of ISAC.
Available online, doi: 10.11999/JEIT240051
Abstract:
Objective Reconfigurable Intelligent Surface (RIS), an innovative technology for 6G communication, can effectively reduce hardware costs and energy consumption. Most researchers examine the joint BeamForming (BF) design problem in RIS-assisted Multiple-Input Single-Output (MISO) systems or single-user Multiple-Input Multiple-Output (MIMO) systems. However, few investigate the non-convex joint BF optimization problem for RIS-assisted multi-user MISO systems. The existing joint BF design approaches for these systems primarily rely on iterative algorithms that are complex, and some methods have a limited application range. Methods To address the issue, general low-complexity joint BF designs for RIS-assisted multi-user systems are considered. The communication system consists of a Base Station (BS) with an M-antenna configuration utilizing a Uniform Rectangular Array (URA), a RIS with N reflecting elements also arranged in a URA, and K single-antenna User Equipments (UEs). It is assumed that the transmission channel between the BS and UEs experiences blocking due to fading and potential obstacles in a dynamic wireless environment. The non-convex optimization challenge of joint BF design is analyzed, with the goal of maximizing the sum data rate for RIS-aided multi-user systems. The design process involves three main steps: First, the RIS reflection matrix Θ is designed based on the perfect channel state information obtained from both the BS-RIS and RIS-UE links. This design exploits the approximate orthogonality of the beam steering vectors for all transmitters and receivers using the URA (as detailed in Lemma 1). Second, the transmit BF matrix W at the BS is derived using the zero-forcing method. Third, the power allocation at the BS for multiple users is optimized using the Water-Filling (WF) algorithm.
The proposed scheme is applicable to both single-user and multi-user scenarios, accommodating Line-of-Sight (LoS) paths, Rician channels with LoS paths, as well as Non-LoS (NLoS) paths. The computational complexity of the proposed joint BF design totals O(N + K²M + K³). Compared with existing schemes, the computational complexity of the proposed design is reduced by at least an order of magnitude. Results and Discussions To verify the performance of the proposed joint BF scheme, simulation tests were conducted using the MATLAB platform. Five different schemes were considered for comparison: Scheme 1: BF design and Water-Filling Power Allocation (WFPA) proposed in this paper, utilizing Continuous Phase Shift (CPS) design without accounting for the limitations of the RIS phase shifter's accuracy. Scheme 2: Proposed Beamforming (PBF) and WFPA with 2-bit Phase Shift (2PS) design, taking phase shift accuracy limitations into consideration. Scheme 3: 1-bit Phase Shift (1PS) design under PBF and WFPA. Scheme 4: 2PS design under Random BeamForming (RBF) and WFPA. Scheme 5: Equal Power Allocation (EPA) design under PBF and CPS. Initial numerical results demonstrate that the proposed BF design can achieve a high sum data rate, which can be further enhanced by employing optimal power allocation. Furthermore, under identical simulation conditions, the LoS scenario exhibited superior sum data rate performance compared to the Rician channel scenario, with a performance advantage of approximately 6 bit/s/Hz. This difference can be attributed to the presence of multiple paths in the Rician channel, which increases interference and decreases the signal-to-noise ratio, thereby reducing the sum data rate.
Additionally, when the distance between BS and UEs is fixed, and the RIS is positioned on the straight line between the BS and the UEs, the system sum data rate initially decreases and then increases as the distance between the RIS and UEs increases due to path loss. The simulation results confirm that when the RIS is situated near the UEs (i.e., further from the BS), improved data rate performance can be achieved. This improvement arises because the path loss of the RIS-UE link is greater than that of the BS-RIS link. Therefore, optimal data rate performance is attained when the RIS is closer to the UEs. Moreover, both the simulation results and theoretical analysis indicate that the sum data rate is influenced by the RIS location, offering valuable insights for the selection of RIS positioning. Conclusions This paper proposes a general low-complexity BF design for RIS-assisted multi-user communication systems. Closed-form solutions for transmit BF, power distribution of the BS, and the reflection matrix of the RIS are provided to maximize the system's sum data rate. Simulation results indicate that the proposed BF design achieves higher data rates than alternative schemes. Additionally, both the simulation findings and theoretical analysis demonstrate that the sum data rate varies with the RIS's location, providing a reference criterion for optimizing RIS placement.
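Steps 2 and 3 of the design (zero-forcing transmit beamforming followed by water-filling power allocation) admit short closed-form sketches. The snippet below is illustrative, not the paper's implementation: the RIS phase design of step 1 is abstracted into a fixed effective channel H, and all dimensions and powers are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 4        # BS antennas, single-antenna users (illustrative sizes)
P_total = 10.0     # BS radiated-power budget
noise = 1.0

# Effective channel (rows = users); stands in for the cascaded BS-RIS-UE
# link once the RIS reflection matrix has been fixed in step 1.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Step 2: zero-forcing beamforming. With W = H^H (H H^H)^-1 we get H W = I,
# so each user sees only its own stream; the cost is a per-user power
# penalty [(H H^H)^-1]_kk, i.e. an effective gain of its reciprocal.
HHh_inv = np.linalg.inv(H @ H.conj().T)
W = H.conj().T @ HHh_inv
gain = 1.0 / np.real(np.diag(HHh_inv))

# Step 3: water-filling over K parallel channels, p_k = [mu - noise/gain_k]^+.
def water_filling(gains, budget, sigma2):
    inv = sigma2 / gains
    order = np.sort(inv)
    for m in range(len(gains), 0, -1):   # try m active users, best gains first
        mu = (budget + order[:m].sum()) / m
        if mu > order[m - 1]:            # all m noise levels below the water line
            break
    return np.maximum(mu - inv, 0.0)

p = water_filling(gain, P_total, noise)
rate = np.sum(np.log2(1.0 + p * gain / noise))
print(f"sum rate: {rate:.2f} bit/s/Hz, powers: {np.round(p, 2)}")
```

Both steps are closed-form or a K-step loop, which is where the order-of-magnitude complexity advantage over iterative joint designs comes from.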
Available online, doi: 10.11999/JEIT240431
Abstract:
Objective Salient Object Detection (SOD) aims to replicate the human visual system’s attentional processes by identifying visually prominent objects within a scene. Recent advancements in Convolutional Neural Networks (CNNs) and Transformer-based models have improved performance; however, several limitations remain: (1) Most existing models depend on pixel-wise dense predictions, diverging from the human visual system’s focus on region-level analysis, which can result in inconsistent saliency distribution within semantic regions. (2) The common application of Transformers to capture global dependencies may not be ideal for SOD, as the task prioritizes center-surround contrasts in local areas rather than global long-range correlations. This study proposes an innovative SOD model that integrates CNN-style adaptive attention and mask-aware mechanisms to enhance contextual feature representation and overall performance. Methods The proposed model architecture comprises a feature extraction backbone, contextual enhancement modules, and a mask-aware decoding structure. A CNN backbone, specifically Res2Net, is employed for extracting multi-scale features from input images. These features are processed hierarchically to preserve both spatial detail and semantic richness. Additionally, this framework utilizes a top-down pathway with feature pyramids to enhance multi-scale representations. High-level features are further refined through specialized modules to improve saliency prediction. Central to this architecture is the ConvoluTional attention-based contextual Feature Enhancement (CTFE) module. By using adaptive convolutional attention, this module effectively captures meaningful contextual associations without relying on global dependencies, as seen in Transformer-based methods. The CTFE focuses on modeling center-surround contrasts within relevant regions, avoiding unnecessary computational overhead. 
Features refined by the CTFE module are integrated with lower-level features through the Feature Fusion Module (FFM). Two fusion strategies, Attention-Fusion and Simple-Fusion, were evaluated to identify the most effective method for merging hierarchical features. The decoding process is managed by the Mask-Aware Transformer (MAT) module, which predicts salient regions by restricting attention to mask-defined areas. This strategy ensures that the decoding process prioritizes regions relevant to saliency, enhancing semantic consistency while reducing noise from irrelevant background information. The MAT module’s ability to generate both masks and object confidence scores makes it particularly suited for complex scenes. Multiple loss functions guide the training process: Mask loss, computed using Dice loss, ensures that predicted masks closely align with ground truth. Ranking loss prioritizes the significance of salient regions, while edge loss sharpens boundaries to clearly distinguish salient objects from their background. These objectives are optimized jointly using the Adam optimizer with a dynamically adjusted learning rate. Results and Discussions Experiments were conducted using the PyTorch framework on an RTX 3090 GPU, with training configurations optimized for SOD datasets. The input resolution was set to 384×384 pixels, and data augmentation techniques, such as horizontal flipping and random cropping, were applied. The learning rate was initialized at 6e-6 and adjusted dynamically, with the Adam optimizer employed to minimize the combined loss functions. Experimental evaluations were performed on four widely used datasets: SOD, DUTS-TE, DUT-OMRON, and ECSSD. The proposed model demonstrated exceptional performance across all datasets, showing significant improvements in Mean Absolute Error (MAE) and maximum F-measure metrics.
For instance, on the DUTS-TE dataset, the model achieved an MAE of 0.023 and a maximum F-measure of 0.9508, exceeding competing methods such as MENet and VSCode. Visual comparisons indicate that the proposed method generates saliency maps that closely align with the ground truth, effectively addressing challenging scenarios including fine structures, multiple objects, and complex backgrounds. In contrast, other methods often incorporate irrelevant regions or fail to accurately capture object details. Ablation experiments validated the effectiveness of crucial components. For example, the incorporation of the CTFE module resulted in a reduction of MAE from 0.109 to 0.102. Additionally, the Simple-Fusion strategy outperformed the Attention-Fusion approach, yielding a lower MAE and a higher maximum F-measure score. The integration of IoU and BCE-based edge loss further enhanced boundary sharpness, demonstrating superior performance compared to Canny-based edge loss. Heatmaps illustrate the contributions of the CTFE and MAT modules in emphasizing salient regions while preserving semantic consistency. The CTFE effectively accentuates center-surround contrasts, while the MAT captures global object-level semantics. These visualizations highlight the model’s ability to focus on critical areas while minimizing background noise. Conclusions This study presents a novel SOD framework that integrates CNN-style adaptive attention with mask-aware decoding mechanisms. The proposed model addresses the limitations of existing approaches by enhancing semantic consistency and contextual representation while avoiding over-reliance on global long-range dependencies. Comprehensive evaluations demonstrate its robustness, generalization capability, and significant performance enhancements across multiple benchmarks. Future research will investigate further optimization of the architecture and its application to multimodal SOD tasks, including RGB-D and RGB-T saliency detection.
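The mask loss mentioned above is a Dice loss, which scores the overlap between a predicted soft mask and the ground truth. A minimal, self-contained version (toy 4×4 mask, illustrative epsilon) looks like:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|); pred in [0,1], target in {0,1}."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Toy saliency mask: a perfect prediction gives ~0 loss, its complement ~1.
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1.0
print(dice_loss(gt, gt))        # ≈ 0.0
print(dice_loss(1.0 - gt, gt))  # ≈ 1.0
```

Because it normalizes by region size, Dice loss stays informative for the small salient objects where plain pixel-wise losses are dominated by background.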
Available online, doi: 10.11999/JEIT240645
Abstract:
Objective: This paper addresses degradation issues such as color distortion, low brightness, and detail loss in images captured under transformer oil. Methods: A multi-scale weighted Retinex algorithm is proposed for image enhancement. First, to alleviate color distortion, a hybrid dynamic color channel compensation algorithm is proposed, which compensates each channel dynamically based on its attenuation in the captured image. Next, a sharpening weight strategy is proposed to tackle detail loss. Finally, a pyramid multi-scale fusion strategy combines Retinex reflection components at different scales with their corresponding weight maps, resulting in clearer images under transformer oil. Results and Discussion: Results are presented in Fig. 5, Fig. 6, Fig. 7, and Table 1. Conclusions: Experimental results demonstrate that the algorithm effectively addresses the complex degradation issues of images captured under transformer oil.
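The Retinex decomposition at the core of such methods, log(I) minus the log of a smoothed surround, accumulated over several scales with weights, can be sketched as follows. This is a generic multi-scale Retinex with a box-blur surround and uniform weights; the paper's channel compensation, sharpening weights, and pyramid fusion are not reproduced here, and all scales are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur (stand-in for the Gaussian surround in Retinex)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def multi_scale_retinex(img, scales=(3, 7, 15), weights=(1/3, 1/3, 1/3)):
    """Weighted sum of single-scale Retinex outputs: log(I) - log(blur_s(I))."""
    img = img.astype(float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for s, w in zip(scales, weights):
        out += w * (np.log(img) - np.log(box_blur(img, s) + 1e-6))
    return out

# Toy low-contrast frame: the Retinex output encodes local contrast
# rather than absolute brightness, which is what enhancement builds on.
rng = np.random.default_rng(2)
img = 30.0 + 5.0 * rng.random((32, 32))    # dark, low-dynamic-range image
r = multi_scale_retinex(img)
print(r.min(), r.max())
```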
Available online, doi: 10.11999/JEIT240650
Abstract:
Objective The provision of satellite navigation services through Low Earth Orbit (LEO) constellations has become a prominent topic in the Position, Navigation and Timing (PNT) field. Although LEO satellites offer low spatial propagation loss and high signal power at ground level, their high-speed movement results in significant signal dynamics, leading to considerable Doppler frequency shifts that affect signal reception on the ground. This dynamic environment increases the frequency search space required by receivers. Furthermore, LEO constellations typically comprise hundreds or even thousands of satellites to achieve global coverage, further expanding the search space for satellite signals at terminals. Consequently, during cold start conditions, the LEO satellite navigation system faces a substantial increase in the search range for navigation signals, presenting significant challenges for signal acquisition. Existing GPS, BDS, GALILEO, and other navigation signals primarily utilize BPSK-CDMA modulation, relying on spread spectrum sequences to differentiate various satellite signals. However, these existing signals exhibit limited resistance to Doppler frequency offsets. Therefore, research into signal waveforms that are more suitable for LEO satellite navigation systems is crucial. Such research aims to enhance the anti-Doppler frequency offset capability and multi-access performance under conditions involving numerous satellites, thereby improving the signal acquisition performance of LEO navigation terminals and enhancing the overall availability of LEO navigation systems. Methods This paper adopts a multi-faceted research approach including theoretical analysis, simulation experiments, and comparative analysis. Since the performance of the correlation function directly impacts signal acquisition performance, an initial theoretical analysis of the correlation function and the multiple access capabilities of the proposed signal is conducted.
Following this, the corresponding capture detection metrics and decision-making methods are proposed based on the principles of signal capture. The investigation continues with a focus on optimizing capture parameters, followed by verification of the signal's acquisition performance through simulations and experiments. Additionally, the performance of the proposed signal is compared to that of traditional navigation signals using both theoretical and simulation analyses. Results and Discussions The theoretical analysis outcomes reveal that the proposed Code-phase Shift Key-Linear Frequency Modulated (CSK-LFM) signal exhibits lower Doppler loss, delay loss, and multiple access loss when compared to the traditional Binary Phase Shift Keying–Code Division Multiple Access (BPSK-CDMA) signal. To minimize the loss of signal detection capacity, it is advisable to expand the signal bandwidth and reduce the spread spectrum ratio during the signal design phase. A satellite parallel search method is developed for the acquisition of the CSK-LFM signal, employing a Partial Matched Filter-Fast Fourier Transform (PMF-FFT) approach. A parameter optimization model has also been developed to enhance the acquisition performance of the CSK-LFM signal. Furthermore, the acquisition performance of CSK-LFM and BPSK-CDMA signals is compared. Under the same conditions, the acquisition and search space required for the BPSK-CDMA signal is larger than that of the CSK-LFM signal. It is noteworthy that, under equivalent dynamic conditions, the acquisition performance of the CSK-LFM signal is approximately 1 dB superior to that of the BPSK-CDMA signal. Lastly, experimental results confirm that the proposed satellite parallel search method based on the PMF-FFT acquisition algorithm is effective for the acquisition of CSK-LFM signals.
Conclusions To address the challenge of achieving rapid signal acquisition in low-orbit satellite navigation systems, a hybrid modulation scheme, CSK-LFM, is designed. The LFM modulation improves the signal's Doppler tolerance, while the use of diverse pseudo-code phases enables multiple access broadcasts from different satellites. This design compresses the three-dimensional search space involving satellite count, time delay, and Doppler shift. Additionally, a satellite parallel search method is implemented based on a PMF-FFT acquisition algorithm for the CSK-LFM signal. An optimization model for acquisition parameters is also developed to enhance performance. Our comparative analysis of the acquisition performance between CSK-LFM and BPSK-CDMA signals demonstrates that at a signal intensity of 40 dB-Hz, the navigation signal using CSK-LFM modulation achieves an acquisition performance approximately 1 dB superior to that of the BPSK-CDMA modulation signal under identical conditions; furthermore, the signal search space can be reduced to one-tenth that of the BPSK-CDMA modulation signal.
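The PMF-FFT idea, short coherent partial correlations whose segment-to-segment phase ramp is turned into a spectral peak by an FFT, can be illustrated on a code-aligned signal. Everything below (sample rate, segment sizes, Doppler value) is an illustrative toy, not the paper's optimized parameter set, and the LFM component is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1e6                      # sample rate (Hz), illustrative
L, P = 100, 32                # partial-correlation length, number of segments
f_dopp = 1000.0               # true Doppler offset (Hz)

code = rng.choice([-1.0, 1.0], size=L * P)            # spreading code
n = np.arange(L * P)
rx = code * np.exp(2j * np.pi * f_dopp * n / fs)      # code-aligned received signal

# PMF stage: despread, then coherently sum within each short segment so each
# partial sum tolerates the residual rotation. FFT stage: the segment-to-
# segment phase ramp exposes the Doppler as a spectral peak.
partial = (rx * code).reshape(P, L).sum(axis=1)
spectrum = np.abs(np.fft.fft(partial))
k = int(np.argmax(spectrum))
f_est = (k if k <= P // 2 else k - P) * fs / (L * P)  # FFT bin -> Hz

print(f"estimated Doppler: {f_est:.1f} Hz (true {f_dopp} Hz)")
```

One despread-and-FFT pass thus tests P Doppler bins at once, which is why the search space shrinks relative to stepping a serial correlator through each frequency hypothesis.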
Available online, doi: 10.11999/JEIT240648
Abstract:
Objective Rotating machinery is essential across various industrial sectors, including energy, aerospace, and manufacturing. However, these machines operate under complex and variable conditions, making timely and accurate fault detection a significant challenge. Traditional diagnostic methods, which use a single sensor and modality, often miss critical features, particularly subtle fault signatures. This can result in reduced reliability, increased downtime, and higher maintenance costs. To address these issues, this study proposes a novel modal fusion deep clustering approach for multi-sensor fault diagnosis in rotating machinery. The main objectives are to: (1) improve feature extraction through time-frequency transformations that reveal important temporal-spectral patterns, (2) implement an attention-based modality fusion strategy that integrates complementary information from various sensors, and (3) use a deep clustering framework to identify fault types without needing labeled training data. Methods The proposed approach utilizes a multi-stage pipeline for thorough feature extraction and analysis. First, raw multi-sensor signals, such as vibration data collected under different load and speed conditions, are preprocessed and transformed with the Short-Time Fourier Transform (STFT). This converts time-domain signals into time-frequency representations, highlighting distinct frequency components related to various fault conditions. Next, Gated Recurrent Units (GRUs) model temporal dependencies and capture long-range correlations, while Convolutional AutoEncoders (CAEs) learn hierarchical spatial features from the transformed data. By combining GRUs and CAEs, the framework encodes both temporal and structural patterns, creating richer and more robust representations than traditional methods that rely solely on either technique or handcrafted features. A key innovation is the modality fusion attention mechanism. 
In multi-sensor environments, individual sensors typically capture complementary aspects of system behavior. Simply concatenating their outputs can lead to suboptimal results due to noise and irrelevant information. The proposed attention-based fusion calculates modality-specific affinity matrices to assess the relationship and importance of each sensor modality. With learnable attention weights, the framework prioritizes the most informative modalities while diminishing the impact of less relevant ones. This ensures the fused representation captures complementary information, resulting in improved discriminative power. Finally, an unsupervised clustering module is integrated into the deep learning pipeline. Rather than depending on labeled data, the model assigns samples to clusters by refining cluster assignments iteratively using a Kullback-Leibler (KL) divergence-based objective. Initially, a soft cluster distribution is created from the learned features. A target distribution is then computed to sharpen and define cluster boundaries. By continuously minimizing the KL divergence between these distributions, the model self-optimizes over time, producing well-separated clusters corresponding to distinct fault types without supervision. Results and Discussions The proposed approach’s effectiveness is illustrated using multi-sensor bearing and gearbox datasets. Compared to conventional unsupervised methods—like traditional clustering algorithms or single-domain feature extraction techniques—this framework significantly enhances clustering accuracy and fault recognition rates. Experimental results show recognition accuracies of approximately 99.16% on gearbox data and 98.63% on bearing data, representing a notable advancement over existing state-of-the-art techniques. These impressive results stem from the synergistic effects of advanced feature extraction, modality fusion, and iterative clustering refinement. 
By extracting time-frequency features through STFT, the method captures a richer representation than relying solely on raw time-domain signals. The use of GRUs incorporates temporal information, enabling the capture of dynamic signal changes that may indicate evolving fault patterns. Additionally, CAEs reveal meaningful spatial structures from time-frequency data, resulting in low-dimensional yet highly informative embeddings. The modality fusion attention mechanism further enhances these benefits by emphasizing relevant modalities, such as vibration data from various sensor placements or distinct physical principles, thus leveraging their complementary strengths. Through the iterative minimization of KL divergence, the clustering process becomes more discriminative. Initially broad and overlapping cluster boundaries are progressively refined, allowing the model to converge toward stable and well-defined fault groupings. This unsupervised approach is particularly valuable in practical scenarios, where obtaining labeled data is costly and time-consuming. The model’s ability to learn directly from unlabeled signals enables continuous monitoring and adaptation, facilitating timely interventions and reducing the risk of unexpected machine failures. The discussion emphasizes the adaptability of the proposed method. Industrial systems continuously evolve, and fault patterns can change over time due to aging, maintenance, or shifts in operational conditions. The unsupervised method can be periodically retrained or updated with new unlabeled data. This allows it to monitor changes in machinery health and quickly detect new fault conditions without the need for manual annotation. Additionally, the attention-based modality fusion is flexible enough to support the inclusion of new sensor types or measurement channels, potentially enhancing diagnostic performance as richer data sources become available. 
Conclusions This study presents a modal fusion deep clustering framework designed for the multi-sensor fault diagnosis of rotating machinery. By combining time-frequency transformations with GRU- and CAE-based deep feature encoders, attention-driven modality fusion, and KL divergence-based unsupervised clustering, this approach outperforms traditional methods in accuracy, robustness, and scalability. Key contributions include a comprehensive multi-domain feature extraction pipeline, an adaptive modality fusion strategy for heterogeneous sensor data integration, and a refined deep clustering mechanism that achieves high diagnostic accuracy without relying on labeled training samples. Looking ahead, there are several promising directions. Adding more modalities—like acoustic emissions, temperature signals, or electrical measurements—could lead to richer feature sets. Exploring semi-supervised or few-shot extensions may further enhance performance by utilizing minimal labeled guidance when available. Implementing the proposed model in an industrial setting, potentially for real-time use, would also validate its practical benefits for maintenance decision-making, helping to reduce operational costs and extend equipment life.
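The KL-divergence-based cluster refinement summarized above can be sketched numerically. The Student's-t soft assignment and the squared-and-renormalized target distribution below follow the common Deep Embedded Clustering formulation; the kernel choice and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def soft_assignments(z, centers, alpha=1.0):
    """Student's-t kernel soft cluster assignment over learned embeddings z."""
    # squared distance between every embedding and every cluster center
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target distribution used to tighten cluster boundaries."""
    w = q ** 2 / q.sum(axis=0)          # emphasize high-confidence assignments
    return w / w.sum(axis=1, keepdims=True)

def kl_divergence(p, q):
    """Sum of row-wise KL(p_i || q_i); the quantity minimized during refinement."""
    return float(np.sum(p * np.log(p / q)))
```

In the full pipeline this divergence would be back-propagated through the feature encoders; here it is shown only as the clustering objective.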
Available online , doi: 10.11999/JEIT240595
Abstract:
Objective The complex and variable nature of the underwater environment presents significant challenges for the field of underwater target tracking. Factors such as environmental noise, interference, and reverberation can severely distort and obscure the signals emitted or reflected by underwater targets, complicating accurate detection and tracking efforts. Additionally, advancements in weak target signal technology add further complexity, as they necessitate sophisticated methods to enhance and interpret signals that may be lost amidst background noise. The challenges associated with underwater target tracking are multifaceted. One major issue is the interference that can compromise the integrity and reliability of target information. Another critical challenge lies in the difficulty of extracting useful feature information from the often chaotic underwater environment. Traditional tracking methods, which typically rely on basic signal processing techniques, frequently fall short in addressing these complexities. Underwater tracking technology serves as a cornerstone in several key fields, including marine science, military strategy, and marine resource development. Notably, effective underwater tracking is essential for monitoring, sonar detection, and the deployment of underwater weapons within the military sector. Considering the significance of underwater tracking technology, there is an urgent need for innovative methods to address the existing challenges. Therefore, this paper proposes a unified approach that views the target and environment as an integrated system, extracting coupled features (specifically, active waveguide invariants) and fusing these features into the motion model to enhance underwater tracking capabilities. Methods: This paper presents an enhanced extended Kalman filter tracking method, which is built upon the active waveguide invariant distribution. 
The mathematical model for active waveguide invariant representation is derived based on the foundational theory of target scattering characteristics in shallow water waveguides, with specific consideration of transmitter-receiver separation. This derivation establishes the constraint relationships among distance, frequency, and the active waveguide invariant distribution. These constraints are subsequently incorporated into the state vector of the extended Kalman filter to enhance the alignment between the target motion model and the actual trajectory, thereby improving tracking accuracy. The method includes image preprocessing steps such as filtering, edge detection, and edge smoothing, followed by the application of the Radon transform to extract essential parameters, including distance and frequency. The Radon transform is refined using threshold filtering to improve parameter extraction. The active waveguide invariant distribution is then computed, and the tracking performance of the method is validated through simulation experiments and real measurement data. The simulation setup involves a rigid ball as the target in a shallow water environment, modeled using a constant velocity assumption. Real measurement data is collected under similar conditions at the Xin'An River experimental site. For both simulations and real measurements, a steel ball model target and constant velocity model are employed, with equipment deployed on the same horizontal plane. Results and Discussions: First, the active waveguide invariant distribution was obtained through simulation experiments. A comparison was made between the Invariant Distribution-Extended Kalman Filter (ID-EKF), the Extended Kalman Filter (EKF), and the Invariance Extended Kalman Filter (IEKF). In trajectory tracking, the ID-EKF demonstrated closer alignment to the true trajectory compared to both the EKF and IEKF. 
Additionally, in terms of the mean square error of the predicted posterior position, the ID-EKF exhibited a lower error rate. As indicated by the overall estimation accuracy, the ID-EKF achieved approximately 50% greater accuracy than the EKF and about 30% higher accuracy than the IEKF. Subsequently, the ID-EKF algorithm was validated in a real-world scenario using actual measurement data. The acoustic field interference stripes were obtained through the processing of received echo signals, and the active waveguide invariant distribution was extracted by manually selecting points and conducting a maximum search, followed by curve fitting using the joint edge curve fitting method. Results from Monte Carlo simulation experiments demonstrated a decreasing order of tracking accuracy for the ID-EKF, IEKF, and EKF, consistent with the simulation results. The overall estimation accuracy of the ID-EKF was approximately 60% higher than that of the EKF and about 40% higher than that of the IEKF. Conclusions: This paper presents a novel tracking method based on the extended Kalman filter, informed by the interference characteristics of target and environmental coupling in shallow water waveguides. The effectiveness of this method is substantiated through both theoretical simulation data and empirical lake measurement data. The active waveguide invariant distribution was derived using the Radon transform, which facilitated the implementation of the ID-EKF tracking. Results from both simulations and experiments reveal that the extracted active invariant value distribution manifests in two scenarios: either coinciding with 1 or exhibiting significant deviation from 1. When the extracted invariant value is markedly different from 1, the ID-EKF demonstrates a reduced tracking error and a more pronounced convergence relative to other tracking algorithms, highlighting the importance of precisely extracting this value to enhance the ID-EKF's performance. 
Conversely, when the extracted value is close to 1, the tracking error of the ID-EKF aligns more closely with that of the IEKF algorithm. In both cases, it is evident that the extracted invariant value is pivotal in enhancing the accuracy of the tracking algorithm. Future research will prioritize the extraction of more accurate invariant values to facilitate the development of higher-precision tracking algorithms.
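The ID-EKF augments a standard extended Kalman filter with constraints derived from the active waveguide invariant distribution. As a minimal sketch, one predict/update cycle of a generic EKF is shown below; the paper's specific state vector, motion model, and measurement model are not reproduced here, so `f`, `h`, and their Jacobians are placeholders to be supplied by the application.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : current state estimate and covariance
    z    : measurement
    f, F : process model and its Jacobian (functions of x)
    h, H : measurement model and its Jacobian (functions of x)
    Q, R : process / measurement noise covariances
    """
    # predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # update with the innovation z - h(x_pred)
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

In the ID-EKF, the invariant-derived distance/frequency constraints would enter through the state vector and measurement model passed to this cycle.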
Available online , doi: 10.11999/JEIT240308
Abstract:
Objective Inverse Lithography Technology (ILT) provides improved imaging effects and a larger process window compared to traditional Optical Proximity Correction (OPC). As chip manufacturing continually reduces process dimensions, ILT has become the leading lithography mask correction technology. This paper first introduces the basic principles and several common implementation methods of the inverse lithography algorithm. It then reviews current research on using inverse lithography technology to optimize lithography masks, and analyzes the advantages and existing challenges of this technology. Methods The general process of generating mask patterns in ILT is exemplified using the level set method. First, the target graphics, light sources, and other inputs are identified. Then, the initial mask pattern is created and a pixelated model is constructed. A photolithography model is then established to calculate the aerial image. The general photoresist threshold model is represented by a sigmoid function, which helps derive the imaging pattern on the photoresist. The key element of the ILT algorithm is the cost function, which measures the difference between the wafer image and the target image. The optimization direction is determined based on the chosen cost function. For instance, the continuous cost function can calculate gradients, enabling the use of gradient descent to find the optimal solution. Finally, when the cost function reaches its minimum, the output mask is generated. Results and Discussions This paper systematically introduces several primary methods for implementing ILT. The level set method's main concept is to convert a two-dimensional closed curve into a three-dimensional surface. Here, the closed curve is viewed as the set of intersection lines between the surface and the zero plane. During the ILT optimization process, the three-dimensional surface shape remains continuous. 
This continuity allows the ILT problem to be transformed into a multivariate optimization problem, solvable using gradient algorithms, machine learning, and other methods. Examples of the level set method's application can be found in both mask optimization and light source optimization. The level set mathematical framework effectively addresses two-dimensional curve evolution. When designing the ILT algorithm, a lithography model determines the optimization direction and velocity for each mask point, employing the level set for mask evolution. Intel has proposed an algorithm that utilizes a pixelated model to optimize the entire chip. However, this approach incurs significant computational costs, necessitating larger mask pixel sizes. Notably, the pixelated model is consistently used throughout the process, with a defined pixelated cost function applicable to multi-color masks. The frequency domain method for calculating the ILT curve involves transforming the mask from the spatial domain into the frequency domain, followed by lithography model calculations. This approach generates a mask with continuous pixel values, which is then gradually converted into a binary mask through multiple steps. When modifying the cost function in frequency domain optimization, all symmetric and repetitive patterns are altered uniformly, preserving symmetry. The transition of complex convolution calculations into multiplication processes within the frequency domain significantly reduces computational complexity and can be accelerated using GPU technology. Due to the prevalent issue of high computational complexity in various lithography mask optimization algorithms, scholars have long pursued machine learning solutions for mask optimization. Early research often overlooked the physical model of photolithography technology, training neural networks solely based on optimized mask features. This oversight led to challenges such as narrow process windows. 
Recent studies have, however, integrated machine learning with other techniques, enabling the physical model of lithography technology to influence neural network training, resulting in improved optimization results. While the ILT-optimized mask lithography process window is relatively large, its high computational complexity limits widespread application. Therefore, combining machine learning with the ILT method represents a promising research direction. Conclusions Three primary techniques exist for optimizing masks using ILT: the Level Set Method, Intel Pixelated ILT Method, and Frequency Domain Calculation of Curve ILT. The Level Set Method reformulates the ILT challenge into a multivariate optimization issue, utilizing a continuous cost function. This approach allows for the application of established methods like gradient descent, which has attracted significant attention and is well-documented in the literature. In contrast, the Intel method relies entirely on pixelated models and pixelated cost functions, though relevant literature on this method is limited. Techniques in the frequency domain can leverage GPU operations to substantially enhance computational speed, and advanced algorithms also exist for converting curve masks into Manhattan masks. The integration of ILT with machine learning technologies shows considerable potential for development. Further research is necessary to effectively reduce computational complexity while ensuring optimal results. Currently, ILT technology faces challenges such as high computational demands and obstacles in full layout optimization. Collaboration among experts and scholars in integrated circuit design and related fields is essential to improve ILT computational speed and to integrate it with other technologies. We believe that as research on ILT-related technologies advances, it will play a crucial role in helping China’s chip industry overcome technological bottlenecks in the future.
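The gradient-driven ILT loop described above (pixelated mask, aerial-image computation, sigmoid resist model, cost minimization) can be illustrated with a toy model. The Gaussian blur standing in for the real optical imaging model, the sigmoid steepness, and the step size are all illustrative assumptions, not a production ILT solver.

```python
import numpy as np

A_SIG = 50.0  # sigmoid steepness of the resist threshold model (illustrative)

def aerial_image(mask, kernel_f):
    """Toy aerial image: circular convolution of the mask with a blur kernel
    (a stand-in for the real lithography imaging model), done in the FFT domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(mask) * kernel_f))

def resist(aerial, threshold=0.5):
    """Sigmoid photoresist threshold model: intensity -> printed pattern."""
    return 1.0 / (1.0 + np.exp(-A_SIG * (aerial - threshold)))

def cost(mask, target, kernel_f):
    """L2 difference between the printed (resist) image and the target."""
    return float(np.sum((resist(aerial_image(mask, kernel_f)) - target) ** 2))

def gradient(mask, target, kernel_f):
    """Analytic gradient of the cost via the chain rule; the adjoint of the
    convolution is a correlation (conjugated kernel in the frequency domain)."""
    z = resist(aerial_image(mask, kernel_f))
    dz = 2.0 * (z - target) * A_SIG * z * (1.0 - z)
    return np.real(np.fft.ifft2(np.fft.fft2(dz) * np.conj(kernel_f)))

# toy setup: normalized Gaussian blur kernel and a square target pattern
n = 32
fx = np.fft.fftfreq(n) * n
X, Y = np.meshgrid(fx, fx)
kern = np.exp(-(X**2 + Y**2) / 8.0)
kernel_f = np.fft.fft2(kern / kern.sum())
target = np.zeros((n, n))
target[12:20, 12:20] = 1.0

mask = np.full((n, n), 0.5)             # gray-tone initial mask
for _ in range(200):                    # plain gradient descent on the pixels
    mask = np.clip(mask - 5e-4 * gradient(mask, target, kernel_f), 0.0, 1.0)
```

A level-set or frequency-domain ILT solver replaces the pixel update above with curve evolution or spectral optimization, but the cost/gradient structure is the same.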
Available online , doi: 10.11999/JEIT240297
Abstract:
Objective As the digital landscape evolves, the rise of innovative applications has led to unprecedented levels of spectrum congestion. This congestion poses significant challenges for the seamless operation and expansion of wireless networks. Among the various solutions being explored, Dual-Functional Radar-Communication (DFRC) emerges as a key technology. It offers a promising pathway to alleviate the growing spectrum crunch. DFRC systems are designed to harmonize radar sensing and communication within the same spectral resources, maximizing efficiency and minimizing waste. However, implementing DFRC systems presents significant challenges, particularly in mitigating mutual interference between communication and radar functions. If this interference is not addressed, it can severely degrade the performance of both systems, undermining the dual-purpose design of DFRC. Additionally, achieving high communication rates under these constraints adds complexity that must be carefully managed. Therefore, tackling interference mitigation while ensuring robust and high-speed communication capabilities is a fundamental challenge the research community must address urgently within DFRC systems. Successfully resolving these issues will pave the way for widespread DFRC adoption and drive advancements across various fields, from autonomous driving to smart cities, fundamentally transforming our interactions with the world. Methods Multi-carrier Complementary-Coded Division Multiple Access (MC-CDMA) is a sophisticated spread spectrum communication technology that utilizes the unique properties of complementary codes to enhance system performance. A key advantage of MC-CDMA is the ideal correlation characteristics of these codes. Theoretically, they can eliminate interference between communication users and radar systems. However, this requires a data block length of 1. 
Since a guard interval must be added after the data block, a length of 1 results in many guard intervals during transmission, lowering the communication user’s transmission rate. To address this issue, this paper expands the spread spectrum codes used by both communication users and radars. The communication code is expanded by repetition, while the radar code is extended using Kronecker products and Golay complementary pairs, matching the data block length. This approach ensures that even if the data block length exceeds 1, the radar signal remains unaffected by the communication users. Results and Discussions The proposed scheme effectively addresses interference between radar and communication, while also improving the data rate for communication users. Experimental simulation results demonstrate that the proposed scheme performs well in terms of bit error rate, anti-Doppler frequency shift capability, and target detection. Conclusions Waveform design is crucial in DFRC systems. This paper presents a new DFRC waveform based on MC-CDMA technology. The scheme generates an integrated waveform through code division, enhancing user data rates and preventing random communication data from interfering with the radar waveform. To achieve this, the communication and radar codes are both extended. The communication code uses repetition for extension, while the radar code employs Golay complementary pairs. Theoretical analysis and simulation results suggest that, compared to traditional spread spectrum schemes, the proposed approach allows for interference-free transmission for both communication and radar, achieves a low bit error rate, and provides excellent data rates. On the radar side, the proposed waveform exhibits a low peak sidelobe ratio and excellent Doppler tolerance, allowing for accurate target detection. Additionally, the approach facilitates rapid generation and strong online design capabilities through the direct design of complementary codes.
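The complementary-code property the radar extension relies on can be checked directly: the aperiodic autocorrelations of a Golay pair sum to an ideal delta, so sidelobes cancel. The recursion and the Kronecker extension below are a standard construction, shown as an illustration rather than the paper's exact code design.

```python
import numpy as np

def golay_pair(m):
    """Length-2**m Golay complementary pair via the standard recursion
    a' = [a, b], b' = [a, -b], starting from the trivial pair ([1], [1])."""
    a = np.array([1.0])
    b = np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation."""
    return np.correlate(x, x, mode="full")

a, b = golay_pair(4)                     # a length-16 pair
delta = acorr(a) + acorr(b)              # sidelobes cancel: delta of height 2N

# Kronecker-product extension of the radar code to match a data block
# (the block length and the all-ones base sequence are illustrative)
radar_code = np.kron(np.ones(3), a)
```

It is this zero-sidelobe sum that lets the radar waveform remain unaffected by the communication users even when the data block length exceeds 1.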
Available online , doi: 10.11999/JEIT240464
Abstract:
Objective In response to the rapid growth of mobile users and the limited distribution of ground infrastructure, this research addresses the challenges faced by vehicular networks. It emphasizes the need for efficient computation offloading and resource optimization, highlighting the role of Unmanned Aerial Vehicles (UAVs), RoadSide Units (RSUs), and Base Stations (BSs) in enhancing overall system performance. Methods This paper presents an innovative research methodology that proposes an energy harvesting-assisted air-ground cooperative computation offloading architecture. This architecture integrates UAVs, RSUs, and BSs to effectively manage the dynamic task queues generated by vehicles. By incorporating Energy Harvesting (EH) technology, UAVs can capture and convert ambient renewable energy, ensuring a continuous power supply and stable computing capabilities. To address the challenges associated with time-varying channel conditions and high mobility of nodes, a Mixed Integer Programming (MIP) problem is formulated. An iterative process is used to adjust offloading decisions and computing resource allocations at low cost, aiming to optimize overall system performance. The approach is outlined as follows: Firstly, an innovative framework for energy harvesting-assisted air-ground cooperative computation offloading is introduced. This framework enables the collaborative management of dynamic task queues generated by vehicles through the integration of UAVs, RSUs, and BSs. The inclusion of EH technology ensures that UAVs maintain a continuous power supply and stable computing capabilities, addressing limitations due to finite energy resources. Secondly, to address system complexities—such as time-varying channel conditions, high node mobility, and dynamic task arrivals—a MIP problem is formulated. 
The objective is to optimize system performance by determining effective joint offloading decisions and resource allocation strategies, minimizing global service delays while meeting various dynamic and long-term energy constraints. Thirdly, an Improved Actor-Critic Algorithm (IACA), based on reinforcement learning principles, is introduced to solve the formulated MIP problem. This algorithm utilizes Lyapunov optimization to decompose the problem into frame-level deterministic optimizations, thereby enhancing its manageability. Additionally, a genetic algorithm is employed to compute target Q-values, which guides the reinforcement learning process and enhances both solution efficiency and global optimality. The IACA algorithm is implemented to iteratively refine offloading decisions and resource allocations, striving for optimized system performance. Through the integration of these research methodologies, this paper makes significant contributions to the field of air-ground cooperative computation offloading by providing a novel framework and algorithm designed to address the challenges posed by limited energy resources, fluctuating channel conditions, and high node mobility. Results and Discussions The effectiveness and efficiency of the proposed framework and algorithm are evaluated through extensive simulations. The results illustrate the capability of the proposed approach to achieve dynamic and efficient offloading and resource optimization within vehicular networks. The performance of the IACA algorithm is illustrated, emphasizing its efficient convergence. Over the course of 4,000 training episodes, the agent continuously interacted with the environment, refining its decision-making strategy and updating network parameters. As shown, the loss function values for both the Actor and Critic networks progressively decreased, indicating improvements in their ability to model the real-world environment. 
Meanwhile, a rising trend in reward values is observed as training episodes increase, ultimately stabilizing, which signifies that the agent has discovered a more effective decision-making strategy. The average system delay and energy consumption relative to time slots are presented. As the number of slots increases, the average delay decreases for all algorithms except for RA, which remains the highest due to random offloading. RLA2C demonstrates superior performance over RLASD due to its advantage function. IACA, trained repeatedly in dynamic environments, achieves an average service delay that closely approximates CPLEX’s optimal performance. Additionally, it significantly reduces average energy consumption by minimizing Lyapunov drift and penalties, outperforming both RA and RLASD. The impact of task input data size on system performance is examined. As the data size increases from 750 kbit to 1,000 kbit, both average delay and energy consumption rise. The IACA algorithm, with its effective interaction with the environment and enhanced genetic algorithm, consistently produces near-optimal solutions, demonstrating strong performance in both energy efficiency and delay management. In contrast, the performance gap between RLASD and RLA2C widens compared to CPLEX due to unstable training environments for larger tasks. RA leads to significant fluctuations in average delay and energy consumption. The effect of the Lyapunov parameter V on average delay and energy consumption at T=200 is illustrated. With V, performance can be finely tuned; as V increases, average delay decreases while energy consumption rises, eventually stabilizing. The IACA algorithm, with its enhanced Q-values, effectively optimizes both delay and energy. Furthermore, the impact of UAV energy thresholds and counts on average system delay is demonstrated. IACA avoids local optima and adapts effectively to thresholds, outperforming RLA2C, RLASD, and RA. 
An increase in the number of UAVs initially reduces delay; however, an excess can lead to increased delay due to limited computing power. Conclusions The proposed EH-assisted collaborative air-ground computing offloading framework and IACA algorithm significantly improve the performance of vehicular networks by optimizing offloading decisions and resource allocations. Simulation results validate the effectiveness of the proposed methodology in reducing average delay, enhancing energy efficiency, and increasing system throughput. Future research could focus on integrating more advanced energy harvesting technologies and further refining the proposed algorithm to better address the complexities associated with large-scale vehicular networks.
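The Lyapunov decomposition underlying IACA reduces the long-term problem to a per-slot drift-plus-penalty trade-off weighted by the parameter V, with virtual queues enforcing the long-term energy constraint. A minimal sketch of that per-slot rule follows; the action set, cost values, and energy budget are hypothetical placeholders.

```python
import numpy as np

def drift_plus_penalty_choice(actions, q_energy, V):
    """Pick the per-slot action minimizing V * delay + Q * energy.

    actions  : list of (delay, energy) pairs for candidate offloading decisions
    q_energy : current virtual energy-deficit queue backlog
    V        : Lyapunov trade-off parameter (larger V favors lower delay
               at the price of higher energy consumption)
    """
    scores = [V * d + q_energy * e for d, e in actions]
    return int(np.argmin(scores))

def update_queue(q_energy, energy_used, energy_budget):
    """Virtual queue update: backlog grows when the slot overspends the
    per-slot energy budget, enforcing the constraint in the long-term average."""
    return max(q_energy + energy_used - energy_budget, 0.0)
```

This mirrors the trend reported above: as V grows the rule trades energy for delay, while a large backlog steers decisions back toward energy saving.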
Available online , doi: 10.11999/JEIT240677
Abstract:
Objective: As a vital component of the global communication network, satellite communication attracts significant attention for its capacity to provide seamless global coverage and establish an integrated space-ground information network. Time-Hopping (TH), a widely used technique in satellite communication, is distinguished by its strong anti-jamming capabilities, flexible spectrum utilization, and high security levels. In an effort to enhance data transmission security, a system utilizing randomly varying TH patterns has been developed. To tackle the challenge of limited transmission power, symbols are distributed across different time slots and repeatedly transmitted according to random TH patterns. At the receiver end, a coherent combining strategy is implemented for signals originating from multiple time slots. To minimize Signal-to-Noise Ratio (SNR) loss during this combining process, precise estimation of TH patterns and multi-hop carrier phases is essential. The randomness of the TH patterns and multi-hop carrier phases further complicates parameter estimation by increasing its dimensionality. Additionally, the low transmission power leads to low-SNR conditions for the received signals in each time slot, complicating parameter estimation even more. Traditional exhaustive search methods are hindered by high computational complexity, highlighting the pressing need for low-complexity multidimensional parameter estimation techniques tailored specifically for TH communication systems. Methods: Firstly, a TH communication system featuring randomly varying TH patterns is developed, where the time slot index of the signal in each time frame is determined by the TH code. Both parties involved in the communication agree that this TH code will change randomly within a specified range. 
Building on this foundation, a mathematical model for estimating TH patterns and multi-hop carrier phases is derived from the perspective of maximum likelihood estimation, framing it as a multidimensional nonlinear optimization problem. Moreover, guided by a coherent combining strategy and constrained by low SNR conditions at the receiver, a Cross-Entropy (CE) iteration aided algorithm is proposed for the joint estimation of TH patterns and multi-hop carrier phases. This algorithm generates multiple sets of TH code and carrier phase estimates randomly based on a predetermined probability distribution. Using the SNR loss of the combined signal as the objective function, the CE method incorporates an adaptive importance sampling strategy to iteratively update the probability distribution of the estimated parameters, facilitating rapid convergence towards optimal solutions. Specifically, in each iteration, samples demonstrating superior performance are selected according to the objective function to calculate the probability distribution for the subsequent iteration, thereby enhancing the likelihood of reaching the optimal solution. Additionally, to account for the randomness inherent in the iterations, a global optimal vector set is established to document the parameter estimates that correspond to the minimum SNR loss throughout the iterative process. Finally, simulation experiments are conducted to assess the performance of the proposed algorithm in terms of iterative convergence speed, parameter estimation error, and the combined demodulation Bit Error Rate (BER). Results and Discussions: The estimation errors for the TH code and carrier phase were simulated to evaluate the parameter estimation performance of the proposed algorithm. With an increase in SNR, the accuracy of TH code estimation approaches unity. 
When a small phase quantization bit width is applied, the Root Mean Square Error (RMSE) of the carrier phase estimation is primarily constrained by the grid search step size. Conversely, as the phase quantization bit width increases, the RMSE gradually converges to a fixed value. Regarding the influence of phase quantization on combined demodulation, as the phase quantization bit width increases, nearly theoretical BER performance can be achieved. A comparison between the proposed algorithm and the exhaustive search method reveals that the proposed algorithm significantly reduces the number of search trials compared to the grid search method, with minimal loss in BER performance. An increase in the variation range of the TH code necessitates a larger number of candidate groups for the CE method to maintain a low combining SNR loss. However, with a greater TH code variation range, the number of search iterations and its growth rate in the proposed algorithm are significantly lower than those in the exhaustive search method. Regarding transmission power in the designed TH communication method, as the number of hops in the multi-hop combination increases, the required SNR per hop decreases for the same BER performance, indicating that maximum transmission power can be correspondingly reduced. Conclusions: A TH communication system with randomly varying TH patterns tailored for satellite communication applications has been designed. This includes the presentation of a multi-hop signal coherent combining technique. To address the multidimensional parameter estimation challenge associated with TH patterns and multi-hop carrier phases under low SNR conditions, a CE iteration-aided algorithm has been proposed. The effectiveness of this algorithm is validated through simulations, and its performance regarding iterative convergence characteristics, parameter estimation error, and BER performance has been thoroughly analyzed. 
The results indicate that, in comparison to the conventional grid search method, the proposed algorithm achieves near-theoretical optimal BER performance while maintaining lower complexity.
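The CE iteration described above can be sketched as a generic discrete search: maintain a per-frame categorical distribution over slot indices, sample candidates, refit the distribution to the elite samples, and keep a global best vector. The sample count, elite fraction, and toy matching objective below are illustrative assumptions; in the paper the objective is the SNR loss of the combined signal and the carrier phases are estimated jointly.

```python
import numpy as np

def cross_entropy_search(score, n_frames, n_slots, n_samples=200,
                         elite_frac=0.1, iters=30, rng=None):
    """Cross-entropy search over discrete per-frame slot indices (a TH code).

    score : callable mapping a candidate (one slot index per frame) to a
            figure of merit to maximize.
    """
    rng = rng if rng is not None else np.random.default_rng()
    probs = np.full((n_frames, n_slots), 1.0 / n_slots)
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_val = None, -np.inf
    for _ in range(iters):
        # sample candidate codes from the current distribution
        samples = np.stack([
            [rng.choice(n_slots, p=probs[f]) for f in range(n_frames)]
            for _ in range(n_samples)])
        vals = np.array([score(s) for s in samples])
        elite = samples[np.argsort(vals)[-n_elite:]]
        if vals.max() > best_val:            # global-best bookkeeping
            best_val = vals.max()
            best = samples[vals.argmax()].copy()
        # importance-sampling style update: refit the categorical distribution
        for f in range(n_frames):
            counts = np.bincount(elite[:, f], minlength=n_slots)
            probs[f] = counts / counts.sum()
    return best, best_val
```

Compared with exhaustive search over all n_slots**n_frames codes, the number of evaluations here is fixed at n_samples * iters, which is the complexity advantage the abstract reports.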
Available online , doi: 10.11999/JEIT240643
Abstract:
Objective Integrated Sensing and Communication (ISAC) systems are considered key technologies for the upcoming 6G networks, offering a unified platform for wireless communication and environmental sensing. To enhance the security of ISAC systems, an Integrated Sensing and Covert Communication (ISACC) system is proposed. Additionally, an Intelligent Reflecting Surface (IRS)-assisted ISACC scheme is proposed to address the limitation of existing ISACC research, which cannot be applied to scenarios without a Line-of-Sight (LoS) link between the Base Station (BS) and the target. In this context, the average Cramér-Rao Lower Bound (CRLB) is adopted as a metric for sensing performance, aiming to overcome the limitations of traditional beampatterns in quantifying sensing performance directly. Methods The detection performance at warden Willie is first analyzed. An analytical expression for the average CRLB is then derived. Based on this, an optimization problem is formulated to minimize the average CRLB, subject to communication rate, covertness, and IRS phase shift constraints. The optimization problem is challenging to solve directly due to the coupling of the sensing covariance matrix, communication beamforming, and IRS reflective beamforming in the objective function, communication rate constraint, and covertness constraint. To tackle this, the optimization problem is decomposed into two subproblems: one for the sensing covariance matrix and communication beamforming optimization, and another for the IRS reflection beamforming optimization. An Alternating Optimization-based Penalty Successive Convex Approximation (AO-PSCA) algorithm is proposed to solve the two subproblems iteratively. Results and Discussions The relationship between the average CRLB, the number of IRS reflection elements, and the number of BS antennas is presented (Fig. 2). 
As observed, the average CRLB obtained by the AO-PSCA algorithm and the IRS random phase algorithm decreases as the number of IRS elements increases. This is because a larger number of IRS elements not only enhances covert communication performance but also improves the quality of the virtual link between the BS and the sensing target. Additionally, the proposed AO-PSCA algorithm outperforms the IRS random phase scheme, highlighting the importance of designing IRS reflection coefficients. Furthermore, as the number of BS antennas increases, the average CRLB decreases, since more antennas simultaneously improve both target sensing and covert communication performance. The relationship between the average CRLB, covertness threshold, and communication rate threshold is shown (Fig. 3). It can be seen that the average CRLB decreases as the covertness parameter ε increases, indicating that the sensing performance of the ISACC system improves with ε. The reason is that a larger covertness value ε makes it easier to satisfy the covertness constraints, thereby allowing more resources for communication and sensing. In contrast, the average CRLB increases with the communication rate requirement, as a larger value of Γ requires more system resources, leaving fewer resources for radar sensing. The relationships between the average CRLB, average maximum transmit power, and symbol length, as well as between average maximum transmit power, communication signal power, and sensing signal power, are presented (Fig. 4). It can be observed that the average CRLB decreases as the average maximum transmit power increases. This is due to the increase in both sensing and communication signal powers with higher transmit power. 
The average CRLB also decreases as the symbol length increases, since a larger symbol length improves target sensing performance. The relationship between the beampattern, angle, and average maximum transmit power is illustrated (Fig. 5). The beampatterns concentrate their energy in the main lobe, with the sensing target located at 0°. Due to the communication rate and covertness constraints, random fluctuations appear in the side lobe regions of the beampatterns. Moreover, the beampattern values increase with the average maximum transmit power, indicating that increasing transmit power effectively enhances both target sensing and covert communication performance. Conclusions The IRS-assisted ISACC system is investigated in this work. An optimization problem is formulated to minimize the average CRLB, subject to constraints on covertness, maximum transmit power, communication rate, and IRS phase shifts. The AO-PSCA algorithm is proposed for the joint design of the sensing covariance matrix, communication beamforming, and IRS phase shifts. Simulation results demonstrate that the proposed IRS-assisted ISACC scheme can effectively balance target sensing and covert wireless communication performance.
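The minimization described in the Methods can be summarized in a generic form. The symbols below (sensing covariance R_s, communication beamformer w, IRS phase-shift vector θ, rate threshold Γ, covertness level ε) are placeholder notation, and the Kullback-Leibler form of the covertness constraint is the convention standard in the covert-communication literature, not a quotation of the paper's exact formulation:

```latex
\begin{aligned}
\min_{\mathbf{R}_s,\;\mathbf{w},\;\boldsymbol{\theta}} \quad
  & \overline{\mathrm{CRLB}}\bigl(\mathbf{R}_s,\mathbf{w},\boldsymbol{\theta}\bigr) \\
\text{s.t.} \quad
  & R_c\bigl(\mathbf{R}_s,\mathbf{w},\boldsymbol{\theta}\bigr) \ge \varGamma
      && \text{(communication rate)} \\
  & \mathcal{D}\bigl(p_0 \,\Vert\, p_1\bigr) \le 2\varepsilon^{2}
      && \text{(covertness at Willie)} \\
  & \lvert\theta_m\rvert = 1,\quad m = 1,\dots,M
      && \text{(unit-modulus IRS phases)}
\end{aligned}
```

Under this reading, AO-PSCA alternates between solving for (R_s, w) with θ fixed and solving for θ with (R_s, w) fixed, handling the nonconvex unit-modulus constraint via a penalty term.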
Available online , doi: 10.11999/JEIT240710
Abstract:
Objective Visible Light Communication (VLC) is emerging as a key technology for future communication systems, offering advantages such as abundant and license-free spectrum, immunity to electromagnetic interference, and low-cost front-end devices. Light Emitting Diodes (LEDs) serve a dual purpose, providing both communication and illumination in indoor environments. However, VLC links are vulnerable, as interruption of the Line of Sight (LoS) can disrupt communication. The Optical Intelligent Reconfigurable Surface (IRS) has been proposed to enhance communication performance and robustness by reconfiguring optical channels. Two main types of optical IRS materials, mirror-based and meta-surface-based, are commonly used. Mirror-based IRS units introduce additional Non-LoS (NLoS) links with constant reflectance. In this work, a cell-free VLC network assisted by a newly proposed tunable IRS is proposed and investigated. The reflectance of the optical IRS can be dynamically adjusted, allowing it to function as a transmitter by modulating signals onto the reflectance of stable incident light. In this system, at least one LED must operate in illumination mode to emit light with constant intensity when any IRS unit is in modulation mode. The IRS can also function in reflection mode to provide additional reflective links, enhancing signal strength. The tunable IRS increases the number of Access Points (APs), enabling ultra-dense VLC networks that significantly improve throughput and spectral efficiency. The system model for a tunable IRS-assisted cell-free VLC network is derived, and the channel gain is calculated using the Lambertian model. The transmission rate for each user is determined by the work mode of the APs and the IRS’s association with the LEDs and users, represented by binary variables. The primary objective of this study is to maximize the total throughput of the IRS-aided VLC network.
Methods An optimization problem is formulated to maximize network throughput by jointly optimizing the work mode of the LEDs and IRS units, along with user-IRS associations. Given the non-convex nature of this integer optimization problem, it is decomposed into two sub-problems. (1) Problem P2: With fixed numbers of LEDs and IRS units in modulation mode, a Deep Deterministic Policy Gradient (DDPG)-based Deep Reinforcement Learning (DRL) algorithm is applied to optimize the work mode of each AP and the user-AP associations. The binary variables are relaxed to continuous values in the range [0,1]. The optimization problem is modeled as a Markov Decision Process (MDP), where the state corresponds to the channel gains, the action represents the optimization variables, and the reward is the network throughput. To ensure convergence, the reward is adjusted to reflect the negative of any unsatisfied constraints, and the noise in the DDPG model is dynamically modeled using two random variables. (2) Problem P1: The optimization problem is then solved by considering all possible combinations of the number of LEDs and IRS units in modulation mode. Results and Discussions Simulations for the indoor tunable IRS-aided system are performed using Python with PyTorch. The simulation parameters for the indoor scenario and the neural network configurations in the DDPG algorithm are shown (Table 1, Table 2), respectively. The results demonstrate the following: (1) The convergence and final reward of the modified DDPG algorithm (denoted as DDPG-O) are compared with those of the unmodified version (denoted as DDPG-N) in solving Problem P2 (Fig. 4). The results show that the modified DDPG algorithm converges efficiently and achieves an access and association policy that maximizes network throughput. (2) The maximized throughput for various numbers of LEDs in modulation mode, along with varying optical power, is presented for Problem P1 (Fig. 5).
It is observed that the policy with one lighting LED achieves the maximum throughput with appropriate IRS units in modulation mode. (3) The relationship between maximized throughput and the number of IRS units is analyzed (Fig. 6). The total throughput increases as the number of IRS units grows, although the increase is not linear. (4) Simulations with the same number of users and LEDs are also considered (Fig. 7). It is observed that the total network throughput with and without IRS APs is nearly identical when the number of users does not exceed the number of LEDs. Thus, the VLC network benefits more when the number of users exceeds the number of LEDs. Conclusions A tunable IRS-assisted cell-free VLC network has been proposed, where IRS units operate either in reflection mode to provide additional NLoS channels or in modulation mode to enable wireless access for users. The channel and transmission models are developed, and an optimization problem is formulated to jointly select the working mode of APs and user associations with the objective of maximizing network throughput. A modified DDPG algorithm is applied to solve for the optimal policy. The optimization problem is further tackled by exploring all possible combinations of modulating LEDs and IRS units. Simulation results verify the effectiveness of the proposed algorithm, showing that network throughput can be significantly improved by incorporating IRS APs, particularly when the number of users is large.
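The reward adjustment described for Problem P2 (reward equals throughput when all constraints hold, otherwise the negative of the unsatisfied constraints) can be sketched in a few lines. The function names and the 0.5 rounding threshold for mapping relaxed variables back to binary decisions are illustrative assumptions, not the paper's code:

```python
import numpy as np

def shaped_reward(throughput, violations):
    """Training reward: network throughput when every constraint holds,
    otherwise the negative total constraint violation, which steers the
    DDPG policy back toward the feasible region."""
    total_violation = sum(max(0.0, v) for v in violations)
    return throughput if total_violation == 0 else -total_violation

def project_binary(x):
    """Relaxed action in [0,1] rounded back to a binary work-mode /
    association decision at execution time (0.5 threshold assumed)."""
    return (np.asarray(x) >= 0.5).astype(int)

print(shaped_reward(12.5, [0.0, 0.0]))    # feasible: reward = throughput
print(shaped_reward(12.5, [0.3, 0.0]))    # infeasible: reward = -0.3
print(project_binary([0.8, 0.2, 0.5]))    # -> [1 0 1]
```

Shaping the reward this way keeps the agent from exploiting throughput gains in infeasible regions of the relaxed action space.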
Available online , doi: 10.11999/JEIT240659
Abstract:
Objective With the rapid pace of digital transformation and the smart upgrading of the economy and society, the Internet of Things (IoT) has become a critical element of new infrastructure. Current wide-area IoT networks primarily rely on 5G terrestrial infrastructure. While these networks continue to evolve, challenges persist, particularly in remote or disaster-affected areas. The high cost and vulnerability of base stations hinder deployment and maintenance in these locations. Satellite networks provide seamless coverage, flexibility, and reliability, making them compelling alternatives to terrestrial networks for achieving global connectivity. Satellite-assisted Internet of Things (SIoT) can deliver ubiquitous and reliable connectivity for IoT devices. Typically, IoT devices offload tasks to edge servers or cloud platforms due to their limited power, computing, and caching resources. Mobile Edge Computing (MEC) helps reduce latency by caching content and placing edge servers closer to IoT devices. Low Earth Orbit (LEO) satellites with integrated processing units can also serve as edge computing nodes. Although cloud platforms offer abundant computing resources and a reliable power supply, the long distance between IoT devices and the cloud results in higher communication latency. With the explosive growth of IoT devices and the diversification of application requirements driven by 5G, it is essential to design a collaborative architecture that integrates cloud, edge, and end devices. Recent research has extensively explored MEC-enhanced SIoT systems. However, many studies focus solely on edge or cloud computing, with little emphasis on their integration, satellite mobility, or resource constraints. Furthermore, LEO satellites providing edge services face challenges due to their limited onboard resources and the high mobility of the satellite constellation, complicating resource allocation and task offloading. 
Single-satellite solutions may not satisfy performance expectations during peak demand. Inter-Satellite Collaboration (ISC) technology, which utilizes visible light communications, can significantly increase system capacity, extend coverage, reduce individual satellite resource consumption, and prolong network operational life. Although some studies address three-tier architectures involving IoT devices, satellites, and clouds, proposing load balancing mechanisms through ISC for optimizing offloading and resource allocation, many rely on static assumptions about network topologies and user associations. In practice, LEO satellites require frequent switching and dynamic adjustments in offloading strategies to maintain service quality due to their high-speed mobility. Therefore, there is a need for a method of task offloading and resource allocation in a dynamic environment that considers satellite mobility and limited resources. To address these research gaps, this paper proposes a dynamic ISC-enhanced cloud-edge-end SIoT network model. By formulating the joint optimization problem of offloading decisions and resource allocation as a Mixed Integer Non-Linear Programming (MINLP) problem, a Model-assisted Adaptive Deep Reinforcement Learning (MADRL) algorithm is developed to achieve minimum system cost in a changing environment. Methods The LEO satellite mobility model and the SIoT network model with ISC are constructed to analyze end-to-end latency and system energy consumption. This evaluation considers three modes: local computing, edge computing, and cloud computing. A joint optimization MINLP problem is formulated, focusing on task offloading and resource allocation to minimize system costs. A MADRL algorithm is introduced, integrating traditional optimization techniques with deep reinforcement learning. The algorithm operates in two parts. 
The first part optimizes communication and computational resource allocation using a model-assisted binary search algorithm and the gradient descent method. The second part trains a Q-network to adapt offloading decisions to stochastic task arrivals through an adaptive deep reinforcement learning approach. Results and Discussions Simulation experiments are conducted under various dynamic scenarios. The MADRL algorithm exhibits strong convergence properties, as demonstrated in the analysis. Comparisons of different learning rates and exploration decay factors reveal the optimal parameter values. Incorporating satellite mobility reduces system costs by 41% compared with static scenarios, enabling dynamic resource allocation and improved efficiency. Integrating ISC reduces system costs by 22.1%, demonstrating the effectiveness of inter-satellite load balancing in improving resource utilization. Additionally, the MADRL algorithm achieves a 3% reduction in system costs compared with the Deep Q-Network (DQN) algorithm, highlighting its adaptability and efficiency in dynamic environments. System costs decrease as satellite speed increases, with the MADRL algorithm consistently outperforming other methods. Conclusions This paper presents an innovative dynamic SIoT model that integrates IoT devices, LEO satellites, and a cloud computing center. The model addresses the latency and energy consumption issues faced by IoT devices in remote and disaster-stricken areas. The task offloading and resource allocation problem that minimizes system cost is constructed by incorporating ISC techniques to enhance satellite edge performance and by taking satellite mobility into account. A MADRL algorithm that combines traditional optimization with deep reinforcement learning is proposed. This approach effectively optimizes task offloading decisions and resource allocation. Simulation results demonstrate that the model and algorithm significantly reduce system costs.
Specifically, the incorporation of satellite mobility and ISC technology leads to cost reductions of 41% and 22.1%, respectively. Compared with benchmark algorithms, the MADRL algorithm shows superior performance across various test environments, highlighting its significant application advantages.
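The model-assisted binary search used in the first stage of MADRL can be illustrated on a deliberately simplified allocation problem: minimizing total processing latency over a shared CPU-frequency budget by bisecting a Lagrange multiplier. The latency-only objective and the numbers below are hypothetical; the paper's cost also involves energy and transmission terms:

```python
import math

def allocate_cpu(cycles, F):
    """Bisection on the Lagrange multiplier lam for
        min  sum_i c_i / f_i   s.t.   sum_i f_i = F,  f_i > 0.
    Stationarity gives f_i = sqrt(c_i / lam); lam is bisected until the
    capacity constraint holds with equality."""
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        lam = (lo + hi) / 2
        total = sum(math.sqrt(c / lam) for c in cycles)
        if total > F:
            lo = lam      # allocations too large -> raise the multiplier
        else:
            hi = lam
    return [math.sqrt(c / lam) for c in cycles]

# Tasks with cycle counts 1, 4, 9 share a budget F = 6:
# the optimum is proportional to sqrt(c_i), i.e. f = [1, 2, 3].
f = allocate_cpu([1.0, 4.0, 9.0], F=6.0)
print([round(v, 3) for v in f])
```

Embedding such closed-form or bisection solutions inside the learning loop is what lets the Q-network concentrate on the discrete offloading decisions alone.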
Available online , doi: 10.11999/JEIT240624
Abstract:
Objective Recently, task offloading techniques based on reinforcement learning in Multi-access Edge Computing (MEC) have attracted considerable attention and are increasingly being utilized in industrial applications. Algorithms for task offloading that rely on single-agent reinforcement learning are typically developed within a decentralized framework, which is preferred due to its relatively low computational complexity. However, in large-scale MEC environments, such task offloading policies are formed solely based on local observations, often resulting in partial observability challenges. Consequently, this can lead to interference among agents and a degradation of the offloading policies. In contrast, traditional multi-agent reinforcement learning algorithms, such as the Multi-Agent Deep Deterministic Policy Gradient (MADDPG), consolidate the observation and action vectors of all agents, thereby effectively addressing the partial observability issue. Optimal joint offloading policies are subsequently derived through online training. Nonetheless, the centralized training and decentralized execution model inherent in MADDPG causes computational complexity to increase linearly with the number of mobile devices (MDs). This scalability issue restricts the ability of MEC systems to accommodate additional devices, ultimately undermining the system's overall scalability. Methods First, a task offloading queue model for large-scale MEC systems is developed to handle delay-sensitive tasks with deadlines. This model incorporates both the transmission process, where tasks are offloaded via wireless channels to the edge server, and the computation process, where tasks are processed on the edge server. Second, the offloading process is defined as a Partially Observable Markov Decision Process (POMDP) with specified observation space, action space, and reward function for the agents. The Mean-Field Multi-Agent Task Offloading (MF-MATO) algorithm is subsequently proposed. 
Long Short-Term Memory (LSTM) networks are utilized to predict the current state vector of the MEC system by analyzing historical observation vectors. The predicted state vector is then input into fully connected networks to determine the task offloading policy. The incorporation of LSTM networks addresses the partial observability issue faced by agents during offloading decisions. Moreover, mean field theory is employed to approximate the Q-value function of MADDPG through linear decomposition, resulting in an approximate Q-value function and a mean-field-based action approximation for the MF-MATO algorithm. This mean-field approximation replaces the joint action of the agents. Consequently, the MF-MATO algorithm interacts with the MEC environment to gather experience over one episode, which is stored in an experience replay buffer. After each episode, experiences are sampled from the buffer to train both the policy network and the Q-value network. Results and Discussions The simulation results indicate that the average cumulative rewards of the MF-MATO algorithm are comparable to those of the MADDPG algorithm, outperforming the other comparison algorithms during the training phase. (1) The task offloading delay curves for MDs using the MF-MATO and MADDPG algorithms show a synchronous decline throughout the training process. Upon reaching training convergence, the delays consistently remain lower than those of the single-agent task offloading algorithm. In contrast, the average delay curve for the single-agent algorithm exhibits significant variation across different MD scenarios. This inconsistency is attributed to the single-agent algorithm's inability to address mutual interference among agents, resulting in policy degradation for certain agents due to the influence of others.
(2) As the number of MDs increases, the performance of the MF-MATO algorithm regarding delay and task drop rate increasingly aligns with that of MADDPG, while exceeding that of all other comparison algorithms. This enhancement is attributed to the improved accuracy of the mean-field approximation as the number of MDs rises. (3) A rise in task arrival probability leads to a gradual increase in the average delay and task drop rate curves for both the MF-MATO and MADDPG algorithms. When the task arrival probability reaches its maximum value, a significant rise in both the average delay and task drop rate is observed across all algorithms, as the high volume of tasks fully utilizes the available computational resources. (4) As the number of edge servers increases, the average delay and task drop rate curves for the MF-MATO and MADDPG algorithms show a gradual decline, whereas the performance of the other comparison algorithms improves markedly with only a slight increase in computational resources. This suggests that the MF-MATO and MADDPG algorithms effectively optimize computational resource utilization through cooperative decision-making among agents. The simulation results substantiate that, by reducing computational complexity, the MF-MATO algorithm achieves delay and task drop rate performance consistent with that of the MADDPG algorithm. Conclusions The task offloading algorithm proposed in this paper, based on LSTM networks and mean field approximation theory, effectively addresses the challenges associated with task offloading in large-scale MEC scenarios. By utilizing LSTM networks, the algorithm alleviates the partial observability issues encountered by single-agent approaches, while also enhancing the efficiency of experience utilization in multi-agent systems and accelerating algorithm convergence.
Additionally, mean field approximation theory reduces the dimensionality of the action space for multiple agents, thereby mitigating the computational complexity that traditional MADDPG algorithms face, which increases linearly with the number of mobile devices. As a result, the computational complexity of the MF-MATO algorithm remains independent of the number of mobile devices, significantly improving the scalability of large-scale MEC systems.
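The scalability argument above can be sketched concretely: under the mean-field approximation, each agent's Q-network conditions on its own action plus the mean of the other agents' actions, so the input dimension is fixed regardless of how many MDs participate. The toy dimensions below are arbitrary, not the paper's network sizes:

```python
import numpy as np

def mean_field_inputs(obs, actions, i):
    """Build agent i's Q-network input: its own observation and action
    concatenated with the MEAN of the other agents' actions, replacing
    the full joint action vector used by vanilla MADDPG."""
    others = np.delete(actions, i, axis=0)
    a_mean = others.mean(axis=0)            # mean-field action
    return np.concatenate([obs[i], actions[i], a_mean])

# Toy example: 5 agents, 3-dim observations, 2-dim actions.
rng = np.random.default_rng(0)
obs = rng.standard_normal((5, 3))
actions = rng.standard_normal((5, 2))
x = mean_field_inputs(obs, actions, 0)
print(x.shape)   # (7,) -- the size does not grow with the number of agents
```

With vanilla MADDPG the Q-input would instead concatenate all five action vectors, and its size would grow linearly with the agent count, which is exactly the scaling bottleneck MF-MATO removes.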
Available online , doi: 10.11999/JEIT240348
Abstract:
Objective Sensor arrays are widely used to capture the spatio-temporal information of incident signal sources, with their configurations significantly affecting the accuracy of Direction Of Arrival (DOA) estimation. The Degrees Of Freedom (DOF) of a conventional Uniform Linear Array (ULA) are limited by the number of physical sensors, and dense array deployments lead to severe mutual coupling effects. Emerging sparse arrays offer clear advantages by reducing hardware requirements, increasing DOF, mitigating mutual coupling, and minimizing system redundancy through flexible sensor deployment, making them a viable solution for high-precision DOA estimation. Among various sparse array designs, the Coprime Array (CA)—consisting of two sparse ULAs with coprime inter-element spacings and sensor counts—has attracted considerable attention due to its reduced mutual coupling effects. However, the alternately deployed subarrays result in a much lower number of Continuous Degrees Of Freedom (cDOF) than anticipated, which degrades the performance of subspace-based DOA estimation algorithms that rely on spatial smoothing techniques. Although many studies have explored array configuration optimization and algorithm design, real-time application demands indicate that optimizing the array configuration is the most efficient approach to improving DOA estimation performance. Methods This study examines the weight functions of the CA and identifies a significant number of redundant virtual array elements in the difference coarray. Specifically, all virtual array elements in the difference coarray exhibit weight functions of two or more, a key factor reducing the available cDOF and DOF. To address this deficiency, the conditions for generating redundant virtual array elements in the cross-difference sets of the subarrays are analyzed, and two types of coprime arrays with translated subarrays, namely CATrS-I and CATrS-II, are devised.
These designs aim to increase available cDOF and DOF and enhance DOA estimation performance. Firstly, without altering the number of physical sensors, the conditions for generating redundant virtual array elements in the cross-difference sets are modified by translating any subarray of CA to an appropriate position. Then, the precise range of translation distances is determined, and the closed-form expressions for cDOF and DOF, the hole positions in the difference coarray, and weight functions of CATrS-I and CATrS-II are derived. Finally, the optimal configurations of CATrS-I and CATrS-II are obtained by solving an optimization problem that maximizes cDOF and DOF while maintaining a fixed number of physical sensors. Results and Discussions Theoretical analysis shows that the proposed CATrS-I and CATrS-II can reduce the weight functions of most virtual array elements in the difference coarray to 1, thus increasing the available cDOF and DOF while maintaining the same number of physical sensors. Comparisons with several previously developed sparse arrays highlight the advantages of CATrS-I and CATrS-II. Specifically, the Augmented Coprime Array (ACA), which doubles the number of sensors in one subarray, and the Reference Sensor Relocated Coprime Array (RSRCA), which repositions the reference sensor, achieve only a limited reduction in redundant virtual array elements, particularly those associated with small virtual array elements. As a result, their mutual coupling effects are similar to those of the original CA. In contrast, the proposed CATrS-I and CATrS-II significantly reduce both the number of redundant virtual array elements and the weight functions corresponding to small virtual array elements by translating one subarray to an optimal position. This adjustment effectively mitigates mutual coupling effects among physical sensors. 
Numerical simulations further validate the superior DOA estimation performance of CATrS-I and CATrS-II in the presence of mutual coupling, demonstrating their superiority in spatial spectrum and DOA estimation accuracy compared to existing designs. Conclusions Two types of CATrS are proposed for DOA estimation by translating the subarrays of CA to appropriate distances. This design effectively reduces the number of redundant virtual array elements in the cross-difference sets, leading to a significant increase in cDOF and DOF, while mitigating mutual coupling effects among physical sensors. The translation distance of the subarray is analyzed, and the closed-form expressions for cDOF and DOF, the hole positions in the difference coarray, and the weight functions of virtual array elements are derived. Theoretical analysis and simulation results demonstrate that the proposed CATrS-I and CATrS-II offer superior performance in terms of cDOF, DOF, mutual coupling suppression, and DOA estimation accuracy. Future research will focus on further reducing redundant virtual array elements in the self-difference sets by disrupting the uniform deployment of subarrays and extending these ideas to more generalized and complex sparse array designs to further enhance array performance.
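The weight-function analysis that motivates CATrS can be reproduced numerically. The sketch below uses the common prototype coprime array (subarray spacings M and N in units of d) rather than the paper's exact configuration, and the M = 3, N = 5 example is an illustrative assumption; w(l) counts the sensor pairs generating virtual element l, so redundant lags are exactly those with weights above one:

```python
from collections import Counter

def coprime_array(M, N):
    """Sensor positions of a prototype coprime array (unit spacing d = 1):
    one subarray has N sensors at multiples of M, the other has M sensors
    at multiples of N; both share the sensor at position 0."""
    return sorted(set(M * n for n in range(N)) | set(N * m for m in range(M)))

def coarray_weights(positions):
    """Weight function of the difference coarray: w(l) is the number of
    sensor pairs (p, q) with p - q = l; note w(l) = w(-l) by symmetry."""
    return Counter(p - q for p in positions for q in positions)

# Example: M = 3, N = 5 prototype coprime array
pos = coprime_array(3, 5)
w = coarray_weights(pos)
print(pos)          # [0, 3, 5, 6, 9, 10, 12] -- physical sensor positions
print(w[0], w[1])   # 7 2 -- w(0) equals the sensor count; lag 1 is redundant
```

Small lags such as l = 1 already carry weight 2 here, illustrating the cross-difference redundancy that the subarray translation in CATrS-I and CATrS-II is designed to remove.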
Available online , doi: 10.11999/JEIT240018
Abstract:
Objective The growing demand for advanced service applications and the stringent performance requirements envisioned in future 6G networks have driven the development of Integrated Sensing and Communication (ISAC). By combining sensing and communication capabilities, ISAC enhances spectral efficiency and has attracted significant research attention. However, real-world signal propagation environments are often suboptimal, making it difficult to achieve optimal transmission and sensing performance under harsh or dynamic conditions. To address this, Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surfaces (STAR-RIS) enable a full-space programmable wireless environment, offering an effective solution to enhance wireless system capabilities. In large-scale 6G industrial scenarios, STAR-RIS panels could be deployed on rooftops and walls for comprehensive coverage. As the number of reflecting elements increases, near-field effects become significant, rendering the conventional far-field assumption invalid. This paper explores the application of large-scale STAR-RIS in near-field ISAC systems, highlighting the role of near-field effects in enhancing sensing and communication performance. It emphasizes the importance of incorporating near-field phenomena into system design to exploit the additional degrees of freedom provided by large-scale STAR-RIS for improved localization accuracy and communication quality. Methods First, a near-field ISAC system is formulated, where a large-scale STAR-RIS assists both sensing and communication processes. The theoretical framework of near-field steering vectors is applied to derive the steering vectors for each link, including those from the Base Station (BS) to the STAR-RIS, from the STAR-RIS to communication users, from the STAR-RIS to sensing targets, and from sensing targets to sensors.
Based on these vectors, a system model is constructed to characterize the relationships among transmitted signals, signals reflected or transmitted via the STAR-RIS, and received signals for both communication and sensing. Next, the Cramér-Rao Bound (CRB) is derived by calculating the Fisher Information Matrix (FIM) for three-dimensional (3D) parameter estimation of the sensing target, specifically its azimuth angle, elevation angle, and distance. The CRB serves as a theoretical benchmark for estimation accuracy. To optimize sensing performance, the CRB is minimized subject to communication requirements defined by a Signal-to-Interference-plus-Noise Ratio (SINR) constraint. The optimization involves jointly designing the BS precoding matrices, the transmit signal covariance matrices, and the STAR-RIS transmission and reflection coefficients to balance accurate sensing with reliable communication. Since the joint design problem is inherently nonconvex, an augmented Lagrangian formulation is employed. The original problem is decomposed into two subproblems using alternating optimization. Schur complement decomposition is first applied to transform the objective function, and semidefinite relaxation is then used to convert each nonconvex subproblem into a convex one. These subproblems are solved alternately, and the resulting solutions are combined to achieve a globally optimized system configuration. This two-stage approach effectively reduces the computational complexity associated with the high-dimensional, nonconvex optimization typical of large-scale STAR-RIS setups. Results and Discussions Simulation results under varying SINR thresholds indicate that the proposed STAR-RIS coefficient design achieves a lower CRB root than random coefficient settings (Fig. 2), demonstrating that optimizing the transmission and reflection coefficients of the STAR-RIS improves sensing precision.
Additionally, the CRB root decreases as the number of Transmitting-Reflecting (T-R) elements increases in both the proposed and random designs, indicating that a larger number of T-R elements provides additional degrees of freedom. These degrees of freedom enable the system to generate more targeted beams for both sensing and communication, enhancing overall system performance. The influence of sensor elements on sensing accuracy is further analyzed by varying the number of sensing elements (Fig. 3). As the number of sensing elements increases, the CRB root declines, indicating that a larger sensing array improves the capture and processing of backscattered echoes, thereby enhancing the overall sensing capability. This finding highlights the importance of sufficient sensing resources to fully exploit the benefits of near-field ISAC systems. The study also examines three-dimensional localization of the sensing target under different SINR thresholds (Fig. 4, Fig. 5). Using Maximum Likelihood Estimation (MLE), the proposed method demonstrates highly accurate target positioning, validating the effectiveness of the joint design of precoding matrices, signal covariance, and STAR-RIS coefficients. Notably, near-field effects introduce distance as an additional dimension in the sensing process, absent in conventional far-field models. This additional dimension expands the parameter space, enhancing range estimation and contributing to more precise target localization. These results emphasize the potential of near-field ISAC for meeting the demanding localization requirements of future 6G systems. More broadly, the findings highlight the significant advantages of employing large-scale STAR-RIS in near-field settings for ISAC tasks. The improved localization accuracy demonstrates the synergy between near-field physics and advanced beam management techniques facilitated by STAR-RIS.
These insights also suggest promising applications, such as industrial automation and precise positioning in smart factories, where reliable and accurate sensing is essential. Conclusions A large-scale STAR-RIS-assisted near-field ISAC system is proposed and investigated in this study. The near-field steering vectors for the links among the BS, STAR-RIS, communication users, sensing targets, and sensors are derived to construct an accurate near-field system model. The CRB for the 3D estimation of target location parameters is formulated and minimized by jointly designing the BS transmit beamforming matrices, the transmit signal covariance, and the STAR-RIS transmission and reflection coefficients, while ensuring the required communication quality. The nonconvex optimization problem is divided into two subproblems and addressed iteratively using semidefinite relaxation and alternating optimization techniques. Simulation results confirm that the proposed optimization scheme effectively reduces the CRB, enhancing sensing accuracy and demonstrating that near-field propagation provides an additional distance domain beneficial for both sensing and communication tasks. These findings suggest that near-field ISAC, enhanced by large-scale STAR-RIS, is a promising research direction for future 6G networks, combining increased degrees of freedom with high-performance integrated services.
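The near-field steering vectors used throughout the model differ from far-field ones in that each element's phase follows its exact distance to the source, which is what makes range an estimable parameter. A minimal sketch for a one-dimensional array follows (a simplification of the paper's planar STAR-RIS geometry; the function name and parameters are illustrative):

```python
import numpy as np

def nearfield_steering(num_elem, d, wavelength, theta, r):
    """Near-field steering vector of a ULA with spacing d for a source at
    angle theta (rad) and range r from the array center: the phase of each
    element follows its exact distance to the source instead of the
    planar-wave (far-field) approximation."""
    n = np.arange(num_elem) - (num_elem - 1) / 2        # symmetric indices
    # exact element-to-source distance (2D geometry, law of cosines)
    r_n = np.sqrt(r**2 + (n * d)**2 - 2 * r * n * d * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * (r_n - r) / wavelength)

a = nearfield_steering(num_elem=8, d=0.005, wavelength=0.01, theta=0.3, r=2.0)
print(a.shape)   # (8,)
```

In the large-range limit r_n ≈ r − n·d·sin(theta), so the vector reduces to the familiar far-field steering vector; at short ranges the quadratic distance term survives and encodes the target's range.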
Available online , doi: 10.11999/JEIT240395
Abstract:
Objective Ambient Backscatter Communication (AmBC) is an emerging, low-power, low-cost communication technology that utilizes ambient Radio Frequency (RF) signals for passive information transmission. It has demonstrated significant potential for various wireless applications. However, in AmBC systems, the reflected signals are often severely weakened due to double fading effects and signal obstruction from environmental obstacles. This results in a substantial reduction in signal strength, limiting both communication range and overall system performance. To address these challenges, Intelligent Reflecting Surface (IRS) technology has been integrated into AmBC systems. IRS can enhance reflection link gain by precisely controlling reflected signals, thereby improving system performance. However, the passive nature of both the IRS and tags makes accurate channel estimation a critical challenge. This study proposes an efficient channel estimation algorithm for IRS-assisted AmBC systems, aiming to provide theoretical support for optimizing system performance and explore the feasibility of achieving high-precision channel estimation in complex environments—key to the practical implementation of this technology. Methods This study develops a general IRS-assisted AmBC system model, where the system channel is divided into multiple subchannels, each corresponding to a specific IRS reflection element. To minimize the Mean Squared Error (MSE) in channel estimation, the Least Squares (LS) method is used as the estimation criterion. The joint optimization problem for channel estimation is explored by integrating various IRS reflection modes, including ON/OFF, Discrete Fourier Transform (DFT), and Hadamard modes. The communication channel is assumed to follow a Rayleigh fading distribution, with noise modeled as zero-mean Gaussian. Pilot signals are modulated using Quadrature Phase Shift Keying (QPSK). 
To thoroughly evaluate the performance of channel estimation, 1000 Monte Carlo simulations are conducted, with MSE and the Cramér-Rao Lower Bound (CRLB) serving as performance metrics. Simulation experiments conducted on the MATLAB platform provide a comprehensive comparison and analysis of the performance of different algorithms, ultimately validating the effectiveness and accuracy of the proposed algorithm. Results and Discussions The simulation results demonstrate that the IRS-assisted channel estimation algorithm significantly improves performance. Under varying Signal-to-Noise Ratio (SNR) conditions, the MSE of methods based on DFT and Hadamard matrices is consistently lower than that of the ON/OFF method and aligns with the CRLB, thereby confirming the optimal performance of the proposed algorithms (Fig. 2, Fig. 3). Additionally, the MSE for direct and cascaded channels is identical when using the DFT and Hadamard methods, while the cascaded channel MSE for the ON/OFF method is twice that of the direct channel, highlighting the superior performance of the DFT and Hadamard approaches. As the number of IRS reflection elements increases, the MSE for the DFT and Hadamard methods decreases significantly, whereas the MSE for the ON/OFF method remains unchanged (Fig. 4, Fig. 5). This illustrates the ability of the DFT and Hadamard methods to effectively exploit the scalability of IRS, demonstrating better adaptability and estimation performance in large-scale IRS systems. Furthermore, increasing the number of pilot signals leads to a further reduction in MSE for the DFT and Hadamard methods, as more pilot signals provide higher-quality observations, thereby enhancing channel estimation accuracy (Fig. 6, Fig. 7). Although additional pilot signals consume more resources, their substantial impact on reducing MSE highlights their importance in improving estimation precision.
Moreover, under high-SNR conditions, the MSE for all algorithms is lower than under low-SNR conditions, with the DFT and Hadamard methods showing more pronounced reductions (Fig. 4, Fig. 5). This indicates that the proposed methods enable more efficient channel estimation under better signal quality, further enhancing system performance. In conclusion, the channel estimation algorithms based on DFT and Hadamard matrices offer significant advantages in large-scale IRS systems and high-SNR scenarios, providing robust support for optimizing low-power, low-cost communication systems. Conclusions This paper presents a low-complexity channel estimation algorithm for IRS-assisted AmBC systems based on the LS criterion. The channel is decomposed into multiple subchannels, and the optimization of IRS phase shifts is designed to significantly enhance both channel estimation and transmission performance. Simulation results demonstrate that the proposed algorithm, utilizing the DFT and Hadamard matrices, achieves excellent performance across various SNR and system scale conditions. Specifically, the algorithm effectively reduces the MSE of channel estimation, exhibits higher estimation accuracy under high-SNR conditions, and shows low computational complexity and strong robustness in large-scale IRS systems. These results provide valuable insights for the theoretical modeling and practical application of IRS-assisted AmBC systems. The findings are particularly relevant for the development of low-power, large-scale communication systems, offering guidance on the design and optimization of IRS-assisted AmBC systems. Additionally, this work lays a solid theoretical foundation for the advancement of next-generation Internet of Things applications, with potential implications for future research on IRS technology and its integration with AmBC systems.
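As a rough sketch of the LS estimation step, the toy simulation below compares a DFT reflection pattern against the ON/OFF pattern; the element count, channel model, and noise level are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8          # IRS reflecting elements (illustrative size)
T = N + 1      # pilot slots: one unknown per slot (direct path + N cascaded paths)

# Unknown composite channel: direct link plus N cascaded links, modeled here
# as circularly symmetric complex Gaussian entries (Rayleigh-magnitude)
h = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)

def observation_matrix(mode):
    """Row t holds [1, phi_1, ..., phi_N]: direct path plus IRS pattern in slot t."""
    if mode == "dft":
        return np.fft.fft(np.eye(T))          # unit-modulus, mutually orthogonal rows
    if mode == "onoff":
        return np.tril(np.ones((T, T)))       # switch on one extra element per slot
    raise ValueError(mode)

def ls_estimate(Phi, noise_std):
    n = noise_std * (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
    y = Phi @ h + n
    return np.linalg.solve(Phi, y)            # LS solution (Phi is square, full rank)

def avg_mse(mode, trials=400, noise_std=0.3):
    Phi = observation_matrix(mode)
    return np.mean([np.sum(np.abs(ls_estimate(Phi, noise_std) - h) ** 2)
                    for _ in range(trials)])

mse_dft, mse_onoff = avg_mse("dft"), avg_mse("onoff")
```

With orthogonal (DFT) rows, the LS error spreads evenly across slots, while the ON/OFF pattern yields a worse-conditioned observation matrix and hence a larger MSE, consistent with the trend reported above.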
Available online , doi: 10.11999/JEIT240823
Abstract:
Objective As demand for 4K and 8K Ultra High Definition (UHD) videos increases, the latest generation of video coding standards has been developed to meet the growing need for UHD video transmission. UHD video coding requires processing more pixels and details, resulting in significant increases in computational complexity and resource consumption. Optimizing algorithms and implementing hardware acceleration are essential for achieving real-time encoding and decoding of UHD videos. In Alliance for Open Media Video 1 (AV1), richer intra-prediction modes have been introduced, expanding the number of modes from 10 in VP9 to 61, thereby increasing computational complexity. To address the added complexity of these modes and enhance hardware processing throughput, a hardware design for AV1 Rough Mode Decision (RMD) based on a fully pipelined architecture is proposed. Methods At the algorithm level, a 4×4 block is used as the minimum processing unit. RMD is applied to various sizes of Prediction Units (PUs) within a 64×64 Coding Tree Unit (CTU) following Z-order scanning. This approach allows for efficient processing of large blocks by dividing them into smaller, manageable units. To reduce computational complexity, the SATD cost calculations for different PU sizes (e.g., 1:2, 1:4, 2:1, and 4:1) are performed using a cost accumulation approximation method based on the 1:1 PU. This method minimizes the need to recalculate costs for every possible configuration, thus improving efficiency and reducing computational load. At the hardware level, the architecture supports RMD for PUs of various sizes (4×4 to 32×32) within a 64×64 CTU. This architecture differs from traditional designs, which use separate circuits for each PU size. It optimizes logical resource use and minimizes downtime. 
The design incorporates a 28-stage pipeline that enables parallel processing of intra-prediction modes, ensuring RMD for at least 16 pixels per clock cycle and significantly enhancing throughput and encoding efficiency. Additionally, the design emphasizes circuit compatibility and reusability across various PU sizes, reducing redundancy and maximizing hardware resource utilization. Results and Discussions Software analysis shows that the proposed AV1 RMD algorithm reduces processing time by an average of 45.78% compared to the standard AV1 algorithm under the All-Intra (AI) configuration, at the cost of a 1.94% BD-Rate increase. The testing platform is an Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz with 16.0 GB of DRAM. Compared to existing methods, the algorithm significantly reduces processing time while maintaining encoding efficiency. It offers an optimized trade-off, with a slight BD-Rate loss in exchange for substantial reductions in encoding time. Hardware analysis reveals that the proposed hardware architecture has a total circuit area of 0.556 mm² after synthesis, with a maximum operating frequency of 432.7 MHz, enabling real-time encoding of 8K@50.6fps video. Although the circuit area is slightly larger than in existing designs, the architecture demonstrates significant improvements in processing speed and video resolution capability, providing a balanced trade-off between hardware resource usage and throughput/area efficiency. These results further confirm the design's superiority in terms of hardware resource efficiency and processing performance. Conclusions This paper presents a high-throughput hardware design for AV1 RMD, capable of processing all PU sizes with 56 directional and 5 non-directional prediction modes. The design employs a 28-stage pipeline for parallel intra-frame prediction mode processing, enabling RMD for at least 16 pixels per clock cycle and significantly improving encoding efficiency.
Techniques such as false-reconstructed reference pixels, Z-order scanning, PMCM circuit structures, and circuit reuse address the increased hardware resource demands of parallel processing. Experimental results show that the proposed algorithm reduces processing time by an average of 45.78% with a BD-Rate increase of only 1.94% compared to the AV1 standard, ensuring high speed and encoding quality. Circuit synthesis confirms the architecture's capability for real-time 8K@50.6fps video processing, meeting the demands of future UHD video encoding with exceptional performance and efficiency.
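The 1:1-based cost accumulation described above can be sketched as follows. The 4×4 Hadamard-transform SATD is standard, but the block sizes and accumulation rule here are a simplified illustration, not the exact hardware data path:

```python
import numpy as np

# 4x4 Hadamard matrix: the transform behind SATD (Sum of Absolute
# Transformed Differences) for a 4x4 minimum processing unit
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd4x4(residual):
    """SATD of one 4x4 residual block (1:1 minimum unit)."""
    return np.abs(H4 @ residual @ H4.T).sum()

def satd_accumulated(residual):
    """Approximate the SATD cost of a larger PU (e.g. 1:2, 1:4, 2:1, 4:1
    shapes) by accumulating the SATDs of its 4x4 sub-blocks, mirroring the
    1:1-based cost-accumulation approximation described in the text."""
    h, w = residual.shape
    return sum(satd4x4(residual[r:r + 4, c:c + 4])
               for r in range(0, h, 4)
               for c in range(0, w, 4))
```

This avoids recomputing a full transform per PU shape: each 4×4 SATD is computed once and reused for every PU size that contains it.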
Available online , doi: 10.11999/JEIT240735
Abstract:
Objective Neural networks that demonstrate superior performance often necessitate complex architectures and substantial computational resources, thereby limiting their practical applications. Enhancing model performance without increasing network parameters has emerged as a significant area of research. Self-distillation has been recognized as an effective approach for simplifying models while simultaneously improving performance. Presently, research on self-distillation predominantly centers on models with Convolutional Neural Network (CNN) architectures, with less emphasis on Transformer-based models. It has been observed that due to their structural differences, different network models frequently extract varied semantic information for the same spatial locations. Consequently, self-distillation methods tailored to specific network architectures may not be directly applicable to other structures; those designed for CNNs are particularly challenging to adapt for Transformers. To address this gap, a self-distillation method for object segmentation is proposed, leveraging a Transformer feature pyramid to improve model performance without increasing network parameters. Methods First, a pixel-wise object segmentation model is developed utilizing the Swin Transformer as the backbone network. In this model, the Swin Transformer produces four layers of features. Each layer of mapped features is subjected to Convolution-Batch normalization-ReLU (CBR) processing to ensure that the backbone features maintain a uniform channel size. Subsequently, all backbone features are concatenated along the channel dimension, after which convolution operations are performed to yield pixel-wise feature representations. In the next phase, an auxiliary branch is designed that integrates Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), Adjacent Feature Fusion Modules (AFFM), and a scoring module, facilitating self-distillation to guide the main network. 
The specific architecture is depicted. The self-distillation learning framework consists of four sub-branches, labeled FZ1 to FZ4, alongside a main branch labeled FZ0. Each auxiliary sub-branch is connected to different layers of the backbone network to extract layer-specific features and produce a Knowledge Representation Header (KRH) that serves as the segmentation result. The main branch is linked to the fully connected layer to extract fused features and optimize the mixed features from various layers of the backbone network. Finally, a top-down learning strategy is employed to guide the model’s training, ensuring consistency in self-distillation. The KRH0 derived from the main branch FZ0 integrates the knowledge KRH1-KRH4 obtained from each sub-branch FZ1-FZ4, steering the overall optimization direction for self-distillation learning. Consequently, the main branch and sub-branches can be regarded as teacher and student entities, respectively, forming four distillation pairs, with FZ0 directing FZ1-FZ4. This top-down distillation strategy leverages the main branch to instruct the sub-branches to learn independently, thereby enabling the sub-branches to acquire more discriminative features from the main branch while maintaining consistency in the optimization direction between the sub-branches and the main branch. Results and Discussions The results quantitatively demonstrate the segmentation performance of the proposed method. The data indicates that the proposed method consistently achieves superior segmentation results across all four datasets. On average, the metric Fβ of the proposed method exceeds that of the suboptimal method, Transformer Knowledge Distillation (TKD), by nearly 1%. Additionally, the mean Intersection over Union (mIoU) metric of the proposed method is 0.86% higher than that of the suboptimal method, Target-Aware Transformer (TAT). 
These results demonstrate that the proposed method effectively addresses the challenge of camouflage target segmentation. Notably, on the Camouflage Object Detection (COD) dataset, the proposed method improves Fβ by about 1.6% compared to TKD, while achieving an enhancement of 0.81% in mIoU relative to TAT. Among CNN methods, Poolnet+ (POOL+) attained the highest average Fβ, yet it falls short of the proposed method by 4.22%. This difference can be attributed to the Transformer’s capability to overcome the limitations of the restricted receptive field inherent in CNNs, thereby extracting a greater amount of semantic information from images. The results also show that the self-distillation method is similarly effective within the Transformer framework, significantly enhancing the segmentation performance of the Transformer model. The proposed method outperforms other self-distillation strategies, achieving the best segmentation results across all four datasets. When compared to the baseline model, the average metrics for Fβ and mIoU exhibit increases of 2% and 2.4%, respectively. Conclusions The proposed self-distillation algorithm enhances object segmentation performance and demonstrates the efficacy of self-distillation within the Transformer architecture.
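A minimal sketch of the top-down distillation objective, assuming logit outputs and a KL-divergence distillation term (the actual loss used in the paper may differ):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def top_down_distill_loss(main_logits, sub_logits_list):
    """Main branch FZ0 (teacher, producing KRH0) guides each sub-branch
    FZ1..FZ4 (students, producing KRH1..KRH4). The teacher distribution is
    treated as fixed, so each distillation pair pulls the sub-branch towards
    the main branch's optimization direction."""
    teacher = softmax(main_logits)
    return sum(kl(teacher, softmax(s)) for s in sub_logits_list)
```

The loss vanishes exactly when every sub-branch matches the main branch, and grows as the sub-branch predictions drift from the fused-feature output.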
Available online , doi: 10.11999/JEIT240503
Abstract:
Objective With the rapid increase in UAV numbers and the growing complexity of airspace environments, Detect-and-Avoid (DAA) technology has become essential for ensuring airspace safety. However, the existing Detection and Avoidance Alerting Logic for Unmanned Aircraft Systems (DAIDALUS) algorithm, while capable of providing basic avoidance strategies, has limitations in handling multi-aircraft conflicts and adapting to dynamic, complex environments. To address these challenges, integrating the DAIDALUS output strategies into the action space of a Markov Decision Process (MDP) model has emerged as a promising approach. By incorporating an MDP framework and designing effective reward functions, it is possible to enhance the efficiency and cost-effectiveness of avoidance strategies while maintaining airspace safety, thereby better meeting the needs of complex airspaces. This research offers an intelligent solution for UAV avoidance in multi-aircraft cooperative environments and provides theoretical support for the coordinated management of shared airspace between UAVs and manned aircraft. Methods The guidance logic of the DAIDALUS algorithm dynamically calculates the UAV’s collision avoidance strategy based on the current state space. These strategies are then used as the action space in an MDP model to achieve autonomous collision avoidance in complex flight environments. The state space in the MDP model includes parameters such as the UAV's position, speed, and heading angle, along with dynamic factors like the relative position and speed of other aircraft or potential threats. The reward function is crucial for ensuring the UAV balances flight efficiency and safety during collision avoidance. It accounts for factors such as success rewards, collision penalties, proximity to target point rewards, and distance penalties to optimize decision-making. 
Additionally, the discount factor determines the weight of future rewards, balancing the importance of immediate versus future rewards. A lower discount factor typically emphasizes immediate rewards, leading to faster avoidance actions, while a higher discount factor places greater weight on long-term flight safety and resource consumption. Results and Discussions The DAIDALUS algorithm calculates the UAV’s collision avoidance strategy based on the current state space, which then serves as the action space in the MDP model. By defining an appropriate reward function and state transition probabilities, the MDP model is established to explore the impact of different discount factors on collision avoidance. Simulation results show that the optimal flight strategy, calculated through value iteration, is represented by the red trajectory (Fig. 7). The UAV completes its flight in 203 steps, while the comparative experiment trajectory (Fig. 8) consists of 279 steps, demonstrating a 27.2% improvement in efficiency. When the discount factor is set to 0.99 (Fig. 9, Fig. 10), the UAV selects a path that balances immediate and long-term safety, effectively avoiding potential collision risks. The airspace intrusion rate is 5.8% (Fig. 11, Fig. 12), with the closest distance between the threat aircraft and the UAV being 343 meters, which meets the safety requirements for UAV operations. Conclusions This paper addresses the challenge of UAV collision avoidance in complex environments by integrating the DAIDALUS algorithm with a Markov Decision Process model. The proposed decision-making method enhances the DAIDALUS algorithm by using its guidance strategies as the action space in the MDP.
The method is evaluated through multi-aircraft conflict simulations, and the results show that: (1) The proposed method improves efficiency by 27.2% over the DAIDALUS algorithm; (2) Long-term and short-term rewards are considered by selecting a discount factor of 0.99 based on the relationship between the discount factor and reward values at each time step; (3) In multi-aircraft conflict scenarios, the UAV effectively handles various conflicts and maintains a safe distance from threat aircraft, with a clear airspace intrusion rate of only 5.8%. However, this study only considers ideal perception capabilities, and real-world flight conditions, including sensor noise and environmental variability, should be accounted for in future work.
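The value-iteration step at the core of the MDP solution can be sketched on a toy state space; the grid, rewards, and action set below are illustrative stand-ins for the DAIDALUS-derived action space and the paper's reward design:

```python
import numpy as np

# Toy 1-D world: the UAV moves toward a target position. In the real model
# the action set is populated by DAIDALUS guidance strategies; here it is
# just left / stay / right.
N_STATES = 6                     # positions 0..5; state 5 is the target
GOAL, ACTIONS = 5, (-1, 0, +1)
STEP_PENALTY, GOAL_REWARD = -1.0, 100.0   # distance penalty vs success reward

def value_iteration(gamma, tol=1e-6):
    """Bellman backups until convergence; gamma is the discount factor that
    trades immediate rewards against long-term safety."""
    V = np.zeros(N_STATES)
    while True:
        V_new = np.empty_like(V)
        for s in range(N_STATES):
            if s == GOAL:
                V_new[s] = 0.0           # absorbing target state
                continue
            best = -np.inf
            for a in ACTIONS:
                s2 = min(max(s + a, 0), N_STATES - 1)
                r = GOAL_REWARD if s2 == GOAL else STEP_PENALTY
                best = max(best, r + gamma * V[s2])
            V_new[s] = best
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration(gamma=0.99)
```

With the high discount factor of 0.99 used in the paper, the value function rises monotonically toward the target, so the greedy policy steadily closes on the goal rather than chasing only the immediate reward.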
Available online , doi: 10.11999/JEIT240446
Abstract:
Objective In the future, 6G will usher in a new era of intelligent interconnection and the integration of virtual and physical environments. This vision relies heavily on the deployment of numerous communication and sensing devices. However, the scarcity of frequency resources presents a significant challenge for sharing these resources efficiently. Integrated Sensing and Communication (ISAC) technology offers a promising solution, enabling both communication and sensing to share a common set of equipment and frequency resources. This allows for simultaneous target detection and information transmission, positioning ISAC as a key technology for 6G. ISAC research can be divided into two main approaches: Coexisting Radar and Communication (CRC) and Dual-Function Radar Communication (DFRC). The CRC approach designs separate systems for radar and communication, aiming to reduce interference between the two; however, this leads to increased system complexity. The DFRC approach integrates radar and communication into a single system, simplifying the design while still achieving both radar detection and communication functions. As a result, DFRC is the primary focus of ISAC research. Waveform design is a crucial component of ISAC systems, with two primary strategies: non-overlapped resource waveform design and fully unified waveform design. The fully unified design can be further classified into three types: sensing-centric, communication-centric, and joint design. Previous research has predominantly focused on sensing-centric or communication-centric designs, which limit the flexibility of the integrated waveform in balancing communication and sensing performance. Additionally, limited research has addressed ISAC in marine environments. This paper investigates waveform design for ISAC in marine environments, proposing a joint design approach that uses a weighting coefficient to adjust the communication and sensing performance of the integrated waveform. 
Methods Considering the characteristics of the marine environment, this paper proposes using Unmanned Aerial Vehicles (UAVs) as nodes in the ISAC system, owing to their flexibility, portability, and cost-effectiveness. The integrated waveform transmitted by UAVs can both communicate with downlink users and detect targets. The communication performance is evaluated using the achievable sum rate, while the sensing performance is assessed by the error between the covariance matrix of the integrated waveform and the standard radar covariance matrix. The optimization objective is to maximize the weighted sum of these two performance indices, subject to the constraint that UAV power does not exceed the maximum allowable value. The weighting coefficient represents the ratio of communication power to sensing power. Due to the non-convex rank-1 constraint and objective function, the optimization problem is non-convex. This paper decomposes the non-convex optimization problem into a series of convex subproblems using the Successive Convex Approximation (SCA) algorithm. The local optimal solution of the original problem is obtained by solving these convex subproblems. The communication and sensing performance of the integrated waveform can be adjusted by varying the weighting coefficient. The performance of the weighted integrated waveform design in a marine environment is simulated, and the results are presented. Results and Discussions Simulation results indicate that the integrated beam pattern exhibits two large lobes: one directed towards the target for detection, and the other towards the communication user (Fig. 4). As the weighting coefficient increases, the lobes directed towards the communication users become more pronounced, reflecting the increased emphasis on communication performance.
Furthermore, as the weighting coefficient increases, the sensing performance error (smaller error indicates better sensing performance) initially increases slowly before rising more rapidly. Meanwhile, the achievable sum rate of communication increases sharply. Eventually, both the sensing performance error and the communication sum rate curves flatten out (Fig. 6). Since the UAV's maximum power is limited to 10 W, further increases in the weighting coefficient beyond a certain point lead to diminishing returns in communication performance, as power constraints limit further improvement. At this point, the sensing performance error remains stable. Conclusions This paper investigates the waveform design for UAV-enabled ISAC systems in marine environments. A wireless propagation model for UAVs in such environments is developed, and an integrated waveform optimization method based on a weighted design is proposed. The SCA algorithm is used to solve the resulting convex subproblems iteratively. Simulation results demonstrate that when the weighting coefficient is between 0.2 and 0.5, the integrated waveform ensures strong communication performance while maintaining good sensing performance.
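The role of the weighting coefficient can be illustrated with a scalar power-split toy problem; the rate and error surrogates below are assumptions for illustration only, not the paper's matrix-valued objective:

```python
import numpy as np

# Split the UAV's power budget P between communication and sensing.
P, G = 10.0, 1.0     # 10 W max power (from the text); unit channel gain (assumption)

def objective(p_comm, rho):
    """Weighted sum to maximize: rho weights the rate term, (1 - rho) the
    sensing-error term (error shrinks as sensing power P - p_comm grows)."""
    rate = np.log2(1 + G * p_comm)            # achievable-rate surrogate
    sense_err = 1.0 / (1 + (P - p_comm))      # sensing-error surrogate
    return rho * rate - (1 - rho) * sense_err

def best_split(rho, grid=np.linspace(0, P, 1001)):
    """Grid search over the power split (stand-in for the SCA solver)."""
    return grid[np.argmax([objective(p, rho) for p in grid])]
```

As in the reported beam patterns, a larger weighting coefficient shifts the optimum toward communication: `best_split(0.9)` allocates more power to the user link than `best_split(0.2)` does.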
Available online , doi: 10.11999/JEIT240236
Abstract:
Objective Most existing research assumes that the Intelligent Reflecting Surface (IRS) is equipped with continuous phase shifters, which neglects the phase quantization error. However, in practice, IRS devices are typically equipped with discrete phase shifters due to hardware and cost constraints. Similar to the performance degradation caused by finite quantization bit shifters in directional modulation networks, discrete phase shifters in IRS systems introduce phase quantization errors, potentially affecting system performance. This paper analyzes the performance loss and approximate performance loss in a double IRS-aided amplify-and-forward relay network, focusing on Signal-to-Noise Ratio (SNR) and achievable rate under Rayleigh fading channels. The findings provide valuable guidance on selecting the appropriate number of quantization bits for IRS in practical applications. Methods Based on the weak law of large numbers, Euler’s formula, and the Rayleigh distribution, closed-form expressions for the SNR performance loss and achievable rate of the discrete phase shifter IRS-aided amplify-and-forward relay network are derived. Additionally, corresponding approximate expressions for the performance loss are derived using the first-order Taylor series expansion. Results and Discussions The SNR performance loss at the destination is evaluated as a function of the number of IRS-1 elements (N), assuming that the number of IRS-2 elements (M) equals N (Fig. 2). It is evident that, regardless of whether the scenario involves actual or approximate performance loss, the SNR performance loss decreases as the number of quantization bits (k) increases but increases as N grows. When k = 1, the gap between the actual performance loss and the approximate performance loss widens with increasing N. This gap becomes negligible when k is greater than or equal to 2. Notably, when k = 4, the SNR performance loss is less than 0.06 dB.
Furthermore, both the SNR performance loss and approximate performance loss gradually decelerate as N increases towards a larger scale. The achievable rate at the destination is evaluated as a function of N, where M equals N (Fig. 3). It can be observed that, in all scenarios—whether there is no performance loss, with performance loss, or approximate performance loss—the achievable rate increases gradually as N increases. This is because both IRS-1 and IRS-2 provide greater performance gains as N grows. When k = 1, the difference in achievable rates between the performance loss and approximate performance loss scenarios increases with N. As k increases, the achievable rates with performance loss and approximate performance loss converge towards the no-performance-loss scenario. For example, when N = 1 024, the performance loss in achievable rate is about 0.15 bits/(s·Hz) at k = 2 and only 0.03 bits/(s·Hz) at k = 3. The achievable rate is evaluated as a function of k (Fig. 4). The performance loss in achievable rate increases with N and M. When k = 3, the achievable rates with performance loss and approximate performance loss decrease by 0.04 bits/(s·Hz) compared to the no performance loss scenario. When k = 1, the differences in achievable rates between the no performance loss, performance loss, and approximate performance loss scenarios grow with increasing N and M. Remarkably, the achievable rate for the system with N = 1 024 and M = 128 outperforms that of N = 128 and M = 1 024. This suggests that increasing N provides a more significant improvement in rate performance than increasing M. Conclusions This paper investigates a double IRS-assisted amplify-and-forward relay network and analyzes the system performance loss caused by phase quantization errors in IRS equipped with discrete phase shifters under Rayleigh fading channels.
Using the weak law of large numbers, Euler’s formula, and Rayleigh distribution, closed-form expressions for SNR performance loss and achievable rate are derived. Approximate performance loss expressions are also derived based on a first-order Taylor series expansion. Simulation results show that the performance losses in SNR and achievable rate decrease with increasing quantization bits, but increase with the number of IRS elements. When the number of quantization bits is 4, the performance losses in SNR and achievable rate are less than 0.06 dB and 0.03 bits/(s·Hz), respectively, suggesting that the system performance loss is negligible when using 4-bit phase quantization shifters.
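A first-order estimate of the quantization-induced SNR loss (uniform phase error, coherent combining) reproduces the reported magnitudes; this is a standard approximation, not the paper's exact closed-form expression:

```python
import math

def snr_loss_db(k):
    """First-order SNR loss from k-bit phase quantization on an IRS.
    The quantization error is uniform on [-pi/2^k, pi/2^k], so the coherent
    combining amplitude shrinks by E[cos(err)] = sin(d)/d with d = pi/2^k;
    SNR scales with amplitude squared, hence a 20*log10 loss in dB."""
    d = math.pi / (2 ** k)
    shrink = math.sin(d) / d
    return -20 * math.log10(shrink)
```

For k = 4 this gives about 0.056 dB, consistent with the sub-0.06 dB loss reported above, and the loss decreases monotonically as the number of quantization bits grows.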
Available online , doi: 10.11999/JEIT240612
Abstract:
Objective With the continuous advancement of research on Reconfigurable Intelligent Surfaces (RIS), various application scenarios have emerged. Among these, Active Reconfigurable Intelligent Surfaces (ARIS) attract significant attention from the academic community. While some studies focus on dual Passive RIS-assisted communication systems, others investigate dual RIS-assisted systems incorporating ARIS. Existing literature consistently demonstrates that dual RIS configurations outperform single RIS setups in terms of achievable Signal-to-Noise Ratio (SNR), power gain, and energy transfer efficiency, with dual RIS systems achieving approximately ten times higher energy transfer efficiency. However, most existing studies on RIS focus on optimizing the performance of reflection coefficients in one or more distributed RIS-aided systems, primarily serving users within their respective coverage areas, without sufficiently addressing the benefits of single-reflection links. While dual RIS systems can effectively mitigate the limitations of antenna numbers and improve transmission reliability and efficiency, single-reflection links can still significantly enhance channel capacity, especially under low transmission power conditions. This paper proposes a novel approach wherein dual-reflection links and two single-reflection links jointly serve users. The goal is to maximize the downlink capacity of dual RIS-assisted Multiple-Input Single-Output (MISO) systems by strategically configuring the interaction between the two RISs. Methods In this paper, four combinatorial models of RIS are investigated: the Transmitter-PRIS PRIS-Receiver (TPPR), Transmitter-ARIS PRIS-Receiver (TAPR), Transmitter-PRIS ARIS-Receiver (TPAR), and Transmitter-ARIS ARIS-Receiver (TAAR). The optimization objective of all models is to maximize the communication rate by optimizing the antenna beamforming vector of the base station and the phase shift matrix of the RIS.
Due to the coupling of the three variables in the objective function, the model is non-convex, making it difficult to obtain an optimal solution. To address the coupling problem, the Alternating Optimization (AO) algorithm is employed, where one phase shift vector is fixed while the other is optimized alternately. To tackle the non-convex problem, Successive Convex Approximation (SCA) is applied to iteratively approximate the optimal solution by solving a series of convex subproblems. Results and Discussions Building on the research methods outlined above and employing the SCA and AO algorithms, experimental results are obtained. The system capacity of each combination model increases with rising amplification power (Fig. 2). However, once the amplification factor reaches a certain threshold, the capacity curves of all models begin to flatten due to the constraints imposed by the maximum amplification power. Further demonstration of the system capacity performance of different combination models as transmit power increases is shown in (Fig. 3). Across all dual-RIS combination models, system capacity improves with higher transmit power and outperforms the Single-Active model in all scenarios. In (Fig. 3(a)), under low transmit power conditions, regions of the curves corresponding to higher amplification power overlap due to the constraint of the amplification factor. As transmit power increases, system capacity stabilizes, which can be attributed to the proximity of ARIS to the base station, allowing it to receive stronger signals. Under high transmit power, system capacity continues to improve due to the influence of PRIS. Unlike ARIS, PRIS reflects the optimized signal path without being constrained by amplification power. Consequently, as transmit power increases, the signal strength received by PRIS is enhanced. In (Fig. 3(b)), system capacity increases with transmit power, showing trends similar to those in (Fig. 3(a)).
In the TPAR combined model, the amplification factor constraint dominates, causing the system capacity curves to exhibit similar behavior across different amplification power levels. Under low transmit power, the signal strength at ARIS does not exceed the maximum amplification power budget. As transmit power increases, the amplification power constraint increasingly affects system capacity, leading to a gradual slowdown in the curve's upward trend until it flattens. At high transmit power levels, the system capacity curve of the TPAR model levels off due to the low signal strength received by ARIS when it is positioned farther from the base station. This positioning necessitates higher transmit power to overcome the amplified power constraint. Thus, it is recommended that ARIS be deployed as close to the user as possible. In (Fig. 3(c)), the TAAR combined model leverages the characteristics of both ARIS and PRIS in a dual ARIS-assisted scenario. Under low transmit power conditions, significant capacity gains are achieved. However, at high transmit power, the system capacity is constrained by the maximum amplification power of ARIS and eventually levels off. The system capacity trends in (Fig. 3(a)) and (Fig. 3(b)) consistently increase with higher transmit power. This is because both combination models integrate the advantages of PRIS and ARIS, ensuring high performance under both high and low transmit power conditions. In (Fig. 3(d)), where ARIS is positioned on the user side, comparison with (Fig. 3(c)) reveals that, under high transmit power, the system capacity of both combination models is nearly identical, regardless of the amplification power level. This suggests that in strong transmit power scenarios, the additional gains from ARIS are limited. Conclusions This paper provides an in-depth analysis of the optimization of dual RIS-assisted MISO communication systems, confirming their superiority over single RIS configurations.
However, several potential research directions remain unexplored. Most current studies assume ideal channel models, whereas real-world applications often involve complex channel conditions that significantly affect system performance. Future research could investigate the performance of dual RIS systems under these practical conditions, paving the way for more robust and applicable solutions.
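The alternating optimization loop described above can be illustrated on a toy problem (a sketch only, not the paper's actual solver): two scalar phase shifts stand in for the two phase-shift vectors, and each AO step optimizes one phase over a grid while the other is held fixed. The channel gains h1 = 1+1j and h2 = 2-1j are hypothetical.

```python
import cmath
import math

def alternating_phase_opt(h1, h2, iters=10, grid=360):
    # Toy stand-in for the AO loop: fix one phase-shift variable,
    # optimize the other over a discrete grid, and alternate.
    a = b = 0.0
    def rate(pa, pb):
        s = h1 * cmath.exp(1j * pa) + h2 * cmath.exp(1j * pb)
        return math.log2(1.0 + abs(s) ** 2)
    for _ in range(iters):
        a = max((2 * math.pi * k / grid for k in range(grid)),
                key=lambda x: rate(x, b))
        b = max((2 * math.pi * k / grid for k in range(grid)),
                key=lambda x: rate(a, x))
    return a, b, rate(a, b)

h1, h2 = 1 + 1j, 2 - 1j
a, b, r = alternating_phase_opt(h1, h2)
# Phase alignment is optimal here, so the achievable peak rate is
# log2(1 + (|h1| + |h2|)^2); computed below for comparison.
best = math.log2(1.0 + (abs(h1) + abs(h2)) ** 2)
```

Because the rate is maximized by phase alignment, the alternating updates converge to the aligned solution, matching `best` up to the grid resolution.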
Available online, doi: 10.11999/JEIT240666
Abstract:
Objective: In monitoring Internet of Things (IoT) systems, it is essential for sensor devices to transmit collected data to the Access Point (AP) promptly. The timely transmission of information can be enhanced by increasing transmission power, as higher power levels tend to improve the reliability of data transfer. However, sensor devices typically have limited transmission power, and beyond a certain threshold, increases in power yield diminishing returns in terms of transmission timeliness. Therefore, effectively managing transmission power to balance timeliness and Energy Efficiency (EE) is crucial for sensor devices. This paper investigates the trade-off between the Age of Information (AoI) and EE in multi-device monitoring systems, where sensor devices communicate monitoring data to the AP using short packets with support from Intelligent Reflective Surface (IRS). To address packet collisions that occur when multiple devices access the same resource block, an access control protocol is developed, and closed-form expressions are derived for both the average AoI and EE. Based on these expressions, the average AoI-EE ratio is introduced as a metric that can be minimized to achieve an optimal balance between AoI and EE through transmission power optimization. Methods: Deriving the closed-form expression for the average AoI is challenging due to two factors. Firstly, obtaining the exact distribution of the composite channel gain is difficult. Secondly, in short-packet communications, the packet error rate expression involves a complementary cumulative distribution function with a complex structure, complicating the averaging process. However, the Moment Matching (MM) technique can approximate the probability distribution of the composite channel gain as a gamma distribution. To address the second challenge, a linear function is used to approximate the packet error rate, yielding an approximate expression for the average packet error rate. 
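The moment-matching step can be sketched as follows; the composite-gain model below (a sum of unit-variance Rayleigh components, squared) is a hypothetical stand-in for the paper's cascaded IRS channel, not its exact expression.

```python
import math
import random

def gamma_moment_match(samples):
    # Moment matching: choose the gamma shape k and scale theta so the
    # gamma distribution has the same mean and variance as the data.
    n = len(samples)
    m = sum(samples) / n
    v = sum((x - m) ** 2 for x in samples) / n
    k = m * m / v        # shape
    theta = v / m        # scale
    return k, theta

random.seed(0)
# Hypothetical composite gain: squared magnitude of a sum of N
# Rayleigh-distributed component amplitudes.
N = 16
def composite_gain():
    s = sum(math.hypot(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N))
    return s * s

data = [composite_gain() for _ in range(20000)]
k, theta = gamma_moment_match(data)
# Gamma(k, theta) reproduces the sample mean (k*theta) and
# variance (k*theta^2) by construction.
```

The fitted gamma distribution then replaces the intractable exact distribution when averaging the packet error rate.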
Additionally, to examine the relationship between the ratio of the average AoI to EE and the transmission power, the second derivative of this ratio is calculated and analyzed. Finally, the optimal transmission power is determined using a binary search algorithm. Results and Discussions: First, the paper examines the division of a time slot into varying numbers of resource blocks and analyzes the resulting AoI performance. The findings indicate that AoI performance does not improve monotonically as the number of resource blocks increases. Specifically, while a greater number of resource blocks raises the probability of device access, it concurrently reduces the size of each resource block, leading to higher packet error rates during information transmission. Therefore, the number of resource blocks allocated to each time slot must be planned strategically. Additionally, the results demonstrate that the AoI performance of the proposed access control scheme exceeds that of traditional random access and periodic sampling schemes. In the random access scheme, devices occupy resource blocks at random, which may lead to multiple devices occupying the same block, resulting in transmission collisions that compromise the reliability of information transmission. Conversely, while devices in the periodic sampling scheme can reliably access resource blocks within each cycle, one cycle includes multiple time slots, necessitating a prolonged wait before information transmission. Moreover, at lower transmission power levels, the periodic sampling scheme can achieve higher EE. This is because low transmission power results in substantially higher packet error rates across all schemes; however, the periodic sampling scheme secures larger resource blocks, leading to lower packet error rates and a reduced likelihood of energy waste during signal transmission.
As information transmission power increases, the advantages of the periodic sampling scheme begin to diminish, and the EE of the proposed access control scheme ultimately exceeds that of the periodic sampling scheme. Finally, the paper investigates the relationship between the ratio of average AoI and EE with the information transmission power. The analysis reveals that this ratio is a convex function that initially decreases and subsequently increases with rising transmission power, indicating the existence of an optimal power level that minimizes the ratio. Conclusions: This study examines the trade-off between timeliness and EE in IRS-assisted short-packet communication systems. An access control protocol is proposed to mitigate packet collisions, and both timeliness and EE are analyzed. The ratio of average AoI to EE is introduced as a metric to balance AoI and EE, with optimization of transmission power shown to minimize this ratio. Simulation results validate the theoretical analysis and demonstrate that the proposed access control protocol achieves an improved AoI-EE trade-off. Future research will focus on optimizing the deployment location of the IRS to further enhance the balance between timeliness and EE.
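The power optimization described above can be illustrated with a minimal sketch: for a convex ratio that first decreases and then increases with power, a binary search on the sign of the derivative locates the minimizer. The ratio function used here is a hypothetical surrogate, not the paper's derived AoI-EE expression.

```python
def argmin_convex(f, lo, hi, tol=1e-6):
    # Binary search on the derivative sign of a convex function:
    # if f is still decreasing at the midpoint, the minimizer
    # lies to the right; otherwise it lies to the left.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        eps = 0.1 * tol
        if f(mid + eps) - f(mid - eps) < 0:   # still decreasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical AoI/EE ratio with the shape reported in the paper:
# it first decreases and then increases with transmit power p.
ratio = lambda p: 2.0 / p + 0.5 * p
p_star = argmin_convex(ratio, 0.01, 20.0)
# Analytically, the minimum of 2/p + p/2 is at p = 2.
```

Any convex, unimodal ratio can be plugged into `argmin_convex`; only the bracketing interval needs to be chosen to contain the minimizer.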
Available online, doi: 10.11999/JEIT240561
Abstract:
Objective: High-quality wireless communication enabled by Unmanned Aerial Vehicles (UAVs) is set to play a crucial role in the future. In light of the limitations posed by traditional terrestrial communication networks, the deployment of UAVs as nodes within aerial access networks has become a vital component of emerging technologies in Beyond Fifth Generation (B5G) and sixth generation (6G) communication systems. However, the presence of infrastructure obstructions, such as trees and buildings, in complex urban environments can hinder the Line-of-Sight (LoS) link between UAVs and ground users, leading to a significant degradation in channel quality. To address this challenge, researchers have proposed the integration of Reconfigurable Intelligent Surfaces (RIS) into UAV communication systems, providing an energy-efficient and flexible passive beamforming solution. RIS consists of numerous adjustable electromagnetic units, with each element capable of independently configuring various phase shifts. By adjusting both the amplitude and phase of incoming signals, RIS can intelligently reflect signals from multiple transmission paths, thereby achieving directional signal enhancement or nulling through beamforming. Given the limitations of conventional joint beamforming methods—such as their exclusive focus on optimizing the RIS phase shift matrix and lack of universality—a novel joint beamforming approach based on a Cooperative Co-Evolutionary Algorithm (CCEA) is proposed. This method aims to enhance Spectrum Efficiency (SE) in multi-user scenarios involving RIS-assisted UAV communications. Methods: The proposed approach begins by optimizing the RIS phase shift matrix, followed by the design of the beam shape for RIS-reflected waves. This process modifies the spatial energy distribution of RIS reflections to improve the Signal-to-Interference-plus-Noise Ratio (SINR) at the receiver. 
To address challenges in existing optimization algorithms, an Evolutionary Algorithm (EA) is introduced for the first time, and a cooperative co-evolutionary structure based on EA is developed to decouple joint beamforming subproblems. The central concept of CCEA revolves around decomposing complex problems into several subproblems, which are then solved through distributed parallel evolution among subpopulations. The evaluation of individuals within each subpopulation, representing solutions to their respective subproblems, relies on collaboration among different populations. Specifically, this involves merging individuals from one subpopulation with representative individuals from others to create composite solutions. Subsequently, the overall fitness of these composite solutions is assessed to evaluate individual performance within each subpopulation. Results and Discussions: The simulation results demonstrate that, in comparison to joint beamforming, which focuses solely on designing the RIS phase shift matrix, further optimizing the shape of the reflected beam from the RIS significantly enhances the accuracy and effectiveness of the main lobe coverage over the user's position, resulting in improved SE. Although Maximum Ratio Transmission (MRT) precoding can maximize the output SINR of the desired signal, it may also lead to considerable inter-user interference, which subsequently diminishes the SE. Therefore, the implementation of joint beamforming is essential. The optimization algorithms proposed in this paper are effective for both the actual amplitude-phase shift model and the ideal RIS amplitude-phase shift model. However, factors such as dielectric loss associated with the actual circuit structure of the RIS can attenuate the strength of the reflected wave reaching the client, thereby reducing the SINR at the receiving end and ultimately lowering the SE. 
Additionally, the increase in SE achievable through Deep Reinforcement Learning (DRL) and Alternating Optimization (AO) is limited when compared to CCEA. Unlike the optimization of individual action strategies employed in DRL, the CCEA algorithm produces a greater variety of solutions by utilizing crossover and mutation among individuals within the population, thereby mitigating the risk of local optimization. Moreover, CCEA can optimize the spatial distribution of the reflected waves through a more sophisticated design of the RIS reflecting beam shape. This results in an enhanced signal intensity at the receiving end, allowing for a higher SE compared to AO and DRL, which primarily focus on optimizing the RIS phase shift matrix. Conclusions: In light of the limitations observed in previous joint beamforming optimization methods, this paper introduces a novel joint beamforming optimization approach based on CCEA. This method effectively decomposes the joint beam optimization problem into two distinct sub-problems: the design of the RIS reflection beam waveform and the beamforming design at the transmitter. These sub-problems are addressed through independent parallel evolution, utilizing two separate sub-populations. Notably, for RIS passive beamforming, this approach innovatively optimizes the RIS phase shift matrix alongside the design of the RIS reflected beam shape for the first time. Numerical simulation results indicate that, compared to joint beamforming strategies that focus solely on optimizing the RIS phase shift matrix, a more meticulous design of the RIS reflected waveform can significantly alter the intensity distribution of reflected waves in 3D space. This alignment enables the reflected beam to converge on the user's location while mitigating interference, thereby enhancing the system's SE. 
Furthermore, the CCEA algorithm demonstrates the capability to achieve effective coverage of RIS reflected beams for users, regardless of varying base station and user locations. The optimization process leads to a reduction in Peak Side Lobe Level (PSLL) and an improvement in SE by at least 5 dB, showing its spatial applicability across diverse scenarios. Future research will aim to further investigate the application of evolutionary algorithms and swarm intelligence optimization techniques in joint beamforming optimization, as well as explore the potential of RIS beam waveform design to optimize communication systems, adapting to increasingly complex and diversified communication requirements.
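A minimal sketch of the cooperative co-evolutionary structure follows: two subpopulations evolve separate blocks of the decision vector, and each individual is scored by pairing it with the other subpopulation's representative. The separable quadratic objective is a hypothetical surrogate for the joint beamforming fitness, and the loop uses mutation-only reproduction for brevity.

```python
import random

def ccea(fitness, pop_size=20, gens=60, sigma=0.5):
    # Cooperative co-evolution: two subpopulations each evolve one block
    # of the solution; an individual is evaluated by combining it with
    # the representative (current best) of the other subpopulation.
    random.seed(1)  # deterministic toy run
    pops = [[random.uniform(-10.0, 10.0) for _ in range(pop_size)]
            for _ in range(2)]
    reps = [pops[0][0], pops[1][0]]
    for _ in range(gens):
        for i in (0, 1):
            scored = []
            for ind in pops[i]:
                x, y = (ind, reps[1]) if i == 0 else (reps[0], ind)
                scored.append((fitness(x, y), ind))
            scored.sort(reverse=True)
            elite = [ind for _, ind in scored[: pop_size // 2]]
            reps[i] = elite[0]  # best individual becomes the representative
            # Refill the subpopulation by mutating the elites
            # (crossover is omitted to keep the sketch short).
            pops[i] = elite + [e + random.gauss(0.0, sigma) for e in elite]
    return reps, fitness(reps[0], reps[1])

# Hypothetical separable surrogate for the joint objective;
# its maximum is at x = 3, y = -1 with fitness 0.
surrogate = lambda x, y: -(x - 3.0) ** 2 - (y + 1.0) ** 2
(best_x, best_y), val = ccea(surrogate)
```

The key design point mirrored here is that no subpopulation ever sees the full problem: fitness is always assessed on a composite solution assembled with the other subpopulation's representative.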
Available online, doi: 10.11999/JEIT240521
Abstract:
Objective: In contemporary warfare, radar systems serve a crucial role as vital instruments for detection and tracking. Their performance is essential, often directly impacting the progression and outcome of military engagements. As these systems operate in complex and hostile environments, their susceptibility to adversarial interference becomes a significant concern. Recent advancements in active jamming techniques, particularly compound active jamming, present considerable threats to radar systems. These jamming methods are remarkably adaptable, employing a range of signal types, parameter variations, and combination techniques that complicate countermeasures. Not only do these jamming signals severely impair the radar’s ability to detect and track targets, but they also exhibit rapid adaptability in high-dynamic combat scenarios. This swift evolution of jamming techniques renders traditional radar jamming recognition models ineffective, as they struggle to address the fast-changing nature of these threats. To counter these challenges, this paper proposes a novel incremental learning method designed for recognizing compound active jamming in radar systems. This innovative approach seeks to bridge the gaps of existing methods when confronted with incomplete and dynamic jamming conditions typical of adversarial combat situations. Specifically, it tackles the challenge of swiftly updating models to identify novel out-of-database compound jamming while mitigating the performance degradation caused by imbalanced sample distributions. The primary objective is to enhance the adaptability and reliability of radar systems within complex electronic warfare environments, ensuring robust performance against increasingly sophisticated and unpredictable jamming techniques. Methods: The proposed method commences with prototypical learning within a meta-learning framework to achieve efficient feature extraction. 
Initially, a feature extractor is trained using in-database single jamming signals. This extractor is designed to capture the features of out-of-database compound jamming signals. Subsequently, a Zero-Memory Incremental Learning Network (ZMILN) is developed, which incorporates hyperdimensional space and cosine similarity techniques. This network facilitates the mapping and storage of prototype vectors for compound jamming signals, thereby enabling dynamic updating of the recognition model. To address the challenges associated with imbalanced test sample distributions, a Transductive Information Maximization (TIM) testing module is introduced. This module integrates divergence constraints into the mutual information loss function, refining the recognition model to optimize its performance on imbalanced datasets. The implementation begins with comprehensive modeling of radar active jamming signals. Linear Frequency Modulation (LFM) signals, frequently used in contemporary radar systems, are chosen as the transmitted radar signals. The received signals are modeled as a blend of target echo signals, jamming signals, and noise. Various categories of radar active jamming, including suppression jamming and deceptive jamming, are classified, and their composite forms are examined. For feature extraction, a five-layer Convolutional Neural Network (CNN) is employed. This CNN transforms input radar jamming time-frequency image samples into a hyperdimensional feature space, generating 512-dimensional prototype vectors. These vectors are stored within the prototype space, with each jamming category corresponding to a distinct prototype vector.
To enhance classification accuracy and efficiency, a quasi-orthogonal optimization strategy is utilized to improve the spatial arrangement of these prototype vectors, thereby minimizing overlap and confusion between different categories and increasing the precision of jamming signal recognition. The ZMILN framework addresses two primary challenges in recognizing compound jamming signals: the scarcity of new-category samples and the limitations inherent in existing models when it comes to identifying novel categories. By integrating prototypical learning with hyperdimensional space techniques, the ZMILN enables generalized recognition from in-database single jamming signals to out-of-database compound jamming. To further enhance model performance in the face of imbalanced sample conditions, the TIM module maximizes information gain by partitioning the test set into supervised support and unsupervised query sets. The ZMILN model is subsequently fine-tuned using the support set, followed by unsupervised testing on the query set. During the testing phase, the model computes the cosine similarity between the test samples and the prototype vectors, ultimately yielding the final recognition results. Results and Discussions: The proposed method exhibits notable effectiveness in the recognition of radar compound active jamming signals. Experimental results indicate an average recognition accuracy of 93.62% across four single jamming signals and seven compound jamming signals under imbalanced test conditions. This performance significantly exceeds various baseline incremental learning methods, highlighting the superior capabilities of the proposed approach in the radar jamming recognition task. Additionally, t-distributed Stochastic Neighbor Embedding (t-SNE) visualization experiments present the distribution of jamming features at different stages of incremental learning, further confirming the method’s effectiveness and robustness. 
The experiments simulate a realistic radar jamming recognition scenario by categorizing “in-database” jamming as single types included in the base training set, and “out-of-database” jamming as novel compound types that emerge during the incremental training phase. This configuration closely resembles real-world operational conditions, where radar systems routinely encounter new and evolving jamming techniques. Quantitative performance metrics, including accuracy and performance degradation rates, are utilized to assess the model’s capacity to retain knowledge of previously learned categories while adapting to new jamming types. Accuracy is computed at each incremental learning stage to evaluate the model’s performance on both old and new categories. Furthermore, the performance degradation rate is calculated to measure the extent of knowledge retention, with lower degradation rates indicative of stronger retention of prior knowledge throughout the learning process. Conclusions: In conclusion, the proposed Zero-Memory Incremental Learning method for recognizing radar compound active jamming is highly effective in addressing the challenges posed by rapidly evolving and complex radar jamming techniques. By leveraging a comprehensive understanding of individual jamming signals, this method facilitates swift and dynamic recognition of out-of-database compound jamming across diverse and high-dynamic conditions. This approach not only enhances the radar system’s capabilities in recognizing novel compound jamming but also effectively mitigates performance degradation resulting from imbalanced sample distributions. Such advancements are essential for improving the adaptability and reliability of radar systems in complex electronic warfare environments, where the nature of jamming signals is in constant flux. 
Additionally, the proposed method holds significant implications for other fields facing incremental learning challenges, particularly those involving imbalanced data and rapidly emerging categories. Future research will focus on exploring open-set recognition models, further enhancing the cognitive recognition capabilities of radar systems in fully open and highly dynamic adversarial environments. This work lays the groundwork for developing more agile cognitive closed-loop recognition systems, ultimately contributing to more resilient and adaptable radar systems capable of effectively managing complex electronic warfare scenarios.
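The prototype-based recognition step can be sketched as nearest-prototype classification under cosine similarity; the three-dimensional vectors and class names below are illustrative only (the paper uses 512-dimensional CNN features), but the zero-memory property is visible: adding a new jamming category is just storing one more prototype, with no retraining.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def classify(x, prototypes):
    # Nearest-prototype decision: return the label whose stored
    # prototype has the highest cosine similarity with x.
    return max(prototypes, key=lambda label: cosine(x, prototypes[label]))

# Hypothetical prototype space with two in-database single-jamming classes.
prototypes = {"noise_jam": [1.0, 0.0, 0.0], "deceptive_jam": [0.0, 1.0, 0.0]}
assert classify([0.9, 0.1, 0.0], prototypes) == "noise_jam"

# Incremental update: register a new out-of-database compound-jamming
# prototype by simply storing its vector.
prototypes["compound_jam"] = [0.7, 0.7, 0.0]
label = classify([0.6, 0.65, 0.05], prototypes)
```

In the paper's setting, the quasi-orthogonal arrangement of prototypes reduces the overlap between the similarity scores that this decision rule compares.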
Available online, doi: 10.11999/JEIT240488
Abstract:
Objective: Previous studies have extensively examined the performance of Intelligent Reflecting Surface (IRS)-assisted wireless communications by varying the location of the IRS. However, relocating the IRS alters the sum of the distances between the IRS and the base station, as well as the distances to users, leading to discrepancies in reflective channel transmission distances, which introduces a degree of unfairness. Additionally, the assumption that the path loss indices for the base station-to-IRS and IRS-to-user channels are equal is overly idealistic. In practical scenarios, the user's height is typically much lower than that of the base station, and the IRS may be positioned closer to either the base station or the user. This disparity results in significantly different path loss indices for the two channels. Consequently, this paper focuses on identifying the optimal deployment location of the IRS while keeping the total distance fixed. The IRS is modeled to move along an ellipsoid or ellipsoidal plane defined by the base station and the user as focal points. The analysis provides insights into the optimal deployment of the IRS while taking into account a broader range of application scenarios, specifically addressing different path loss indices for the base station-to-IRS and IRS-to-user channels given a predetermined sum of the transmitting powers. Methods: Utilizing concepts of phase alignment and the law of large numbers, closed-form expressions for the reachability rate of both passive and active IRS-assisted wireless networks are initially derived for two scenarios: the line-of-sight channel and the Rayleigh channel. Following this, the study analyzes how the path loss exponents from the base station to the IRS and from the IRS to the user impact the optimal deployment location of the IRS. 
Results and Discussions: The reachability rate of a passive IRS-assisted wireless network, considering IRS locations under both line-of-sight and Rayleigh channels, is illustrated. It is evident that the optimal deployment location of the IRS is nearest to either the base station or the user when β1=β2. When β1>β2, the optimal deployment location of the IRS is obtained solely at the base station, while the least effective deployment location shifts progressively closer to the user. Conversely, a contrasting result is obtained when β1<β2. These results verify the correctness of the theoretical derivation in Section 3.1.3. The reachability rate of an active IRS-assisted wireless network as a function of IRS location under line-of-sight and Rayleigh channels is depicted. The figure indicates that when β1=β2, the system's reachability rate under the line-of-sight channel exceeds that of the Rayleigh channel, with the optimal deployment location of the active IRS positioned in proximity to the user. When β1>β2 (fixed β2, increasing β1), the optimal deployment location of the active IRS progressively approaches the base station; when β1<β2, it shifts closer to the user. The optimal deployment location of the IRS for IRS-assisted wireless networks operating under a Rayleigh channel, reflecting variations in the path loss index β, is portrayed. Notably, for passive IRS systems, regardless of the path loss index variations, the optimal deployment locations across the three cases are consistent with the conclusions derived above. For the active IRS, when β1=β2, the optimal deployment location gradually moves away from the user, ultimately approaching the location m (directly above the midpoint of the line connecting the base station and the user).
Conversely, when β1>β2, the optimal deployment position of the IRS increasingly aligns with the base station along an elliptical trajectory; when β1<β2, it shifts towards the user. The optimal deployment location of the active IRS under both line-of-sight and Rayleigh channels as a function of the IRS reflected power PI is displayed. The analysis indicates that, under both channel conditions, as PI gradually increases, the optimal deployment location of the active IRS progressively moves closer to the base station along an elliptical trajectory. When β1=β2 and PI=PB, the optimal deployment location of the active IRS maintains an equal distance from both the base station and the user. The system's reachability rate in relation to the distance r from the base station to the active IRS, accounting for different user noise σu2 and amplified noise σi2 of the active IRS, is presented. When σi2 is fixed and σu2 gradually increases, the optimal deployment location of the active IRS is situated closer to the user. Conversely, when σu2 is fixed and σi2 gradually increases, the optimal deployment location gradually approaches the base station. Additionally, irrespective of increased noise levels, the system's reachability rate tends to decline. Conclusions: This paper examines the maximization of system reachable rates by varying the deployment locations of passive and active IRSs in line-of-sight and Rayleigh channel transmission scenarios.
In the analysis, fixed positions are assumed for both the base station and the user, with the sum of the base station-to-IRS and IRS-to-user distances kept constant. Phase alignment and the law of large numbers are employed to derive a closed-form expression for the reachable rate. Theoretical analysis and simulation results provide several key insights. When β1<β2, the optimal deployment locations for both passive and active IRS are close to the user, and the least favorable deployment location for the passive IRS moves progressively closer to the base station as the difference between β1 and β2 increases. When β1=β2, the optimal deployment location for the active IRS remains near the user, while the passive IRS can be effectively placed near either the base station or the user. When β1>β2, the optimal deployment location of the passive IRS remains close to the base station; as the difference between β1 and β2 increases, the optimal deployment location of the active IRS gradually shifts closer to the base station. Additionally, as the amplified noise of the active IRS increases, its optimal deployment location moves closer to the base station. Conversely, when the noise at the user increases, the optimal deployment location of the active IRS is always closer to the user.
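The placement trade-off analyzed above can be reproduced numerically under a simple cascaded path-loss model (a sketch under simplified assumptions, not the paper's closed-form rate): with d1 + d2 fixed, the passive-IRS reflected power scales as 1/(d1^β1 · d2^β2), and a grid search over d1 places the optimum at an endpoint, near the base station when β1>β2 and near the user when β1<β2, consistent with the conclusions above.

```python
def cascaded_gain(d1, D, beta1, beta2):
    # Passive-IRS cascaded path loss: reflected power scales as
    # 1 / (d1^beta1 * d2^beta2), with the total distance d1 + d2 = D fixed.
    d2 = D - d1
    return 1.0 / (d1 ** beta1 * d2 ** beta2)

D = 100.0                                    # fixed BS-to-user path-length sum, in m
grid = [1.0 + 0.5 * i for i in range(197)]   # candidate d1 from 1 m to 99 m

def best_d1(beta1, beta2):
    # Grid search for the BS-to-IRS distance that maximizes the gain.
    return max(grid, key=lambda d1: cascaded_gain(d1, D, beta1, beta2))

# Equal exponents: the gain is symmetric in d1 and d2 and is maximized
# at an endpoint, i.e. the IRS should sit next to the BS or the user.
# Unequal exponents: the optimum moves to the side with the larger exponent's
# shorter hop (toward the BS when beta1 > beta2, toward the user otherwise).
```

The distances, the 100 m total, and the 0.5 m grid are illustrative choices; any positive exponents reproduce the same endpoint behavior.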
Available online, doi: 10.11999/JEIT240601
Abstract:
Objective: Next-generation communication networks will enhance converged “endogenous sensing” and communication service capabilities by improving information transmission. Integrated Sensing and Communication (ISAC) is a key technology for achieving the 6G vision and has attracted significant attention from both academia and industry. The integration of ISAC with emerging technologies, such as Reconfigurable Intelligent Surface (RIS) and Movable Antenna (MA), is currently a hot research topic. Because the same waveforms are used for both communication and target sensing, ISAC systems are more vulnerable to information leakage. Unlike Physical Layer Security (PLS)-based designs, covert communication requires not only preventing the signals of legitimate users from being eavesdropped but also hiding the existence of communication activity from malicious targets. This paper examines a generic Integrated Sensing and Covert Communication (ISCC) system involving multiple sensing targets (wardens) and multiple covert users. To facilitate communication between the Base Station (BS) and legitimate users, a simultaneously transmitting and reflecting RIS with movable elements (ME-STAR-RIS) is deployed. Inspired by the MA concept, the ME-STAR-RIS features movable elements that allow for Flexible And Passive Beamforming (FAPB). A key challenge is to design a rational architecture that minimizes the control cost of the ME-STAR-RIS. Our goal is to create an effective beamforming and element deployment strategy for this system and to investigate the benefits of element-level movement at the STAR-RIS. Methods: First, a Discrete Element Position (DEP)-based coupled phase-shift model for STAR-RIS is proposed. This model aims to reduce control costs associated with the movability and phase shifts of STAR-RIS elements. Then, a joint beamforming optimization problem is formulated based on this model.
The goal is to jointly optimize active beamforming at the ISAC BS and flexible passive beamforming (including element positions, phase shifts, and amplitude coefficients) at the ME-STAR-RIS, so as to maximize the probing beam gain at the sensing target while adhering to covert communication quality constraints. The formulated problem is non-convex and strongly coupled, making it challenging to solve. To address this, an effective algorithm is developed leveraging Semi-Definite Program (SDP), Block Coordinate Descent (BCD), Successive Convex Approximation (SCA), and Penalty Convex-Concave Procedure (PCCP) techniques. By introducing auxiliary variables and employing the SDP method, the original problem is transformed into a more manageable Augmented Lagrangian form. The approach features a two-layer iterative algorithm: in the inner loop, the element placement problem is modeled as a binary integer program and solved with a penalty-based SCA method; in the outer layer, a penalty-based BCD method ensures that the coupled STAR-RIS phase-shift constraints are satisfied upon convergence. Results and Discussions: The simulation results validate the effectiveness of the proposed algorithm and provide significant insights. The performance evaluation indicates that the STAR-RIS with 15 movable elements achieves 80% of the performance of a fixed full-array STAR-RIS with 30 elements while halving the required elements. This highlights the potential for a limited number of movable elements to approximate the performance of a fully fixed array. Furthermore, the proposed algorithm consistently converges to a high-performance stationary point, meeting constraints on array element positions and phase shift differences. The results also show that moving the elements yields a narrower and stronger detection beam, enhancing the system's performance. Additionally, the findings reveal a trade-off between communication, sensing, and covertness.
Specifically, as the communication Signal-to-Interference-plus-Noise Ratio (SINR) threshold increases, the sensing performance decreases. Because covert communication constraints limit the beamforming design freedom, additional system resources must be devoted to covertness, which ultimately reduces overall sensing performance. Conclusions: This paper examines the ME-STAR-RIS-assisted integrated sensing and communication system through the lens of covert communication. The BS senses target nodes and communicates with legitimate users via a ME-STAR-RIS. To ensure data security, it is essential to conceal communication activities from potential targets. A joint active-passive covert beamforming scheme is then designed for the ME-STAR-RIS-assisted ISAC system, aiming to maximize probing power while maintaining covert communication quality. This paper serves as an initial exploration of the STAR-RIS with movable elements. Simulation results indicate that element-level mobility offers advantages for the STAR-RIS-assisted ISAC system. Several issues warrant further investigation, including channel estimation, non-ideal Channel State Information (CSI), and optimization of array element positions in practical settings.
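The penalty-based block-update idea can be illustrated on a toy problem (a sketch under simplified assumptions, not the paper's algorithm): a quadratic objective is minimized block by block in closed form while an increasing penalty weight enforces a coupling constraint between the two blocks, loosely mirroring how the coupled STAR-RIS phase-shift constraint is driven to hold at convergence.

```python
def penalized_bcd(rho0=1.0, outer=20, inner=200):
    # Toy penalty-based block coordinate descent: minimize
    #   (x - 1)^2 + (y - 2)^2 + rho * (x - y)^2,
    # alternating exact single-block updates while the penalty weight
    # rho grows, so the coupling constraint x == y holds in the limit.
    x, y, rho = 0.0, 0.0, rho0
    for _ in range(outer):
        for _ in range(inner):
            x = (1.0 + rho * y) / (1.0 + rho)   # argmin over x, y fixed
            y = (2.0 + rho * x) / (1.0 + rho)   # argmin over y, x fixed
        rho *= 2.0                              # tighten the penalty
    return x, y

x_opt, y_opt = penalized_bcd()
# With x == y enforced exactly, the constrained minimizer is x = y = 1.5.
```

The objective, the constraint, and all parameter values here are hypothetical; only the outer penalty-tightening / inner block-update structure reflects the scheme described in the abstract.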
Available online, doi: 10.11999/JEIT250000
Abstract:
In this report, the application and funding statistics of several project types in the electronics and technology area under Division I of the Information Science Department of the National Natural Science Foundation of China in 2024 are summarized. These include the Key Program, General Program, Young Scientists Fund, Fund for Less Developed Regions, Excellent Young Scientists Fund, and National Science Fund for Distinguished Young Scholars. Their distribution characteristics and hot topics are analyzed in terms of application codes, the age of applicants, and the changes over the past five to ten years. This analysis is intended to help researchers understand the research directions that need to be strengthened and the impact of recent reform measures on the application and funding of projects in this field.
Available online, doi: 10.11999/JEIT240663
Abstract:
Covert communication is an important branch of network security that allows secure data transmission in monitored environments. However, practical communication systems face challenges such as complex propagation environments and wide coverage areas, making covert communication difficult to deploy. To address this issue, a wireless covert communication system assisted by an Intelligent Reflecting Surface (IRS) and an Unmanned Aerial Vehicle (UAV) is proposed in this paper. In this system, the IRS is introduced as a relay node to forward signals from the transmitter, while the UAV serves as a friendly node that transmits artificial noise to disrupt malicious users' detection of the covert communication. Under the condition that the detecting receiver is uncertain about the received noise, the minimum detection error probability is derived, and an optimization problem is established that maximizes the covert communication rate subject to an outage probability constraint. The Dinkelbach algorithm is employed to solve the optimization problem. Simulation results demonstrate that the maximum covert communication rate is achieved when the phase shifts of the IRS elements and the UAV's transmission power are jointly optimized.
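The Dinkelbach algorithm mentioned above solves fractional programs by reducing them to a sequence of parametric subproblems. A self-contained sketch for a toy ratio log(1+p)/(p+1) over a power interval (an illustrative stand-in, not the paper's exact covert-rate objective) is:

```python
import math

# Dinkelbach sketch: maximize f(p)/g(p) with f(p) = log(1 + p) and
# g(p) = p + 1 over p in [0, p_max].  Toy objective only; the paper's
# actual covert-rate problem has a different f, g, and constraints.

def dinkelbach(p_max=10.0, iters=50):
    lam, p = 0.0, p_max
    for _ in range(iters):
        lam = math.log(1.0 + p) / (p + 1.0)      # current ratio value
        # the parametric subproblem max_p f(p) - lam*g(p) has the
        # closed-form stationary point 1/lam - 1, clipped to the box
        p = min(max(1.0 / lam - 1.0, 0.0), p_max)
    return p, lam

p_star, lam_star = dinkelbach()
# fixed point: log(1 + p) = lam*(p + 1) with lam = 1/(1 + p),
# i.e. p = e - 1 and lam = 1/e
```

The iteration converges superlinearly for this toy ratio; in the paper the inner subproblem is solved over the IRS phases and UAV power rather than in closed form.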
Available online, doi: 10.11999/JEIT240241
Abstract:
The 6th Generation (6G) mobile communication network is required to provide Ultra-Reliable and Low-Latency Communication (URLLC) services for large-scale nodes. Considering a multi-user massive Multiple-Input Multiple-Output (MIMO) assisted URLLC downlink scenario, system performance is characterized based on Finite Blocklength (FBL) theory, and an efficient power allocation algorithm is proposed to improve the users' transmission rates while ensuring fairness. Specifically, traditional MIMO systems utilize a global Singular Value Decomposition (SVD) linear precoding scheme, which incurs high complexity and cannot guarantee rate fairness among users. To deal with these challenges, a precoding scheme based on local SVD is proposed that effectively suppresses inter-user and intra-user interference in the MIMO system with relatively low complexity. Next, an optimization problem is formulated in which the power allocation factors are optimized to Maximize the Minimum Rate (MMR) among users. To efficiently solve this non-convex problem with coupled high-dimensional variables, the Shannon capacity term in the objective function is relaxed by introducing auxiliary variables and piecewise McCormick envelopes and is transformed into convex functions, thereby reformulating the MMR problem. An optimization algorithm based on Successive Convex Approximation (SCA) is proposed to solve the reformulated problem effectively. Simulation results validate the convergence and accuracy of the proposed algorithm and show its advantages over existing schemes in terms of MMR performance and robustness.
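The McCormick relaxation mentioned above replaces a bilinear term w = x·y with four linear envelope inequalities. A small sketch of the standard single-box McCormick bounds (the building block of the piecewise version; sample boxes here are arbitrary, not the paper's variables) is:

```python
# Standard McCormick envelope for the bilinear term w = x * y with
# box bounds x in [xl, xu], y in [yl, yu].  The paper uses a
# piecewise variant (tighter sub-boxes per segment); this sketch
# shows the single-box envelope those pieces are built from.

def mccormick(x, y, xl, xu, yl, yu):
    # convex underestimators (lower envelope)
    lo = max(xl * y + x * yl - xl * yl,
             xu * y + x * yu - xu * yu)
    # concave overestimators (upper envelope)
    hi = min(xu * y + x * yl - xu * yl,
             xl * y + x * yu - xl * yu)
    return lo, hi

# the envelope must bracket the true product everywhere in the box
for x in (0.0, 0.5, 1.3, 2.0):
    for y in (-1.0, 0.2, 3.0):
        lo, hi = mccormick(x, y, 0.0, 2.0, -1.0, 3.0)
        assert lo <= x * y + 1e-12 and x * y <= hi + 1e-12
```

The bounds are exact at the corners of the box, which is why partitioning the box into pieces (the piecewise version) tightens the relaxation.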
Available online, doi: 10.11999/JEIT240302
Abstract:
As the scale of Unmanned Aerial Vehicle (UAV) systems and the demand for higher communication rates continue to grow, UAV Optical Mobile Communications (UAV-OMC) has emerged as a promising technical direction. However, traditional UAV-OMC struggles to support communication among multiple UAVs. In this paper, based on Optical Intelligent Reflecting Surface (OIRS) technology, we propose a distributed OMC system for UAV clusters. By mounting the OIRS on a designated UAV, the optical signal from a single UAV node is spread to multiple UAV nodes. While retaining the high energy efficiency and high speed of UAV-OMC, the proposed system supports the communication of distributed UAV clusters. This paper develops a mathematical model of the proposed system that accounts for a series of realistic factors, such as OIRS beam control, relative motion between UAVs, and UAV jitter, so that the model fits the actual system. Closed-form expressions for the system's Bit Error Rate (BER) and asymptotic outage probability are also derived. Based on theoretical analysis and simulation results, the effects of each parameter and the system design are discussed.
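Closed-form outage expressions such as those derived above are typically validated against Monte Carlo simulation. A generic sketch for a single Rayleigh-fading link (exponential channel power gain; a simplification, not the paper's composite UAV/OIRS channel with jitter) shows the usual validation pattern:

```python
import math
import random

# Outage probability of a toy Rayleigh link: the channel power gain g
# is exponential with mean g_bar, and outage occurs when g < g_th.
# Closed form: P_out = 1 - exp(-g_th / g_bar).  Generic validation
# sketch only; the paper's UAV/OIRS channel model is more involved.

def outage_mc(g_th=0.5, g_bar=1.0, trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = sum(rng.expovariate(1.0 / g_bar) < g_th for _ in range(trials))
    return hits / trials

analytic = 1.0 - math.exp(-0.5 / 1.0)   # closed-form reference
```

With a few hundred thousand trials the empirical outage rate matches the closed form to within about one percentage point, which is the agreement one expects before trusting an asymptotic expression.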
Available online, doi: 10.11999/JEIT240003
Abstract:
As a new information and communication technology based on software and hardware resource sharing and information sharing, Integration of Sensing and Communication (ISAC) can bring wireless sensing to Wi-Fi platforms, providing an efficient method for low-cost indoor localization. Focusing on the real-time performance and accuracy of indoor positioning parameter estimation, a joint parameter estimation algorithm based on the three-Dimensional (3D) Matrix Pencil (MP) is proposed. First, the Channel State Information (CSI) data are analyzed and a 3D matrix containing Angle of Arrival (AoA), Time of Flight (ToF), and Doppler Frequency Shift (DFS) is constructed. Second, the 3D matrix is smoothed, the 3D MP algorithm is used for parameter estimation, and the direct path is identified by clustering. Finally, triangulation is used for positioning to verify the effectiveness of the proposed algorithm. Experimental results show that, compared with the MUltiple SIgnal Classification (MUSIC) parameter estimation algorithm, the proposed method requires no complicated peak search and reduces the computational complexity by 90%. Compared with the two-dimensional MP algorithm, adding DFS effectively improves the resolution and accuracy of parameter estimation. Field tests verify that the proposed algorithm achieves an average positioning accuracy of 0.56 m at a confidence level of 67% indoors. Therefore, the proposed algorithm effectively improves the real-time performance and accuracy of existing indoor positioning parameter estimation.
Available online, doi: 10.11999/JEIT240059
Abstract:
Recently, metasurface antenna technology has attracted considerable attention from scholars in the communications, radar, and antenna communities, owing to its great capability for flexible control of electromagnetic waves. In particular, the active tunable device used in the metasurface antenna element is one of the most significant components affecting the performance of the entire system. In this paper, a 95 to 105 GHz digitally controlled attenuator with 5-bit resolution is designed in a 0.13 μm SiGe BiCMOS process. The attenuator employs two different topologies: reflective and simplified T-type. The 4 dB and 8 dB reflective attenuation units utilize cross-coupled broadband couplers instead of traditional 3 dB couplers or directional couplers, achieving high attenuation precision and low insertion loss. The 0.5 dB, 1 dB, and 2 dB attenuation units adopt a simplified T-type structure. Furthermore, RC positive- and negative-slope correction networks applied separately to the different attenuation units enable phase compensation, significantly improving the additional phase shift of the attenuator. Within the desired frequency range of 95~105 GHz, the attenuator achieves an attenuation range of 0~15.5 dB with a step of 0.5 dB in a compact size of 0.12 mm². It exhibits a simulated insertion loss below 2.5 dB, a simulated amplitude Root Mean Square (RMS) error of less than 0.25 dB, and a simulated phase RMS error better than 2.2°. The proposed W-band attenuator can serve as a key component enabling the hardware implementation of an integrated Transmit/Receive (T/R) metasurface antenna system with simultaneous radiation and scattering control.
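The 5-bit control described above combines binary-weighted unit cells of 0.5, 1, 2, 4, and 8 dB. A small sketch enumerating the 32 control codes confirms that these weights produce the stated 0~15.5 dB range in uniform 0.5 dB steps:

```python
# Enumerate the 32 states of a 5-bit binary-weighted attenuator with
# unit cells of 0.5, 1, 2, 4, and 8 dB, as described in the abstract.

WEIGHTS_DB = (0.5, 1.0, 2.0, 4.0, 8.0)   # one unit cell per control bit

def attenuation_db(code):
    # sum the weights of the asserted bits (code in 0..31)
    return sum(w for bit, w in enumerate(WEIGHTS_DB) if code >> bit & 1)

states = sorted(attenuation_db(c) for c in range(32))
# 32 distinct states from 0 to 15.5 dB in uniform 0.5 dB steps
```

Because each weight is twice the previous one, every code maps to a distinct attenuation and the steps are uniform, which is why the amplitude RMS error of the fabricated units directly bounds the step accuracy.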
Available online, doi: 10.11999/JEIT221203
Abstract:
To overcome the limitations of Federated Learning (FL) when both the data and the models of the clients are heterogeneous, and to improve accuracy, a personalized Federated learning algorithm with Coalition game and Knowledge distillation (pFedCK) is proposed. First, each client uploads its soft predictions on a public dataset and downloads the k most correlated soft predictions. Then, the method applies the Shapley value from coalition game theory to measure the multi-wise influences among clients and to quantify their marginal contributions to each other's personalized learning performance. Finally, each client identifies its optimal coalition, distills the knowledge into its local model, and trains on its private dataset. The results show that, compared with state-of-the-art algorithms, this approach achieves superior personalized accuracy, with an improvement of about 10%.
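The Shapley value used above is each player's marginal contribution averaged over all join orders. A self-contained sketch on a made-up three-client characteristic function (the coalition values below are illustrative, not pFedCK's measured personalization gains) shows the computation:

```python
from itertools import permutations

# Shapley-value sketch: average marginal contribution of each client
# over all join orders.  The characteristic function v below is a
# toy mapping coalition -> performance gain, invented for
# illustration; pFedCK measures real personalization performance.

v = {frozenset(): 0.0,
     frozenset('A'): 1.0, frozenset('B'): 1.0, frozenset('C'): 2.0,
     frozenset('AB'): 3.0, frozenset('AC'): 3.5, frozenset('BC'): 3.5,
     frozenset('ABC'): 6.0}

def shapley(players):
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining this prefix
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: s / len(perms) for p, s in phi.items()}

phi = shapley('ABC')
# efficiency property: the values sum to v(grand coalition)
```

Clients A and B are symmetric in this toy game and so receive equal Shapley values, while C's larger stand-alone and pairwise contributions earn it a larger share; this is the fairness property pFedCK exploits when ranking coalition partners.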
Available online, doi: 10.11999/JEIT210265
Abstract:
The application of Frequency Diverse Array Multiple-Input Multiple-Output (FDA-MIMO) radar to range-angle estimation of targets has attracted increasing attention. The FDA simultaneously provides degrees of freedom of the transmit beampattern in both angle and range. However, its performance is degraded by the periodicity and time-varying nature of the beampattern. Therefore, an improved Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm is proposed to estimate the target parameters based on a new waveform synthesis model of the Time Modulation and Range Compensation FDA-MIMO (TMRC-FDA-MIMO) radar. Finally, the proposed method is compared with the identical-frequency-increment FDA-MIMO radar system, the logarithmically increasing frequency offset FDA-MIMO radar system, and the MUltiple SIgnal Classification (MUSIC) algorithm in terms of the Cramér-Rao lower bound and the root mean square error of range and angle estimation, and the excellent performance of the proposed method is verified.
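ESPRIT exploits the rotational invariance between two shifted subarrays, avoiding MUSIC's spectral search. A minimal noiseless direction-finding sketch on a half-wavelength ULA (standard ESPRIT, assuming numpy; the paper's TMRC-FDA-MIMO variant additionally resolves range) illustrates the principle:

```python
import numpy as np

# Minimal ESPRIT sketch on a half-wavelength ULA (noiseless): the
# signal subspaces of the two shifted subarrays differ by a rotation
# whose eigenvalues encode the arrival angles.  Standard ESPRIT only;
# the paper's TMRC-FDA-MIMO model adds range estimation on top.

rng = np.random.default_rng(0)
M, K, T = 8, 2, 200                       # sensors, sources, snapshots
theta_true = np.array([10.0, 40.0])       # degrees
A = np.exp(1j * np.pi * np.outer(np.arange(M),
                                 np.sin(np.deg2rad(theta_true))))
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
X = A @ S                                 # noiseless array snapshots

Us = np.linalg.svd(X)[0][:, :K]           # signal subspace
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]    # rotation between subarrays
theta_est = np.sort(np.rad2deg(
    np.arcsin(np.angle(np.linalg.eigvals(Phi)) / np.pi)))
```

The eigenvalues of the rotation Phi lie on the unit circle at phases π·sin(θ), so the angles fall out of one eigendecomposition; with noise, the same steps apply to the covariance-based signal subspace.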
Available online, doi: 10.11999/JEIT201066
Abstract:
To solve the problem of Group Repetition Interval (GRI) selection in the construction of supplementary transmitting stations for the enhanced LORAN (eLORAN) system, a screening algorithm based on the cross interference rate is proposed, mainly from a mathematical point of view. First, the method considers the requirement of second information and, on this basis, conducts an initial screening by comparing the mutual Cross Rate Interference (CRI) with adjacent Loran-C stations in neighboring countries. Second, a further screening is conducted through permutation and pairwise comparison. Finally, the optimal GRI combination scheme is given by considering the requirements of data rate and system specification. Then, in view of the high-precision timing requirements of the new eLORAN system, an optimized selection is made among the multiple optimal combinations. The analysis results show that the average interference rate of the optimal combination obtained by this algorithm is comparable to that between the current navigation chains while meeting the timing requirements, providing reference suggestions and a theoretical basis for the construction of a high-precision ground-based timing system.
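Cross-rate interference between two chains arises because their pulse groups drift into and out of coincidence with a period set by the least common multiple of the two GRIs. A small sketch (GRI values are illustrative examples, not an actual station plan) shows the basic quantity behind such a screening:

```python
from math import gcd

# Cross-rate sketch: two Loran-type chains with group repetition
# intervals gri_a and gri_b (in units of 10 us, as GRIs are usually
# quoted) realign with period lcm(gri_a, gri_b).  A longer
# realignment period means less frequent pulse-group coincidences,
# i.e. a lower cross interference rate.  GRI values are illustrative.

def coincidence_period(gri_a, gri_b):
    # lcm of the two GRIs, in the same 10-us units
    return gri_a * gri_b // gcd(gri_a, gri_b)

def coincidences_per_second(gri_a, gri_b):
    # one realignment per lcm period; 1 GRI unit = 10 us
    return 1.0 / (coincidence_period(gri_a, gri_b) * 10e-6)

period = coincidence_period(8000, 9960)   # units of 10 us
```

Screening candidate GRIs then amounts to preferring pairs whose lcm (and hence realignment period) is long relative to both intervals, which is the mathematical core of a cross-interference-rate comparison.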