Articles in press have been peer-reviewed and accepted; they are not yet assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Light Field Angular Reconstruction Based on Template Alignment and Multi-stage Feature Learning
YU Mei, ZHOU Tao, CHEN Yeyao, JIANG Zhidi, LUO Ting, JIANG Gangyi
 doi: 10.11999/JEIT240481
Abstract:
  Objective  By placing a micro-lens array between the main lens and imaging sensor, a light field camera captures both intensity and directional information of light in a scene. However, due to sensor size, dense spatial sampling results in sparse angular sampling during light field imaging. Consequently, angular super-resolution reconstruction of Light Field Images (LFIs) is essential. Existing deep learning-based LFI angular super-resolution methods typically reconstruct dense LFIs through two approaches. The direct generation approach models the correlation between spatial and angular information from sparse LFIs and then upsamples along the angular dimension to reconstruct the light field. The indirect approach, on the other hand, generates intermediate outputs, reconstructing LFIs through operations on these outputs and the inputs. LFI coding methods based on sparse sampling generally select a subset of Sub-Aperture Images (SAIs) for compression and transmission, using angular super-resolution to reconstruct the LFI at the decoder. In LFI scalable coding, the SAIs are divided into multiple viewpoint layers, some of which are selectively transmitted based on bit rate allocation, while the remaining layers are reconstructed at the decoder. Although existing deep learning-based angular super-resolution methods yield promising results, they lack flexibility and generalizability across different numbers and positions of reference SAIs. This limits their ability to reconstruct SAIs from arbitrary viewpoints, making them unsuitable for LFI scalable coding. To address this, a Light Field Angular Reconstruction method based on Template Alignment and multi-stage Feature learning (LFAR-TAF) is proposed, capable of handling different angular sparse templates with a single network model.
Methods  The process involves alignment, Micro-Lens Array Image (MLAI) feature extraction, sub-aperture level feature fusion, feature mapping to the target angular position, and SAI synthesis at the target angular position. First, the different viewpoint layers used in LFI scalable encoding are treated as different representations of the MLAI, referred to as light field sparse templates. To minimize discrepancies between these sparse templates and reduce the complexity of fitting the single network model, bilinear interpolation is employed to align the templates and generate corresponding MLAIs. The MLAI Feature Learning (MLAIFL) module then uses a Residual Dense Block (RDB) to extract preliminary features from the MLAIs, thereby mitigating the differences introduced by bilinear interpolation. Since the MLAI feature extraction process may partially disrupt the angular consistency of the LFI, a conversion mechanism from MLAI features to sub-aperture features is devised, incorporating an SAI-level Feature fusion (SAIF) module. In this step, the input MLAI features are reorganized along the channel dimension to align with the SAI dimension. Three 1×1 convolutions and two agent attention mechanisms are then employed for progressive fusion, supported by residual connections to accelerate convergence. In the feature mapping module, the extracted SAI features are mapped and adjusted to the target angular position based on the given target angular coordinates. Specifically, the SAI features are expanded in dimension using a recombination operator to match the spatial dimensions of the input features, and are concatenated with the input features. The concatenated target angular information is then fused with the light field features using a 1×1 convolution and RDB. The fused features are subsequently input into two RDBs to generate intermediate convolution kernel weights and bias maps. 
In the SAI synthesis module for the target angular position, the common viewpoints of different sparse templates serve as reference SAIs for the indirect synthesis method, ensuring the stability of the proposed approach. Using non-shared weight convolution kernels of the same dimension, the reference SAIs are convolved, and the preliminary results are combined with the generated bias map to synthesize the target SAI with enhanced detail.  Results and Discussions  The performance of LFAR-TAF is evaluated using two publicly available natural scene LFI datasets: the STFLytro dataset and Kalantari et al.’s dataset. To ensure non-overlapping training and testing sets, the same partitioning method as in current advanced approaches for natural scene LFIs is adopted. Specifically, 100 natural scene LFIs (100 Scenes) from Kalantari et al.’s dataset are used for training, while the test set consists of 30 LFIs (30 Scenes) from Kalantari et al.’s dataset, as well as 15 reflection scenes and 25 occlusion scenes from the STFLytro dataset. LFAR-TAF is compared with six angular super-resolution reconstruction methods (ShearedEPI, Yeung et al.’s method, LFASR-geo, FS-GAF, DistgASR, and IRVAE) using PSNR and SSIM as objective quality metrics for angular reconstruction from 3×3 to 7×7. Experimental results demonstrate that LFAR-TAF achieves the highest objective quality scores across all three test datasets. Notably, the proposed method is capable of reconstructing SAIs at any viewpoint using either five reference SAIs or 3×3 reference SAIs, after training on angular reconstruction tasks from 3×3 to 7×7. Subjective visual comparisons further show that LFAR-TAF effectively restores color and texture details of the target SAI from the reference SAIs. Ablation experiments reveal that removing either the MLAIFL or SAIF module results in decreased objective quality scores on the three test datasets, with the loss being more pronounced when the MLAIFL module is omitted.
This highlights the importance of MLAI feature learning in modeling the spatial and angular correlations of LFIs, while the SAIF module enhances the micro-lens array to sub-aperture feature conversion process. Additionally, coding experiments are conducted to assess the practical performance of the proposed method in LFI coding. Two angular sparse templates (five SAIs and 3×3 SAIs) are tested on four scenarios from the EPFL dataset. The results show that encoding five SAIs achieves high coding efficiency at lower bit rates, while encoding nine SAIs from the 3×3 sparse template provides better performance at higher bit rates. These findings suggest that, to improve LFI scalable coding compression efficiency, different sparse templates can be selected based on the bit rate, and LFAR-TAF demonstrates stable reconstruction capabilities for various sparse templates in a single training process.  Conclusions  The proposed LFAR-TAF effectively handles different sparse templates with a single network model, enabling the flexible reconstruction of SAIs at any viewpoint by referencing SAIs with varying numbers and positions. This flexibility is particularly beneficial for LFI scalable coding. Moreover, the designed training approach can be applied to other LFI angular super-resolution methods, enhancing their ability to handle diverse sparse templates.
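The micro-lens-array-to-sub-aperture conversion that the SAIF module builds on can be illustrated with a minimal NumPy sketch. The array layout below (each U×V block of the MLAI holding the angular samples under one micro-lens) is a common lenslet convention assumed here for illustration; the paper performs this reorganization on learned MLAI features along the channel dimension, not on raw pixels:

```python
import numpy as np

def mlai_to_sais(mlai, U, V):
    """Rearrange a Micro-Lens Array Image (MLAI) into Sub-Aperture Images (SAIs).

    mlai : (H*U, W*V) array in which each U x V block holds the angular
           samples recorded under one micro-lens (assumed layout).
    Returns an (U, V, H, W) array, one H x W SAI per angular position (u, v).
    """
    HU, WV = mlai.shape
    H, W = HU // U, WV // V
    # Split each spatial position into its U x V angular samples,
    # then move the two angular axes to the front.
    return mlai.reshape(H, U, W, V).transpose(1, 3, 0, 2)
```

The inverse mapping (stacking SAIs back into an MLAI) is the corresponding transpose and reshape, which is why the conversion is lossless at the pixel level.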
Robust Resource Optimization in Integrated Sensing, Communication, and Computing Networks Based on Soft Actor-Critic
LI Bin, SHEN Li, ZHAO Chuanxin, FEI Zesong
 doi: 10.11999/JEIT240716
Abstract:
  Objective  Traditional approaches typically adopt a disjoint design that improves specific performance aspects under particular scenarios but often proves inadequate for addressing complex tasks in dynamic environments. Challenges such as real-time task offloading, efficient resource scheduling, and the simultaneous optimization of sensing, communication, and computing performance remain significant. The Integrated Sensing, Communication, and Computing (ISCC) architecture has been proposed to address these issues. In complex scenarios, the diversity of task types and varying requirements lead to inflexible offloading policies, limiting the system’s ability to adapt to real-time network changes. Moreover, computational uncertainty can undermine the robustness of resource scheduling, potentially resulting in performance degradation or task failure. Effectively addressing challenges like high user energy consumption and computational uncertainty while maintaining service quality is crucial for optimizing future network nodes. As network environments grow increasingly complex and user demands for high performance, low latency, and robust reliability rise, the optimization of resource efficiency and the achievement of mutual benefit across sensing, communication, and computing functions become urgent and critical. To meet this challenge, it is essential to advance the system towards higher intelligence and multi-dimensional connectivity. Furthermore, research on robust offloading in ISCC networks remains limited and warrants further investigation.  Methods  To address high user energy consumption and computational uncertainty in ISCC networks under complex scenarios, a robust resource allocation and decision optimization scheme is proposed. The goal is to minimize the total energy consumption of users. 
The proposed scheme takes into account common constraints and computational uncertainty commonly encountered in practical applications, offering a viable optimization approach for ISCC network design. First, to tackle the challenge of accurately predicting task complexity, potential biases arising from resource allocation and processing estimations are analyzed. These biases reflect real-world unpredictability, where task size can be measured but completion time remains uncertain, potentially leading to resource waste or performance degradation. To mitigate this, a robust computational resource allocation problem is formulated to manage the uncertainty caused by task offloading effectively. Second, the problem of minimizing users’ total energy is established by jointly optimizing task offloading ratios, beamforming, and resource allocation, subject to constraints such as power consumption, processing time, and radar estimation information rate. However, due to the multi-variable, non-convex, and NP-hard nature of this optimization problem, traditional methods fail to provide efficient solutions. To address this, a Markov decision process is modeled, and an optimization algorithm based on Soft Actor-Critic (SAC) is proposed.  Results and Discussions  The simulation results demonstrate that the proposed SAC-based algorithm outperforms existing methods in terms of performance and flexibility in dynamic and complex scenarios. Specifically, the learning rate affects the convergence speed of the algorithm, but its impact on final performance is minimal (Fig. 3). Compared to the Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) algorithms, the proposed algorithm achieves faster training speeds. Thanks to its flexible and unique design, the proposed algorithm exhibits stronger exploration capabilities and remains more stable during training (Fig. 4). The robust design enhances adaptability, resulting in higher overall reward values (Fig. 5). 
In terms of total user energy consumption, the proposed algorithm reduces energy use by approximately 9.57% compared to PPO and by 40.72% compared to A2C. As the number of users increases and more users access the network, signal interference intensifies, transmission rates decrease, and task offloading costs rise. In such scenarios, the proposed algorithm shows greater flexibility in policy adjustment, maintaining energy consumption at a relatively low level and outperforming both PPO and A2C. This advantage becomes more pronounced as the number of users grows or load pressure increases (Fig. 6). Overall, the proposed algorithm offers a robust and efficient solution for resource allocation and optimization in dynamic and complex environments, demonstrating exceptional adaptability and reliability in multi-user and multi-task scenarios. These results not only highlight the superior performance of the SAC algorithm but also demonstrate its potential in addressing multi-variable, non-convex problems.  Conclusions  This paper presents an optimization algorithm based on SAC, which not only achieves outstanding performance in terms of energy consumption, latency, and task offloading efficiency but also demonstrates excellent scalability and adaptability in multi-user, multi-task, and complex scenarios. A robust computational resource allocation scheme is proposed to address the uncertainty in offloading decisions. Simulation results show that the proposed algorithm can adapt to complex and dynamic network environments through flexible policy decisions, providing both theoretical support and a technical reference for further research on ISCC networks in such scenarios. Future research could explore incorporating multi-base station collaboration to enhance the robustness of ISCC networks, enabling them to better handle even more complex network environments.
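The entropy-regularized critic target is the standard mechanism behind SAC's stronger exploration relative to PPO and A2C noted above; a minimal sketch follows (the hyperparameter values are illustrative defaults, not values taken from the paper):

```python
import numpy as np

def sac_target(reward, q1_next, q2_next, log_pi_next, gamma=0.99, alpha=0.2):
    """Entropy-regularized TD target used by Soft Actor-Critic critics.

    The critic regresses towards
        y = r + gamma * ( min(Q1', Q2') - alpha * log pi(a'|s') ),
    where the -alpha * log pi term rewards policy entropy (exploration)
    and the min over twin Q-networks curbs value overestimation.
    """
    soft_value = np.minimum(q1_next, q2_next) - alpha * log_pi_next
    return reward + gamma * soft_value
```

Maximizing this soft value, rather than the plain return, is what lets a SAC agent keep probing alternative offloading and beamforming actions while training remains stable.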
Accelerated Channel Simulation Algorithm for Large-Scale Battlefield
LIU Chang, LI Weishi, XU Qiang, SHI Chengzhe, SHAO Shihai
 doi: 10.11999/JEIT240655
Abstract:
  Objective  In large-scale battlefield environments, the testing and training of electromagnetic spectrum operation equipment rely on simulations within a vast digital electromagnetic environment. However, the computational complexity of large-scale electromagnetic channel simulations is high, hindering improvements in computational speed. Traditional time-domain radiosity algorithms experience exponential growth in complexity with increasing reflection orders, while frequency-domain radiosity algorithms face limitations in time-delay resolution due to constraints in Fast Fourier Transform (FFT) points. This paper proposes an iterative time-domain radiosity algorithm that accelerates channel simulation while maintaining high accuracy and time-delay resolution.  Methods  The proposed iterative time-domain radiosity algorithm uses a recursive modeling approach that reuses and corrects channel data from previous moments to reduce computational complexity. The algorithm begins by discretizing reflective surfaces in the environment into facets, which represent small, discrete elements capturing the reflective properties of the environment. The channel impulse response between the transmitter, facets, and receiver is modeled as a sum of direct and reflected components. The reflection process is described using shape factors that account for attenuation, delay, and visibility between facets, which are essential for accurately modeling interactions between the transmitter, facets, and receiver. To reduce computational complexity, the algorithm reuses channel data from the previous moment, leveraging small changes in the geographical location of the equipment between adjacent time steps. This reuse is possible due to the spatial coherence of the environment, ensuring that the previous channel data remains relevant with only minor adjustments.
The channel data is then corrected by adjusting the delay and attenuation components based on changes in the direct shape factors between the transmitter and facets. This correction process ensures that the channel data remains accurate despite the reuse of prior calculations. The algorithm further employs a facet channel search method to approximate the channel by selecting the strongest reflection channels, thereby reducing the computational burden. This method involves identifying the top N strongest reflection channels within each facet, where N is determined by the desired balance between computational complexity and accuracy. By focusing on the strongest reflection channels, the algorithm significantly reduces the number of required calculations while maintaining high accuracy. The combination of data reuse, correction, and low-complexity approximation makes the proposed algorithm highly efficient for large-scale channel simulation.  Results and Discussions  Simulation results show that the proposed iterative time-domain radiosity algorithm improves computational speed by an order of magnitude, while maintaining accuracy, compared to the traditional time-domain radiosity algorithm (Fig. 10). This improvement is achieved by reusing and correcting channel data from previous moments, significantly reducing the number of recursive calculations required. The enhanced computational speed is particularly crucial in large-scale battlefield environments, where traditional algorithms struggle with high computational complexity. In comparison to the frequency-domain radiosity algorithm, the proposed algorithm provides higher time-delay resolution, making it better suited for large-scale battlefield environments (Fig. 8). The time-delay resolution of the frequency-domain radiosity algorithm is constrained by the number of FFT points, which must be set to a large value to achieve high resolution in large-scale environments.
In contrast, the iterative time-domain radiosity algorithm maintains high time-delay resolution without the need for large FFT points, making it more efficient for large-scale simulations. The computational complexity of the iterative time-domain radiosity algorithm is significantly lower than that of both the traditional time-domain radiosity algorithm and the frequency-domain radiosity algorithm (Table 1). The traditional time-domain radiosity algorithm’s complexity grows exponentially with the number of reflections, while the iterative algorithm reduces complexity by reusing and correcting previous calculations. The frequency-domain radiosity algorithm also faces high complexity due to the large number of FFT points required for high time-delay resolution. The proposed algorithm’s ability to reduce computational complexity while maintaining accuracy makes it a highly effective solution for large-scale channel simulations. Furthermore, the iterative time-domain radiosity algorithm demonstrates high consistency with the traditional time-domain algorithm in terms of average delay and root mean square delay spread, with average deviations of 0.04% and 0.9%, respectively. This indicates that the proposed algorithm preserves high accuracy while significantly improving computational efficiency. Its ability to accurately model the channel’s time-delay characteristics is critical for applications in large-scale battlefield environments, where precise channel simulation is essential for the effective testing and training of electromagnetic spectrum operation equipment.  Conclusions  The iterative time-domain radiosity algorithm proposed in this paper significantly enhances computational speed while maintaining accuracy and high time-delay resolution, addressing the computational challenges of channel simulation in large-scale battlefield environments.
By reusing and correcting channel data from previous moments and employing a low-complexity approximation method, the algorithm reduces the computational burden without compromising accuracy. This makes it particularly well-suited for large-scale battlefield environments, where traditional algorithms struggle with high computational complexity and limited time-delay resolution. Future work could explore further optimizations and extend the algorithm to other electromagnetic environments, such as urban or indoor scenarios, where similar challenges in computational complexity and time-delay resolution may arise. Additionally, the algorithm could be adapted for real-time simulation systems, where rapid and accurate channel simulation is critical for decision-making and operational planning.
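The two complexity-reduction ideas described above, reusing and correcting the previous time step's channel data and keeping only the top-N strongest reflection channels per facet, can be sketched as follows. The 1/d amplitude scaling below is an assumed free-space correction for illustration; the paper's correction works through the direct shape factors, whose exact form is not given in the abstract:

```python
import numpy as np

def top_n_taps(gains, delays, n):
    """Facet channel search: keep only the n strongest reflection channels.

    gains, delays : per-path complex gains and propagation delays for one
    facet. Keeping the n largest-|gain| paths trades a controlled accuracy
    loss for a large cut in the number of taps carried forward.
    """
    order = np.argsort(np.abs(gains))[::-1][:n]
    return gains[order], delays[order]

def correct_taps(gains, delays, d_old, d_new, c=3e8):
    """Reuse the previous time step's taps, correcting delay and amplitude
    for the small change in transmitter-facet distance between adjacent
    steps (spatial coherence). 1/d amplitude scaling is an assumption."""
    delays = delays + (d_new - d_old) / c
    gains = gains * (d_old / d_new)
    return gains, delays
```

In a full simulation these per-facet tap lists would be accumulated into the overall channel impulse response as a sum of delayed, attenuated components, with the corrected taps replacing a fresh recursive evaluation at each time step.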
Research on Security, Privacy, and Energy Efficiency in Unmanned Aerial Vehicle-Assisted Federated Edge Learning Communication Systems
LU Weidang, FENG Kai, DING Yu, LI Bo, ZHAO Nan
 doi: 10.11999/JEIT240847
Abstract:
  Objective  Unmanned Aerial Vehicle-Assisted Federated Edge Learning (UAV-Assisted FEL) communication addresses the data isolation problem and mitigates data leakage risks in terminal devices. However, eavesdroppers may exploit model updates in FEL to recover original private data, significantly threatening the system’s privacy and security.  Methods  To address this issue, this study proposes a secure aggregation and resource optimization scheme for UAV-Assisted FEL communication systems. Terminal devices train local models using local data and update parameters, which are transmitted to a global UAV. The UAV aggregates these parameters to generate new global model parameters. Eavesdroppers attempt to intercept the transmitted parameters to reconstruct the original data. To enhance security-privacy energy efficiency, the transmission bandwidth, CPU frequency, and transmit power of terminal devices, along with the CPU frequency of the UAV, are jointly optimized. An evolutionary Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to solve this optimization problem. The algorithm intelligently interacts with the system to achieve secure aggregation and resource optimization while meeting latency and energy consumption requirements.  Results and Discussions  The simulation results validate the effectiveness of the proposed scheme. The experiments evaluate the effects of the scheme on key performance metrics, including system cost, secure transmission rate, and secure privacy energy efficiency, from multiple perspectives. As shown in (Fig. 2), with an increasing number of terminal devices, system cost, secure transmission rate, and secure privacy energy efficiency all increase. These results indicate that the proposed scheme ensures system security and enhances energy efficiency, even in multi-device scenarios. As shown in (Fig.
3), under varying global iteration counts, the system balances latency and energy consumption by either extending the duration to lower energy consumption or increasing energy consumption to reduce latency. The secure transmission rate rises with the number of global iterations, as fewer iterations allow the system to tolerate higher energy consumption and latency per iteration, leading to reduced transmission power from terminal devices to meet system constraints. Additionally, secure privacy energy efficiency improves with increasing global iterations, further demonstrating the scheme’s capacity to ensure system security and reduce system cost as global iterations increase. As shown in (Fig. 4), during UAV flight, secure privacy energy efficiency fluctuates, with higher secure transmission rates observed when the communication environment between terminal devices and the UAV is more favorable. As shown in (Fig. 5), the proposed scheme is compared with two baseline schemes: Scheme 1, which minimizes system latency, and Scheme 2, which minimizes system energy consumption. The proposed scheme significantly outperforms both baselines in cost overhead. Scheme 1 achieves a slightly higher secure transmission rate than the proposed scheme due to its focus on minimizing latency at the expense of higher energy consumption. Conversely, Scheme 2 shows a considerably lower secure transmission rate as it prioritizes minimizing energy consumption, resulting in lower transmission power and compromised secure transmission rates. The results indicate that the secure privacy energy efficiency of the proposed scheme significantly exceeds that of the baseline schemes, further demonstrating its effectiveness.  Conclusions  To enhance data transmission security and reduce system costs, this paper proposes a secure aggregation and resource optimization scheme for UAV-Assisted FEL. 
Under constraints of limited computational and communication resources, the scheme jointly optimizes the transmission bandwidth, CPU frequency, and transmission power of terminal devices, along with the CPU frequency of the UAV, to maximize the secure privacy energy efficiency of the UAV-Assisted FEL system. Given the complexity of the time-varying system and the strong coupling of multiple optimization variables, an advanced DDPG algorithm is developed to solve the optimization problem. The problem is first modeled as a Markov Decision Process, followed by the construction of a reward function positively correlated with the secure privacy energy efficiency objective. The proposed DDPG network then intelligently generates joint optimization variables to obtain the optimal solution for secure privacy energy efficiency. Simulation experiments evaluate the effects of the proposed scheme on key system performance metrics from multiple perspectives. The results demonstrate that the proposed scheme significantly outperforms other benchmark schemes in improving secure privacy energy efficiency, thereby validating its effectiveness.
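A minimal sketch of the quantities such a scheme typically optimizes may help fix ideas. The secrecy rate below follows the standard wiretap-channel form (legitimate capacity minus eavesdropper capacity); the paper's exact secure-privacy energy-efficiency formulation is not given in the abstract, so both functions here are illustrative assumptions:

```python
import numpy as np

def secrecy_rate(snr_legit, snr_eve, bandwidth):
    """Secrecy rate of one device: the positive gap between the legitimate
    link's Shannon capacity and the eavesdropper's (wiretap-channel model)."""
    r_legit = bandwidth * np.log2(1.0 + snr_legit)
    r_eve = bandwidth * np.log2(1.0 + snr_eve)
    return max(r_legit - r_eve, 0.0)

def sppe_reward(secure_bits, energy_joules):
    """A reward positively correlated with secure-privacy energy efficiency:
    securely delivered bits per joule of total system energy."""
    return secure_bits / energy_joules
```

Under this kind of reward, raising transmit power increases the secrecy rate but also the energy term, which is exactly the latency/energy/security trade-off the DDPG agent is trained to balance.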
Research on Channel Modeling for Aerial Reconfigurable Intelligent Surfaces-assisted Vehicle Communications
PAN Xuting, SHI Wangqi, XIONG Baiping, GUO Daoxing, JIANG Hao
 doi: 10.11999/JEIT240874
Abstract:
  Objective   The Internet of Vehicles (IoV) is a global innovation focus, enabling ubiquitous interconnection among vehicles, roads, and people, thereby reducing traffic congestion and improving traffic safety. Vehicle-to-Vehicle (V2V) communication represents one of the most prominent application scenarios in IoV. This study addresses the reduced efficiency of V2V communication caused by environmental obstacles such as buildings and trees. It proposes the deployment of Reconfigurable Intelligent Surfaces (RIS) on Unmanned Aerial Vehicles (UAVs), leveraging their high mobility and on-demand deployment capability to enhance V2V communication under 6G networks. The model improves communication link quality and stability by utilizing the reflective properties of aerial RIS to mitigate signal attenuation and interference. This research develops a geometry-based Three-Dimensional (3D) dynamic channel model that incorporates the effects of UAV rotation, trajectory movement, and attitude changes on channel characteristics, enabling adaptation to dynamic and non-stationary communication scenarios. The findings provide a theoretical foundation for designing and optimizing RIS-assisted wireless communication systems through statistical analyses in the temporal, spatial, and frequency domains.  Methods   RIS can regulate incident electromagnetic waves to optimize communication system performance and are regarded as a crucial innovation in Sixth Generation (6G) wireless communication technology. Deploying RIS on UAVs effectively addresses reduced information transmission efficiency caused by obstacles such as trees and buildings, leveraging UAVs' flexible trajectories and on-demand deployment capabilities. This study proposes a geometry-based 3D dynamic channel model, considering the UAV's trajectory, three degrees of rotational freedom (pitch, yaw, and roll angles), and attitude changes. 
Channel propagation components are divided into aerial RIS array components and Non-Line-of-Sight (NLoS) components. Each RIS unit is modeled as an independent reflector capable of altering the propagation path by adjusting its phase and amplitude. The model incorporates time-varying spatial phases and Doppler frequency shifts, capturing the characteristics of dynamic propagation environments. Mathematical expressions for the Complex Impulse Responses (CIRs) are derived, along with analytical formulas for spatial Cross-Correlation Functions (CCFs), temporal Auto-Correlation Functions (ACFs), Frequency Correlation Functions (FCFs), and channel capacity. Various V2V communication scenarios are simulated by adjusting the velocity, direction, and acceleration of transmitters, receivers, and UAVs. Numerical simulations validate the proposed model's effectiveness by defining four UAV trajectories and various vehicle motion states. Additionally, the temporal, spatial, and frequency correlation characteristics under different motion states are investigated. Finally, the effects of RIS physical attributes, such as the number and size of units, and UAV altitude on channel capacity are analyzed, along with dynamic variations in the power delay profile.  Results and Discussions   Simulation results demonstrate that the proposed channel model accurately captures channel characteristics. Specifically, the model presents various UAV flight trajectories (Fig. 5) and analyzes the temporal autocorrelation properties under different motion states of the transmitter and receiver (Fig. 6). It is observed that the temporal correlation exhibits significant non-stationarity across different motion states. However, the introduction of RIS significantly mitigates the decline in correlation. The model also compares the temporal autocorrelation properties corresponding to different UAV flight attitudes and altitudes (Fig. 7, Fig. 9). 
It is found that as the UAV's initial altitude increases, multipath effects decrease, and the rate of decline in temporal autocorrelation function values gradually slows. Subsequently, the spatial cross-correlation of the proposed channel model is investigated for different propagation paths, revealing an increase in correlation with the Rician factor (Fig. 8). The frequency correlation function values are also examined under varying distances between the transmitter and receiver (Fig. 10), showing that while the correlation declines, it gradually stabilizes as the frequency interval increases. Finally, the impact of the RIS's physical properties on channel capacity and the power delay profile is studied (Fig. 11, Fig. 12). It is observed that increasing the size and number of RIS array elements enhances channel capacity. Additionally, as delay increases, the power exhibits multiple smaller peaks before gradually decaying. These findings provide a valuable theoretical foundation for the future design and optimization of RIS-assisted wireless communication systems.  Conclusions   This paper presents a geometry-based 3D non-stationary channel model for V2V communications, innovatively incorporating aerial RIS implemented by UAVs equipped with RIS. The model accounts for the time-varying motion trajectories of ground vehicle terminals and UAVs, as well as the fading effects due to UAV attitude variations. Analytical expressions for spatiotemporal-frequency correlation functions and channel capacity are derived from the proposed model, ensuring the accuracy of channel transmission characteristics. By adjusting the model's parameter configurations, it can accurately characterize the effects of various motion trajectories, dynamic states, UAV flight altitudes, and rotational angles on channel properties. These findings provide valuable insights for the design and performance analysis of RIS-assisted V2V communication systems.
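The aerial-RIS channel component and the temporal autocorrelation studied above can be illustrated numerically with a minimal sketch: a coherent single-bounce sum over RIS units, each with a controllable reflection phase and a motion-induced Doppler term. This is a deliberate simplification of the paper's geometric model, which additionally tracks UAV trajectory, attitude angles, and NLoS components:

```python
import numpy as np

def ris_cir(t, gains, phases, dopplers):
    """Complex impulse response of the aerial-RIS component at time t:
    a coherent sum over RIS units of a controllable reflection phase plus
    a time-varying Doppler phase from UAV/vehicle motion (simplified)."""
    return np.sum(gains * np.exp(1j * (phases + 2 * np.pi * dopplers * t)))

def temporal_acf(h_series):
    """Normalized temporal auto-correlation of a sampled CIR series,
    R(k) = E[h(t) h*(t + k)] / E[|h|^2], estimated empirically."""
    h = np.asarray(h_series)
    p = np.mean(np.abs(h) ** 2)
    return np.array([np.mean(h[: len(h) - k] * np.conj(h[k:])) / p
                     for k in range(len(h) // 2)])
```

With several units and spread-out Doppler shifts, the magnitude of this ACF decays with lag, mirroring the non-stationary decorrelation the model captures analytically; phase control across units is what lets the RIS slow that decay.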
Power Control and Resource Allocation Strategy for Information Freshness Guarantee in Internet of Vehicles
YANG Peng, KANG Yiming, YANG Jing, TANG Tong, ZHU Zhiyuan, WU Dapeng
 doi: 10.11999/JEIT240698
Abstract:
  Objective  In the Internet of Vehicles (IoV), where differentiated services coexist, the system is progressively evolving towards safety and collaborative control applications, such as autonomous driving. Current research primarily focuses on optimizing mechanisms for high reliability and low latency, with Quality of Service (QoS) parameters commonly used as benchmarks, while the timeliness of vehicle status updates receives less attention. Merely optimizing metrics like transmission delay and throughput is insufficient for ensuring that vehicles obtain status information in a timely manner. For example, in security-critical IoV applications, which require the exchange of state information between vehicles, meeting only the constraints of delay interruption probability or data transmission interruption does not fully address the high timeliness requirements of security services. To tackle this challenge and meet the stringent timeliness demands of security and collaborative applications, this paper proposes a user power control and resource allocation strategy aimed at ensuring information freshness.  Methods  This paper investigates user power control and resource allocation strategies to ensure information freshness. First, the problem of maximizing the Quality of Experience (QoE) for Vehicle-to-Infrastructure (V2I) users under the constraint of freshness in Vehicle-to-Vehicle (V2V) status updates is formulated based on the system model. Then, by incorporating the queue backlog constraint equivalent to the Age of Information (AoI) violation constraint, extreme value theory is applied to optimize the tail distribution of AoI. Furthermore, using the Lyapunov optimization method, the original problem is transformed into minimizing the Lyapunov drift plus a penalty function, based on which the optimal user transmission power is determined.
Finally, a resource allocation strategy based on Genetic Algorithm improved Particle Swarm Optimization (GA-PSO) is proposed, leveraging a hypergraph structure to determine the optimal user channel reuse mode.  Results and Discussions  Simulation analysis indicates the following: 1. The proposed algorithm employs a channel gain differential partitioning method to cluster V2V links, effectively reducing intra-cluster interference. By integrating GA-PSO, it accelerates the search for the optimal channel reuse pattern in three-dimensional matching, minimizing signaling overhead and avoiding local optima. Compared with benchmark algorithms, the proposed approach increases V2I channel capacity by 7.03% and significantly improves the average QoE for V2I users (Fig. 4). 2. As vehicle speed increases, the distance between vehicles also grows, leading to higher transmission power for V2V communication to maintain link reliability and timeliness. This power increase results in reduced V2I channel capacity, subsequently lowering the average QoE for V2I users. Simulation results show a nearly linear relationship between vehicle speed and average QoE for V2I users, suggesting a relatively uniform effect of speed on V2I link capacity (Fig. 5). 3. Under varying Vehicle User Equipment (VUE) densities, the extreme event control framework is used to compare the conditional Complementary Cumulative Distribution Function (CCDF) of AoI and V2V link beacon backlog. The equivalent queue constraint, derived using extreme value theory, effectively controls the occurrence of extreme AoI violations. The simulations show improved AoI tail distribution across different VUE densities (Fig. 6 and Fig. 7). 4. With decreasing vehicle speed, the CCDF tail distribution of AoI improves (Fig. 8). Reduced speed shortens the transmission distance, decreasing V2V link path loss. This lower path loss, combined with less restrictive VUE transmission power limits, increases the V2V link transmission rate. 
As the beacon transmission rate increases, the beacon backlog is reduced, and the probability of exceeding a fixed AoI threshold decreases, ensuring the freshness of V2V beacon transmissions. 5. A comparison of curves under identical beacon arrival rates (Fig. 9) reveals that the worst-case AoI consistently increases with a rising beacon arrival rate. At low beacon arrival rates, the average AoI is high. However, once the V2V beacon queue accumulates beyond a certain threshold, further increases in the update arrival rate also raise the average AoI. In summary, the proposed scheme optimizes both the AoI tail distribution and the QoE for V2I users.  Conclusions  This paper investigates resource allocation and power control in vehicular network communication scenarios. By simultaneously considering the constraints of transmission reliability and status update timeliness in V2V links, restricted by the Signal-to-Interference-plus-Noise Ratio (SINR) threshold and the AoI outage probability threshold, the proposed strategy ensures both link reliability and information freshness. An extreme event control framework is applied to minimize the probability of extreme AoI outage events in V2V links, ensuring the timeliness of transmitted information and meeting service requirements. The Lyapunov optimization method is then used to transform the original problem, yielding the optimal transmission power for both V2I and V2V links. Additionally, a GA-PSO-based three-dimensional matching algorithm is developed to determine the optimal spectrum sharing scheme among V2I links, V2V links, and subchannels. Numerical results demonstrate that the proposed scheme optimizes the AoI tail distribution while enhancing the QoE for all V2I users.
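The Lyapunov drift-plus-penalty step described in the Methods can be sketched as a per-slot power selection; the toy rate model, the candidate power set, and all names below are illustrative assumptions, not the paper's formulation:

```python
# Minimal sketch of a Lyapunov drift-plus-penalty power choice.
# The rate model and parameter values are illustrative, not from the paper.

def drift_plus_penalty_power(queue, arrivals, powers, rate, V):
    """Pick the transmit power minimizing Lyapunov drift plus V-weighted penalty.

    queue    : current beacon backlog (proxy for AoI violation pressure)
    arrivals : beacons arriving this slot
    powers   : candidate transmit powers
    rate     : function p -> beacons served per slot at power p
    V        : weight trading queue stability against power cost
    """
    best_p, best_obj = None, float("inf")
    for p in powers:
        drift = queue * (arrivals - rate(p))   # linearized queue drift term
        penalty = V * p                        # penalty: power consumption
        obj = drift + penalty
        if obj < best_obj:
            best_p, best_obj = p, obj
    return best_p

# Toy concave rate model: higher power serves more beacons.
rate = lambda p: 2.0 * (1 + p) ** 0.5

# With a large backlog the drift term dominates and a high power is chosen;
# with a near-empty queue the penalty term dominates and low power wins.
p_busy = drift_plus_penalty_power(queue=50.0, arrivals=5.0,
                                  powers=[0.1, 1.0, 4.0], rate=rate, V=1.0)
p_idle = drift_plus_penalty_power(queue=1.0, arrivals=5.0,
                                  powers=[0.1, 1.0, 4.0], rate=rate, V=1.0)
```

This captures the qualitative trade-off the abstract describes: a growing backlog (tied to the AoI violation constraint) pushes the controller toward higher transmit power.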
Adaptively Sparse Federated Learning Optimization Algorithm Based on Edge-assisted Server
CHEN Xiao, QIU Hongbing, LI Yanlong
 doi: 10.11999/JEIT240741
[Abstract](173) [FullText HTML](39) [PDF 3257KB](36)
Abstract:
  Objective  Federated Learning (FL) represents a distributed learning framework with significant potential, allowing users to collaboratively train a shared model while retaining data on their devices. However, the substantial differences in computing, storage, and communication capacities across FL devices within complex networks result in notable disparities in model training and transmission latency. As communication rounds increase, a growing number of heterogeneous devices become stragglers due to constraints such as limited energy and computing power, changes in user intentions, and dynamic channel fluctuations, adversely affecting system convergence performance. This study addresses these challenges by jointly incorporating assistance mechanisms and reducing device overhead to mitigate the impact of stragglers on model accuracy and training latency.  Methods  This paper designs an FL architecture that integrates edge-assisted training with adaptive sparsity and proposes an adaptively sparse FL optimization algorithm based on edge-assisted training. First, an edge server is introduced to provide auxiliary training for devices with limited computing power or energy. This reduces the training delay of the FL system, enables stragglers to continue participating in the training process, and helps maintain model accuracy. Specifically, an optimization model for auxiliary training, communication, and computing resource allocation is constructed. Several deep reinforcement learning methods are then applied to obtain the optimized auxiliary training decision. Second, based on the auxiliary training decision, unstructured pruning is adaptively performed on the global model during each communication round to further reduce device delay and energy consumption.  Results and Discussions  The proposed framework and algorithm are evaluated through extensive simulations. 
The results demonstrate the effectiveness and efficiency of the proposed method in terms of model accuracy and training delay. The proposed algorithm achieves an accuracy rate approximately 5% higher than that of the FL algorithm on both the MNIST and CIFAR-10 datasets. This improvement arises because, without auxiliary training, low-computing-power and low-energy devices fail to transmit their local models to the central server during multiple communication rounds, reducing the global model’s accuracy (Table 3). The proposed algorithm achieves an accuracy rate 18% higher than that of the FL algorithm on the MNIST-10 dataset when the data on each device follow a non-IID distribution. Statistical heterogeneity exacerbates model degradation caused by stragglers, whereas the proposed algorithm significantly improves model accuracy under such conditions (Table 4). The reward curves of different algorithms are presented (Fig. 7). The reward of FL remains constant, while the reward of EAFL_RANDOM fluctuates randomly. ASEAFL_DDPG shows a more stable reward curve once training episodes exceed 120, owing to the strong learning and decision-making capability of DDPG. In contrast, EAFL_DQN converges more slowly and maintains a lower reward than the proposed algorithm, mainly because DDPG makes more precise decisions in the continuous action space and its exploration mechanism expands action selection (Fig. 7). When the computing power of the edge server increases, the training delay of the FL algorithm remains constant since it does not involve auxiliary training. The training delay of EAFL_RANDOM fluctuates randomly, while the delays of ASEAFL_DDPG and EAFL_DQN decrease. However, ASEAFL_DDPG consistently achieves a lower system training delay than EAFL_DQN under the same MEC computing power conditions (Fig. 9). When the communication bandwidth between the edge server and devices increases, the training delay of the FL algorithm remains unchanged as it does not involve auxiliary training. 
The training delay of EAFL_RANDOM fluctuates randomly, while the delays of ASEAFL_DDPG and EAFL_DQN decrease. ASEAFL_DDPG consistently achieves lower system training delay than EAFL_DQN under the same bandwidth conditions (Fig. 10).  Conclusions  The proposed sparse-adaptive FL architecture based on an edge-assisted server mitigates the straggler problem caused by system heterogeneity from two perspectives. By reducing the number of stragglers, the proposed algorithm achieves higher model accuracy compared with the traditional FL algorithm, effectively decreases system training delay, and improves model training efficiency. This framework holds practical value, particularly for FL deployments where aggregation devices are selected based on statistical characteristics, such as model contribution rates. Straggler issues are common in such FL scenarios, and the proposed architecture effectively reduces their occurrence. Simultaneously, devices with high model contribution rates can continue participating in multiple rounds of federated training, lowering the central server’s frequent device selection overhead. Additionally, in resource-constrained FL environments, edge servers can perform more diverse and flexible tasks, such as partial auxiliary training and partitioned model training.
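The per-round unstructured pruning step mentioned in the Methods can be illustrated with a minimal magnitude-pruning sketch; the threshold rule and all names are assumptions for illustration, not the paper's exact procedure:

```python
# Hedged sketch of unstructured magnitude pruning: the fraction `sparsity`
# of smallest-magnitude weights is zeroed each round, shrinking the model
# a straggler must compute on and transmit.

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    thresh = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= thresh else w for w in weights]

# The two smallest-magnitude weights (-0.01 and 0.02) are zeroed.
pruned = magnitude_prune([0.5, -0.01, 0.3, 0.02, -0.8], sparsity=0.4)
```

In the architecture described above, the sparsity ratio would itself be set adaptively per device from the auxiliary-training decision; here it is simply a fixed argument.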
Related-key Differential Cryptanalysis of Full-round PFP Ultra-lightweight Block Cipher
YAN Zhiguang, WEI Yongzhuang, YE Tao
 doi: 10.11999/JEIT240782
[Abstract](208) [FullText HTML](44) [PDF 1290KB](111)
Abstract:
  Objective   In 2017, the PFP algorithm was introduced as an ultra-lightweight block cipher to address the demand for efficient cryptographic solutions in constrained environments, such as the Internet of Things (IoT). With a hardware footprint of approximately 1355 GE and low power consumption, PFP has attracted attention for its ability to deliver high-speed encryption with minimal resource usage. Its encryption and decryption speeds outperform those of the internationally recognized PRESENT cipher by a factor of 1.5, making it highly suitable for real-time applications in embedded systems. While the original design documentation asserts that PFP resists various traditional cryptographic attacks, including differential, linear, and impossible differential attacks, the possibility of undiscovered vulnerabilities remains unexplored. This study evaluates the algorithm’s resistance to related-key differential attacks, a critical cryptanalysis method for lightweight ciphers, to determine the actual security level of the PFP algorithm using formal cryptanalysis techniques.  Methods   To evaluate the security of the PFP algorithm, Satisfiability Modulo Theories (SMT) is used to model the cipher’s round function and automate the search for distinguishers indicating potential design weaknesses. SMT, a formal method increasingly applied in cryptanalysis, facilitates automated attack generation and the detection of cryptographic flaws. The methodology involves constructing mathematical models of the cipher’s rounds, which are tested for differential characteristics under various key assumptions. Two distinguisher models are developed: one based on single-key differentials and the other on related-key differentials, the latter being the focus of this analysis. These models automate the search for weak key differentials that could enable efficient key recovery attacks. 
The analysis leverages the nonlinear substitution-permutation structure of the PFP round function to systematically identify vulnerabilities. The results are examined to estimate the probability of key recovery under different attack scenarios and assess the effectiveness of related-key differential cryptanalysis against the full-round PFP cipher.  Results and Discussions  The SMT-based analysis reveals a critical vulnerability in the PFP algorithm. A related-key differential characteristic with a probability of 2^(-62) is identified, persisting through 32 encryption rounds. This characteristic indicates a predictable pattern in the cipher’s behavior under related-key conditions, which can be exploited to recover the secret key. Such differentials are particularly concerning as they expose a significant weakness in the cipher’s resistance to related-key attacks, a critical threat in IoT applications where keys may be reused or related across multiple devices or sessions. Based on this finding, a key recovery attack is developed, requiring only 2^63 chosen plaintexts and 2^48 full-round encryptions to retrieve the 80-bit master key. The efficiency of this attack demonstrates the vulnerability of the PFP cipher to practical cryptanalysis, even with limited computational resources. The attack’s relatively low complexity suggests that PFP may be unsuitable for applications demanding high security, particularly in environments where adversaries can exploit related-key differential characteristics. Moreover, these results indicate that the existing resistance claims for the PFP cipher are insufficient, as they do not account for the effectiveness of related-key differential cryptanalysis. This challenges the assertion that the PFP algorithm is secure against all known cryptographic attacks, emphasizing the need for thorough cryptanalysis before lightweight ciphers are deployed in real-world scenarios. (Fig. 
2: Related-key differential characteristic with probability 2^(-62) over 32 rounds; Table 1: Attack complexity and resource requirements for related-key recovery.)  Conclusions   This paper presents a cryptographic analysis of the PFP lightweight block cipher, revealing its vulnerability to related-key differential attacks. The proposed key recovery attack demonstrates that, despite its efficiency in hardware and speed, PFP fails to resist attacks exploiting related-key differential characteristics. This weakness is particularly concerning for IoT applications, where key reuse or related keys across devices are common. These findings highlight the need for further refinement in lightweight cipher design to ensure robust resistance against advanced cryptanalysis techniques. As lightweight ciphers continue to be deployed in security-critical systems, it is essential that designers consider all potential attack vectors, including related-key differentials, to strengthen security guarantees. Future work should focus on enhancing the cipher’s security by exploring alternative key-schedule designs or increasing the number of rounds to mitigate the identified vulnerabilities. Additionally, this study emphasizes the effectiveness of SMT-based formal methods in cryptographic analysis, providing a systematic approach for identifying previously overlooked weaknesses in cipher designs.
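As background for the differential search discussed above, the basic object such analyses build on is the difference distribution table (DDT) of the cipher's S-box. The sketch below computes a DDT for the well-known PRESENT 4-bit S-box (used here only because PFP's tables are not reproduced in this abstract); it is not the paper's SMT model:

```python
# Illustrative sketch: the difference distribution table counts, for each
# input XOR difference din, how often each output difference dout occurs.
# Low maximum counts mean low single-round differential probabilities.

def ddt(sbox):
    """ddt[din][dout] = #inputs x with sbox[x] ^ sbox[x ^ din] == dout."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            dout = sbox[x] ^ sbox[x ^ din]
            table[din][dout] += 1
    return table

# PRESENT's 4-bit S-box, reused purely as an example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
table = ddt(PRESENT_SBOX)
```

An SMT-based search like the one described chains such per-round difference transitions and asks a solver whether a full-round characteristic above a probability bound exists.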
Hybrid Reconfigurable Intelligent Surface Assisted Sensing Communication and Computation for Joint Power and Time Allocation in Vehicle Ad-hoc Network
SHU Feng, ZHANG Junhao, ZHANG Qi, YAO Yu, BIAN Hongyi, WANG Xianpeng
 doi: 10.11999/JEIT240719
[Abstract](176) [FullText HTML](58) [PDF 4550KB](35)
Abstract:
  Objective  Vehicular networks, as key components of intelligent transportation systems, are encountering increasing spectrum resource limitations within their dedicated 25 MHz communication band, as well as challenges from electromagnetic interference in typical communication environments. To address these issues, this paper integrates cognitive radio technology with radar sensing and introduces Hybrid-Reconfigurable Intelligent Surface (H-RIS) to jointly optimize radar sensing, data transmission, and computation. This approach aims to enhance spectrum resource utilization and the Joint Throughput Capacity (JTC) of vehicular networks.  Methods  The data transmission capacity of secondary users is characterized by defining a performance index for JTC. First, a joint optimization problem for sensing, communication, and computation is formulated. By jointly optimizing time allocation, H-RIS reflection element coefficients, and power allocation, the goal is to maximize the JTC. The block coordinate descent method decomposes the optimization problem into three sub-problems, which are solved iteratively, alternately optimizing power allocation, time allocation, and reflection elements until the optimal solution is reached. In the optimization of reflection element coefficients, a stepwise approach is employed, where passive reflection elements are fixed to optimize active reflection elements and vice versa.  Results and Discussions  The relationship between joint throughput and the number of iterations for the proposed Alternating Optimization Iterative Algorithm (AOIA) is shown (Figure 4). The results indicate that both algorithms converge after a finite number of iterations. 
The correlation between the target secondary user’s joint throughput and radar power is presented (Figure 5). In the H-RIS-assisted Integrated Sensing Communication and Computation Vehicle-to-Everything (ISCC-V2X) scenario, the joint throughput of the Aimed Secondary User (ASU) is maximized through optimal power configuration (Figure 5). The comparison of the target secondary user’s joint throughput with radar system power for the proposed algorithm and baseline schemes is shown (Figure 6), demonstrating that the proposed method significantly outperforms random Reconfigurable Intelligent Surfaces (RIS) and No-RIS schemes under the same parameter settings. Furthermore, the proposed H-RIS optimization scheme outperforms both Random H-RIS and traditional passive optimization RIS in terms of joint throughput. The relationship between the target secondary user’s joint throughput and the number of H-RIS reflection elements is illustrated (Figure 7). The results show that the proposed scheme provides a significant performance improvement over both Random RIS and No-RIS schemes under the same parameter settings. The relationship between the target secondary user’s joint throughput and the transmit power of the ASU is depicted (Figure 9), highlighting that joint throughput increases with transmit power in all scenarios. The relationship between joint throughput and the number of active reflection elements for the proposed algorithm and other benchmark schemes is shown (Figure 10), demonstrating that joint throughput increases with the number of active reflection elements in H-RIS scenarios, with the proposed scheme exhibiting a faster growth rate than Random H-RIS. The relationship between ASU joint throughput, radar sensing time, and radar power is presented (Figure 11), revealing that an optimal joint time and power allocation strategy exists. 
This strategy maximizes ASU joint throughput while ensuring H-RIS presence and sufficient protection for the primary user.  Conclusions  To address the challenges of spectrum resource scarcity and low data transmission efficiency in vehicular networks, this paper focuses on improving the joint throughput of intelligent vehicle users, enhancing spectrum utilization, and achieving efficient data transmission in the H-RIS-assisted ISCC-V2X scenario. A joint optimization method for vehicular network perception, communication, and computation based on H-RIS is explored. The introduction of H-RIS aims to enhance data transmission efficiency while considering the interests of both primary and secondary users. The joint optimization problem for the target secondary user’s perception, communication, and computation is analyzed. First, the joint allocation scenario for the H-RIS-assisted ISCC-V2X system is constructed, introducing the signal model, radar perception model, communication model, and computation model. Using these models, a joint optimization problem is formulated. Through alternating optimization, the optimal H-RIS reflection element coefficients, time allocation vector, and power allocation vector are derived to maximize the joint throughput. Simulation results demonstrate that the incorporation of H-RIS significantly improves the joint throughput of the target secondary user. Furthermore, an optimal power allocation scheme is identified that maximizes the joint throughput. When both time allocation and power allocation are considered jointly, simulations show the existence of an optimal scheme that maximizes the joint throughput of the target secondary user.
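The alternating (block coordinate descent) optimization described above can be sketched on a toy two-block problem; the quadratic objective and closed-form per-block minimizers below are stand-ins for the paper's throughput model, not its actual sub-problems:

```python
# Toy block-coordinate-descent sketch: fix one block of variables, solve
# the other exactly, and alternate until the iterates settle.

def alternate_minimize(best_x_given_y, best_y_given_x, x0, y0, iters=20):
    """Alternately apply each block's exact minimizer (one block fixed)."""
    x, y = x0, y0
    for _ in range(iters):
        x = best_x_given_y(y)   # optimize block 1 with block 2 fixed
        y = best_y_given_x(x)   # optimize block 2 with block 1 fixed
    return x, y

# Example objective: f(x, y) = (x - y)^2 + (x - 3)^2 + (y - 1)^2.
# Setting each partial derivative to zero gives the per-block minimizers:
best_x = lambda y: (y + 3) / 2.0
best_y = lambda x: (x + 1) / 2.0

# Converges to the joint minimizer (7/3, 5/3).
x, y = alternate_minimize(best_x, best_y, 0.0, 0.0)
```

In the abstract's setting, the three "blocks" are power allocation, time allocation, and the H-RIS reflection coefficients, each solved with the others held fixed.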
Pilot Design Method for OTFS System in High-Speed Mobile Scenarios
LI Yibing, TANG Yunhe, JIAN Xin, SUN Qian, CHEN Hao
 doi: 10.11999/JEIT240349
[Abstract](50) [FullText HTML](14) [PDF 959KB](7)
Abstract:
  Objective  Orthogonal Time Frequency Space (OTFS) modulation has attracted significant attention in recent years due to its excellent performance in high-speed mobile communication scenarios characterized by time-frequency doubly selective channels. Accurate and efficient acquisition of channel state information is critical for these systems. To address this, a channel estimation method based on compressed sensing is employed, using specialized pilot sequences. The performance of such compressed-sensing channel estimation algorithms depends on the cross-correlation properties of the dictionary sets generated by these pilot sequences, which vary with the sequence design. This study addresses the pilot design problem in OTFS communication systems, proposing an optimization method to identify pilot sequences that effectively enhance channel estimation accuracy.  Methods  A pilot-assisted channel estimation algorithm based on compressed sensing is employed to estimate the delay and Doppler channel state information in OTFS systems for high-speed mobile scenarios. To improve channel estimation accuracy in the Delay-Doppler domain and achieve better performance than traditional pseudo-random sequences, this study proposes a pilot sequence optimization method using an Improved Genetic Algorithm (IGA). The algorithm takes the cross-correlation among dictionary set columns as the optimization objective, leveraging the GA's strong integer optimization capability to search for optimal pilot sequences. An adaptive adjustment strategy for crossover and mutation probabilities is also introduced to enhance the algorithm's convergence and efficiency. Additionally, to address the high computational complexity of the fitness function, the study analyzes the expressions for calculating cross-correlation among dictionary set columns and simplifies redundant calculations, thereby improving the overall optimization efficiency.  
Results and Discussions  This study investigates the channel estimation performance of OTFS systems using different pilot sequences. The simulation parameters are presented in (Table 1), and the simulation results are shown in (Figure 2), (Figure 3), and (Figure 4). (Figure 2) illustrates the convergence performance of several commonly used population-based heuristic optimization algorithms applied to the pilot optimization problem, including the Particle Swarm Optimization (PSO) algorithm, Discrete Particle Swarm Optimization (DPSO) algorithm, Snake Optimization (SO) algorithm, and Genetic Algorithm (GA). The results indicate that the performance of common continuous optimization algorithms, such as PSO and SO, is comparable, while DPSO slightly outperforms traditional PSO. GA, due to its unique genetic and mutation mechanisms, demonstrates significantly faster convergence and better solutions. Furthermore, this study proposes a targeted IGA capable of adaptively adjusting crossover and mutation probabilities, leading to better solutions with fewer iterations. The objective function calculation process is also analyzed and simplified, reducing its computational complexity from $O(\lambda^2 k_p^2 l_p)$ to $O(\lambda k_p l_p)$ without altering the cross-correlation coefficient, which significantly reduces the computational load while maintaining optimization efficiency. (Figure 3) and (Figure 4) depict the Normalized Mean Square Error (NMSE) and Bit Error Rate (BER) performance of OTFS systems using different pilot sequences for channel estimation. The commonly used pseudo-random sequences, including m-sequences, Gold sequences, and Zadoff-Chu sequences, and the optimized sequences generated by the proposed algorithm are compared. 
The results demonstrate that the optimized pilot sequences generated by the proposed algorithm achieve superior channel estimation performance compared with other pilot sequences.  Conclusions  This study analyzes a pilot-assisted channel estimation method for OTFS systems based on compressed sensing and proposes a pilot sequence optimization approach using an IGA to address the pilot optimization challenge. The optimization objective function is constructed based on the correlation among dictionary set columns, and an adaptive adjustment strategy for crossover and mutation probabilities is proposed to enhance the algorithm's convergence speed and optimization capability, outperforming other commonly used population-based heuristic optimization algorithms. To address the high computational complexity associated with directly calculating cross-correlation coefficients, the calculation steps are simplified, reducing the complexity from $O(\lambda^2 k_p^2 l_p)$ to $O(\lambda k_p l_p)$, while preserving the cross-correlation properties, thereby improving optimization efficiency. Simulation results demonstrate that the proposed optimized pilot sequences offer better channel estimation performance than traditional pseudo-random pilot sequences, with relatively low optimization complexity.
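A minimal sketch of a genetic algorithm with the kind of adaptive rate adjustment described above (shown here for the mutation probability only); the OneMax fitness is a placeholder for the minimized dictionary cross-correlation objective, and every parameter value is an illustrative assumption:

```python
import random

# Hedged IGA sketch: elitism, tournament selection, one-point crossover,
# and a mutation rate that adapts to how fit an individual is.

def onemax(bits):
    return sum(bits)  # placeholder fitness: count of ones (maximized)

def adaptive_pm(fit, f_avg, f_max, pm_hi=0.2, pm_lo=0.02):
    """Mutate below-average individuals aggressively, near-elite ones gently."""
    if f_max == f_avg or fit < f_avg:
        return pm_hi
    return pm_hi - (pm_hi - pm_lo) * (fit - f_avg) / (f_max - f_avg)

def iga(n_bits=24, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [onemax(ind) for ind in pop]
        f_avg, f_max = sum(fits) / len(fits), max(fits)
        nxt = [pop[fits.index(f_max)][:]]              # elitism: keep the best
        while len(nxt) < pop_size:
            i, j = rng.sample(range(pop_size), 2)      # tournament, parent 1
            p1 = pop[i] if fits[i] >= fits[j] else pop[j]
            i, j = rng.sample(range(pop_size), 2)      # tournament, parent 2
            p2 = pop[i] if fits[i] >= fits[j] else pop[j]
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                # one-point crossover
            pm = adaptive_pm(onemax(child), f_avg, f_max)
            nxt.append([bit ^ (rng.random() < pm) for bit in child])
        pop = nxt
    return max(onemax(ind) for ind in pop)

best = iga()
```

In the paper's setting the fitness would instead be the (negated) maximum cross-correlation among dictionary columns, and the crossover probability would be adapted analogously to `adaptive_pm`.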
Low-Rank Regularized Joint Sparsity Modeling for Image Denoising
ZHA Zhiyuan, YUAN Xin, ZHANG Jiachao, ZHU Ce
 doi: 10.11999/JEIT240324
[Abstract](42) [FullText HTML](13) [PDF 4373KB](6)
Abstract:
  Objective  Image denoising aims to reduce unwanted noise in images, which has been a long-standing issue in imaging science. Noise significantly degrades image quality, affecting their use in applications such as medical imaging, remote sensing, and image reconstruction. Over recent decades, various image prior models have been developed to address this problem, focusing on different image characteristics. These models, utilizing priors like sparsity, Low-Rankness (LR), and Nonlocal Self-Similarity (NSS), have proven highly effective. Nonlocal sparse representation models, including Joint Sparsity (JS), LR, and Group Sparse Representation (GSR), effectively leverage the NSS property of images. They capture the structural similarity of image patches, even when spatially distant. Popular dictionary-based JS algorithms use a relaxed convex penalty to avoid NP-hard sparse coding, leading to an approximately sparse representation. However, these approximations fail to enforce LR on the image data, reducing denoising quality, especially in cases of complex noise patterns or high self-similarity. This paper proposes a novel Low-Rank Regularized Joint Sparsity (LRJS) model for image denoising, integrating the benefits of LR and JS priors. The LRJS model enhances denoising performance, particularly where traditional methods underperform. By exploiting the NSS in images, the LRJS model better preserves fine details and structures, offering a robust solution for real-world applications.  Methods  The proposed LRJS model integrates low-rank and JS priors to enhance image denoising performance. By exploiting the NSS property of images, the LRJS model strengthens the dependency between nonlocal similar patches, improving image structure representation and noise suppression. The low-rank prior reflects the smoothness and regularity inherent in the image, whereas the JS prior captures the sparsity of the image patches. 
Incorporating these priors ensures a more accurate representation of the underlying clean image, enhancing denoising performance. An alternating minimization algorithm is proposed to solve this optimization problem, alternating between the low-rank and JS terms to simplify the optimization process. Additionally, an adaptive parameter adjustment strategy dynamically tunes the regularization parameters, balancing LR and sparsity throughout the optimization. The LRJS model offers an effective approach for image denoising by combining low-rank and JS priors, solved using an alternating minimization framework with adaptive parameter tuning.  Results and Discussions  Experimental results on two image denoising tasks, Gaussian noise removal (Fig. 4, Fig. 5, Table 3, Table 4) and Poisson denoising (Fig. 6, Table 5), demonstrate that the proposed LRJS method outperforms several popular and state-of-the-art denoising algorithms in both objective metrics and visual perceptual quality, particularly for images with high self-similarity. In Gaussian noise removal, the LRJS method achieves significant improvements, especially with highly self-similar images. This improvement results from LRJS effectively leveraging the NSS prior, which strengthens the dependencies among similar patches, leading to better noise suppression while preserving image details. Compared with other methods, LRJS demonstrates greater robustness, particularly in retaining fine details and structures often lost with traditional denoising techniques. For Poisson denoising, the LRJS method also yields notable performance gains. It better manages the complexity of Poisson noise compared with other approaches, highlighting its versatility and robustness across different noise types. The visual quality of the denoised images shows fewer artifacts and more accurate recovery of details. 
Quantitative results in terms of PSNR and SSIM further validate the effectiveness of LRJS, positioning it as a competitive solution in image denoising. Overall, these experimental findings confirm that LRJS offers a reliable and effective approach, particularly for images with high self-similarity and complex noise models.  Conclusions  The LRJS model proposed in this paper improves image denoising performance by combining LR and JS priors. This dual-prior framework better captures the underlying image structure while suppressing noise, particularly benefiting images with high self-similarity. Experimental results demonstrate that the LRJS method not only outperforms traditional denoising techniques but also exceeds many state-of-the-art algorithms in both objective metrics and visual quality. By leveraging the NSS property of image patches, the LRJS model enhances the dependencies among similar patches, making it particularly effective for tasks requiring the preservation of fine details and structures. The LRJS method significantly enhances the quality of denoised images, especially in complex noise scenarios such as Gaussian and Poisson noise. Its robust alternating minimization algorithm with adaptive parameter adjustment ensures effective optimization, contributing to superior performance. The results further highlight the LRJS model’s ability to preserve image edges, textures, and other fine details often degraded in other denoising algorithms. Compared with existing techniques, the LRJS method demonstrates superior performance in handling high noise levels while maintaining image clarity and detail, making it a promising tool for applications such as medical imaging, remote sensing, and image restoration. Future research could focus on optimizing the model for more complex noise environments, such as mixed noise or real-world noise that is challenging to model. 
Additionally, exploring more efficient algorithms and integrating advanced techniques, such as deep learning, may further improve the LRJS model’s capability and applicability to diverse denoising tasks.
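One half of the alternating minimization described above, the joint-sparsity update, can be illustrated by row-wise group soft-thresholding of a coefficient matrix (rows indexing dictionary atoms, columns indexing nonlocal similar patches). The operator itself is standard; its exact role inside LRJS is our paraphrase, and the matrix and threshold below are illustrative:

```python
import math

# Hedged sketch of a joint-sparsity proximal step: shrink each row's l2 norm
# by tau, so the same atoms are kept (or dropped) across all similar patches.

def row_soft_threshold(A, tau):
    """Group soft-thresholding applied to each row of matrix A."""
    out = []
    for row in A:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

A = [[3.0, 4.0],    # row norm 5   -> scaled by (1 - 1/5) = 0.8
     [0.3, 0.4]]    # row norm 0.5 -> norm <= tau, so the row is zeroed
B = row_soft_threshold(A, tau=1.0)
```

The low-rank half of the alternation would analogously shrink the singular values of the patch matrix; alternating the two updates with adaptively tuned thresholds mirrors the optimization scheme the abstract describes.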
Covert Communication of UAV Aided by Time Modulated Array Perception
MIAO Chen, QIN Yuxuan, MA Ruiqian, LIN Zhi, MA Yue, ZHANG Wentao, WU Wen
 doi: 10.11999/JEIT240606
[Abstract](103) [FullText HTML](27) [PDF 2298KB](30)
Abstract:
  Objective  With the widespread application of Unmanned Aerial Vehicle (UAV) communication technology in both military and civilian domains, ensuring the security of information transmission within UAV networks has garnered increasing attention. Covert communication serves as an effective approach to conceal information transmission. However, current technologies such as digital beamforming, while enhancing covert communication performance, increase system size and power consumption. A method for UAV short-packet covert communication based on Time Modulated Planar Array (TMPA) sensing is therefore proposed. In this study, a TMPA-UAV covert communication system architecture is introduced, and a two-dimensional Direction of Arrival (DOA) estimation method is developed. A covert communication model is then established, and a closed-form expression for the covert constraint is derived using Kullback-Leibler (KL) divergence. Based on the estimated angle of Willie, the TMPA switching sequence is optimized to maximize the signal gain in the target direction while minimizing the gain in non-target directions. Covert throughput is selected as the optimization objective, and a one-dimensional search method is employed to determine the optimal data packet length and transmission power.  Results and Discussions  Simulations indicate that the Root Mean Square Error (RMSE) of DOA estimation in both directions approaches 0°, and the RMSE decreases significantly as the Signal-to-Noise Ratio (SNR) increases (Figure 4). With a fixed elevation angle and azimuth angles varying between 0° and 60°, a comparison with the traditional DOA estimation method for time-modulated arrays demonstrates that the proposed method reduces the DOA estimation error to the order of 0.1°, significantly improving accuracy.
Beamforming simulations based on the estimation results (Figure 6) show a sidelobe level (SLL) below -30 dB and a beamwidth of 5°, meeting design requirements. Covert communication simulations reveal the existence of an optimal data packet length that maximizes covert throughput (Figure 7). A stricter covert tolerance implies tighter constraints on covert communication (Figure 8), forcing Alice to use lower transmission power and shorter block lengths to communicate covertly with Bob. When the beamforming error angle is small, the system maintains a high covert throughput (Figure 9). Within a UAV flight height range of 50 m to 90 m, the covert throughput remains low; however, when the height exceeds 90 m, the throughput increases rapidly. Beyond 130 m, the UAV height has little impact on the maximum covert throughput, and performance reaches its optimal state. Therefore, controlling the UAV flight height appropriately is crucial for achieving effective communication between legitimate links.  Conclusions  This paper proposes a TMPA-based multi-antenna UAV sensing-assisted covert communication system for short packets. A TMPA-based DOA estimation method is introduced to determine the relative position of non-cooperative nodes. The CS algorithm is employed to optimize the beam radiation pattern, maximizing the gain at the legitimate destination node while creating nulls at the non-cooperative node's location. Furthermore, a closed-form expression for covert constraints is derived based on the KL divergence, and covert throughput is maximized through the joint optimization of packet length and transmission power. Simulations investigate the relationships between the number of array elements, covert tolerance, beam direction error angles, UAV height, and covert throughput. Results show that an optimal packet length exists to maximize covert throughput. Additionally, increasing the number of array elements and relaxing covert constraints can enhance covert throughput. 
Practical system design should comprehensively consider the optimization of these factors.
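The KL-divergence covert constraint admits a closed form when the warden's observations are modeled as Gaussian, and the admissible transmit power under it can be found by a one-dimensional search. The following is a minimal sketch under standard assumptions (zero-mean Gaussian noise at Willie, constraint D(P0||P1) ≤ 2ε²); it is not the paper's exact expression, which also couples in the packet length:

```python
import math

def kl_gaussian(sigma0_sq, sigma1_sq):
    """KL divergence D(N(0, sigma0_sq) || N(0, sigma1_sq)) between zero-mean Gaussians."""
    return 0.5 * (math.log(sigma1_sq / sigma0_sq) + sigma0_sq / sigma1_sq - 1.0)

def max_covert_power(noise_var, epsilon, hi=100.0, iters=60):
    """Largest transmit power P with D(P0 || P1) <= 2*epsilon**2 at the warden,
    found by bisection; the KL divergence is monotonically increasing in P."""
    budget = 2.0 * epsilon ** 2
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kl_gaussian(noise_var, noise_var + mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# Example: unit noise power at Willie, covertness level epsilon = 0.1.
p_max = max_covert_power(noise_var=1.0, epsilon=0.1)
```

A stricter covert tolerance (smaller ε) shrinks the power budget, which is the mechanism behind the shorter block lengths and lower powers reported above.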
Optimizing Age of Information in LoRa Networks via Deep Reinforcement Learning
CHENG Kefei, CHEN Caidie, LUO Jia, CHEN Qianbin
 doi: 10.11999/JEIT240404
[Abstract](79) [FullText HTML](22) [PDF 0KB](0)
Abstract:
Age of Information (AoI) is a measure of information freshness. For the time-sensitive Internet of Things, minimizing AoI is particularly important. This paper analyzes AoI optimization strategies under the slotted Aloha protocol in a LoRa-based intelligent transportation environment. A system model of transmission collisions and inter-packet waiting time under the slotted Aloha protocol is established, and the analysis shows that during LoRa uplink transmission, as the number of packets increases, AoI is mainly affected by packet collisions. To overcome the difficulty of solving over an excessively large action space, the continuous action space is mapped to a discrete one, and the Soft Actor-Critic (SAC) algorithm is used to optimize AoI in the LoRa network. Simulation results show that the SAC algorithm outperforms both traditional algorithms and conventional deep reinforcement learning algorithms, and can effectively reduce the average AoI of the network.  Objective  With the rapid development of intelligent transportation systems, the timeliness and accuracy of traffic data have become particularly important, especially in the transmission systems of traffic monitoring cameras and other equipment. Long Range (LoRa) radio has become an important technology for connecting sensors in intelligent transportation due to its low power consumption, high coverage, and long-distance communication. However, in urban environments, LoRa networks suffer from frequent data collisions when devices send data, which degrades the timeliness of information and, in turn, the effectiveness of traffic management decisions.
Therefore, optimizing the timeliness of data packets in LoRa networks and improving the communication efficiency of the system has become a key issue. This paper aims to effectively optimize AoI in LoRa networks, particularly under the slotted Aloha protocol, by studying the impact of factors such as packet collisions and over-the-air transmission time on AoI. On this basis, an optimization method based on deep reinforcement learning is proposed, using the Soft Actor-Critic algorithm to optimize AoI, so as to achieve lower latency and a higher data transmission success rate in an intelligent transportation environment with frequent data transmission, thereby improving the overall performance of the system and the freshness of transmitted information.  Method  Based on the information freshness requirements of intelligent transportation scenarios, this paper studies the optimization of packet AoI in LoRa networks under the slotted Aloha protocol. For the frequent data transmission in LoRa networks, a system model based on LoRa packet collisions is established, focusing on the impact of packet collisions and over-the-air transmission time on AoI under the slotted Aloha protocol, providing theoretical support for improving information transmission efficiency. Since the temporal evolution of AoI is Markovian, the optimization problem is modeled as a Markov Decision Process (MDP) and solved with the SAC deep reinforcement learning algorithm.  Results and Discussions  The change of AoI during collisions is analyzed (Fig. 2), and a collision model for the transmission of each data packet is established (Fig. 4). Simulation results show that the SAC algorithm outperforms the TD3 algorithm and the traditional algorithm (Fig. 6).
As the number of terminals increases, the average AoI of the system increases (Fig. 7); the variation of the system average AoI under different time slot settings is also examined for the SAC and TD3 algorithms (Fig. 8).  Conclusions  In view of the lack of research on AoI in LoRa networks, this paper studies the AoI optimization problem of LoRa uplink packet transmission in an intelligent traffic management environment, and proposes a packet collision model under the slotted Aloha protocol. The greedy algorithm and the SAC algorithm are used to optimize AoI. Simulation results show that the greedy algorithm outperforms the traditional deep reinforcement learning algorithm but is inferior to the SAC algorithm; the SAC algorithm can effectively mitigate the AoI problem in LoRa networks. This paper considers only AoI optimization and does not jointly address issues such as energy consumption and packet loss rate. Future research can therefore consider the balance among energy consumption, packet loss rate, and AoI to reduce energy consumption and packet loss. In addition, this paper does not yet cover heterogeneous scenarios. In transmission environments where LoRa networks coexist with other communication technologies (such as Wi-Fi, Bluetooth, and NB-IoT), interoperability, data consistency, and network management across different communication protocols and device types will bring new challenges. AoI optimization research in heterogeneous transmission environments can further improve the performance and reliability of LoRa networks in complex application scenarios such as intelligent traffic management.
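The collision-driven behavior of AoI under slotted Aloha can be illustrated with a short Monte-Carlo simulation. This is a toy sketch, not the paper's system model: each device transmits independently with a fixed probability per slot, a slot succeeds only when exactly one device transmits, and only the successful sender's AoI resets.

```python
import random

def average_aoi_slotted_aloha(n_devices, p_tx, n_slots=20000, seed=1):
    """Monte-Carlo average AoI for n_devices sharing a slotted-Aloha channel.
    A slot delivers a packet only when exactly one device transmits; the
    sender's AoI then resets, while every other AoI grows by one slot."""
    rng = random.Random(seed)
    aoi = [1] * n_devices
    total = 0.0
    for _ in range(n_slots):
        senders = [i for i in range(n_devices) if rng.random() < p_tx]
        for i in range(n_devices):
            aoi[i] += 1
        if len(senders) == 1:  # collision-free slot
            aoi[senders[0]] = 1
        total += sum(aoi) / n_devices
    return total / n_slots

aoi_balanced = average_aoi_slotted_aloha(10, 0.10)  # near the 1/N sweet spot
aoi_collide = average_aoi_slotted_aloha(10, 0.50)   # heavy collisions
aoi_idle = average_aoi_slotted_aloha(10, 0.01)      # channel mostly idle
```

Both over- and under-aggressive transmission inflate AoI, which is why the transmission policy is worth optimizing with a learned controller rather than a fixed probability.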
Energy Aware Reconfigurable Intelligent Surfaces Assisted Unmanned Aerial Vehicle Age of Information Enabled Data Collection Policies
ZHANG Tao, ZHANG Qian, ZHU Yingwen, DAI Chen
 doi: 10.11999/JEIT240866
[Abstract](101) [FullText HTML](32) [PDF 2092KB](22)
Abstract:
  Objective  To address the balance between efficient energy utilization and information freshness in UAV-assisted data collection for the Internet of Things (IoT) using Reconfigurable Intelligent Surfaces (RIS).  Methods  A data collection optimization policy based on deep reinforcement learning is proposed. Considering flight energy consumption, communication complexity, and Age of Information (AoI) constraints, a joint optimization scheme is designed using a Double Deep Q-Network (DDQN). The scheme integrates UAV trajectory planning, IoT device scheduling, and RIS phase adjustment, mitigating the Q-value overestimation observed in traditional Q-learning methods.  Results and Discussions  The proposed method enables the UAV to dynamically adjust its trajectory and communication strategy based on real-time environmental conditions, enhancing data transmission efficiency and reducing energy consumption. Simulation results demonstrate superior convergence compared with traditional methods (Fig. 3). The UAV trajectory shows that the proposed method effectively accomplishes the data collection task (Fig. 4). Furthermore, rational allocation of energy and communication resources allows dynamic adaptation to varying communication environment parameters, ensuring an optimal balance between energy consumption and AoI (Fig. 5, Fig. 6).  Conclusions  The deep reinforcement learning-based optimization policy for UAV-assisted IoT data collection with RIS effectively resolves the trade-off between energy utilization and information freshness. This robust solution improves data collection efficiency in dynamic communication environments.
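The overestimation fix in a Double Deep Q-Network is essentially a one-line change to the bootstrap target: the online network selects the next action, the target network evaluates it. A minimal sketch of that target in generic DDQN form (not the paper's full training loop, whose state and reward design are specific to the UAV/RIS setting):

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN bootstrap targets: the online network picks the next action
    (argmax), the target network supplies its value, damping the Q-value
    overestimation of vanilla Q-learning/DQN."""
    a_star = np.argmax(q_online_next, axis=1)               # action selection
    q_eval = q_target_next[np.arange(len(a_star)), a_star]  # action evaluation
    return rewards + gamma * (1.0 - dones) * q_eval

# Two transitions, two actions; the second transition is terminal.
q_on = np.array([[1.0, 2.0], [3.0, 0.0]])
q_tg = np.array([[0.5, 0.7], [0.9, 0.1]])
targets = ddqn_targets(q_on, q_tg,
                       rewards=np.array([1.0, 1.0]),
                       dones=np.array([0.0, 1.0]), gamma=0.9)
```

Vanilla DQN would instead take `max(q_target_next, axis=1)`, letting the same network both choose and score the action, which biases the targets upward.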
Cross-Entropy Iteration Aided Time-Hopping Pattern Estimation and Multi-hop Coherent Combining Algorithm
MIAO Xiaqing, WU Rui, YUE Pingyue, ZHANG Rui, WANG Shuai, PAN Gaofeng
 doi: 10.11999/JEIT240677
[Abstract](87) [FullText HTML](29) [PDF 2419KB](22)
Abstract:
  Objective  As a vital component of the global communication network, satellite communication attracts significant attention for its capacity to provide seamless global coverage and establish an integrated space-ground information network. Time-Hopping (TH), a widely used technique in satellite communication, is distinguished by its strong anti-jamming capabilities, flexible spectrum utilization, and high security levels. In an effort to enhance data transmission security, a system utilizing randomly varying TH patterns has been developed. To tackle the challenge of limited transmission power, symbols are distributed across different time slots and repeatedly transmitted according to random TH patterns. At the receiver end, a coherent combining strategy is implemented for signals originating from multiple time slots. To minimize Signal-to-Noise Ratio (SNR) loss during this combining process, precise estimation of TH patterns and multi-hop carrier phases is essential. The randomness of the TH patterns and multi-hop carrier phases further complicates parameter estimation by increasing its dimensionality. Additionally, the low transmission power leads to low-SNR conditions for the received signals in each time slot, complicating parameter estimation even more. Traditional exhaustive search methods are hindered by high computational complexity, highlighting the pressing need for low-complexity multidimensional parameter estimation techniques tailored specifically for TH communication systems.  Methods  Firstly, a TH communication system featuring randomly varying TH patterns is developed, where the time slot index of the signal in each time frame is determined by the TH code. Both parties involved in the communication agree that this TH code will change randomly within a specified range.
Building on this foundation, a mathematical model for estimating TH patterns and multi-hop carrier phases is derived from the perspective of maximum likelihood estimation, framing it as a multidimensional nonlinear optimization problem. Moreover, guided by a coherent combining strategy and constrained by low SNR conditions at the receiver, a Cross-Entropy (CE) iteration aided algorithm is proposed for the joint estimation of TH patterns and multi-hop carrier phases. This algorithm generates multiple sets of TH code and carrier phase estimates randomly based on a predetermined probability distribution. Using the SNR loss of the combined signal as the objective function, the CE method incorporates an adaptive importance sampling strategy to iteratively update the probability distribution of the estimated parameters, facilitating rapid convergence towards optimal solutions. Specifically, in each iteration, samples demonstrating superior performance are selected according to the objective function to calculate the probability distribution for the subsequent iteration, thereby enhancing the likelihood of reaching the optimal solution. Additionally, to account for the randomness inherent in the iterations, a global optimal vector set is established to document the parameter estimates that correspond to the minimum SNR loss throughout the iterative process. Finally, simulation experiments are conducted to assess the performance of the proposed algorithm in terms of iterative convergence speed, parameter estimation error, and the combined demodulation Bit Error Rate (BER).  Results and Discussions  The estimation errors for the TH code and carrier phase are simulated to evaluate the parameter estimation performance of the proposed algorithm. With an increase in SNR, the accuracy of TH code estimation approaches unity.
When a small phase quantization bit width is applied, the Root Mean Square Error (RMSE) of the carrier phase estimation is primarily constrained by the grid search step size. Conversely, as the phase quantization bit width increases, the RMSE gradually converges to a fixed value. Regarding the influence of phase quantization on combined demodulation, as the phase quantization bit width increases, nearly theoretical BER performance can be achieved. A comparison between the proposed algorithm and the exhaustive search method reveals that the proposed algorithm significantly reduces the number of search trials compared to the grid search method, with minimal loss in BER performance. An increase in the variation range of the TH code necessitates a larger number of candidate groups for the CE method to maintain a low combining SNR loss. However, with a greater TH code variation range, the number of search iterations and its growth rate in the proposed algorithm are significantly lower than those in the exhaustive search method. Regarding transmission power in the designed TH communication method, as the number of hops in the multi-hop combination increases, the required SNR per hop decreases for the same BER performance, indicating that maximum transmission power can be correspondingly reduced.  Conclusions  A TH communication system with randomly varying TH patterns tailored for satellite communication applications has been designed. This includes the presentation of a multi-hop signal coherent combining technique. To address the multidimensional parameter estimation challenge associated with TH patterns and multi-hop carrier phases under low SNR conditions, a CE iteration-aided algorithm has been proposed. The effectiveness of this algorithm is validated through simulations, and its performance regarding iterative convergence characteristics, parameter estimation error, and BER performance has been thoroughly analyzed.
The results indicate that, in comparison to the conventional grid search method, the proposed algorithm achieves near-theoretical optimal BER performance while maintaining lower complexity.
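The cross-entropy iteration described above can be sketched generically: sample candidate parameter vectors from per-position categorical distributions, rank them by the objective, refit the distributions to the elite set (with smoothing), and keep a global best across iterations. In the toy below, a Hamming-distance objective against a hidden code stands in for the combining-SNR-loss objective; the code length, alphabet, and all hyperparameters are illustrative choices, not the paper's.

```python
import numpy as np

def ce_estimate(objective, n_params, n_values, n_samples=200, elite_frac=0.1,
                n_iters=30, smoothing=0.7, seed=0):
    """Cross-Entropy iteration over a discrete space: sample candidates from
    per-position categorical distributions, refit the distributions to the
    elite (lowest-objective) samples, and track a global best across iterations."""
    rng = np.random.default_rng(seed)
    probs = np.full((n_params, n_values), 1.0 / n_values)
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_val = None, np.inf
    for _ in range(n_iters):
        samples = np.stack([rng.choice(n_values, size=n_samples, p=probs[k])
                            for k in range(n_params)], axis=1)
        vals = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(vals)[:n_elite]]
        for k in range(n_params):  # refit each position, with smoothing
            counts = np.bincount(elite[:, k], minlength=n_values) / n_elite
            probs[k] = smoothing * counts + (1.0 - smoothing) * probs[k]
        if vals.min() < best_val:
            best_val, best = vals.min(), samples[np.argmin(vals)]
    return best, best_val

# Toy stand-in for the combining-SNR-loss objective: Hamming distance
# to a hidden TH code (6 slots, 4 possible positions each).
hidden = np.array([3, 1, 0, 2, 3, 1])
est, loss = ce_estimate(lambda s: int(np.sum(s != hidden)), n_params=6, n_values=4)
```

The global-best bookkeeping mirrors the "global optimal vector set" above: the returned estimate is the best sample ever drawn, not merely the final distribution's mode.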
Cover
Cover
2024, 46(12).  
[Abstract](40) [PDF 2889KB](87)
Abstract:
Contents
Contents
2024, 46(12): 1-4.  
[Abstract](26) [FullText HTML](14) [PDF 242KB](9)
Abstract:
Overviews
Federated Learning Technologies for 6G Industrial Internet of Things: From Requirements, Vision to Challenges, Opportunities
LIU Miao, XIA Yuhong, ZHAO Haitao, GUO Liang, SHI Zheng, ZHU Hongbo
2024, 46(12): 4335-4353.   doi: 10.11999/JEIT240574
[Abstract](293) [FullText HTML](80) [PDF 4369KB](77)
Abstract:
With the rapid development of 6G technology and the evolution of the Industrial Internet of Things (IIoT), federated learning has gained significant attention in the industrial sector. This paper explores the development and application potential of federated learning in 6G-driven IIoT, analyzing 6G’s prospects and how its high speed, low latency, and reliability can support data privacy, resource optimization, and intelligent decision-making. First, existing related work is summarized, and the development requirements and vision for applying federated learning in 6G industrial IoT scenarios are outlined. On this basis, a new paradigm for industrial federated learning, featuring a hierarchical cross-domain architecture, is proposed to integrate 6G and digital twin technologies, enabling ubiquitous, flexible, and layered federated learning. This supports on-demand and reliable distributed intelligent services in typical Industrial IoT scenarios, achieving the integration of Operational and Communication Information Technology (OCIT). Next, the potential research challenges that federated learning may face on the road to 6G industrial IoT (6G IIoT-FL) are analyzed and summarized, followed by potential solutions and recommendations. Finally, future directions worth attention in this field are highlighted, with the aim of providing insights for subsequent research.
Wireless Communication and Internet of Things
Resource Allocation Algorithm for Multiple-Input Single-Output Symbiotic Radio with Imperfect Channel State Information
XU Yongjun, WANG Mingyang, TIAN Qinyu, ZHANG Haibo, XUE Qing
2024, 46(12): 4354-4362.   doi: 10.11999/JEIT231366
[Abstract](172) [FullText HTML](48) [PDF 2448KB](55)
Abstract:
To overcome the ineffectiveness of conventional optimal resource allocation algorithms under channel estimation errors, a robust resource allocation algorithm with imperfect Channel State Information (CSI) is proposed for Multiple-Input Single-Output (MISO) symbiotic radio systems. Considering constraints on the minimum throughput of users, transmission time, maximum transmit power of the base station, and the reflection coefficients of users, and based on bounded channel uncertainties, a robust throughput-maximization resource allocation problem is formulated by jointly optimizing transmission time, beamforming vectors, and reflection coefficients. The original problem is transformed into a convex problem by applying Lagrange dual theory, variable substitution, and alternating optimization. Simulation results verify that, compared with a non-robust resource allocation algorithm, the proposed algorithm improves throughput by 11.7% and reduces the outage probability by 5.31%.
An Intelligent Driving Strategy Optimization Algorithm Assisted by Direct Acyclic Graph Blockchain and Deep Reinforcement Learning
HUANG Xiaoge, LI Chunlei, LI Wenjing, LIANG Chengchao, CHEN Qianbin
2024, 46(12): 4363-4372.   doi: 10.11999/JEIT240407
[Abstract](160) [FullText HTML](46) [PDF 3509KB](34)
Abstract:
The application of Deep Reinforcement Learning (DRL) in intelligent driving decision-making is increasingly widespread, as it effectively enhances decision-making capabilities through continuous interaction with the environment. However, DRL faces challenges in practical applications due to low learning efficiency and poor data-sharing security. To address these issues, a Directed Acyclic Graph (DAG) blockchain-assisted deep reinforcement learning Intelligent Driving Strategy Optimization (D-IDSO) algorithm is proposed. First, a dual-layer secure data-sharing architecture based on the DAG blockchain is constructed to ensure the efficiency and security of model data sharing. Next, a DRL-based intelligent driving decision model is designed, incorporating a multi-objective reward function that jointly considers safety, comfort, and efficiency. Additionally, an Improved Prioritized Experience Replay with Twin Delayed Deep Deterministic policy gradient (IPER-TD3) method is proposed to enhance training efficiency. Finally, braking and lane-changing scenarios are selected in the CARLA simulation platform to train Connected and Automated Vehicles (CAVs). Experimental results demonstrate that the proposed algorithm significantly improves model training efficiency in intelligent driving scenarios, while ensuring data security and enhancing the safety, comfort, and efficiency of intelligent driving.
Partially Overlapping Channels Dynamic Allocation Method for UAV Ad-hoc Networks in Emergency Scenario
WANG Bowen, ZHENG Jian, SUN Yanjing, HU Wenxin, NIE Tong, WANG Jingjing
2024, 46(12): 4373-4382.   doi: 10.11999/JEIT240377
[Abstract](186) [FullText HTML](55) [PDF 2765KB](26)
Abstract:
Flying Ad-hoc NETworks (FANETs) are widely used in emergency rescue scenarios due to their high mobility and self-organization. In emergency scenarios, a surge of user paging requests creates a difficult conflict between rapidly growing local traffic and limited spectrum resources, resulting in significant channel interference in FANETs. There is thus an urgent need to extend the high spectrum utilization of Partially Overlapping Channels (POCs) to emergency scenarios. However, the adjacent-channel characteristics of POCs lead to complex interference that is difficult to characterize. Therefore, partially overlapping channel allocation methods for FANETs are studied in this paper. By utilizing geometric prediction to reconstruct time-varying interference graphs and characterizing the POC interference model with an interference-free minimum channel spacing matrix, a Dynamic Channel Allocation algorithm for POCs based on Upper Confidence Bounds (UCB-DCA) is proposed. The algorithm seeks an approximately optimal channel allocation through distributed decision-making. Simulation results demonstrate that the algorithm achieves a trade-off between network interference and the number of channel switches, with good convergence performance.
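The upper-confidence-bound rule at the heart of UCB-DCA balances exploiting channels that have worked well against exploring under-tried ones. Below is a minimal single-node sketch using a generic UCB1-style index; the actual UCB-DCA algorithm is distributed and POC-interference-aware, which this toy omits, and the channel success probabilities are invented for illustration.

```python
import math
import random

def ucb_select(counts, rewards, t, c=2.0):
    """UCB1-style index: empirical mean plus an exploration bonus that shrinks
    as a channel accumulates trials; untried channels are selected first."""
    for k, n in enumerate(counts):
        if n == 0:
            return k
    return max(range(len(counts)),
               key=lambda k: rewards[k] / counts[k]
               + math.sqrt(c * math.log(t) / counts[k]))

# Toy run: 3 candidate channels with unknown success probabilities.
probs = [0.2, 0.8, 0.5]
rng = random.Random(0)
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 3001):
    k = ucb_select(counts, rewards, t)
    counts[k] += 1
    rewards[k] += 1.0 if rng.random() < probs[k] else 0.0  # e.g. ACK received
```

Over time the node concentrates on the best channel while still occasionally re-probing the others, which is what bounds both interference and the number of channel switches.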
Research on Channel Modeling and Characteristics Analysis for RIS-Enabled Near-Field Marine Communications Towards 6G
JIANG Hao, SHI Wangqi, ZHU Qiuming, SHU Feng, WANG Jiangzhou
2024, 46(12): 4383-4390.   doi: 10.11999/JEIT240518
[Abstract](362) [FullText HTML](132) [PDF 4022KB](94)
Abstract:
Reconfigurable Intelligent Surfaces (RIS) are considered one of the potential key technologies for 6G mobile communications, offering advantages such as low cost, low energy consumption, and easy deployment. By integrating RIS technology into marine wireless channels, the unpredictable wireless transmission environment can be converted into a manageable one. However, current channel models struggle to accurately depict the unique signal transmission mechanisms of RIS-enabled base-station-to-ship channels in marine communication scenarios, making it challenging to balance accuracy and complexity in channel characterization and theoretical modeling. Therefore, this paper develops a segmented channel modeling method for near-field RIS-enabled marine communications, and then proposes a multi-domain joint parameterized statistical channel model for such systems. This approach addresses the technical bottleneck of existing RIS channel modeling methods, which have difficulty balancing accuracy and efficiency, ultimately facilitating the rapid development of the 6G mobile communication industry in China.
Cache Oriented Migration Decision and Resource Allocation in Edge Computing
YANG Shouyi, HAN Haojin, HAO Wanming, CHEN Yihang
2024, 46(12): 4391-4398.   doi: 10.11999/JEIT240427
[Abstract](245) [FullText HTML](54) [PDF 1274KB](40)
Abstract:
Edge computing provides computing resources and caching services at the network edge, effectively reducing execution latency and energy consumption. However, due to user mobility and network randomness, caching services and user tasks frequently migrate between edge servers, increasing system costs. A migration computation model based on pre-caching is constructed, and the joint optimization problem of resource allocation, service caching, and migration decision-making is investigated. To address this mixed-integer nonlinear programming problem, the original problem is decomposed, and resource allocation is optimized using the Karush-Kuhn-Tucker conditions and a bisection search iterative method. Additionally, a Joint optimization algorithm for Migration decision-making and Service caching based on a Greedy Strategy (JMSGS) is proposed to obtain the optimal migration and caching decisions. Simulation results show the effectiveness of the proposed algorithm in minimizing the weighted sum of system energy consumption and latency.
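Combining KKT conditions with a bisection search is the classic recipe for resource allocation subproblems of this shape. As a generic illustration (not the paper's exact subproblem), the sketch below solves water-filling power allocation: the KKT conditions give each user's allocation as a threshold function of a "water level", and bisection finds the level that exhausts the budget.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-10):
    """Water-filling from the KKT conditions: p_k = max(0, mu - 1/g_k), with
    the water level mu found by bisection so that sum(p_k) == total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv, 0.0)

# Three users/channels with gains 1, 2, 4 sharing unit power:
# the weakest channel is switched off, the strongest gets the most power.
alloc = water_filling([1.0, 2.0, 4.0], total_power=1.0)
```

Bisection works here because the total allocated power is monotonically increasing in the water level, so the budget constraint pins down a unique level.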
Joint Optimization of Task Offloading and Resource Allocation for Unmanned Aerial Vehicle-assisted Edge Computing Network
ZHOU Xiaotian, YANG Xiaohui, ZHANG Haixia, DENG Yiqin
2024, 46(12): 4399-4408.   doi: 10.11999/JEIT240411
[Abstract](450) [FullText HTML](70) [PDF 3043KB](86)
Abstract:
Constructing an air-ground integrated edge computing network with an Unmanned Aerial Vehicle (UAV) as the relay can effectively overcome the limitations of the ground environment, expand network coverage, and provide users with convenient computing services. In this paper, with the objective of maximizing the amount of completed tasks, the joint optimization of UAV deployment, user-server association, and bandwidth allocation is investigated for a UAV-assisted multi-user, multi-server edge computing network. The formulated joint optimization problem contains both continuous and discrete variables, which makes it hard to solve. To this end, a Block Coordinate Descent (BCD) based iterative algorithm is proposed, employing optimization tools such as differential evolution and particle swarm optimization. The original problem is decomposed into three sub-problems that can be solved independently, and its optimal solution is approached through iteration among these three sub-problems. Simulation results show that the proposed algorithm greatly increases the amount of completed tasks and outperforms other benchmark algorithms.
Design and Optimization of Task-driven Dynamic Scalable Network Architecture in Spatial Information Networks
HE Lijun, JIA Ziye, LI Shiyin, WANG Yanting, WANG Li, LIU Lei
2024, 46(12): 4409-4421.   doi: 10.11999/JEIT240505
[Abstract](118) [FullText HTML](53) [PDF 5072KB](24)
Abstract:
At the present stage, the satellite subsystems in Space Information Networks (SINs) operate as separate, self-contained systems, leaving the network closed and fragmented, creating severe resource barriers, and resulting in weak collaborative use of space resources and poor network scalability. Traditional architecture designs adopt a “completely subversive” approach to current space networks, which greatly increases the difficulty of actual deployment. Therefore, starting from the current status of satellite networks, the idea of “upgrading step by step” is adopted to evolve the existing network architecture, and a task-driven, dynamic, and scalable architecture model for SINs is proposed, so as to realize efficient and dynamic sharing of space resources among subsystems and promote their dynamic aggregation as mission requirements change. Firstly, a phased network architecture model is proposed, aiming at compatibility with and upgrading of the existing network architecture. Then, the design of the core component, the coordinator, is introduced, including the network structure and working protocol, the superframe structure, and an efficient network resource allocation strategy to realize efficient transmission of spatial data. Simulation results show that the proposed network architecture realizes efficient sharing of network resources and greatly improves network resource utilization.
Multi-Stage Game-based Topology Deception Method Using Deep Reinforcement Learning
HE Weizhen, TAN Jinglei, ZHANG Shuai, CHENG Guozhen, ZHANG Fan, GUO Yunfei
2024, 46(12): 4422-4431.   doi: 10.11999/JEIT240029
[Abstract](207) [FullText HTML](68) [PDF 10216KB](48)
Abstract:
To address the problem that current network topology deception methods make decisions only in the spatial dimension, without considering spatio-temporal multi-dimensional topology deception in cloud-native network environments, a multi-stage Flipit-game topology deception method based on deep reinforcement learning is proposed to obfuscate reconnaissance attacks in cloud-native networks. Firstly, the topology deception attack-defense model in cloud-native complex network environments is analyzed. Then, by introducing a discount factor and transition probabilities, a multi-stage game-based network topology deception model based on Flipit is constructed. Furthermore, based on an analysis of the attack-defense strategies of the game model, a topology deception generation method built on deep reinforcement learning is developed to solve the topology deception strategy of the multi-stage game model. Finally, experiments demonstrate that the proposed method can effectively model and analyze topology deception attack-defense scenarios in cloud-native networks, and that the algorithm has significant advantages over other algorithms.
Multi-view Adaptive Probabilistic Load Forecasting Combining Bayesian Autoformer Network
ZHOU Shiqi, WANG Junfan, LAI Junsheng, YUAN Yujie, DONG Zhekang
2024, 46(12): 4432-4440.   doi: 10.11999/JEIT240398
[Abstract](211) [FullText HTML](98) [PDF 6578KB](27)
Abstract:
Establishing accurate short-term electrical load forecasting models is crucial for the stable operation and intelligent advancement of power systems. Traditional methods do not adequately address data volatility and model uncertainty. In this paper, a multi-dimensional adaptive short-term load forecasting method based on a Bayesian Autoformer network is proposed. Specifically, an adaptive feature selection method is designed to capture multi-dimensional features. By capturing multi-scale features and time-frequency localized information, the model is better able to handle the high volatility and nonlinearity of load data. Subsequently, an adaptive probabilistic forecasting model based on the Bayesian Autoformer network is proposed. It captures relationships among significant subsequence features and the associated uncertainties in load time series, and dynamically updates the probabilistic prediction model and parameter distributions through Bayesian optimization. The proposed model is evaluated in a series of experiments (comparative, adaptivity, and robustness analyses) on real load datasets at three different scales (GW, MW, and kW). The model exhibits superior adaptability and accuracy, with average improvements in Root Mean Square Error (RMSE), Pinball Loss, and Continuous Ranked Probability Score (CRPS) of 1.9%, 24.2%, and 4.5%, respectively.
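Pinball Loss, one of the probabilistic metrics the abstract reports, can be computed in a few lines. This is the standard quantile-loss definition, not code from the paper, and the check values are illustrative:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball (quantile) loss averaged over samples.

    y      : observed loads
    q_pred : forecast of the tau-quantile
    tau    : target quantile level in (0, 1)
    """
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Median (tau = 0.5) forecasts: symmetric penalty
y = np.array([10.0, 12.0, 11.0])
q50 = np.array([9.0, 13.0, 11.0])
loss = pinball_loss(y, q50, 0.5)

# High quantile (tau = 0.9): under-forecasting is penalized 9x harder
under = pinball_loss(np.array([10.0]), np.array([9.0]), 0.9)
```

The asymmetry is what makes the loss a proper score for quantile forecasts: a 0.9-quantile model is pushed to lie above the observations most of the time.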
Radars and Navigation
Research on SAR Anti-jamming Imaging Method with Sparse CP-OFDM
SHI Haixu, XU Zhongqiu, LI Guangzuo, LIN Kuan, HONG Wen
2024, 46(12): 4441-4450.   doi: 10.11999/JEIT240092
[Abstract](132) [FullText HTML](48) [PDF 4431KB](20)
Abstract:
Synthetic Aperture Radar (SAR) is a microwave remote sensing imaging radar. In recent years, with advances in digital and radio-frequency electronic technology, jamming techniques against SAR imaging have developed rapidly. Active jamming, such as deception jamming based on Digital Radio Frequency Memory (DRFM) technology, poses serious challenges to SAR imaging systems in both civilian and military use. To study SAR anti-jamming imaging against deception jamming, orthogonal waveform diversity design and waveform optimization are first carried out for Orthogonal Frequency Division Multiplexing waveforms with Cyclic Prefixes (CP-OFDM), yielding a CP-OFDM wideband orthogonal waveform set with excellent autocorrelation peak sidelobe levels and cross-correlation peak levels. Then sparse SAR imaging theory is introduced and combined with CP-OFDM; using sparse reconstruction, high-quality, high-precision imaging with anti-jamming capability is realized. Finally, simulations based on point targets, surface targets, and real data show that the method can completely remove the false targets generated by deception jamming, suppress sidelobes, and achieve high-precision imaging.
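The waveform-set criteria named above, autocorrelation Peak Sidelobe Level (PSL) and cross-correlation peak, are easy to evaluate numerically. A minimal sketch with assumed metric definitions (random-phase sequences stand in for the optimized CP-OFDM set):

```python
import numpy as np

def psl_db(x):
    """Autocorrelation peak sidelobe level in dB (lower is better)."""
    mag = np.abs(np.correlate(x, x, mode="full"))
    main = mag.max()
    mag[len(x) - 1] = 0.0              # zero out the zero-lag mainlobe peak
    return 20 * np.log10(mag.max() / main)

def cross_peak_db(x, y):
    """Cross-correlation peak relative to the autocorrelation mainlobes."""
    r = np.abs(np.correlate(x, y, mode="full")).max()
    e = np.sqrt(np.abs(np.correlate(x, x, mode="full")).max() *
                np.abs(np.correlate(y, y, mode="full")).max())
    return 20 * np.log10(r / e)

rng = np.random.default_rng(0)
x = np.exp(1j * 2 * np.pi * rng.random(128))   # unit-modulus random-phase codes
y = np.exp(1j * 2 * np.pi * rng.random(128))
```

A waveform optimizer like the one described would drive `psl_db` and `cross_peak_db` down jointly over the whole set; the random codes here only show that both metrics sit below 0 dB even without optimization.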
Comprehensive Error in UAV Cluster Trajectory Deception for Networked Radar
SHI Chenguang, JIANG Zeyu, YAN Mu, ZHOU Jianjiang, WEN Wen
2024, 46(12): 4451-4458.   doi: 10.11999/JEIT240289
[Abstract](130) [FullText HTML](46) [PDF 4948KB](32)
Abstract:
In trajectory deception against networked radar using an Unmanned Aerial Vehicle (UAV) cluster, false target points are generated by delaying and forwarding intercepted radar signals. Errors such as radar station location errors, UAV jitter, and forwarding delay errors all cause these false target points to deviate from their intended positions, degrading the effectiveness of the deception. Given known radar measurement positions, UAV preset positions, deception distances, and a specific Space Resolution Cell (SRC) of the networked radar, the boundary conditions under which a UAV cluster successfully deceives the networked radar are analyzed in this paper, and the impact patterns of these errors on deception effectiveness are summarized. Numerical simulation results show that when all three kinds of errors are present, the derived results can effectively evaluate the UAV cluster's ability to deceive the networked radar.
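The delay-and-forward mechanism has a simple range geometry. As a back-of-envelope sketch (assumed monostatic geometry along the radar-jammer line, not the paper's full SRC analysis): a UAV at range R from a radar that retransmits after delay tau places a false point at apparent range R + c*tau/2, so a delay error translates directly into a position error.

```python
C = 299_792_458.0  # speed of light, m/s

def false_target_range(r_uav_m, delay_s):
    """Apparent range of the false point created by delay-and-forward."""
    return r_uav_m + C * delay_s / 2.0

# A 2 us forwarding delay pushes the false point ~300 m beyond the UAV
shift = false_target_range(10_000.0, 2e-6) - 10_000.0

# A 10 ns delay error therefore shifts the false point by ~1.5 m,
# which must stay inside the radar's space resolution cell to deceive.
err = C * 10e-9 / 2.0
```

This is why the abstract treats forwarding delay error on the same footing as station location and jitter errors: all three map into displacement of the false point relative to the SRC.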
A SAR Image Aircraft Target Detection and Recognition Network with Target Region Feature Enhancement
HAN Ping, ZHAO Han, LIAO Dayu, PENG Yanwen, CHENG Zheng
2024, 46(12): 4459-4470.   doi: 10.11999/JEIT240491
[Abstract](272) [FullText HTML](73) [PDF 15154KB](42)
Abstract:
In Synthetic Aperture Radar (SAR) image aircraft target detection and recognition, the discrete scattering characteristics of aircraft targets and the structural similarity between aircraft types reduce detection and recognition accuracy. A SAR image aircraft detection and recognition network with enhanced target-region features is proposed in this paper. The network consists of three parts: a Feature-Protecting Cross Stage Partial Darknet (FP-CSPDarknet) for protecting aircraft features, a Feature Pyramid Network with Adaptive fusion (FPN-A) for adaptive feature fusion, and a Detection Head with target-region scattering-feature extraction and enhancement (D-Head). FP-CSPDarknet effectively protects aircraft features in SAR images during feature extraction; FPN-A adopts multi-level adaptive feature fusion and refinement to enhance aircraft features; D-Head enhances the identifiable features of aircraft before detection, improving detection and recognition accuracy. Experiments on the SAR-ADRD dataset demonstrate the effectiveness of the proposed method, with an average accuracy improvement of 2.0% over the baseline network YOLOv5s.
High Sparsity and Low Sidelobe Near-field Focused Sparse Array for Three-Dimensional Imagery
YANG Lei, SONG Hao, SHEN Ruiyang, CHEN Yingjie, HU Zhongwei, HUO Xin, XING Mengdao
2024, 46(12): 4471-4482.   doi: 10.11999/JEIT231278
[Abstract](121) [FullText HTML](43) [PDF 4578KB](29)
Abstract:
In active electronically scanned millimeter-wave security imaging, uniform array antennas suffer from uncontrolled cost and high complexity, making wide practical deployment difficult. To this end, a near-field focused sparse array design algorithm with high sparsity and low sidelobes is proposed in this paper, together with an improved three-dimensional (3D) time-domain imaging algorithm for high-accuracy 3D reconstruction. Firstly, the near-field focused sparse array antenna model is constructed by taking the near-field focusing position and peak sidelobe level as constraints, with the $\ell_p$ (0<p<1) norm regularization of the weight vector as the objective function. Secondly, by introducing auxiliary variables and establishing equivalent substitution models between the sidelobe and focus-position constraints and the auxiliary variables, the problem of solving the array weight vector under the coupling of the objective function and complex constraints is simplified and solved through equivalent substitution. Then, the array excitation and element positions are optimized using a combination of complex differentiation and heuristic approximation. Finally, the Alternating Direction Method of Multipliers (ADMM) is employed to handle the focus position, peak sidelobe constraint, and array excitation cooperatively, and sparse array 3D imaging is realized with the improved 3D time-domain imaging algorithm. Experimental results show that the proposed method obtains lower sidelobe levels with fewer array elements while satisfying the radiation characteristics of the array antenna and near-field focusing. Using raw millimeter-wave data, the advantages of the sparse array 3D time-domain imaging algorithm are verified in terms of accuracy and efficiency.
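The splitting-plus-multiplier structure of ADMM that the abstract relies on can be shown on a much simpler relative of the problem. This sketch solves an $\ell_1$-regularized least-squares problem (the convex surrogate of the paper's $\ell_p$, 0<p<1, objective, with a plain data term standing in for the focusing/sidelobe constraints); it is an illustration of the ADMM mechanics, not the paper's algorithm:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """min_x 0.5*||Ax-b||^2 + lam*||x||_1 via ADMM splitting x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))      # cached quadratic solve
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))             # smooth subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                             # scaled dual (multiplier) update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))                 # underdetermined "array" model
x_true = np.zeros(80); x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
b = A @ x_true                                    # noiseless measurements
x_hat = admm_lasso(A, b, lam=0.05)
```

The soft-threshold step zeroes most excitations, which is exactly the sparsity mechanism that lets a sparse array use far fewer elements than a uniform one.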
Adaptive Fractional Fourier Transform Detection Method for Short Packets of Frequency-shifted Chirp Signal
XIU Menglei, DOU Gaoqi, FENG Shimin
2024, 46(12): 4483-4492.   doi: 10.11999/JEIT240370
[Abstract](138) [FullText HTML](28) [PDF 3292KB](27)
Abstract:
To address the pulse dispersion issue when detecting frequency-shifted chirp signals with the traditional Fractional Fourier Transform (FrFT), an adaptive FrFT detection method is proposed in this paper. Leveraging the structural model of short packets and the Neyman-Pearson detection model, an analytical method is derived to evaluate the false alarm and missed detection probabilities of signal frame detection using an evaluation function and a decision threshold. Building on the pulse characteristics of the traditional FrFT for complete chirp signals, a correction scheme for the fractional Fourier integral operator is proposed, and the peak distribution function of the frequency-shifted chirp symbol under the adaptive FrFT is derived. To address the search time-shift issue in the adaptive FrFT detection process, the peak size and distribution of the frequency-shifted chirp symbol are analyzed, and the superiority of adaptive FrFT detection over the traditional FrFT is demonstrated.
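The energy-concentration principle behind FrFT chirp detection can be illustrated without implementing the discrete FrFT itself: multiplying by the conjugate reference chirp ("dechirping") turns a chirp into a pure tone, so an ordinary FFT shows a sharp peak at the frequency shift, just as the FrFT concentrates the chirp at its matched rotation angle. Parameters below are arbitrary, and this simplified stand-in is not the paper's adaptive operator:

```python
import numpy as np

fs = 1000.0                              # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)            # 1 s observation
k = 200.0                                # chirp rate, Hz/s
f0 = 50.0                                # frequency shift carried by the symbol

sig = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))   # received chirp symbol
ref = np.exp(1j * 2 * np.pi * (0.5 * k * t**2))            # reference chirp, no shift

spec = np.abs(np.fft.fft(sig * np.conj(ref)))  # dechirp -> single tone at f0
peak_bin = int(np.argmax(spec))                # 1 Hz bins, so peak lands at f0
```

Detection then reduces to comparing the concentrated peak against a Neyman-Pearson threshold; a mismatched chirp rate smears the peak, which is the dispersion problem the adaptive method corrects.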
Research on Combined Navigation Algorithm Based on Adaptive Interactive Multi-Kalman Filter Modeling
CHEN Guangwu, WANG Siqi, SI Yongbo, ZHOU Xin
2024, 46(12): 4493-4503.   doi: 10.11999/JEIT240426
[Abstract](228) [FullText HTML](87) [PDF 5604KB](32)
Abstract:
In practice, prior knowledge about inertial systems and sensors is difficult to obtain, which affects information fusion and positioning accuracy in integrated navigation systems. To address the degradation of integrated navigation performance caused by changes in satellite signal quality and system nonlinearity in vehicle navigation, a Fuzzy Adaptive Interactive Multi-Model algorithm based on Multiple Kalman Filters (FAIMM-MKF) is proposed. It integrates a fuzzy controller driven by satellite signal quality with an Adaptive Interactive Multi-Model (AIMM) scheme. Improved Kalman filters, namely the Unscented Kalman Filter (UKF), Iterated Extended Kalman Filter (IEKF), and Square-Root Cubature Kalman Filter (SRCKF), are designed to match vehicle dynamics models. The method's performance is verified through in-vehicle semi-physical simulation experiments. Results show that, compared with traditional interactive multi-model algorithms, the method significantly improves vehicle positioning accuracy in complex environments with varying satellite signal quality.
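At the heart of any interactive multi-model scheme is the step that re-weights each filter by how well it explains the latest measurement. A minimal sketch of that model-probability update (Gaussian innovation likelihoods; the numbers are illustrative, not from the paper):

```python
import numpy as np

def update_model_probs(mu, innovations, innov_vars):
    """Re-weight model probabilities by each filter's measurement likelihood.

    mu          : prior model probabilities (sums to 1)
    innovations : scalar innovation (measurement residual) of each filter
    innov_vars  : innovation variance of each filter
    """
    lik = np.exp(-0.5 * innovations**2 / innov_vars) / np.sqrt(2 * np.pi * innov_vars)
    w = mu * lik
    return w / w.sum()                    # normalize back to a distribution

mu = np.array([0.5, 0.5])                 # two candidate dynamics models
innov = np.array([0.2, 3.0])              # model 1 fits the data much better
var = np.array([1.0, 1.0])
mu_new = update_model_probs(mu, innov, var)
```

In the FAIMM-MKF setting, the fuzzy controller additionally adapts this mixing using satellite signal quality, so the filter bank shifts weight toward the model matched to the current environment.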
Two-stage Long-correlation Signal Acquisition Method for Through-the-earth Communication of the Ground Electrode Current Field
XU Zhan, ZHANG Xu, YANG Xiaolong
2024, 46(12): 4504-4512.   doi: 10.11999/JEIT240399
[Abstract](210) [FullText HTML](92) [PDF 3717KB](32)
Abstract:
Wireless through-the-earth communication provides a solution for information transmission in heavily shielded spaces. The received current-field signal has a low Signal-to-Noise Ratio (SNR), is easily distorted, and is strongly affected by carrier frequency offset, making signal acquisition difficult. In this paper, a long synchronization signal frame structure is designed, and a two-stage long-correlation signal acquisition algorithm combining coarse and fine frequency offset estimation is proposed. In the first stage, the training symbols in the received time-domain signal are used for coarse maximum-likelihood estimation of the sampling interval deviation, from which a coarse estimate of the sampling-point compensation interval is calculated. In the second stage, the coarse estimate and the received SNR together determine the search range for the fine estimate of the sampling-point compensation interval, and a locally compensated long-correlation template signal is designed to achieve accurate acquisition of the current-field signal. The algorithm's performance is verified in a heavily shielded space 30.26 m below ground. Experimental results show that, compared with traditional sliding correlation algorithms, the proposed algorithm achieves a higher acquisition success probability.
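The sliding-correlation baseline the abstract compares against is simple to sketch: a known training sequence is correlated along the received stream and the correlation peak marks the frame start. This toy version uses a white-noise channel with no frequency offset, which is exactly the impairment the paper's two-stage compensation then adds on top:

```python
import numpy as np

rng = np.random.default_rng(2)
train = np.sign(rng.standard_normal(64))      # known +/-1 training sequence
rx = 0.3 * rng.standard_normal(500)           # received stream: noise floor
true_start = 137
rx[true_start:true_start + 64] += train       # frame embedded at unknown offset

corr = np.correlate(rx, train, mode="valid")  # slide template over the stream
est_start = int(np.argmax(corr))              # peak = estimated frame start
```

With sampling-clock deviation or carrier offset, the peak smears and this one-shot estimator fails at low SNR, which motivates the coarse-then-fine compensation of the template before correlating.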
Electromagnetic Sensitivity Analysis of Curved Boundaries with the Adjoint Variable Method
ZHANG Yuxian, ZHU Haige, FENG Xiaoli, YANG Lixia, HUANG Zhixiang
2024, 46(12): 4513-4521.   doi: 10.11999/JEIT240432
[Abstract](123) [FullText HTML](54) [PDF 6176KB](10)
Abstract:
Sensitivity analysis is a method for evaluating how variations in design parameters influence electromagnetic performance; the resulting sensitivity information guides the analysis of structural models to ensure compliance with design specifications. In the optimization design of electromagnetic structures with commercial software, traditional algorithms are often employed, involving repeated adjustments to the geometry, an approach that consumes extensive computational time and resources. To enhance the efficiency of model design, a stable and efficient processing scheme, the Adjoint Variable Method (AVM), is adopted in this paper. The method estimates first- and second-order sensitivities with respect to parameter changes using only two simulation runs. AVM has so far been applied mainly to the sensitivity analysis of rectangular boundary parameters; this paper extends AVM for the first time to the sensitivity analysis of arc boundary parameters. Efficient electromagnetic sensitivity analysis of curved structures is accomplished under three distinct scenarios: fixed intrinsic parameters, frequency-dependent objective functions, and transient impulse functions. Compared with the Finite-Difference Method (FDM), the proposed method achieves a significant enhancement in computational efficiency. This substantially expands the application scope of AVM to curved boundaries, enabling optimization problems such as the electromagnetic structures of plasma models and the edge structures of complex antenna models. When computational resources are limited, the proposed method ensures the reliability and stability of electromagnetic structure optimization.
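The efficiency argument for the adjoint variable method shows up already in a toy linear model (an assumed setup, not the paper's field solver): for a discretized system A(p)x = b and objective J = c^T x, one forward solve and one adjoint solve give the sensitivity dJ/dp = -lambda^T (dA/dp) x for every parameter, whereas finite differences need an extra solve per parameter:

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
A0 = np.eye(n) * 3 + 0.1 * rng.standard_normal((n, n))  # system matrix at p = 0
dA = 0.05 * rng.standard_normal((n, n))                  # dA/dp direction
b = rng.standard_normal(n)
c = rng.standard_normal(n)                               # objective J = c @ x

def J(p):
    return c @ np.linalg.solve(A0 + p * dA, b)

x = np.linalg.solve(A0, b)            # one forward solve
lam = np.linalg.solve(A0.T, c)        # one adjoint solve: A^T lam = c
grad_adjoint = -lam @ dA @ x          # exact sensitivity at p = 0

grad_fd = (J(1e-6) - J(-1e-6)) / 2e-6  # central finite difference, 2 extra solves
```

The two gradients agree to numerical precision; the adjoint's cost advantage grows with the number of design parameters, which is the comparison against FDM the abstract reports.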
Image and Intelligent Information Processing
Design of Rotation Invariant Model Based on Image Offset Angle and Multibranch Convolutional Neural Networks
ZHANG Meng, LI Xiang, ZHANG Jingwei
2024, 46(12): 4522-4528.   doi: 10.11999/JEIT240417
[Abstract](118) [FullText HTML](36) [PDF 2393KB](15)
Abstract:
Convolutional Neural Networks (CNNs) exhibit translation invariance but lack rotation invariance. In recent years, rotation encoding for CNNs has become a mainstream approach to this problem, but it requires a large number of parameters and considerable computation. Since images are the primary focus of computer vision, a model called Offset Angle and Multibranch CNN (OAMC) is proposed to achieve rotation invariance. Firstly, the model detects the offset angle of the input image and rotates the image back accordingly. Secondly, the rotated image is fed into a multibranch CNN with no rotation encoding. Finally, a response module outputs the optimal branch as the final prediction. Notably, with a minimal parameter count of 8k, the model achieves a best classification accuracy of 96.98% on the rotated handwritten digits dataset. Furthermore, compared with previous research on remote sensing datasets, the model achieves up to an 8% improvement in accuracy using only one-third of the parameters of existing models.
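The detect-offset-then-rotate-back idea can be sketched in its simplest form, restricted to 90° offsets and a fixed template scorer (the paper handles arbitrary angles with a learned detector; everything below is a hypothetical stand-in):

```python
import numpy as np

def detect_offset(img, template):
    """Score each candidate 90-degree rotation against an upright template.

    Returns the number of CCW 90-degree turns needed to align the image.
    """
    scores = [np.sum(np.rot90(img, k) * template) for k in range(4)]
    return int(np.argmax(scores))

template = np.zeros((8, 8))
template[:, :4] = 1.0                  # "upright" pattern: left half bright

rotated = np.rot90(template, 3)        # image arrives with an unknown offset
k = detect_offset(rotated, template)   # estimate the offset
aligned = np.rot90(rotated, k)         # rotate back before classification
```

Once aligned, the downstream multibranch CNN never has to encode rotations itself, which is where the parameter savings come from.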
Adjacent Coordination Network for Salient Object Detection in 360 Degree Omnidirectional Images
CHEN Xiaolei, WANG Xing, ZHANG Xuegong, DU Zelong
2024, 46(12): 4529-4541.   doi: 10.11999/JEIT240502
[Abstract](87) [FullText HTML](34) [PDF 6955KB](20)
Abstract:
To address significant target scale variation, edge discontinuity, and blurring in Salient Object Detection (SOD) for 360° omnidirectional images, a method based on an Adjacent Coordination Network (ACoNet) is proposed. First, an adjacent detail fusion module captures detail and edge information from adjacent features, facilitating accurate localization of salient objects. Then, a semantic-guided feature aggregation module aggregates semantic feature information at different scales between shallow and deep features, suppressing the noise transmitted by shallow features and alleviating the discontinuity of salient objects and the blurred object-background boundaries in the decoding stage. Additionally, a multi-scale semantic fusion submodule enlarges the receptive field across different convolution layers, enabling better training of salient object boundaries. Extensive experiments on two public datasets demonstrate that, compared with 13 other advanced methods, the proposed approach achieves significant improvements in six objective evaluation metrics, and the subjective visualized results show better edge contours and clearer spatial structural details in the saliency maps.
Emotion Recognition with Speech and Facial Images
XUE Peiyun, DAI Shutao, BAI Jing, GAO Xiang
2024, 46(12): 4542-4552.   doi: 10.11999/JEIT240087
[Abstract](205) [FullText HTML](67) [PDF 5663KB](49)
Abstract:
In order to improve the accuracy of emotion recognition models and solve the problem of insufficient emotional feature extraction, this paper conducts research on bimodal emotion recognition involving audio and facial imagery. In the audio modality, a feature extraction model of a Multi-branch Convolutional Neural Network (MCNN) incorporating a channel-space attention mechanism is proposed, which extracts emotional features from speech spectrograms across time, space, and local feature dimensions. For the facial image modality, a feature extraction model using a Residual Hybrid Convolutional Neural Network (RHCNN) is introduced, which further establishes a parallel attention mechanism that concentrates on global emotional features to enhance recognition accuracy. The emotional features extracted from audio and facial imagery are then classified through separate classification layers, and a decision fusion technique is utilized to amalgamate the classification results. The experimental results indicate that the proposed bimodal fusion model has achieved recognition accuracies of 97.22%, 94.78%, and 96.96% on the RAVDESS, eNTERFACE’05, and RML datasets, respectively. These accuracies signify improvements over single-modality audio recognition by 11.02%, 4.24%, and 8.83%, and single-modality facial image recognition by 4.60%, 6.74%, and 4.10%, respectively. Moreover, the proposed model outperforms related methodologies applied to these datasets in recent years. This illustrates that the advanced bimodal fusion model can effectively focus on emotional information, thereby enhancing the overall accuracy of emotion recognition.
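The decision-fusion step described above, combining the two modality classifiers after their separate classification layers, reduces to a weighted sum of class probabilities followed by an argmax. A minimal sketch (the weights and probability vectors are illustrative, not the paper's learned values):

```python
import numpy as np

def fuse(p_audio, p_face, w_audio=0.5):
    """Decision-level fusion: weighted sum of per-modality class probabilities."""
    p = w_audio * p_audio + (1 - w_audio) * p_face
    return int(np.argmax(p)), p

p_audio = np.array([0.6, 0.3, 0.1])   # audio branch leans toward class 0
p_face = np.array([0.2, 0.7, 0.1])    # facial branch leans toward class 1
label, p = fuse(p_audio, p_face, w_audio=0.3)   # trust the face branch more
```

Because the fused vector is still a probability distribution, the same thresholding or confidence analysis used for a single modality applies unchanged after fusion.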
LGDNet: Table Detection Network Combining Local and Global Features
LU Di, YUAN Xuan
2024, 46(12): 4553-4562.   doi: 10.11999/JEIT240428
[Abstract](188) [FullText HTML](69) [PDF 10100KB](31)
Abstract:
In the era of big data, tables are ubiquitous in document images, and table detection is of great significance for reusing the information they contain. To address the limited receptive field, reliance on predefined proposals, and inaccurate table boundary localization of existing convolutional-neural-network-based table detection algorithms, a table detection network based on the DINO model is proposed in this paper. Firstly, an image preprocessing method is designed to enhance the corner and line features of tables, enabling more precise boundary localization and effective differentiation between tables and other document elements such as text. Secondly, a backbone network, SwTNet-50, is designed: Swin Transformer Blocks (STB) are introduced into ResNet to effectively combine local and global features, improving the model's feature extraction ability and table boundary detection accuracy. Finally, to address inadequate encoder feature learning under one-to-one matching and insufficient positive-sample training in the DINO model, a collaborative hybrid assignments training strategy is adopted to improve the encoder's feature learning and the detection precision. Compared with various deep-learning-based table detection methods, the proposed model outperforms other algorithms on the TNCR table detection dataset, with F1-scores of 98.2%, 97.4%, and 93.3% at IoU thresholds of 0.5, 0.75, and 0.9, respectively. On the IIIT-AR-13K dataset, the F1-score is 98.6% at an IoU threshold of 0.5.
Frequency Separation Generative Adversarial Super-resolution Reconstruction Network Based on Dense Residual and Quality Assessment
HAN Yulan, CUI Yujie, LUO Yihong, LAN Chaofeng
2024, 46(12): 4563-4574.   doi: 10.11999/JEIT240388
[Abstract](124) [FullText HTML](39) [PDF 5326KB](23)
Abstract:
Generative adversarial networks have attracted much attention because they provide new ideas for blind super-resolution reconstruction. Existing methods, however, do not fully exploit the fact that low-frequency content is largely preserved during image degradation: they process high- and low-frequency components in the same way, making poor use of frequency details and limiting reconstruction quality. To address this, a frequency-separation generative adversarial super-resolution reconstruction network based on dense residuals and quality assessment is proposed. The network processes the high- and low-frequency information of the image separately, improving its ability to capture high-frequency information while simplifying the processing of low-frequency features. The base block in the generator integrates a spatial feature transformation layer into dense wide-activation residuals, which enhances deep feature representation while differentiating local information. In addition, a no-reference quality assessment network built on the Visual Geometry Group (VGG) architecture is designed specifically for super-resolution reconstructed images, providing a new quality-assessment loss for the reconstruction network and further improving the visual quality of the reconstructed images. Experimental results show that the method achieves better reconstruction on multiple datasets than current state-of-the-art methods of the same kind, demonstrating that frequency separation in generative adversarial super-resolution can effectively exploit image frequency components and improve reconstruction quality.
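The frequency-separation front end can be sketched with a low-pass mask in the 2-D Fourier domain: the image splits into a low-frequency component (kept on a simple path) and a high-frequency residual (given the heavier processing). The circular cutoff below is arbitrary, and this is an illustration of the separation idea, not the paper's network:

```python
import numpy as np

def split_frequencies(img, cutoff):
    """Split an image into low- and high-frequency components via a 2-D FFT mask."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = (yy**2 + xx**2) <= cutoff**2          # circular low-pass region
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = img - low                              # residual carries edges/texture
    return low, high

rng = np.random.default_rng(4)
img = rng.standard_normal((32, 32))
low, high = split_frequencies(img, cutoff=6)
```

By construction the two components sum back to the input, so the split loses nothing; it only routes detail and structure to different processing paths.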
Circuit and System Design
A System-level Exploration and Evaluation Simulator for Chiplet-based CPUs
ZHANG Congwu, LIU Ao, ZHANG Ke, CHANG Yisong, BAO Yungang
2024, 46(12): 4575-4588.   doi: 10.11999/JEIT240299
[Abstract](258) [FullText HTML](121) [PDF 4968KB](47)
Abstract:
As Moore's Law comes to an end, improving the chip manufacturing process has become increasingly difficult, and chiplet technology has been widely adopted to improve chip performance. However, the new design parameters introduced by chiplet architectures pose significant challenges to computer architecture simulators. To fully support exploration and evaluation of chiplet architectures, the System-level Exploration and Evaluation simulator for Chiplets (SEEChiplet), a framework based on the gem5 simulator, is developed in this paper. Firstly, three design parameters central to chiplet-based chip design are summarized: (1) the chiplet cache system design; (2) packaging simulation; and (3) inter-chiplet interconnection networks. Secondly, for these three parameters, this paper (1) designs and implements a new private last-level cache system to expand the cache design space, (2) modifies the existing gem5 global directory to support the new private Last-Level Cache (LLC) system, and (3) models two common chiplet packaging methods and the inter-chiplet network. Finally, a chiplet-based processor running the PARSEC 3.0 benchmark suite is simulated, demonstrating that SEEChiplet can explore and evaluate the chiplet design space.