Latest Articles

Articles in press have been peer-reviewed and accepted. They have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Intelligent Resource Allocation Algorithm Based on Outdated CSI for Multi-Node URLLC
ZHAO Yizhen, GAO Wei, HU Yulin, ZHU Yao
Available online, doi: 10.11999/JEIT260216
Abstract:
  Objective  Ultra-Reliable and Low-Latency Communications (URLLC) have found widespread application in Industrial Internet-of-Things (IIoT) systems. However, in mobile operation scenarios such as transportation and inspection, acquiring instantaneous Channel State Information (CSI) is often impractical because of feedback overhead, forcing resource allocation decisions to be made on the basis of outdated CSI. This mismatch significantly limits the achievable energy efficiency of the system. Traditional convex optimization methods have difficulty addressing this challenge, while classical Deep Reinforcement Learning (DRL) algorithms also exhibit inherent limitations in convergence stability and policy performance when confronted with the stringent Quality-of-Service (QoS) constraints of URLLC. Motivated by these challenges, this paper considers a multi-user URLLC system operating under outdated CSI in dynamic scenarios, formulates an energy efficiency maximization problem that guarantees the communication latency and reliability requirements, and aims to design an efficient and stable algorithm for joint power and blocklength allocation.  Methods  To achieve this objective, this paper proposes a Successive Convex Approximation (SCA)-assisted DRL framework for energy efficiency maximization under outdated CSI. Specifically, an SCA-based algorithm is first developed to derive a pre-allocation of transmit power and blocklength, yielding a feasible and physically interpretable yet relatively conservative baseline solution. Building upon this baseline, a Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed to refine the allocation incrementally through interaction with the dynamic environment, thereby alleviating the conservativeness of SCA.
Meanwhile, the SCA solution is incorporated as prior knowledge, together with user location information, into the state representation. This effectively narrows the policy search space and enables the DRL agent to better capture large-scale channel characteristics and system dynamics under outdated CSI, thereby enhancing learning efficiency and stability.  Results and Discussions  The effectiveness of the proposed method is validated through simulation. The proposed algorithm is evaluated against SCA, TD3 without SCA guidance, and TD3 without user location information. Simulation results demonstrate that the proposed method significantly outperforms all benchmark schemes in convergence stability and system energy efficiency. During the training phase (Fig. 3), the average reward of the proposed algorithm increases steadily and converges stably, whereas removing location information leads to low and highly fluctuating rewards, and removing SCA guidance results in convergence to a much lower reward level, highlighting the importance of both prior guidance and a location-aware state representation. Moreover, during the actual operation stage of the system, the proposed algorithm achieves high and stable energy efficiency (Fig. 4), significantly outperforming the comparison algorithms. Under outdated CSI, DRL-based methods outperform conservative optimization when transmission succeeds, while removing location information or SCA guidance significantly degrades energy efficiency or increases transmission failures, verifying that both factors improve energy efficiency and keep the learned strategy valid. The simulations also examine the impact of key system parameters on energy efficiency. For basic resource parameters such as blocklength (Fig. 5) or power (Fig. 6), appropriately increasing the budget helps improve system energy efficiency. Reliability-related parameters (Fig. 7) should be set according to service requirements to avoid wasting resources. Finally, simulations of the average energy efficiency as the number of nodes and the number of network neurons vary provide a reference for configuring the algorithm structure and network scale (Fig. 8).  Conclusions  In conclusion, this paper addresses energy-efficient resource allocation for multi-user URLLC systems operating under outdated CSI by integrating SCA with DRL: a TD3-based DRL approach is enhanced by introducing an SCA reference solution as prior guidance and incorporating user location information into the state representation. This optimization-learning dual-driven framework combines the interpretability and feasibility of model-based optimization with the adaptivity and expressive power of data-driven learning. The effectiveness of the proposed method is evaluated through simulations: (1) the proposed method achieves higher energy efficiency than pure optimization and conventional TD3 while satisfying URLLC latency and reliability constraints; (2) the SCA reference improves the stability and effectiveness of the strategy under outdated CSI; (3) incorporating user location information enables more efficient decision-making. However, this work focuses on a single-cell multi-user scenario, and practical issues such as multi-cell interference, cooperative multi-base-station scheduling, and more complex mobility patterns are not considered. Future work will extend the proposed framework to more realistic multi-cell and multi-agent scenarios and investigate its applicability under more severe CSI imperfections.
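The latency-reliability trade-off that drives this kind of blocklength allocation comes from the standard finite-blocklength normal approximation, R(n, ε) ≈ C − sqrt(V/n)·Q⁻¹(ε). The following toy computation is a sketch under the usual AWGN assumptions, not the paper's exact system model; the function name and parameter values are illustrative only:

```python
import math
from statistics import NormalDist

def fbl_rate(snr: float, blocklength: int, error_prob: float) -> float:
    """Normal-approximation achievable rate (bits/channel use) at finite blocklength:
    R(n, eps) ~= C - sqrt(V / n) * Qinv(eps), with Shannon capacity C and
    channel dispersion V (both in log base 2)."""
    c = math.log2(1.0 + snr)                                      # Shannon capacity
    v = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # channel dispersion
    q_inv = NormalDist().inv_cdf(1.0 - error_prob)                # Q^{-1}(eps)
    return max(c - math.sqrt(v / blocklength) * q_inv, 0.0)

# The penalty term shrinks as the blocklength grows, approaching capacity,
# which is why a larger blocklength budget can raise energy efficiency.
r_short = fbl_rate(snr=10.0, blocklength=100, error_prob=1e-5)
r_long = fbl_rate(snr=10.0, blocklength=1000, error_prob=1e-5)
print(f"n=100: {r_short:.3f}  n=1000: {r_long:.3f}  C={math.log2(11.0):.3f}")
```

Tightening the reliability target ε raises Q⁻¹(ε) and lowers the achievable rate, matching the observation above that reliability parameters should be set only as strict as the service requires.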
A Lightweight Semi-supervised Brain Tumor Segmentation Network with Counterfactual Reasoning
FAN Yawen, WANG Chaoyuan, WANG Xin, ZHANG Xinchen, ZHOU Quan
Available online, doi: 10.11999/JEIT251130
Abstract:
  Objective  Brain tumor segmentation plays a key role in clinical diagnosis and treatment planning. However, reliable annotation of medical images is costly and time-consuming, which limits the availability of large annotated datasets. To address this problem, this paper proposes a semi-supervised brain tumor segmentation method that combines a lightweight multimodal fusion segmentation network with counterfactual reasoning. The aim is to improve segmentation accuracy while maintaining sufficient efficiency for deployment in resource-limited clinical scenarios.  Methods  A parameter-sharing multimodal encoder-decoder network is designed to reduce model size and computational cost. An anatomical-structure consistency prior is incorporated to improve alignment with brain anatomy. During training, a teacher-student framework is used to generate counterfactual samples from model predictions. These samples guide learning from unlabeled MRI scans through a counterfactual consistency loss that enforces pixel-level consistency and feature-level semantic stability. This strategy helps the model extract structural information from unlabeled data while reducing the risk of boundary distortion caused by conventional data augmentation.  Results and Discussions  Experiments on the BraTS 2019 and BraTS 2021 datasets show that the proposed method consistently outperforms comparison models under limited-label conditions. On BraTS 2019, the proposed method achieves the best average Dice Similarity Coefficient (DSC) of 66.06%, and its average Intersection over Union (IoU) of 53.16% is comparable to those of other models. More importantly, it obtains the lowest average 95% Hausdorff Distance (HD95) of 7.60 mm, representing reductions of approximately 11% and 6% compared with UNet3D and LightMUnet, respectively (Tables 3 and 4). 
On BraTS 2021, the semi-supervised model improves the average DSC and IoU by 4.51% and 5.29%, respectively, and reduces the average HD95 by 0.68 mm compared with the baseline model (Tables 5 and 6). With only 10% labeled data, the proposed method achieves approximately 94% of the fully supervised performance in the main segmentation metrics. The model is also efficient, with only 1.657M parameters, a computational cost of 0.4402 T, and an inference time of 0.0937 s (Table 7). These results indicate that the proposed design achieves a favorable balance among segmentation accuracy, computational efficiency, and clinical deployability. The improvement is attributed to both the lightweight multimodal fusion segmentation network and the counterfactual mechanism, which guides the model to learn anatomically meaningful representations.  Conclusions  The proposed framework provides an effective solution for semi-supervised brain tumor segmentation. It balances accuracy, efficiency, and interpretability, and shows that causal reasoning can be integrated into medical image analysis in a practical manner.
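The teacher-student consistency training described above can be sketched in miniature. The exact counterfactual consistency loss is not specified in the abstract, so the snippet below stands in a mean-squared pixel disagreement and the usual mean-teacher exponential-moving-average update; both function names are illustrative assumptions:

```python
def counterfactual_consistency_loss(student_probs, teacher_probs):
    """Mean squared pixel-level disagreement between the student's prediction on a
    counterfactual view and the teacher's pseudo-label (illustrative stand-in for
    the paper's pixel-level consistency term)."""
    total, count = 0.0, 0
    for s_row, t_row in zip(student_probs, teacher_probs):
        for s, t in zip(s_row, t_row):
            total += (s - t) ** 2
            count += 1
    return total / max(count, 1)

def ema_teacher_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher style exponential-moving-average weight update, the common way
    a teacher is maintained in a teacher-student framework (assumed here)."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]
```

Driving this loss toward zero on unlabeled scans pushes the student toward the teacher's anatomically constrained predictions without requiring pixel annotations.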
Full-round Integral Cryptanalysis of the Lightweight Block Cipher INLEC
YU Bin, LIU Wenfen, CHEN Wen, GUO Ying, LU Yongcan, HUANG Yuehua
Available online, doi: 10.11999/JEIT251131
Abstract:
  Objective  With the rapid development of telecommunication technology, Internet of Things (IoT) devices have been widely deployed in modern applications. However, their limited computing resources and energy supply create challenges for data privacy and security. To address these issues, Feng et al. proposed INLEC, a low-energy lightweight block cipher designed for resource-constrained IoT environments. The designers claimed that INLEC can resist differential, linear, impossible differential, and side-channel attacks. However, its security against integral cryptanalysis has not yet been evaluated. This paper presents a comprehensive full-round integral cryptanalysis of INLEC to assess its actual resistance to this class of attacks.  Methods  The monomial prediction technique proposed by Hu et al. is used to construct a Mixed Integer Linear Programming (MILP) model for the monomial trails of INLEC. Based on this model, a 9-round integral distinguisher for INLEC is obtained. By further using the structural properties of the diffusion layer, the 9-round integral distinguisher is extended to a 10-round integral distinguisher by adding an initial round. This is the first 10-round integral distinguisher constructed for INLEC. To reduce the complexity of key recovery, a multi-key guessing method is proposed. Combined with the partial-sum technique, this method enables the first 14-round key recovery attack on INLEC. An integral cryptanalysis framework for the full-round INLEC cipher is therefore established.  Results and Discussions  The analysis shows that the 10-round integral distinguisher provides exploitable balanced bits for key recovery. Based on this distinguisher, the proposed 14-round key recovery attack achieves a data complexity of 2^63 chosen plaintexts and a time complexity of 2^89.843 14-round encryptions. These results indicate that the diffusion layer of INLEC does not fully eliminate integral properties within 10 rounds.
The remaining structural properties can be used to support key recovery. This finding challenges the original security claims for INLEC and shows that integral properties should be considered when evaluating lightweight block ciphers for IoT applications.  Conclusions  This paper evaluates the resistance of the lightweight block cipher INLEC to integral cryptanalysis based on monomial prediction. A 9-round integral distinguisher is first constructed using a MILP model of monomial trails. The 9-round integral distinguisher is then extended to a 10-round integral distinguisher by exploiting the structural properties of the diffusion layer. A 14-round key recovery attack is further achieved by combining the partial-sum technique with the multi-key guessing method. The results show that INLEC has insufficient resistance to integral cryptanalysis and that its practical security may be lower than expected. Therefore, more rounds should be considered in the design of such ciphers to resist known integral attacks.
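The "balanced bits" that an integral distinguisher exploits can be demonstrated on a toy scale: XOR-summing any function of algebraic degree d over a structure with more than d active input bits yields zero. The 4-bit map below is purely illustrative and is NOT the INLEC round function:

```python
def toy_round(x: int) -> int:
    """Toy 4-bit nonlinear map of algebraic degree 2 (illustrative only,
    not the INLEC round function)."""
    b = [(x >> i) & 1 for i in range(4)]
    y0 = (b[0] & b[1]) ^ b[2]
    y1 = (b[1] & b[2]) ^ b[3]
    y2 = (b[2] & b[3]) ^ b[0]
    y3 = (b[3] & b[0]) ^ b[1]
    return y0 | (y1 << 1) | (y2 << 2) | (y3 << 3)

def xor_sum_over_structure(f, active_bits, constant=0):
    """XOR-sum f over a structure in which `active_bits` take all values and the
    remaining bits are fixed by `constant`; zero output bits are balanced."""
    acc = 0
    for v in range(1 << len(active_bits)):
        x = constant
        for i, bit in enumerate(active_bits):
            if (v >> i) & 1:
                x |= 1 << bit
        acc ^= f(x)
    return acc

# Three active bits exceed the degree bound, so every output bit is balanced;
# two active bits do not, and the XOR-sum is nonzero.
print(xor_sum_over_structure(toy_round, [0, 1, 2]))  # → 0
print(xor_sum_over_structure(toy_round, [0, 1]))     # → 1
```

A real distinguisher search, as in the paper, replaces this exhaustive check with a MILP model of monomial trails, since full enumeration is infeasible at 64-bit block sizes.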
Facial Expression Recognition Model Based on an Improved YOLO12n
HAN Chuang, HUANG Jingyao, LAN Chaofeng
Available online, doi: 10.11999/JEIT250936
Abstract:
  Objective  Facial Expression Recognition (FER) is a key technology in affective computing and intelligent human–computer interaction. In practical scenarios, recognition performance is often degraded by low resolution, complex illumination, partial occlusion, and class imbalance. Although deep learning-based methods have made substantial progress, lightweight models such as You Only Look Once version 12 nano (YOLO12n) still have limited feature extraction ability and reduced robustness under degraded imaging conditions. To address these limitations, this paper proposes an improved FER model, termed YOLO-FER. The model is designed to enhance feature representation, improve the discrimination of similar expressions, and maintain real-time detection performance in low-quality environments.  Methods  Based on the YOLO12n model, YOLO-FER introduces several targeted improvements. First, a C3k2_star module is constructed by embedding NewStarBlock into the original bottleneck structure. This design enhances high-dimensional nonlinear feature representation and alleviates feature loss during fusion, as shown in Fig. 2 and Fig. 3. Second, Multidimensional Collaborative Attention (MCA) is integrated with the A2C2f module to form A2C2f_MCA. This module performs joint modeling across the channel, height, and width dimensions to capture fine-grained facial features (Fig. 4). Third, a Low Resolution Feature Extractor (LRFE) module is placed at the end of the backbone. It enhances pixel-level feature representation under low-resolution and low-light conditions through dilated convolution and pixel attention (Fig. 5). Finally, Adaptive Threshold Focal Loss (ATFL) is used to dynamically adjust the contributions of easy and hard samples. This function mitigates class imbalance and improves the discrimination of similar expressions. The overall model structure is shown in Fig. 1. Experiments are conducted on the RAF-DB dataset and the Low Light Dataset (LLD).
Precision (P), recall (R), F1 score, and mAP@0.5 are used as evaluation metrics.  Results and Discussions  Extensive experiments show that YOLO-FER outperforms the baseline YOLO12n and other YOLO-series models. As shown in Table 2, on the RAF-DB dataset, YOLO-FER achieves P=81.8%, R=81.9%, and mAP@0.5=87.6%, with a 3.8% improvement in mAP@0.5 over the baseline. On the LLD dataset (Table 3), YOLO-FER achieves an mAP@0.5 of 95.9%, representing a 5.0% improvement. These results indicate strong robustness under low-light conditions. The ablation studies in Table 2 and Table 3 confirm that each proposed module contributes to performance improvement. C3k2_star, A2C2f_MCA, LRFE, and ATFL all lead to consistent gains in detection accuracy. Their combination achieves the best performance with only a slight increase in parameters. The comparison with other YOLO variants in Table 5 further shows that YOLO-FER achieves a favorable balance between accuracy and model complexity. The mAP@0.5 curves in Fig. 8 show that the proposed model maintains consistent performance gains during training. The confusion matrix analysis in Fig. 9 and Table 4 demonstrates that the MCA module improves the discrimination of similar expressions, such as Angry and Disgust, and reduces misclassification. Grad-CAM visualization results (Fig. 13) indicate that YOLO-FER focuses more accurately on key facial regions, including the eyes, eyebrows, and mouth, than the baseline model. Experiments under degraded conditions (Fig. 14 and Table 13) further show that YOLO-FER maintains higher detection performance than YOLO12n and has a smaller overall performance drop. These findings confirm its robustness in low-quality scenarios. Although the number of parameters increases slightly from 2.5 M to 3.0 M, the inference speed remains competitive (Table 7), indicating that the proposed method retains real-time capability.  Conclusions  This paper proposes YOLO-FER, an improved FER model based on YOLO12n. 
The model improves feature extraction and robustness in low-quality image scenarios. By integrating C3k2_star, MCA, LRFE, and ATFL, YOLO-FER improves recognition performance and generalization ability. Experimental results on the RAF-DB and LLD datasets confirm that the model achieves high detection performance while maintaining efficient inference speed. The proposed method provides a practical solution for real-time FER applications in complex environments. Future work will focus on improving performance under extremely low-resolution conditions and exploring cross-domain generalization and micro-expression recognition.
Multi-agent Reinforcement Learning Method for Trajectory Optimization in Dual-UAV Cooperative Railway Inspection
HUANG Gaoyong, SONG Jun, FANG Xuming, YAN Li, HE Rong
Available online, doi: 10.11999/JEIT251321
Abstract:
  Objective  Conventional railway inspection methods, including manual inspection and dedicated inspection vehicles, suffer from low efficiency, limited coverage, and safety risks, especially in hazardous or inaccessible areas. Unmanned Aerial Vehicles (UAVs) offer a promising alternative. However, deployment in strictly regulated railway protection zones remains challenging. In particular, single-UAV inspection is limited by restricted viewpoints, coverage blind spots, and poor data synchronization. To address these issues, this paper proposes a dual-UAV cooperative railway inspection framework. The objective is to jointly optimize the flight trajectories and inspection task sequence of two UAVs to maximize inspection task quality under coupled constraints, including energy consumption, obstacle avoidance, communication-rate constraints, and cooperative synchronization.  Methods  To solve this high-dimensional, non-convex, NP-hard problem, a two-stage hierarchical framework is proposed. In the first stage, the optimal cooperative observation positions for each inspection task are determined. Particle Swarm Optimization (PSO) is used to obtain the optimal three-dimensional coordinates of the two UAVs, thereby improving coverage and inspection quality. In the second stage, continuous trajectory optimization is formulated as a Multi-Agent Deep Reinforcement Learning (MADRL) problem. To improve convergence stability under strong safety constraints, a Risk-Adaptive Exploration Noise Mechanism (RAENM) is incorporated into the training process. The problem is then solved by an improved Multi-Agent Twin Delayed Deep Deterministic policy gradient (MATD3) algorithm under the Centralized Training with Decentralized Execution (CTDE) paradigm. Each UAV is modeled as an independent agent. Its state includes kinematic information, target position, remaining energy, and obstacle distance. Its action space defines the flight control variables. 
A composite reward function is designed to balance multiple objectives, including target approaching, energy saving, obstacle avoidance, railway-protection-zone compliance, and synchronized cooperative arrival.  Results and Discussions  The proposed framework is evaluated through simulations against several baseline algorithms. The results show that the improved MATD3 method achieves faster and more stable convergence, especially as the number of inspection tasks increases. In path planning, it generates more compact trajectories and the shortest total path length. For example, in the two-task scenario, the total path length is reduced to 13,025 m, about 4.5% shorter than that of the next best method. In addition, the proposed method achieves the lowest cumulative energy consumption in all tested scenarios. It also yields the smallest navigation error and the shortest arrival-time difference between the two UAVs at shared inspection points, indicating higher control accuracy and better spatiotemporal coordination. By reducing position deviation and improving synchronization, the proposed method achieves the highest inspection task quality in all evaluation settings.  Conclusions  This paper proposes a two-stage hierarchical framework for dual-UAV cooperative trajectory optimization in railway inspection. The framework combines PSO-based cooperative observation position optimization with improved MATD3-based trajectory learning. Simulation results show that the proposed method outperforms baseline methods in path efficiency, energy saving, cooperative synchronization, and inspection task quality. This study provides support for the deployment of intelligent multi-UAV systems in railway infrastructure inspection. Future work will consider more realistic factors, including communication uncertainty and dynamic environments.
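A composite reward of the kind described can be sketched as a weighted sum of shaping terms. The weights, distance thresholds, and term forms below are assumptions for illustration; the abstract does not specify the paper's exact reward shaping:

```python
def composite_reward(dist_to_target, prev_dist, energy_used, obstacle_dist,
                     in_protection_zone, arrival_gap,
                     w=(1.0, 0.1, 1.0, 5.0, 0.5)):
    """Illustrative per-step composite reward for one UAV agent (weights and
    thresholds are assumptions): rewards progress toward the inspection target
    and penalizes energy use, obstacle proximity, railway-protection-zone
    violations, and desynchronized arrival at shared inspection points."""
    w_prog, w_energy, w_obs, w_zone, w_sync = w
    r = w_prog * (prev_dist - dist_to_target)   # target-approaching term
    r -= w_energy * energy_used                 # energy-saving term
    if obstacle_dist < 5.0:                     # soft obstacle-avoidance penalty
        r -= w_obs * (5.0 - obstacle_dist)
    if in_protection_zone:                      # hard regulatory penalty
        r -= w_zone
    r -= w_sync * arrival_gap                   # synchronized-arrival term
    return r
```

Under CTDE, each agent receives such a reward locally while the centralized critics observe both UAVs' states, which is what allows the synchronization term to be learned cooperatively.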
Optimizing SATisfiability-Based Automatic Test Pattern Generation Systems: Unified Fault Set Construction, Modeling, and Solving
YAN Dapeng, HE Qirun, GUO Jing, WANG Boning, CAI Zhikuang
Available online, doi: 10.11999/JEIT260025
Abstract:
  Objective  Boolean SATisfiability-Based Automatic Test Pattern Generation (SAT-Based ATPG) is widely used to generate tests for hard-to-detect single stuck-at faults and to prove fault untestability in combinational logic. When SAT-Based ATPG is applied to large netlists with dense fanout and reconvergence, its runtime and memory consumption are often dominated by three interacting issues. Representative fault lists produced by conventional dominance- or equivalence-based fault collapsing can remain large, increasing the number of SAT calls and enlarging the incremental context that must be maintained across faults. Meanwhile, SAT modeling may introduce redundant Conjunctive Normal Form (CNF) overhead, especially when an explicit faulty-circuit copy is constructed or when propagation constraints are encoded globally without locality control. In addition, fanout-reconvergence structures amplify assignment correlations along sensitized paths, and such correlations are often exposed only after repeated decisions and backtracking when only standard unit propagation is used. The unified optimization objective is therefore to reduce overall CNF size and solving cost while preserving completeness, so that a practical SAT-Based ATPG system remains efficient and stable across circuits of different scales.  Methods  A three-part framework is developed and implemented in an incremental SAT-Based ATPG flow, and the overall workflow is illustrated (Fig. 1). First, a checkpoint-driven dynamic fault-set construction method is proposed. Checkpoints are collected during netlist-to-directed-acyclic-graph conversion, including all primary inputs and all fanout branches, and XOR/XNOR outputs are additionally recorded as supplementary checkpoints to avoid over-collapsing XOR-related fault behavior. 
Representative faults are initialized on checkpoints by compact rules that combine dominance-oriented fault collapsing with equivalence-aware refinement, and solver-guided repair is performed when an untestable representative fault indicates potential masking under structural constraints. The procedure is summarized in Algorithm 1. Second, a SAT modeling method based on fault sensitization constraints is adopted to avoid explicit faulty-circuit duplication. Fault activation, propagation, and observability are represented by additional fault sensitization constraints over the original circuit variables, and auxiliary variables are introduced only when local bookkeeping is required. Constraint localization is restricted to the fault fanout cone, and cone-boundary and internal vertices are identified through a graph-traversal procedure (Fig. 2). Third, a dynamic implication learning mechanism oriented to fanout-reconvergence pairs is integrated into the incremental solving loop. Reconvergence pairs within the fault fanout cone are monitored under partial assignments, and structure-induced implications are injected either as implied assignments when a reconvergent output becomes functionally determined or as short conflict clauses when a branch-value combination becomes inconsistent with the fault sensitization constraints. The dynamic implication learning procedure is summarized in Algorithm 2.  Results and Discussions  The unified system is evaluated on ISCAS’85 and ISCAS’89 benchmark circuits, with TG-PRO used as the baseline implementation under the same SAT solver and termination settings. The checkpoint-driven dynamic fault-set construction method substantially reduces the representative fault space entering ATPG. Relative to the uncollapsed fault space, the average representative-fault ratio decreases from 51.38% to 42.01%, corresponding to an average fault-space reduction of 57.99%. 
The best-case ratio reaches 33.19% on large circuits with heavy reconvergence, which indicates that checkpoint-centered representative-fault allocation effectively suppresses redundancy without enlarging the untestable fault set (Table 1). The reduced fault-set size is reflected in preprocessing efficiency, and the total runtime for fault-set construction is consistently reduced, with an average reduction of 8.37% across the evaluated circuits (Fig. 3). For SAT model construction, the fault-sensitization-constraint encoding reduces CNF overhead relative to the baseline model construction. Across the benchmark set, the numbers of CNF clauses and CNF variables are reduced by 11.44% and 3.50%, respectively, which shows that avoiding explicit faulty-circuit duplication and localizing auxiliary constraints to the fault fanout cone effectively lowers memory demand (Table 2). The reduced CNF size and strengthened locality of constraints are further reflected in end-to-end runtime, and the total runtime of SAT modeling and solving is reduced across the evaluated benchmarks (Fig. 4). Dynamic implication learning further improves solving efficiency in reconvergence-heavy structures. Compared with static implication learning, CNF construction time increases by 3.0% on average because of the additional monitoring and injection operations, yet the overall runtime decreases by 4.42% on average, which indicates a favorable cost-benefit trade-off. The overhead attributed to dynamic implication learning accounts for 2.51% of the total runtime aggregated across circuits, which confirms that the injected implications and pruning clauses provide measurable solving benefits at limited extra cost (Table 3).  Conclusions  A unified optimization framework for SAT-Based ATPG is developed by combining checkpoint-driven dynamic fault-set construction, localized fault sensitization constraints for CNF modeling, and fanout-reconvergence-oriented dynamic implication learning. 
Representative faults are compressed through solver-guided repair of dominance and equivalence relations to avoid masking, CNF growth is controlled through duplication-free modeling localized to the fault fanout cone, and reconvergence correlations are exploited through incremental implication injection to strengthen propagation and enable early conflict pruning. Experimental results on standard benchmark circuits show consistent reductions in representative fault scale, CNF size, and total runtime, providing a practical approach for scaling SAT-Based ATPG to larger designs with complex fanout and reconvergence.
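The checkpoint-collection step above can be sketched on a toy netlist. The dict-based representation (gate output mapped to its type and input signals) is an assumption for illustration, and multi-fanout stems stand in for their fanout branches:

```python
from collections import defaultdict

def collect_checkpoints(netlist, primary_inputs):
    """Checkpoint collection in the spirit of the paper's fault-set construction:
    primary inputs and fanout points are checkpoints, and XOR/XNOR gate outputs
    are recorded as supplementary checkpoints. `netlist` maps each gate output
    to (gate_type, input_signals); this toy representation is an assumption."""
    fanout = defaultdict(int)
    for _, (_, ins) in netlist.items():
        for sig in ins:
            fanout[sig] += 1
    checkpoints = set(primary_inputs)
    checkpoints |= {sig for sig, n in fanout.items() if n > 1}  # fanout stems
    checkpoints |= {out for out, (typ, _) in netlist.items()
                    if typ in ("XOR", "XNOR")}                  # supplementary
    return checkpoints

net = {"g1": ("AND", ["a", "b"]),
       "g2": ("OR", ["b", "c"]),      # signal b fans out to g1 and g2
       "g3": ("XOR", ["g1", "g2"])}
print(sorted(collect_checkpoints(net, ["a", "b", "c"])))
```

Seeding representative faults only on such checkpoints, then repairing with solver feedback as described, is what shrinks the fault space entering the SAT loop.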
Research on UAV-assisted Dynamic-weight Edge Computing Offloading Strategy
WANG Yijun, WANG Yachu, SHAHD Batool, MIAO Ruixin
Available online, doi: 10.11999/JEIT260054
Abstract:
  Objective  The increasing demands of the Internet of Things (IoT) for computational resources and real-time processing have highlighted the significance of Mobile Edge Computing (MEC). Traditional MEC relies on terrestrial base stations, resulting in coverage blind spots in remote or specialized environments. Unmanned Aerial Vehicle (UAV)-assisted MEC architectures exploit UAVs’ flexible deployment to expand service coverage. However, existing approaches for multi-terminal, multi-UAV scenarios often fail to optimize task offloading latency, system energy consumption, and adaptability to dynamic environments simultaneously. They also overlook optimal UAV selection when terminal devices are covered by multiple UAVs and lack adaptive mechanisms to adjust optimization objectives during task execution. This study addresses these challenges by integrating cooperative caching, offloading decision-making, and resource allocation strategies.  Methods  A three-tier microcloud-edge-terminal architecture is constructed, comprising a central cloud, multiple UAV edge servers with caching capabilities, and numerous mobile terminal devices. A cooperative caching mechanism reduces transmission delay during task execution. Task offloading adopts a fine-grained partial offloading mode, dividing complex tasks into dependent subtasks modeled through a Directed Acyclic Graph (DAG). The Cooperative Caching-Adaptive Hierarchical MultiVerse Optimizer (CCAH-MVO) algorithm is proposed. A hybrid coding scheme encodes offloading decisions, caching decisions, and resource allocation uniformly. A dynamic weight mechanism adaptively balances delay and energy consumption according to the system’s real-time energy state. Additionally, a UAV selection strategy is implemented for scenarios where terminals are covered by multiple UAVs. By simulating inter-universe material exchange and local refined search, the algorithm efficiently determines the optimal offloading strategy. 
MATLAB simulations validate the method under various experimental settings.  Results and Discussions  The simulation scenario involves 50 randomly distributed terminal devices and 5 UAVs in a 400 m × 400 m area. UAVs are deployed above terminal cluster centers, while terminals at cluster edges are simultaneously within the coverage of multiple UAVs (Fig. 5). The optimal UAV for each terminal is selected using the UAV selection function (Fig. 6), preventing resource bottlenecks and achieving balanced load distribution. In terms of delay performance, the CCAH-MVO algorithm maintains the lowest task delay across all task volumes, with a gradual increase as the number of tasks grows (Fig. 7). Delay under CCAH-MVO is consistently lower than that under fixed-weight strategies across the full task range, demonstrating the effectiveness of the dynamic adaptive mechanism in preserving low latency (Fig. 10). For energy consumption, differences among the algorithms are minor when task quantities are low. Under high task loads, the activation of the dynamic weight mechanism flattens the energy consumption curve (Fig. 8). When the number of tasks reaches 100, total energy consumption under CCAH-MVO is the lowest among all strategies and remains lower than the fixed-weight approach, reflecting effective control under critical energy conditions (Fig. 9). Regarding total system overhead, the CCAH-MVO algorithm consistently achieves the best performance. The gap with fixed-weight strategies widens when task numbers exceed 80, illustrating the dynamic weight mechanism’s collaborative optimization of delay and energy consumption (Fig. 11). Overall, by integrating the dynamic weight mechanism and balancing load through UAV selection, the CCAH-MVO algorithm effectively mitigates resource constraints and high task processing overhead in complex, dynamic UAV-assisted MEC environments. It ensures precise coordination between task delay and energy consumption across different load stages.  
Conclusions  The proposed CCAH-MVO framework, incorporating a microcloud-edge-terminal architecture, cooperative caching mechanism, fine-grained partial offloading, dynamic weight adjustment, and UAV selection strategy, effectively addresses resource scheduling in complex multi-UAV MEC environments. Simulations show adaptive optimization of objectives, intelligent energy management, low latency, and reduced total system overhead, improving service stability and user experience. This research provides a practical solution for efficient UAV edge computing in dynamic environments. Future work will explore dynamic energy efficiency optimization and multi-node collaboration while maintaining low-latency performance.
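The dynamic weight mechanism can be sketched as a function of the system's remaining-energy ratio. The abstract does not give the exact weighting rule, so the switch point, linear forms, and function names below are assumptions for illustration:

```python
def dynamic_weights(remaining_energy, total_energy, critical_ratio=0.3):
    """Illustrative dynamic delay/energy weighting (assumed rule, not the
    paper's exact mechanism): while energy is plentiful the objective favors
    delay; once the remaining-energy ratio drops below the critical level,
    the weight shifts sharply toward energy consumption."""
    ratio = max(0.0, min(1.0, remaining_energy / total_energy))
    if ratio >= critical_ratio:
        w_energy = 0.5 * (1.0 - ratio)                          # mild emphasis
    else:
        w_energy = 0.5 + 0.5 * (1.0 - ratio / critical_ratio)   # critical mode
    return 1.0 - w_energy, w_energy  # (w_delay, w_energy), summing to 1

def total_cost(delay, energy, remaining_energy, total_energy):
    """Weighted system overhead actually minimized by the offloading search."""
    w_d, w_e = dynamic_weights(remaining_energy, total_energy)
    return w_d * delay + w_e * energy
```

Such a rule reproduces the behavior reported above: near-pure delay optimization at light load, and a flattened energy-consumption curve once the critical energy state activates the mechanism.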
Intelligent Protection Method for Personalized Location Privacy in 3D MCS Scenario
MIN Minghui, YE Jun, WEI Xipeng, MIN Bo, LI Shiyin
Available online  , doi: 10.11999/JEIT251237
Abstract:
  Objective  With the widespread adoption of intelligent mobile devices and growing reliance on location-based services, Mobile CrowdSensing (MCS) systems have become a critical infrastructure for urban sensing and smart city applications. In complex 3D environments such as hospitals and shopping malls, real-time user location data uploaded during task execution can be exploited by untrusted servers or external attackers, resulting in severe privacy risks. Existing location privacy protection methods are largely designed for 2D spaces and rely on fixed privacy budgets, lacking adaptability to dynamic user energy states, personalized privacy requirements, and inference attacks. These limitations hinder the simultaneous optimization of location privacy and service quality in 3D MCS systems. This paper proposes a personalized privacy-protection task assignment mechanism that integrates 3D Geo-Indistinguishability (3DGI) and distortion privacy, enabling dynamic optimization of location perturbation strategies and task allocation in complex 3D environments.  Methods  A dynamic 3D MCS system model is established, incorporating user energy states, task execution costs, individual privacy preferences, and attacker Bayesian inference behaviors. A reinforcement learning approach is adopted to learn personalized location perturbation strategies through continuous interaction with the environment. Specifically, a Proximal Policy Optimization (PPO)-based mechanism, PPOM, is proposed. It employs an Actor-Critic architecture to operate in a continuous action space for effective policy learning. A utility-driven reward function integrating user privacy feedback and server profit allows the system to optimize privacy protection and economic benefit simultaneously.  
Results and Discussions  Extensive simulations on synthetic and GeoLife datasets demonstrate that PPOM outperforms 3DGI, 3DGI-PPOM, and LEAPER under Single-user Single-task (S-S) and Single-user Multi-task (S-M) scenarios. PPOM achieves superior 3D location privacy protection through personalized perturbation and two-dimensional action space design. Server net profit is maintained at a level comparable to 3DGI-PPOM while system utility is significantly improved, even under high user privacy preferences. LEAPER underperforms due to its 2D-oriented design. Overall, PPOM dynamically balances personalized privacy protection and server economic benefits in complex 3D MCS environments.  Conclusions  This study presents a reinforcement learning-based mechanism for personalized 3D location privacy protection and task assignment in dynamic MCS systems. Key contributions include: (1) a personalized privacy protection framework integrating 3DGI and distortion privacy, accounting for user energy status, task costs, privacy preferences, and attacker Bayesian inference in real time; (2) a perturbation policy optimization mechanism, PPOM, based on the PPO with an Actor-Critic structure, Gaussian sampling, and advantage-based learning to enhance robustness and stability in continuous high-dimensional action spaces; (3) a privacy-aware task assignment model using inferred locations from perturbed data, with a utility function jointly quantifying privacy protection and server profit, achieving dynamic trade-offs between user privacy and service quality under resource constraints.
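The 3DGI-style location perturbation underlying PPOM's continuous action space can be sketched as follows. The form is an assumption: the planar-Laplace mechanism of geo-indistinguishability extended to 3D, with a uniformly random direction and a Gamma(3, 1/eps) radius; the paper's exact mechanism may differ.

```python
import math
import random

def perturb_3d(loc, eps, rng=random):
    """Perturb a 3D location: uniform random direction on the unit
    sphere, radius drawn from Gamma(3, 1/eps) -- the assumed 3D
    analogue of the planar-Laplace radius distribution."""
    while True:  # Gaussian trick for a uniform direction
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        n = math.sqrt(x * x + y * y + z * z)
        if n > 1e-12:
            break
    r = rng.gammavariate(3, 1.0 / eps)
    return (loc[0] + r * x / n, loc[1] + r * y / n, loc[2] + r * z / n)
```

A larger privacy budget eps shrinks the expected perturbation radius (E[r] = 3/eps under this assumed form), which is the privacy-utility knob a learned policy would tune per user.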
A Radio Frequency Fingerprint Open-set Identification Method Combining Multi-scale Wavelet Front-end and Hyperspherical Metric Learning
TIAN Xinyu, LI Zirui, ZHENG Qinghe, ZHOU Fuhui, YU Lisu, HUANG Chongwen, JIANG Weiwei, SHU Feng, ZHAO Yizhe
Available online  , doi: 10.11999/JEIT260214
Abstract:
  Objective  Open-set Radio Frequency Fingerprint (RFF) identification under low Signal-to-Noise Ratio (SNR) conditions is challenging because fingerprint features are easily masked by noise, multipath effects induce nonlinear distortions, and existing methods struggle with feature extraction and unknown device detection. This study proposes a deep learning framework that integrates a multi-scale wavelet front-end with hyperspherical metric learning to achieve robust open-set RFF identification.  Methods  The proposed method, MS-RANet, comprises three key components. First, a multi-scale wavelet front-end based on one-dimensional stationary wavelet transform performs full-resolution, multi-scale decomposition of I/Q signals, preserving discriminative fingerprint information while suppressing noise. Second, a multi-scale residual attention network incorporates deep residual learning, global self-attention, and Bidirectional LSTM (BiLSTM) to enhance sensitivity to subtle fingerprint features and capture long-range temporal dependencies. Third, hyperspherical metric learning constrains the feature space onto a unit hypersphere, optimizing angular margins to produce compact intra-class and separable inter-class feature distributions. Unknown devices are subsequently detected using cosine similarity.  Results and Discussions  Experiments on a high-fidelity IEEE 802.11 simulation dataset demonstrate the effectiveness of MS-RANet. The method achieves an average classification accuracy of 65.34% across SNR levels from –5 dB to 20 dB, and an Area Under the Curve (AUC) of 0.81 at –5 dB SNR, outperforming DNN, GRU, CNN-LSTM, ResNet50, and DRSN-CA. Confusion matrices and Receiver Operating Characteristic (ROC) curves confirm robustness under extreme channel conditions. t-SNE visualization shows well-separated, compact clusters for known devices, while unknown samples are effectively isolated from known class regions. 
Ablation studies verify the contributions of the multi-scale wavelet front-end, global attention, BiLSTM, and hyperspherical metric learning modules.  Conclusions  This study presents a robust open-set RFF identification method combining a multi-scale wavelet front-end with hyperspherical metric learning. The framework exhibits strong noise resilience, enhanced feature discrimination, and reliable detection of unknown devices under low-SNR and multipath fading conditions. Future work will focus on reducing computational complexity, improving inference speed, evaluating generalization across diverse scenarios and protocols, and integrating the method with complementary physical-layer security mechanisms for collaborative authentication.
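The cosine-similarity rule for unknown-device detection described in the Methods can be sketched as follows (the class-prototype representation and the threshold value are illustrative assumptions, not the paper's settings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def open_set_classify(feature, prototypes, threshold):
    """Assign the known class whose prototype is most similar;
    reject as 'unknown' when the best similarity falls below
    the threshold."""
    best_cls = max(prototypes, key=lambda c: cosine(feature, prototypes[c]))
    if cosine(feature, prototypes[best_cls]) < threshold:
        return "unknown"
    return best_cls
```

Because hyperspherical metric learning constrains features to the unit sphere with angular margins, cosine similarity to class prototypes is a natural open-set test: known devices cluster tightly around their prototypes, while unknown devices fall below the angular threshold.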
Prior-guided Temporal Fusion Method for Multi-UAV Cooperative Obstacle-avoidance Route Planning
WANG Ao, LI Dapeng, XU Yifan, FAN Bingyang, HAN Guang, ZHAO Haitao
Available online  , doi: 10.11999/JEIT251231
Abstract:
  Objective  Traditional multi-agent reinforcement learning methods for multi-Unmanned Aerial Vehicle (UAV) cooperative obstacle-avoidance route planning in cluttered 3D environments often suffer from slow convergence, weak coordination, and limited global awareness under partial observability. To address these limitations, this paper proposes a prior-guided temporal fusion value-decomposition framework, termed Prior-Guided-LSTM-QMIX (PGL-QMIX). The method uses local heuristic scores derived from offline A* reference paths to guide decision-making under partial observability. The aim is to reduce route length, avoid collisions, and preserve real-time planning capability.  Methods  The multi-UAV cooperative obstacle-avoidance route-planning task is formulated as a Partially Observable Markov Decision Process (POMDP). In the offline stage, A* is used to generate a reference path for each UAV. During online execution, only the locally visible path segment is extracted, and heuristic scores are constructed from this local prior information and fused with each UAV’s local observation. An individual-level Long Short-Term Memory (LSTM) network is used to capture temporal dependencies in local perception and prior guidance, whereas a system-level LSTM-based mixing network dynamically generates the mixing weights and bias for value decomposition, thereby enabling coordinated joint action-value estimation. Potential-based reward shaping is further adopted to improve training stability.  Results and Discussions  Simulation results in 3D grid environments show that PGL-QMIX converges faster and more stably than QMIX, VDN, and MAPPO. Compared with QMIX, the proposed method reduces the average route length by 8.8%, 12.3%, and 16.1% in three scenarios, respectively. It also improves convergence speed by 20.5%, 26.6%, and 38.1%, and increases the steady-state task success rate by 5.22, 14.99, and 37.25 percentage points, respectively. 
In addition, the generated trajectories are shorter and more efficient across different map sizes.  Conclusions   PGL-QMIX improves coordination, safety, and route efficiency for multi-UAV cooperative obstacle avoidance in cluttered 3D environments. By integrating heuristic prior guidance, recurrent temporal fusion, and value decomposition, the proposed method achieves faster convergence, higher success rates, and better generalization than existing baselines. Future work will incorporate real UAV dynamic constraints and communication-aware cooperative obstacle avoidance.
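The potential-based reward shaping mentioned in the Methods follows the standard form r' = r + gamma * Phi(s') - Phi(s), which provably preserves optimal policies. A minimal sketch, assuming the potential is the negative remaining distance along the A* reference path (the actual potential function used in the paper is not specified here):

```python
def potential(dist_to_goal):
    """Assumed potential: progress along the A* reference path,
    expressed as negative remaining distance to the goal."""
    return -float(dist_to_goal)

def shaped_reward(r, dist, next_dist, gamma=0.99):
    """Potential-based shaping r' = r + gamma*Phi(s') - Phi(s),
    the form that preserves the set of optimal policies."""
    return r + gamma * potential(next_dist) - potential(dist)
```

With this choice, a step that moves a UAV closer to its goal receives a positive shaping bonus and a step that moves away is penalized, which densifies sparse rewards and accelerates convergence.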
A Quantum-resistant Threshold Signature Scheme for Database Audit Logs
CHEN Dajiang, ZHANG Yiwen, JIAO Lihua, WANG Baizheng, CHEN Ruidong
Available online  , doi: 10.11999/JEIT251320
Abstract:
  Objective  Database audit logs are a core basis for ensuring data integrity, accountability, and traceability in distributed systems. However, current audit-log protection mechanisms still rely on classical public-key signature algorithms such as RSA and ECDSA, which are vulnerable to quantum attacks. Shor’s algorithm can break integer-factorization- and discrete-logarithm-based cryptography in polynomial time, while Grover’s algorithm reduces the brute-force security of hash-based and symmetric primitives. These threats weaken the long-term reliability of existing database audit-log protection mechanisms in cloud and data-intensive environments. To address this issue, a quantum-resistant framework for database audit logs is proposed to satisfy practical requirements for efficiency, real-time verification, scalable deployment, and distributed trust management. The goal is to provide a robust cryptographic foundation for next-generation database audit-log systems with unforgeability and tamper resistance under quantum threats.  Methods  A hybrid hash-based signature layer is constructed by combining the Forest Of Random Subsets (FORS) few-time signature and the eXtended Merkle Signature Scheme-Tree (XMSS-T). FORS supports efficient signing for high-frequency log events, whereas XMSS-T organizes authentication paths in a Merkle-tree hierarchy for scalable state management. This combination yields a multi-level quantum-resistant signing structure. A Shamir (r, n) threshold secret-sharing mechanism is then adopted to split the signing key into multiple shares managed by independent audit agents. This design avoids a single point of failure, supports collaborative attestation, and ensures that no single party holds complete signing authority. In addition, a chained-hash structure is used to bind consecutive log entries through one-way linkage, thereby ensuring tamper evidence and chronological integrity. 
The framework further defines a complete set of system algorithms, including setup, key distribution, partial-signature generation, signature aggregation, log-chain update, and verification, all of which operate efficiently in a distributed setting. For formal security analysis, the scheme is modeled in the Quantum Random Oracle Model (QROM), and adversarial capabilities are characterized through UF-CMA, IND-CCA2, and IND-CKA2 games to capture forgery, decryption misuse, and index-indistinguishability attacks. A prototype implementation is developed and evaluated under realistic multi-node settings across different log scales, message sizes, interval configurations, and threshold ratios.  Results and Discussions  Experimental results show that the proposed scheme achieves a good balance between quantum-resistant security and system performance. For large-scale logs, the average signing latency increases linearly with log volume, which supports the efficiency of the chained-hash structure (Table 2). Compared with representative quantum-resistant signatures such as Dilithium and SPHINCS+, the threshold-signing design reduces the peak computational burden on individual nodes while preserving strong security guarantees. The system also maintains a stable throughput of about 2 000 operations per second. The message-size analysis shows that latency increases with message size but remains manageable even when the message exceeds 4 kB (Fig. 2(b)). Additionally, variation in the threshold ratio (r/n) has a measurable but moderate effect on system latency. A higher threshold improves resistance to collusion, but slightly increases delay (Fig. 2(e)). The interval-based chained-signing strategy further reduces the signing frequency and improves throughput without weakening log-integrity guarantees. These results indicate that the proposed scheme is well suited to cloud-based and distributed database environments that require real-time auditing and high-volume log processing.  
Conclusions  A quantum-resistant mechanism for database audit logs is presented by integrating hash-based signatures, threshold secret sharing, and chained log-integrity protection. The scheme provides strong quantum-resistant security guarantees, including provable unforgeability, confidentiality, and tamper resistance, supported by formal proofs in the QROM. Experimental results show that the mechanism maintains high signing and verification efficiency under large-scale deployment, with good scalability across different log volumes, message sizes, and threshold settings. Owing to its distributed trust model and quantum-resistant cryptographic basis, the proposed scheme offers a practical and secure solution for next-generation database audit systems in cloud computing, big-data processing, and compliance-critical environments.
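The Shamir (r, n) threshold sharing used to split the signing key can be sketched over a prime field as follows (the field size and evaluation points are illustrative choices, not the scheme's actual parameters):

```python
import random

P = 2**61 - 1  # a Mersenne prime; illustrative field size

def split_secret(secret, r, n):
    """Shamir (r, n) sharing: a random degree-(r-1) polynomial with
    constant term = secret, evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(r - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0 over
    GF(P); any r distinct shares suffice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any r of the n audit agents can jointly reconstruct (or, in the full scheme, jointly sign without reconstructing), while fewer than r shares reveal nothing about the key, which is exactly the distributed-trust property the abstract describes.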
Jointly Improving Information Timeliness and Fidelity under Finite-Blocklength Source Coding in a Wireless IoT System
DUAN Jianxin, ZHANG Tianci, CHEN Zhengchuan, ZHANG Di, ZHU Xu, TIAN Zhong, WANG Min, ZHANG Lütianyang
Available online  , doi: 10.11999/JEIT251057
Abstract:
  Objective  Wireless Internet of Things (IoT) information update systems are essential for time-sensitive applications. In these systems, timely information delivery with high fidelity is critical for accurate sensing, estimation, and decision-making. However, short-packet transmission and strict latency requirements make classical asymptotic rate-distortion theory insufficient for characterizing practical system performance. Under finite-blocklength source coding, shorter source-coding blocklengths reduce latency but increase distortion, whereas longer source-coding blocklengths improve information fidelity at the cost of higher delay. This leads to a fundamental trade-off between information timeliness and information fidelity, which remains insufficiently characterized in the non-asymptotic regime.  Methods  Age of Information (AoI) and Mean Squared Error (MSE) are used to quantify information timeliness and information fidelity, respectively. Closed-form expressions for time-average AoI and time-average MSE are derived under finite-blocklength source coding. Based on distortion tolerance, excess distortion probability, and transmission rate, a joint optimization problem is formulated to minimize the weighted-sum objective of time-average AoI and time-average MSE. The monotonicity and convexity of the objective function are analyzed with respect to these design variables. An alternating iterative algorithm is then developed to jointly optimize distortion tolerance, excess distortion probability, and transmission rate.  Results and Discussions  Numerical simulations are conducted under different weight settings to examine the trade-off between information timeliness and information fidelity in representative operating scenarios. The proposed framework reveals the effect of finite-blocklength parameters on system performance. The results show that the proposed method balances AoI and MSE under different design priorities. 
At a transmit power of 20 dB, the weighted-sum metric of the scheme with the highest distortion tolerance is improved by approximately 33.7% compared with that of the scheme with the lowest distortion tolerance. The maximum relative error between the theoretical analysis and Monte Carlo simulations remains below 0.3%, verifying the accuracy of the derived analytical expressions.  Conclusions  This paper presents a non-asymptotic analysis of the timeliness-fidelity trade-off in a wireless IoT information update system by explicitly considering finite-blocklength source coding. By treating distortion tolerance, excess distortion probability, and transmission rate as design variables, the proposed framework verifies the necessity of finite-blocklength modeling and the advantage of joint parameter optimization. The results provide theoretical guidance for the design and optimization of timely and high-fidelity wireless IoT systems.
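The weighted-sum trade-off between time-average AoI and time-average MSE can be illustrated with a toy model. The monotone AoI and MSE expressions below are placeholders capturing only the qualitative behavior the abstract states (longer source-coding blocklength raises delay but lowers distortion); they are not the paper's derived formulas.

```python
def objective(aoi, mse, w):
    """Weighted-sum objective: w * AoI + (1 - w) * MSE."""
    return w * aoi + (1.0 - w) * mse

# Toy monotone models (illustrative only): blocklength k raises
# delay linearly and lowers distortion hyperbolically.
def toy_aoi(k):
    return 1.0 + 0.1 * k

def toy_mse(k):
    return 1.0 / k

def best_blocklength(w, candidates):
    """Pick the blocklength minimizing the weighted-sum objective."""
    return min(candidates, key=lambda k: objective(toy_aoi(k), toy_mse(k), w))
```

Even in this toy setting, the optimizer shifts toward shorter blocklengths when timeliness is weighted heavily and toward longer ones when fidelity dominates, mirroring the trade-off the joint optimization exploits.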
Spatial Information-guided Diffusion for Domain Adaptation Semantic Segmentation of Remote Sensing Images
LIANG Yan, LI Jun-Fan, SHAO Kai, HU Lin
Available online  , doi: 10.11999/JEIT260031
Abstract:
  Objective  Domain Adaptation Semantic Segmentation (DASS) is critical for remote sensing applications, including land-cover mapping, urban planning, and environmental monitoring. However, deep learning models often show severe performance degradation under domain shifts caused by imaging variation, geographic differences, and label-semantic heterogeneity. Conventional feature-alignment and generative adversarial network-based methods often fail to preserve semantic consistency. They are also sensitive to noisy supervision, especially when cross-domain gaps are large. This work aims to construct a robust DASS framework for semantically consistent image translation and reliable knowledge transfer.  Methods  A two-stage framework, termed Co-training Spatial-Guided DASS (CoSG-DASS), is proposed by integrating image translation and co-training. In the image-translation stage, a spatial information-guided latent diffusion model enhanced by ControlNet is designed. Semantic pseudo-labels and depth estimates are used as horizontal semantic and vertical spatial conditions to guide target-style image generation. To reduce the effect of noisy pseudo-labels, an Entropy-based Adaptive Guidance Intensity Module (EAGIM) is introduced. EAGIM estimates pixel-level confidence using information entropy and suppresses unreliable features. In the co-training stage, translated target-style images and unlabeled real target-domain images are used to train a segmentation model with a depth-guided segmentation head. Cross-entropy loss and adversarial loss are jointly used for optimization.  Results and Discussions  Extensive experiments are conducted on three cross-domain tasks. CoSG-DASS generates images that better match target-domain distributions. Quantitative results based on Fréchet Inception Distance (FID) show that the proposed method outperforms CycleGAN, UNI-Diff, and CRS-Diff in most settings (Table 1). Visual comparisons (Fig. 
6) show that the method reduces edge blurring and category confusion. It also improves the separation of roads and vegetation and preserves small objects, such as vehicles. In the semantic segmentation stage, CoSG-DASS outperforms state-of-the-art domain adaptation methods. It improves mean Intersection over Union (mIoU) by 1.14%, 3.78%, and 2.49% on the cross-geographic task (Vaihingen IRRG→Potsdam IRRG), cross-imaging-mode task (Vaihingen IRRG→Potsdam RGB), and bidirectional label-semantic-heterogeneity tasks between DFC25 and LoveDA, respectively (Tables 2–4). Visual segmentation results (Fig. 7) confirm its strong boundary preservation and high accuracy in complex scenes. Ablation studies (Table 5) verify the contribution of the core components, including depth control, pseudo-label guidance, EAGIM, and the co-training strategy. Feature-distribution visualization based on Uniform Manifold Approximation and Projection (UMAP) further shows that CoSG-DASS reduces intra-class variation and increases inter-class separation after adaptation (Fig. 8).  Conclusions  CoSG-DASS alleviates domain shifts in remote sensing images through semantic-preserving diffusion-based translation and depth-guided co-training. It improves both image-translation quality and segmentation accuracy over existing methods. The proposed framework provides an effective solution for multi-source remote sensing interpretation. Future work will focus on extreme label-semantic heterogeneity and lightweight diffusion architectures.
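The entropy-based confidence estimate at the heart of EAGIM can be sketched as a minimal per-pixel version (the module's actual feature-suppression logic is not shown; only the entropy-to-confidence mapping is illustrated):

```python
import math

def entropy_confidence(probs):
    """Per-pixel confidence = 1 - normalized Shannon entropy of the
    predicted class distribution: a peaked distribution maps to 1,
    a uniform (maximally uncertain) one maps to 0."""
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - h / math.log(len(probs))
```

Scaling the guidance intensity by this confidence lets reliable pseudo-label pixels steer generation strongly while uncertain pixels are down-weighted, which is how EAGIM reduces the effect of noisy pseudo-labels.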
A Study of the Effects of Amplitude and Phase Errors on the Angle-Measurement Accuracy of Phased Array Radar under Interference Cancellation Conditions
ZHAN Siheng, ZHOU Liang, SHEN Ruobin, ZHANG Jiahao, WANG Bin, MENG Jin
Available online  , doi: 10.11999/JEIT251195
Abstract:
  Objective  The electromagnetic environment is becoming increasingly complex, and mainlobe interference constrains the detection performance of phased array radars. Adaptive Interference Cancellation (AIC) can effectively suppress such interference but leads to mainlobe pattern distortion and introduces azimuth angle measurement errors. Most existing studies focus on interference cancellation mechanisms, with little attention paid to the angle measurement errors introduced by this technique. Amplitude-phase channel errors in the radar receive channel also degrade angle measurement accuracy. This paper investigates the influence of amplitude-phase channel errors in the receive channel on the angle measurement errors of monopulse phased array radars equipped with no difference-difference channel.  Methods  A monopulse phased array radar with no difference-difference channel is studied, and the amplitude-phase errors in the receiving channels are modeled as a normal distribution, where the mean represents the systematic offset and the standard deviation represents random fluctuations. The operation principles of phased array radar receivers, monopulse radar systems, angle measurement theory, and mainlobe interference suppression and cancellation theory are introduced. Two angle measurement models are established through theoretical derivation: an ideal reference model and an amplitude-phase error model. Simulation results show that the radar’s effective angle measurement range is ±2.5° under ideal interference-free and error-free conditions. The jamming source is set at –1.2°, and the angle measurement results are taken as a reference for subsequent experiments. Monte Carlo simulations (100 independent tests for each parameter set) are used to analyze the statistical characteristics of angle measurement errors. Heatmaps are used to visualize the absolute errors and reveal their variation patterns.  
  Results and Discussions  (1) When there is no channel amplitude-phase error, the jamming angle is fixed at –1.2°; prior to interference cancellation, the target bearing matches the true value. After cancellation, the absolute error between the target signal and the true value near the beam normal is less than or equal to 0.1°, but null dips near the jamming angle cause abrupt changes in azimuth angle, and the error increases as the deviation from the beam normal increases. (2) Before cancellation, the azimuth angle measurement error increases with the absolute value of the amplitude mean and the incident angle, exceeding 0.06° at an amplitude mean of ±0.9 dB and an incident angle of ±2.5°. Within an incident angle range of ±2°, the error is typically <0.02°; when the amplitude mean is fixed, the error increases with the amplitude standard deviation; when the phase standard deviation is fixed, the error increases with the absolute value of the phase mean; it exceeds 0.15° at a phase mean of ±0.9°, and reaches approximately 0.6° at a phase standard deviation of 6° and an incident angle of ±2.5°. (3) After cancellation, phase error is most sensitive at an incident angle of 0.5°, where the azimuth angle measurement error reaches 0.4°. Outside this region, the error can be controlled within 0.2° and decreases rapidly as the deviation from the beam normal increases.  Conclusions  This paper quantifies the impact of amplitude-phase errors in the receiving channel on azimuth angle measurement errors before and after interference cancellation. 
The main conclusions are as follows: (1) Both amplitude and phase errors cause random fluctuations in azimuth angle measurements, with phase errors having a more significant impact; (2) In the absence of jamming, azimuth angle measurement errors are smallest near the beam axis and increase as the measurement approaches the boundaries of the effective angle measurement range; (3) In the presence of jamming and during cancellation, the azimuth angle measurement error peaks near the beam normal and decays rapidly. This study provides engineering guidance for azimuth angle measurement error assessment, error budgeting, and mainlobe interference suppression. Future research will focus on non-normal amplitude-phase errors, calibration dynamics, scenarios with multiple jamming sources, and experimental validation.
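The normal-distribution amplitude-phase error model described in the Methods can be sketched as follows. The dB/degree parameterization mirrors the text; representing each receive channel by a complex gain is an illustrative assumption.

```python
import cmath
import math
import random

def apply_channel_errors(gains, amp_mean_db, amp_std_db,
                         ph_mean_deg, ph_std_deg, rng=random):
    """Perturb each complex receive-channel gain with a Gaussian
    amplitude error (dB) and a Gaussian phase error (degrees):
    mean = systematic offset, std = random fluctuation."""
    perturbed = []
    for g in gains:
        a_db = rng.gauss(amp_mean_db, amp_std_db)
        p_rad = math.radians(rng.gauss(ph_mean_deg, ph_std_deg))
        perturbed.append(g * 10.0 ** (a_db / 20.0) * cmath.exp(1j * p_rad))
    return perturbed
```

Sweeping the four error parameters and re-running the monopulse angle estimate on the perturbed gains is the kind of Monte Carlo loop (100 trials per parameter set) the study uses to build its error heatmaps.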
A Dimension-reduction Attack on Shortest Vector Problem Using Hints
YIN Risheng, CAO Jinzheng, MA Yongliu, WANG Hong, CHENG Qingfeng
Available online  , doi: 10.11999/JEIT251277
Abstract:
  Objective  Cryptographic algorithms based on the Learning With Errors (LWE) problem and its variants are widely used, including the key encapsulation mechanism Kyber and the digital signature scheme Dilithium. In many applications, the LWE secret is a short vector. Therefore, reducing LWE to the Shortest Vector Problem (SVP) is a common approach to cryptanalysis. Traditional SVP algorithms, including enumeration, lattice sieving, and lattice basis reduction, become difficult to apply directly in high-dimensional lattices because of their high computational cost. With the use of side-channel attacks, hints about the secret vector provide a new way to solve SVP. This paper proposes a dimension-reduction attack based on such hints. The method uses hints to reduce the problem dimension, thereby extending the practical range of enumeration and sieving.  Methods  Two types of hints are analyzed: integer hints and modular hints. For integer hints, which provide exact inner-product information about the shortest vector, the problem is formulated as a system of integer equations. The solution space of this system is then used to represent the shortest vector in a shorter linear form. Hermite normal form and Gaussian elimination are applied to obtain a particular solution and a fundamental solution system. This representation reduces the number of unknown coefficients that must be searched in enumeration or sampled in sieving. Thus, the search space is reduced, and the original SVP instance is transformed into a lower-dimensional problem. For modular hints, which provide inner-product information about the shortest vector modulo an integer, a conversion mechanism based on Coppersmith’s lemma is developed. For common-modulus modular equations, Lenstra-Lenstra-Lovász (LLL) lattice basis reduction is first used to reduce the norms of row vectors. Gaussian elimination is then applied to decrease the number of nonzero terms. 
Each resulting modular equation is screened according to Coppersmith’s lemma. Equations that satisfy the conversion condition are transformed into integer equations. For non-common-modulus modular equations, the moduli are first factorized into prime-power moduli. Equations with the same modulus are grouped and processed in the same manner. The resulting integer equations are then solved using the dimension-reduction enumeration or sieving method.  Results and Discussions  To evaluate the proposed dimension-reduction attack, the enumeration-based and sieving-based algorithms are compared with the lattice basis reduction algorithm in Algorithm 5 in terms of runtime and solution exactness. The effect of key parameters on dimension reduction is first analyzed. These parameters include the number of screening rounds (Fig. 2), the small-root bound (Fig. 3), and the modulus size (Fig. 4). The conversion efficiency of Algorithm 3 under different parameter settings is summarized in Table 1. The results show that more screening rounds generally improve the reduction effect, but this improvement has a saturation point. Beyond this point, additional rounds provide limited benefit. Finally, the computational efficiency of the proposed methods is compared with that of lattice basis reduction (Fig. 5). The results show that the computational cost of enumeration and sieving increases rapidly with dimension. However, up to dimension 90, the dimension-reduction attack can still use hints to reduce the dimension and obtain exact solutions more efficiently. Lattice basis reduction shows a slower increase in runtime as the dimension grows and is therefore more suitable for higher-dimensional SVP instances.  Conclusions  The proposed dimension-reduction attack provides a simple and effective method for solving SVP using hints. For integer hints, the solution space of the corresponding equation system is used to reduce the number of variables in enumeration and sieving. 
For modular hints, Coppersmith’s lemma is used to convert selected modular equations into integer equations, reducing the problem to the integer-hint case. The experiments show that, when sufficient hints are available, the method can effectively reduce the lattice dimension and extend the practical range of enumeration and sieving. Compared with lattice basis reduction, enumeration and sieving after dimension reduction can provide exact solutions within their applicable dimension range. Although the reduction effect tends to saturate as the number of hints increases, a moderate number of hints is sufficient to achieve effective dimension reduction. These results indicate that hint-based dimension-reduction attacks offer a practical route for exact SVP solving and provide useful evidence for the security evaluation of lattice-based cryptographic schemes.
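The integer-hint dimension reduction can be illustrated in miniature: one exact inner-product hint removes one coordinate from the enumeration search. This is a brute-force toy, not the paper's Hermite-normal-form construction, and assumes the last hint coefficient is nonzero.

```python
from itertools import product

def enumerate_with_hint(v, c, bound, dim):
    """Enumerate candidate short vectors s with entries in
    [-bound, bound] satisfying the integer hint <v, s> = c,
    searching only dim-1 coordinates and solving the last
    coordinate from the hint equation."""
    solutions = []
    for prefix in product(range(-bound, bound + 1), repeat=dim - 1):
        rem = c - sum(vi * si for vi, si in zip(v[:-1], prefix))
        if rem % v[-1] == 0:
            last = rem // v[-1]
            if -bound <= last <= bound:
                solutions.append(list(prefix) + [last])
    return solutions
```

Each independent hint shrinks the search space by one dimension in the same way, which is why a moderate number of hints already brings high-dimensional SVP instances within reach of enumeration and sieving.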
A Survey of Processor Security
CHEN Congcong, GU Zhiyang, ZHANG Jiliang
Available online  , doi: 10.11999/JEIT260026
Abstract:
  Significance   Processor security is a cornerstone of modern information security. Cryptographic algorithms, operating systems, and applications have long relied on processors as trusted computing bases. However, as Moore’s Law slows, modern processors increasingly adopt aggressive microarchitectural optimization techniques to improve performance and energy efficiency, often without sufficient security consideration. This trend has led to frequent security vulnerabilities in recent years. In particular, microarchitectural timing channels, exemplified by Meltdown and Spectre, exploit timing differences caused by microarchitectural state changes to break fundamental hardware and software isolation, affecting billions of devices worldwide. At the same time, the boundary between architectural and microarchitectural behavior has become less clear, giving rise to new attack paradigms and turning timing channels from isolated hardware flaws into cross-layer system security problems.  Progress   Although substantial progress has been made in the study of timing channels, existing surveys still have several limitations. First, the mechanisms of timing channels are highly diverse, and the set of exploitable components continues to grow. Hardware-centric classification schemes are therefore insufficient to capture emerging and previously unknown attacks, and they often obscure the common features shared across different techniques. Second, as traditional microarchitectural channels become better understood and partially mitigated, leakage increasingly shifts to higher-level shared resources, including operating system policies and software-managed shared resources. However, previous studies have often treated software mainly as an execution context rather than a direct source of timing leakage. In addition, current discussions of defenses tend to emphasize individual techniques, with limited analysis of their scope and failure modes.  
Contributions   This survey systematically reviews timing channels from a cross-layer perspective and unifies hardware- and software-based timing channels under a common abstraction. Four necessary conditions for timing channel exploitation are identified, and a unified classification framework is established based on the nature of shared mutable state and the mechanisms that make timing differences observable. Within this framework, representative attacks from the past decade are comprehensively reviewed, their attack procedures are systematically analyzed, and their common features are clarified. In addition, existing defense mechanisms are classified according to the leakage conditions they are intended to disrupt, and their scope and possible failure modes are examined. This survey also reviews current automated vulnerability detection methods.  Prospects   Future research on timing channels faces several emerging challenges. New microarchitectural optimization techniques continue to create new attack surfaces, while resource sharing at the software level may produce additional forms of timing leakage. Moreover, emerging platforms, including chiplet-based architectures, cloud computing environments, hardware accelerators, and heterogeneous systems, are likely to expose new types of timing channels that require systematic study.
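The abstraction the survey builds on, i.e. that exploitation requires secret-dependent work on shared mutable state whose timing is observable, can be illustrated with a minimal, purely didactic example that is not drawn from the paper: an early-exit comparison routine whose running time leaks the secret one character at a time.

```python
def leaky_compare(secret, guess):
    """Didactic timing channel: the loop exits at the first mismatch,
    so the amount of work (counted here in 'steps') depends on how long
    a prefix of the secret the guess matches. An observer of the step
    count recovers the secret one character at a time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1                      # one unit of observable "time"
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

# A wrong first character costs 1 step; a correct prefix costs more,
# which is exactly the observable timing difference an attacker needs.
print(leaky_compare("pass", "xyzw"))    # (False, 1)
print(leaky_compare("pass", "pxyz"))    # (False, 2)
```

Replacing the step counter with wall-clock time or cache-hit latency yields the hardware variants the survey classifies; the four leakage conditions map onto the secret, the shared comparison state, the timing difference, and its observability.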
PLS-YOLO: A Lightweight Model for Signal Modulation Recognition
ZHOU Xiaobo, ZHANG Fan, SHE Chao, ZHOU Guofei, MENG Jianping
Available online  , doi: 10.11999/JEIT251377
Abstract:
  Objective  As wireless communication evolves toward high communication efficiency, low latency, and ubiquitous connectivity, stringent demands are imposed on Automatic Modulation Recognition (AMR) technology to ensure link reliability within complex electromagnetic environments. While deep learning has significantly enhanced recognition accuracy compared to traditional methods, which are often characterized by high subjectivity and poor robustness, existing YOLO-based AMR models remain unoptimized for specific signal characteristics and deployment scenarios. These models typically suffer from excessive parameters and high computational complexity, rendering them unsuitable for resource-constrained hardware such as edge nodes and FPGAs, and unable to meet real-time communication demands. A lightweight AMR model based on YOLOv10n, denoted as PLS-YOLO, is proposed to resolve the critical bottlenecks that restrict practical deployment of AMR techniques. By employing targeted strategies such as optimizing network channels, replacing core modules, and enhancing the down-sampling mechanism, integrated modulation signal classification and localization is realized efficiently. Furthermore, significant reductions in model parameters and computational complexity are achieved, thereby adapting AMR models to resource-limited scenarios and providing technical support for their practical deployment.  Methods  The experimental methodology centers on two core stages: dataset preprocessing and the construction of the PLS-YOLO model. In the preprocessing phase, the public benchmark datasets RadioML2016.10a and RadioML2016.10b from the field of signal modulation recognition are utilized as the foundation.
For the In-phase and Quadrature (IQ) signals within these datasets, the Short-Time Fourier Transform (STFT) is employed to map one-dimensional temporal signals into two-dimensional time-frequency spectrograms containing critical information such as phase and amplitude, thereby providing richer feature representations for the model. Subsequently, a random sampling strategy without replacement is adopted to stitch single time-frequency samples into 3×3 aggregated images (Fig. 4), while target labels matching the input format of YOLO series models are synchronously generated. The dataset is ultimately partitioned into training, validation, and test sets at a ratio of 7:1.5:1.5 via stratified sampling to ensure the consistency of signal type distribution across all subsets. The model construction is based on the YOLOv10n architecture, with specific improvements implemented to achieve balance between parameter quantity and recognition performance in modulation recognition tasks. The C2f module in the original backbone network is replaced by the CSPPC module, based on the CSP architecture and comprising feature splitting, partial convolution processing, and feature fusion, to achieve the dual objectives of parameter reduction and recognition rate enhancement. Furthermore, the feature dimensionality reduction process of the backbone network is reconstructed to effectively mitigate the surge in computational load caused by parameter redundancy. The traditional down-sampling module is replaced by the innovative CGBlock, enhancing the capability to capture features of complex modulation signals by fusing context-aware information, thereby elevating recognition performance. Finally, standard convolutions in the PSA module and the v10Detect module are replaced with Partial Convolutions to further reduce computational complexity, realizing a synergistic optimization of lightweight design and recognition performance.  
  Results and Discussions  Experimental results on the RadioML2016.10a dataset indicate that the PLS-YOLO model achieves a mean Average Precision (mAP) of 68.4% within the signal-to-noise ratio (SNR) range of -20 to 18 dB, which further increases to 94.3% when the SNR is no less than 0 dB. Compared with the basic YOLOv10n model, PLS-YOLO attains a slight mAP improvement of 0.6% while reducing the parameter count by 47.33% and computational complexity by 34.15%, alongside an increase in inference speed by 5 FPS (Table 2). These findings verify that the model effectively balances performance with lightweight requirements by significantly decreasing computational costs while enhancing precision. To validate robustness, supplementary experiments are conducted on the RadioML2016.10b dataset. As shown in Table 4, the model achieves an mAP of 72.6% across the -20 to 18 dB range and 95.4% for SNR ≥ 0 dB, outperforming mainstream models such as MCNET and LSTM2, thereby demonstrating the superior performance of PLS-YOLO. Furthermore, as illustrated in Fig. 5, it is observed that converting IQ data into spectrograms for PLS-YOLO recognition is more adaptive to digital modulation signals, whereas performance on analog modulation signals remains suboptimal; consequently, future research should focus on enhancing the recognition capabilities for analog signals.  Conclusions  This study proposes PLS-YOLO, a lightweight Automatic Modulation Recognition model based on YOLOv10n. To achieve synergistic optimization of modulation recognition performance and model lightweighting, the model structure is systematically improved through targeted strategies, including network channel dimension reduction, core functional module iteration, down-sampling mechanism innovation, and partial convolution replacement.
Consequently, core bottlenecks prevalent in existing YOLO-based AMR models—such as parameter redundancy, high computational complexity, and limited adaptability to resource-constrained scenarios like edge nodes and FPGAs—are significantly alleviated. Experimental results on the RadioML2016.10a and RadioML2016.10b benchmark datasets demonstrate that PLS-YOLO exhibits superior comprehensive performance. While the integrity of integrated signal classification and localization functions is maintained, both the parameter count and the computational complexity are significantly reduced compared to the baseline YOLOv10n, accompanied by a notable enhancement in recognition accuracy, thereby significantly outperforming mainstream comparative models. In conclusion, the effectiveness and feasibility of the proposed optimization strategies are verified, providing a reliable technical path for the engineering implementation of AMR technology, while the identified potential for improvement in analog modulation signal recognition clarifies specific directions for future research.
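The IQ-to-spectrogram preprocessing step described in the Methods can be sketched as follows; the STFT window parameters and the toy QPSK-like burst are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy import signal

def iq_to_spectrogram(iq, nperseg=32, noverlap=24):
    """STFT a 1-D complex IQ sequence into a 2-D magnitude spectrogram,
    then min-max normalise it so it can be fed to an image-based model.
    Window sizes are illustrative, not the paper's settings."""
    _, _, Z = signal.stft(iq, nperseg=nperseg, noverlap=noverlap,
                          return_onesided=False)   # two-sided for complex IQ
    spec = np.abs(Z)
    return (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)

# Toy QPSK-like burst of 128 samples (RadioML2016 frames are 128 long).
rng = np.random.default_rng(0)
consts = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
spec = iq_to_spectrogram(rng.choice(consts, size=128))
print(spec.shape)   # (frequency bins, time frames)
```

Single spectrograms produced this way would then be stitched into the 3×3 aggregated images and paired with YOLO-format labels, as the abstract describes.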
Shallow-Water Geoacoustic Parameter Inversion Using Stokes Parameters and an Attention-Enhanced Multi-Task U-Net
HUANG Qianzhuo, LI Xiaoman, BI Xuejie, ZHANG Zishi, TONG Han, LI Fei
Available online  , doi: 10.11999/JEIT251085
Abstract:
  Objective  Geoacoustic parameters in shallow water are critical for characterizing underwater acoustic propagation. Traditional inversion methods, however, are limited by high computational complexity, high cost, and strong dependence on the accuracy of environmental models. To address these issues, an efficient and robust inversion method is proposed to improve the reliability and stability of shallow-water geoacoustic parameter estimation while preserving computational efficiency.  Methods  This method is developed from the Stokes parameters of the vector acoustic field. Signals received by a single vector hydrophone are processed with a warping transform to separate and extract the normal modes propagating in a shallow-water waveguide. The extracted signals are then used to calculate the Stokes parameters, which are normalized and used as input features for the inversion model. An attention-enhanced multi-task U-Net is constructed with a shared encoder and multiple prediction branches to estimate key geoacoustic parameters, including compressional wave velocity, shear wave velocity, density, compressional wave attenuation, and shear wave attenuation. In addition, channel attention and spatial attention, together with a multi-task loss function with uncertainty weighting, are used to improve feature extraction and adaptively balance the different parameter inversion tasks.  Results and Discussions  The attention mechanism is shown to suppress fluctuations in model predictions and to improve the accuracy and stability of geoacoustic parameter inversion. When 200 test samples are evaluated, the mean absolute percentage errors of both compressional wave velocity and seabed density remain below 5% (Table 3). After the attention mechanism is introduced, the errors in compressional wave velocity and seabed density are further reduced to below 3% (Table 5), which indicates improved prediction accuracy for these key parameters. 
The proposed method is also shown to be insensitive to parameter mismatch and to have strong robustness to environmental variation. Furthermore, the method is validated with measured data from a shallow-water region in the northern South China Sea, and its effectiveness and reliability in practical applications are confirmed (Table 6 and Fig. 9). These results show that the attention-enhanced multi-task U-Net effectively captures critical features from the Stokes parameters and yields more stable and accurate geoacoustic parameter estimation in shallow-water environments.  Conclusions  The inversion method based on the Stokes parameters and an attention-enhanced multi-task U-Net effectively improves the accuracy and stability of shallow-water geoacoustic parameter estimation and shows strong performance in the prediction of compressional wave velocity, shear wave velocity, and density. However, limitations remain in the inversion of seabed attenuation. Future work should focus on improving feature extraction methods and network architecture and on testing the applicability of the method under more complex marine conditions.
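The "multi-task loss function with uncertainty weighting" named in the Methods is commonly implemented as homoscedastic-uncertainty weighting with one learnable log-variance per task; a minimal sketch under that assumption (the authors' exact formulation may differ):

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Multi-task loss with learned uncertainty weighting: task i
    contributes exp(-s_i) * L_i + s_i, where s_i is that task's
    log-variance. High-uncertainty tasks are down-weighted
    automatically, and the +s_i regulariser keeps the variances
    from growing without bound."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Five branches: c_p, c_s, density, alpha_p, alpha_s (illustrative values).
losses = [0.4, 0.9, 0.2, 1.5, 1.1]
print(uncertainty_weighted_loss(losses, [0.0] * 5))  # sum of losses when s_i = 0
```

In training, the log-variances would be optimized jointly with the network weights, letting the model rebalance the five parameter-inversion branches adaptively.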
SG-DDPG Low-Intercept Point Beam Design for FDA-MIMO Short-Range Detector
JIA Jinwei, GAO Min, HAN Zhuangzhi, LIU Limin, YIN Yuanwei
Available online  , doi: 10.11999/JEIT260010
Abstract:
  Objective  The radio short-range detector is the most widely used and most extensively deployed type of short-range detector worldwide. However, in modern battlefields, the electromagnetic environment is becoming increasingly complex, and radio short-range detectors have to cope with various forms of electromagnetic interference. In particular, the fourth-generation jammer, which implements repeater deception jamming based on Digital Radio Frequency Memory (DRFM), is highly likely to cause failures such as premature detonation in radio short-range detectors, significantly reducing their damage control capability. Therefore, research on anti-repeater-deception jamming for short-range detectors is urgently needed. In particular, improving the low-interception capability of the radio detector can effectively counter repeater deception jamming.  Methods  In this paper, Frequency Diverse Array (FDA)–Multiple-Input Multiple-Output (MIMO) technology is employed, and the key factors influencing beam convergence are identified. For the design of a spatially low-interception beam for short-range detectors, a performance evaluation model of spatial low interception is constructed. An FDA-MIMO low-intercept point beam design technique based on the Stage Guidance-Deep Deterministic Policy Gradient (SG-DDPG) algorithm is then proposed. In the SG-DDPG algorithm, a multi-dimensional staged guidance reward function is designed. Through the Actor-Critic model, gradient ascent is used to maximize the reward value, thereby obtaining the array-element frequency offsets that yield better beam-convergence performance in the current environment.
Meanwhile, the SG-DDPG algorithm applies to Low-Probability-of-Intercept (LPI) point beam design across various radio detector fall angles, overcoming the technical bottleneck that the formula-based method for computing the array-element frequency offset applies only when the radio detector's fall angle is close to vertical.  Results and Discussions  The simulations show that, with the array-element frequency offsets optimized by the SG-DDPG algorithm, the FDA-MIMO beam exhibits a half-power beam width of 1 m in the range dimension and 9.9° in the angular dimension. The beam convergence and LPI performance of the proposed method are significantly better than those of other classical frequency-offset calculation methods. Thus, the proposed algorithm represents a new approach to array-element frequency-offset optimization and LPI point beam design, effectively improving the radio detector's LPI performance.  Conclusions  This paper presents an FDA-MIMO LPI point beam design method based on the SG-DDPG algorithm, with the array-element frequency offset as the optimization variable. The simulation results show that: (1) The proposed method overcomes the limitation that the fall angle must be close to vertical when the array-element frequency offset is calculated by the formula-based method; the algorithm can be applied to the design of LPI beams under various radio detector fall angles, where it achieves improved LPI performance; (2) The half-power beam width of the proposed method is only 1 m in the range dimension and 9.9° in the angle dimension, which is significantly better than that of traditional methods. Under different fall angles, the interception area of the beam formed by the proposed method is the smallest, demonstrating the best LPI performance.
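A staged guidance reward of the kind the abstract describes might look like the following sketch; the stages, thresholds, and weights are hypothetical stand-ins, not the paper's reward design.

```python
def staged_guidance_reward(range_width, angle_width,
                           range_goal=1.0, angle_goal=9.9):
    """Hypothetical two-stage reward: stage 1 penalises a beam that has
    not converged at all; stage 2 shapes the reward toward the target
    half-power widths. Thresholds and weights are illustrative only."""
    # Stage 1: coarse guidance -- flat penalty for a diverging beam.
    if range_width > 10 * range_goal or angle_width > 10 * angle_goal:
        return -1.0
    # Stage 2: fine guidance -- dense shaping; each term reaches 1 at the goal.
    r_term = range_goal / max(range_width, range_goal)
    a_term = angle_goal / max(angle_width, angle_goal)
    return 0.5 * (r_term + a_term)

print(staged_guidance_reward(2.0, 9.9))   # 0.75: range width still twice the goal
print(staged_guidance_reward(1.0, 9.9))   # 1.0: both goals met
```

Feeding a reward of this shape to a DDPG-style Actor-Critic gives dense guidance early in training and a sharp optimum at the target beam widths, which is the role the abstract assigns to the staged reward.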
Full-Space Covert Transmission Assisted by XL-STAR-RIS for Integrated Sensing and Communication
XIE Wenwu, ZHANG Qinke, YANG Liang, WANG Ji, YU Chao, LIU Xinzhong, CUI Yaru
Available online  , doi: 10.11999/JEIT260145
Abstract:
  Objective  The evolution of 6G towards higher frequencies and larger antenna arrays positions Integrated Sensing And Communication (ISAC) as a key enabling technology. However, ISAC faces inherent challenges, including poor communication concealment and resource competition between sensing and communication functions. While covert communication and Reconfigurable Intelligent Surfaces (RIS) offer promising solutions, existing research predominantly employs reflective RIS with limited half-space coverage and often operates under unrealistic far-field assumptions. To address these gaps, this paper proposes a novel near-field, full-space ISAC framework assisted by an XL-STAR-RIS. The core objective is to jointly optimize active and passive beamforming to enhance communication covertness and rate while strictly maintaining sensing performance, thereby providing a new paradigm for secure 6G networks.  Methods  The methodology begins with an analysis of the warden's detection capability, deriving a lower bound on its minimum detection error probability. Subsequently, a non-convex optimization problem is formulated to maximize the covert communication rate, constrained by sensing performance, a covertness threshold, and transmit power limits. The coupling between the active beamforming vectors and the passive RIS coefficients makes a direct solution intractable. Therefore, an Alternating Optimization (AO) framework is adopted, decomposing the problem into two tractable subproblems. The active beamforming subproblem is solved using Semidefinite Relaxation (SDR) enhanced with a PSCA method. The passive beamforming subproblem is handled via the Dinkelbach algorithm, incorporating a rank-one constraint penalization technique. These subproblems are solved iteratively within the AO loop until convergence is achieved.  Results and Discussions  Simulation results validate the proposed framework. The algorithm demonstrates efficient convergence within approximately 10 iterations.
It achieves a superior covert communication rate of 11.5 bps/Hz, significantly outperforming baseline passive-RIS (9.8 bps/Hz) and non-RIS (8.0 bps/Hz) schemes. The performance advantage is further magnified with increased transmit power, highlighting excellent power adaptability. Crucially, the framework maintains robust performance under stringent conditions: it sustains a higher covert rate than benchmarks when sensing requirements are elevated, and preserves a high communication rate even under stricter covertness constraints. These results conclusively demonstrate that the joint XL-STAR-RIS beamforming optimization effectively balances the tripartite trade-off between communication, sensing, and covertness in near-field ISAC scenarios.  Conclusions  This paper presents an XL-STAR-RIS-assisted covert communication framework for near-field ISAC systems. By jointly designing active and passive beamforming through an efficient alternating optimization algorithm, the framework successfully balances communication rate, sensing accuracy, and transmission covertness. Comprehensive simulations confirm its superiority over conventional schemes, particularly under stringent operational constraints, proving its potential for secure, full-space 6G applications. Future work will focus on extending the framework to scenarios with imperfect channel knowledge, dynamic environments, and multi-RIS collaboration to enhance its practicality and robustness.
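The AO decomposition described in the Methods follows a standard pattern; the sketch below shows only the loop structure on a toy biconvex objective, with the two update callables standing in for the SDR/PSCA and Dinkelbach subproblem solvers, which are not reproduced here.

```python
def alternating_optimization(f, update_w, update_theta, w0, theta0,
                             tol=1e-6, max_iter=50):
    """Generic AO loop: fix the RIS coefficients and update the active
    beamformer, then fix the beamformer and update the coefficients,
    until the objective stops changing. The update callables stand in
    for the SDR/PSCA and Dinkelbach subproblem solvers."""
    w, theta = w0, theta0
    prev = f(w, theta)
    for _ in range(max_iter):
        w = update_w(theta)         # subproblem 1: active beamforming
        theta = update_theta(w)     # subproblem 2: passive (RIS) phases
        cur = f(w, theta)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return w, theta, cur

# Toy biconvex objective: each subproblem has the closed form 2v/(v^2 + 1).
f = lambda w, t: (w * t - 2) ** 2 + w ** 2 + t ** 2
update = lambda other: 2 * other / (other ** 2 + 1)
w, t, val = alternating_optimization(f, update, update, w0=0.5, theta0=1.0)
print(w, t, val)   # settles at the coordinate-wise stationary point (1, 1)
```

As with the paper's AO framework, each sweep can only improve the objective, so the loop converges to a coordinate-wise stationary point rather than a guaranteed global optimum.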
Index Modulation Design with Sparse Spatial Constellation and Dynamic Multi-RIS Block Selection for RIS-MIMO Systems
HUANG Fuchun, ZHU Han, TANG Xiaoqing, YANG Fan, HUANG Jie
Available online  , doi: 10.11999/JEIT251289
Abstract:
  Objective  This paper aims to address two main challenges in RIS-assisted MIMO Index Modulation (IM) systems: (1) the practical deployment difficulty of using a single large-scale RIS panel, and (2) the high complexity of designing efficient transmit spatial signal vectors. To overcome these issues, this paper proposes a joint design of a sparse spatial constellation and dynamic multi-RIS block selection to enhance spectral efficiency, Bit Error Rate (BER) performance, and deployment flexibility.  Methods  Inspired by the Extended Space Index Modulation (ESIM) paradigm, a new design of a Sparse spatial Constellation with Two active Antennas (SCTA) is proposed, which leads to the SCTA-RIS-SM system. The idea is to mix primary and secondary PAM constellations to form a spatial constellation vector [x1, x2]^T, which is modulated onto two active antennas. This not only maximizes the minimum Euclidean distance between transmit vectors but also significantly enhances the anti-interference capability. To circumvent the deployment difficulties of a single large RIS panel, an enhanced scheme, SCTA-MBRIS-SM, is further proposed. This system employs a distributed array of multiple small RIS blocks and dynamically selects a subset of blocks for cooperative reflection, treating different "RIS block selection combinations" as a new index modulation dimension. Finally, theoretical analyses of the spectral efficiency and average bit error rate are carried out, and Monte Carlo simulations are conducted to compare the proposed systems with several existing schemes.  Results and Discussions  Simulation results demonstrate that the proposed SCTA-RIS-SM system achieves notable Signal-to-Noise Ratio (SNR) gains over RIS-SIM, RIS-SM, and DH RIS-SM systems under the same spectral efficiency (e.g., 10–12 bits/s/Hz) in near-field wideband scenarios. For instance, at BER = 10−3, SCTA-RIS-SM outperforms RIS-SIM by about 1.5–2.5 dB and DH RIS-SM by more than 6 dB.
Furthermore, the SCTA-MBRIS-SM system, by exploiting additional index modulation from RIS block selection, further improves the BER performance and spectral efficiency compared to SCTA-RIS-SM without increasing the number of radio frequency chains. With the total number of reflecting elements kept identical, the proposed multi-block scheme achieves up to a 5 dB gain over RIS-SIM at BER = 10−3. Theoretical BER curves match well with simulation results in the high-SNR region, validating the analytical derivations. The results also show that the performance advantage is maintained as the number of transmit antennas increases, and that the system exhibits good compatibility with channel coding.  Conclusions  This paper addresses the challenges of large-scale RIS deployment and high-complexity spatial signal design in RIS-assisted MIMO systems. The proposed sparse spatial constellation with two active antennas optimizes the Euclidean distance distribution in the signal space, effectively improving system reliability. The introduction of dynamic multi-RIS block selection transforms hardware deployment constraints into a new dimension for spectral efficiency enhancement, offering a feasible path for practical large-scale RIS applications. Simulation results confirm that jointly optimizing the transmit spatial vector and the degrees of freedom of RIS reflections is an effective strategy for performance improvement. Future work will focus on robustness under imperfect channel state information, construction of higher-dimensional sparse constellations, extension to extremely large-scale MIMO scenarios, and multi-user communications.
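The extra bits contributed by the block-selection dimension follow the standard index-modulation counting rule: choosing k of n RIS blocks carries floor(log2(C(n, k))) index bits per symbol on top of the constellation bits. A small sketch with illustrative numbers, not the paper's configuration:

```python
from math import comb, floor, log2

def im_bits(n_blocks, k_active, m_order):
    """Bits per symbol: floor(log2(C(n, k))) index bits from choosing
    k of n RIS blocks, plus log2(M) bits from the M-ary constellation.
    The example numbers are illustrative, not the paper's setup."""
    index_bits = floor(log2(comb(n_blocks, k_active)))
    return index_bits + int(log2(m_order))

# Choosing 2 of 8 blocks: C(8,2) = 28 combinations -> 4 index bits,
# plus 4 bits from a 16-ary constellation = 8 bits per symbol.
print(im_bits(8, 2, 16))   # 8
```

The floor reflects that only a power-of-two subset of the C(n, k) selection patterns can be mapped to bit strings, which is why the multi-block scheme raises spectral efficiency without any additional radio frequency chains.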
Semantic-guided Unified Multi-scale Deep Unrolling Network for Pansharpening
CHEN Junjie, WANG Tingting, FANG Faming, ZHANG Guixu
Available online  , doi: 10.11999/JEIT251252
Abstract:
  Objective  With the rapid advancement of satellite imaging technologies, the demand for high-resolution multispectral remote sensing imagery has grown substantially across a wide range of applications. Due to the wide variety of satellite platforms, there exists a significant domain shift across datasets collected from different satellites. As a result, most existing deep learning (DL)-based pansharpening methods are trained individually for each satellite dataset and consequently exhibit limited generalization capability across different satellites. To address these limitations, this study proposes a Semantic-guided Unified Multi-scale Deep Unrolling Network (SUM-DUN), which is designed based on classical optimization theory and adopts a 3D multi-scale deep unfolding architecture for integrated feature extraction and fusion. Leveraging multimodal large language models (MLLMs), the proposed method derives semantic textual prompts from the input images, which direct the model to adaptively adjust its feature representations and thereby enhance fusion quality. The proposed method aims to achieve unified remote sensing image fusion through a tailored network architecture and prompt-guided mechanisms, thereby providing reliable support for high-level image interpretation tasks.  Methods  Following the Maximum A Posteriori (MAP) estimation principle, the optimization process for High-Resolution MultiSpectral (HRMS) image recovery is unfolded into the proposed SUM-DUN (Fig. 1). Each iteration stage of SUM-DUN consists of two main modules: a Gradient Descent Module (GDM) and a Semantic-guided Proximal Mapping Network (SPMN), which approximate the operations in Eq. (5) and Eq. (6), respectively. The GDM performs a gradient descent update based on the current feature estimate and the degradation model. The SPMN, implemented with a Transformer-based architecture as illustrated in Fig. 2(b), incorporates semantic textual prompts generated from the input image pair by MLLMs.
These prompts guide the network to adaptively select appropriate feature propagation strategies for the current pair, helping suppress noise and mitigate discrepancies across different satellite sensors. Moreover, leveraging upsampling and downsampling operations, the network transmits MS and PAN features between iterative stages, thereby progressively preserving and enhancing multi-scale spatial and spectral information throughout the unfolding process.  Results and Discussions  To demonstrate the effectiveness of the proposed method, we compare it against seven representative baselines, including two traditional methods (BDSD and PRACS) and five DL-based methods (AWFLN, FusionMamba, PanMamba, WFANet and TMDiff). For the reduced-resolution evaluation, where ground-truth HRMS images are available, we adopt several widely used reference-based metrics, including the Spectral Angle Mapper (SAM), Spatial Correlation Coefficient (SCC), Peak Signal-to-Noise Ratio (PSNR), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Averaged Universal Image Quality Index (QAVE) and the Universal Image Quality Index for 4-band and 8-band images. These metrics jointly evaluate spectral fidelity, spatial consistency, and overall image quality. For the full-resolution evaluation, where ground-truth HRMS images are unavailable, we rely on no-reference quality indices. Specifically, we employ the Hybrid Quality with No Reference (HQNR) metric, along with its spectral distortion component and spatial distortion component, to assess the fusion quality in real-world scenarios. Quantitative evaluations on the GF-1, QB, WV-2, and WV-4 test datasets demonstrate that the proposed method consistently achieves either the best or second-best performance across all metrics, under both reduced-resolution and full-resolution settings (Tables 2 and 3).
These results clearly indicate that the proposed method is capable of simultaneously preserving spectral fidelity and spatial consistency, while maintaining robust performance across different satellites and remaining effective in more challenging scenarios. The ablation studies validate the effectiveness of the 3D architecture, the multi-scale network design, and the spatial–channel prompt guidance mechanism, as removing or altering any of these components leads to varying degrees of performance degradation (Tables 4 and 5).  Conclusions  This study proposes a semantic-guided unified multi-scale deep unfolding method for pansharpening, which leverages semantic prompts generated by an MLLM to facilitate efficient and unified fusion of images from different satellites. The proposed approach is built upon a deep unfolding framework and employs a 3D convolutional architecture to accommodate varying numbers of spectral bands across satellite datasets. The multi-scale network design is further incorporated to extract spatial and spectral features at different levels, thereby enhancing the fusion capability. In addition, the semantic prompt integration module is introduced to adaptively route spatial and channel features based on the extracted semantic information, enabling more effective feature propagation and improving both spatial detail reconstruction and spectral consistency. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance in terms of both visual quality and quantitative evaluation metrics.
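The GDM/SPMN alternation in one unrolled stage can be sketched generically as a gradient step on a data-fidelity term followed by a proximal mapping. In the sketch below, the learned Transformer-based SPMN is replaced by a hand-written soft-threshold placeholder, and the operator A, the observation y, and the step size are illustrative assumptions rather than the paper's degradation model.

```python
import numpy as np

def unrolling_stage(x, A, y, step, prox):
    """One unrolled stage: a gradient step on the data-fidelity term
    0.5 * ||Ax - y||^2 (the GDM role), followed by a proximal mapping
    (the SPMN role; here a soft-threshold, not a learned network)."""
    grad = A.T @ (A @ x - y)
    return prox(x - step * grad)

soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.005, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) / np.sqrt(20)   # toy degradation operator
x_true = rng.standard_normal(10)
y = A @ x_true                                    # noiseless observation
x = np.zeros(10)
for _ in range(500):                              # unrolled iterations
    x = unrolling_stage(x, A, y, step=0.4, prox=soft)
print(float(np.linalg.norm(A @ x - y)))           # residual shrinks toward 0
```

A deep unrolling network fixes a small number of such stages, makes the step size and the proximal operator learnable, and trains them end-to-end, which is how SUM-DUN turns the MAP iteration into a network.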
Remote Sensing Land-Cover Classification Combining Multi-Modal and Multi-Scale Fusion with Mamba
XIE Wen, ZHU Chaotao, WANG Jin, MA Xiaomeng
Available online  , doi: 10.11999/JEIT251303
Abstract:
  Objective   The rapid development of remote sensing imaging technology has generated massive and diverse data for Remote Sensing Land-Cover Classification. In recent years, Mamba-based models have been successfully applied in image processing owing to their distinctive architectures and powerful global modeling capabilities. Among them, multi-scale vision Mamba models are proficient in handling complex spatial distributions, which aligns well with the characteristics of remote sensing scenes, including significant scale variations and complex orientations of ground objects. To fully exploit the advantages of Mamba models in extracting and fusing features from remote sensing data, the Mamba-based Multi-Modal and Multi-Scale Fusion Model for Remote Sensing Land-Cover Classification (M3RS) is proposed.  Methods   The proposed model, M3RS, mainly consists of three stages of feature extraction and fusion. Firstly, the model employs a Multi-Scale Spatial Encoder based on Spatial Mamba to extract features from Light Detection And Ranging (LiDAR) images and Synthetic Aperture Radar (SAR) images. Due to the unique data structure of the HyperSpectral Image (HSI), a Multi-Scale Spatio-Spectral Encoder is proposed to extract complex spatial–spectral features using Spatial Mamba and Spectral Mamba. Next, a Multi-Modal Feature Fusion Module comprising the proposed Cross-Mamba and Channel-Concatenated Mamba is introduced to fuse multimodal features. Cross-Mamba efficiently fuses multimodal spatial features by interacting multimodal state-space parameters, while Channel-Concatenated Mamba fully fuses multimodal features by constructing four channel scanning methods. Finally, the model adopts an improved Multi-Scale Feature Fusion Module to fuse multiscale features layer by layer, thereby obtaining highly discriminative classification evidence that can effectively improve the accuracy of Remote Sensing Land-Cover Classification.
  Results and Discussions   Comparative experiments are conducted on three publicly available multimodal remote sensing land-cover classification datasets to evaluate the classification performance of the proposed model against seven mainstream models. The experimental results demonstrate that the proposed model significantly outperforms its counterparts in terms of Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient. Specifically, on the Muufl dataset, the OA of the proposed model is 3.49%, 3.80%, and 4.02% higher than those of models based on CNN, Transformer, and Mamba, respectively (Table 2, Fig. 8). Furthermore, on the Houston2013 and Augsburg datasets, the OA of the proposed model surpasses all comparative algorithms by an average of 3.37% and 3.11%, respectively (Tables 3 and 4). The results indicate that the integration of a Multi-Modal Multi-Scale architecture with the Mamba model effectively enhances the accuracy of Remote Sensing Land-Cover Classification. In addition, an ablation experiment validates the contribution of each proposed module to improving classification accuracy (Table 5). While Spectral Mamba significantly improves the accuracy, several fusion modules also contribute to the overall performance to different degrees. The hyperparameter experiment offers valuable hyperparameter configurations for multiscale remote sensing image fusion (Table 6). Finally, compared with a Transformer model employing an identical multi-scale architecture, the Mamba model not only achieves improved classification accuracy but also reduces the parameter count by 37.4% and shortens the training time by 10.7%, reflecting dual improvements in both accuracy and efficiency (Fig. 9).  Conclusions   The proposed M3RS employs the Mamba model to fuse multimodal and multiscale features, effectively enhancing the performance of Remote Sensing Land-Cover Classification.
Firstly, different encoders utilized in M3RS effectively address the disparities among multimodal data, thereby providing richer multimodal complementary information for fusion and classification. Subsequently, the proposed Cross-Mamba and Channel-Concatenated Mamba take the similarities and differences between Mamba and Transformer into account and respectively achieve efficient multimodal spatial feature interaction and comprehensive multimodal feature fusion, providing a hierarchical multimodal fusion approach. Moreover, the multiscale architecture overcomes, to a certain extent, the complex spatial distribution issues of remote sensing land covers, and the proposed Multi-Scale Feature Fusion Module, composed of Spatial Mamba and channel attention, effectively integrates multiscale features and provides a reliable basis for subsequent classification. Building on this work, future research will continue to optimize the model by exploring the underlying principles of Mamba and will conduct an in-depth investigation into cross-attention mechanisms to refine the feature alignment process in multimodal interaction and ensure the reliability of feature fusion.
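The Spatial and Spectral Mamba encoders above all rest on a selective state-space recurrence. As a hedged, single-channel sketch of that mechanism (parameter names and values are illustrative, not the M3RS implementation):

```python
import math

def selective_scan(xs, w_delta=1.0, a=-1.0, b=1.0, c=1.0):
    """Scalar selective state-space scan (Mamba-style sketch).

    The step size delta_t depends on the input (the "selective" part);
    the continuous parameters (a, b) are discretized per step with a
    zero-order hold: A_t = exp(delta_t * a), B_t = delta_t * b.
    """
    h, ys = 0.0, []
    for x in xs:
        delta = math.log1p(math.exp(w_delta * x))  # softplus: input-dependent step size
        A = math.exp(delta * a)                    # discretized state decay
        B = delta * b                              # discretized input coupling
        h = A * h + B * x                          # recurrent state update
        ys.append(c * h)                           # linear readout
    return ys

ys = selective_scan([0.5, -0.2, 1.0, 0.0])
```

The input-dependent step size is what lets the scan selectively retain or forget context along the sequence, the property the multimodal fusion modules exploit.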
Research Status and Prospects of Mid-Wave Infrared Superlattice Detection Technology
LIU Ming, ZHAO Yaqi, GUAN Xiaoning, ZHANG Fan, LU Pengfei
Available online  , doi: 10.11999/JEIT260083
Abstract:
  Significance   Mid-wave infrared (MWIR) detectors are crucial for both civilian and military applications due to their high sensitivity and superior temperature discrimination. Type-II superlattices (T2SLs), particularly the InAs/GaSb and InAs/InAsSb material systems, have emerged as the most promising candidates for third-generation infrared photodetectors. This review systematically analyzes the current research status and future trends of MWIR T2SL detection technology, focusing on key performance parameters such as quantum efficiency, dark current density, and specific detectivity. The work aims to provide a comprehensive reference for material selection and performance optimization in this rapidly advancing field.  Progress   Significant progress has been achieved in suppressing dark current and enhancing photoresponse for MWIR T2SL detectors. In terms of dark current suppression, advanced barrier structures such as nBn, XBn, and M-structures, designed via bandgap engineering, effectively block majority-carrier transport while allowing efficient collection of photogenerated carriers. For instance, an nBn device utilizing an AlAsSb/InAsSb superlattice barrier demonstrated a dark current density of 2.01×10⁻⁵ A/cm² at 150 K (Fig. 1a, b). Strain compensation techniques and optimized growth have further reduced bulk dark currents, with one device achieving 4.5×10⁻⁷ A/cm² at 140 K (Fig. 2c, d). Device fabrication process optimization, including two-step etching and planar junction formation via Zn diffusion, has successfully minimized surface leakage currents (Fig. 3). For photoresponse enhancement, strategies include integrating micro-optical structures and optimizing epitaxial growth and device fabrication processes. The integration of metalenses has boosted peak responsivity to 9.01 A/W at 300 K (Fig. 4a). Guided-mode resonance architectures have enabled room-temperature external quantum efficiency of ~60% (Fig. 4b, c).
Epitaxial optimizations, such as stepped absorbers and interfacial graded doping, have led to quantum efficiencies up to 59.4% at 150 K (Fig. 5c, d). Device fabrication process optimization, such as substrate removal and anti-reflection coating deposition, has significantly improved quantum efficiency, with an average of 63.7% reported in the 3.7–4.8 μm range (Fig. 6c, d). A comparative analysis shows that InAs/GaSb detectors primarily operate near 77 K, while InAs/InAsSb detectors demonstrate superior performance at higher temperatures, around 150 K (Fig. 7, Fig. 8). Overall, dark current densities are typically suppressed below 10⁻⁴ A/cm², with peak quantum efficiencies approaching 80%.  Conclusions  T2SL materials, with their tunable band structure and low Auger recombination rates, are established as the core choice for high-performance MWIR detection. Current research has successfully addressed key challenges: dark current densities have been suppressed to the 10⁻⁶ A/cm² level at ~150 K through innovative barrier designs and device fabrication process optimization, while quantum efficiencies have been enhanced to ~60% and beyond through optical and epitaxial engineering. The InAs/InAsSb material system shows particular promise for high-operating-temperature applications.  Prospects   Future development will focus on several key directions: 1) Pushing the high-operating-temperature (HOT) limit further to maintain diffusion-limited performance at 180 K and above; 2) Advancing large-format focal plane array fabrication based on highly uniform material growth via mature molecular beam epitaxy to achieve >99% pixel operability; 3) Expanding into multi-color/multi-spectral detection capabilities by precisely tuning superlattice periods to enable integrated dual-band or multiband MWIR detection with reduced cross-talk; 4) Exploring novel device architectures and coupling multiple physical mechanisms to extend performance boundaries and application scopes.
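For reference, the headline figures surveyed here (dark current density, quantum efficiency, specific detectivity) are commonly linked by the shot-noise-limited detectivity relation (a textbook approximation, not a formula taken from the reviewed works):

$$ D^{*} = \frac{q\,\lambda\,\eta}{h c}\cdot\frac{1}{\sqrt{2\,q\,J_{\mathrm{dark}}}} $$

where $\eta$ is the quantum efficiency, $\lambda$ the wavelength, $J_{\mathrm{dark}}$ the dark current density, $q$ the elementary charge, $h$ Planck's constant, and $c$ the speed of light. Suppressing $J_{\mathrm{dark}}$ toward the $10^{-6}$ A/cm² level while pushing $\eta$ toward 80% therefore raises $D^{*}$ on both fronts.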
A Spatiotemporal Coupling Traffic Flow Prediction Model with Dynamic Graph Recursion and State Space
ZHANG Hong, QI Fangzheng, LUO Shengjun, ZHANG Xijun, HOU Liang, HUANG Hairong
Available online  , doi: 10.11999/JEIT251198
Abstract:
  Objective  Accurate traffic flow prediction is crucial for intelligent transportation systems, but it remains challenging due to dynamically evolving spatial dependencies and long-range temporal correlations in urban road networks. To address these issues, this study proposes DGGRU-Mamba, a spatiotemporal traffic forecasting framework that integrates dynamic graph recurrent modeling with a structured state space mechanism and jointly captures adaptive spatial structures and long-term temporal dynamics.  Methods  The proposed DGGRU-Mamba consists of two core modules: Dynamic Graph Recurrent Modeling (DGRM) and Spatiotemporal Mamba (ST-Mamba). A spatiotemporal embedding generator is introduced to encode periodic temporal information and node-specific spatial features for adaptive graph construction. The DGRM module dynamically updates time-varying adjacency structures through gated graph recurrent units, enabling adaptive modeling of evolving spatial dependencies, while the ST-Mamba module employs structured state transitions to efficiently capture long-range temporal correlations. In addition, a dual-branch prediction scheme, including Forecast and Backcast branches, is adopted to improve multi-step prediction accuracy and alleviate cumulative errors.  Results and Discussions  DGGRU-Mamba is evaluated on four benchmark datasets, PEMS03, PEMS04, PEMS07, and PEMS08, using MAE, RMSE, and MAPE as evaluation metrics. Experimental results show that the proposed model achieves competitive performance across all datasets. On PEMS04, compared with the mainstream attention-based model STAEformer, DGGRU-Mamba reduces MAE, RMSE, and MAPE by about 4.2%, 3.8%, and 2.9%, respectively, while shortening the inference time by 4.82 s. These results indicate that the proposed framework improves prediction accuracy while maintaining high computational efficiency. 
The gains mainly stem from the complementary effects of DGRM and ST-Mamba, which enhance dynamic spatial dependency modeling and long-range temporal learning at lower computational cost.  Conclusions  A novel spatiotemporal traffic flow prediction framework, DGGRU-Mamba, is proposed for modeling dynamic spatial structures and long-term temporal dependencies in complex traffic networks. By integrating dynamic graph recurrent modeling with a structured state space mechanism, the framework achieves a favorable balance between prediction accuracy and computational efficiency. Extensive experiments on multiple benchmark datasets verify its effectiveness and scalability for multi-step traffic forecasting. Future work will consider external factors such as weather and traffic events to further improve practical applicability.
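To make the DGRM idea concrete, one gated graph-recurrent step with a time-varying adjacency matrix might look like the following pure-Python sketch (weight names, shapes, and gating layout are illustrative, not the paper's implementation):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def graph_gru_step(H, X, A_t, Wz, Wh):
    """One gated graph-recurrent step (DGRM-flavoured sketch).

    A_t is the time-varying adjacency produced for this step; node features
    are aggregated through it, then an update gate Z blends the previous
    hidden state H with a candidate state Hc.
    """
    AX = matmul(A_t, X)                                                    # dynamic-graph aggregation
    Z = [[1 / (1 + math.exp(-v)) for v in row] for row in matmul(AX, Wz)]  # update gate
    Hc = [[math.tanh(v) for v in row] for row in matmul(AX, Wh)]           # candidate state
    return [[z * h + (1 - z) * hc for z, h, hc in zip(zr, hr, hcr)]
            for zr, hr, hcr in zip(Z, H, Hc)]

# Two nodes, two features, uniform dynamic adjacency, identity weights.
I2 = [[1.0, 0.0], [0.0, 1.0]]
H1 = graph_gru_step([[0.0, 0.0], [0.0, 0.0]],
                    [[1.0, 0.0], [0.0, 1.0]],
                    [[0.5, 0.5], [0.5, 0.5]], I2, I2)
```

In the full model, the adjacency A_t would itself be regenerated each step from the spatiotemporal embeddings, which is what makes the spatial dependency modeling adaptive.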
Household Appliance Plastics Identification by Fusing Multi-Level Feature Enhancement and Hierarchical Classification
CHONG Penghao, ZHENG Yunlong, YANG Aosong, GUO Mengci, LI Shifeng
Available online  , doi: 10.11999/JEIT260084
Abstract:
  Objective  Accurate identification of plastics in waste household appliance recycling remains challenging under low-resolution spectral conditions. In practical recycling environments, plastics often exhibit complex compositions, surface contamination, and aging effects, which increase classification difficulty. In particular, black plastics show strong light absorption and spectral overlap in the visible and near-infrared (Vis–NIR) range, leading to reduced feature separability and degraded classification performance. Under such conditions, conventional single-stage classification models are often unable to maintain stable accuracy. To address this problem, an automated identification method for low-dimensional multispectral feature spaces is proposed, aiming to improve the discriminative capability of limited spectral information and enhance classification accuracy for complex plastic categories.  Methods  A compact Vis–NIR multispectral acquisition system based on the AS7265x sensor is used to collect 18-channel reflectance data within the 410–940 nm range. A handheld acquisition device with a controlled optical structure is designed to reduce environmental interference and ensure measurement consistency (Fig. 3). A total of 576 samples from five typical appliance plastics, including ABS, high-impact polystyrene (HIPS), polypropylene (PP), acrylonitrile-styrene copolymer (AS), and PC/ABS blends, are collected from waste household appliances and subjected to preliminary surface cleaning prior to spectral acquisition. To improve feature representation, a multi-level feature engineering strategy is adopted. It integrates original spectral intensity features, nonlinear polynomial expansion features, and adjacent-channel ratio features to characterize both global and local spectral information. The nonlinear expansion enhances reflectance variation representation, while the ratio features capture local spectral shape changes and reduce external disturbances.
These features are combined into a 53-dimensional feature vector. Linear Discriminant Analysis (LDA) is then applied to enhance inter-class separability. To handle spectral overlap and class imbalance, a Hierarchical Joint Classifier (HJC) is constructed. The HJC adopts a two-stage classification framework: an XGBoost-based primary classifier performs coarse classification to separate easily distinguishable samples and group spectrally similar black plastics, while a TabTransformer-based secondary classifier is used for fine-grained classification of difficult samples (Fig. 6). This hierarchical design reduces classification complexity and improves discrimination for challenging categories. Model performance is evaluated using five-fold cross-validation and an independent test set. Evaluation metrics, including accuracy, precision, recall, and F1-score, are calculated based on confusion matrices (Fig. 7). Comparative experiments are conducted with traditional machine learning methods, ensemble learning models, and deep learning approaches under different feature processing strategies (Fig. 8, Fig. 9).  Results and Discussions  The proposed HJC achieves a classification accuracy of 97.4% under five-fold cross-validation and 93.1% on the independent test set (Table 4). Compared with single-stage classifiers and methods without feature enhancement, the proposed method shows improved performance and stability under low-resolution spectral conditions. Comparative results indicate that the proposed method outperforms baseline approaches such as PCA combined with CNN, which achieves approximately 71.3% accuracy on the same dataset (Fig. 8). This improvement indicates that the proposed feature engineering strategy effectively enhances the discriminative capability of low-dimensional spectral data. In addition, combining LDA with feature engineering further improves class separability compared with conventional PCA-based methods.
Confusion matrix analysis shows that misclassifications are mainly concentrated between spectrally similar black ABS and black HIPS samples, while other categories achieve high classification accuracy (Fig. 9). This result suggests that spectral overlap remains the main challenge under low-resolution conditions. The hierarchical classification strategy mitigates this issue by focusing classification effort on difficult samples, thereby improving the overall generalization capability of the model. Overall, the proposed method demonstrates robustness under practical conditions, including spectral noise, limited channel resolution, and material heterogeneity, indicating its suitability for real-world recycling applications.  Conclusions  A hierarchical classification method with multi-level spectral feature engineering is developed for plastic identification under low-resolution Vis–NIR conditions. Nonlinear and morphological features are incorporated into a two-stage framework to improve performance on spectrally similar materials. The results show consistent accuracy across different plastic types. The method is suitable for automated sorting in waste appliance recycling and can be applied to other material identification tasks with limited spectral information.
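The two-stage routing described in the Methods can be sketched as follows; the stand-in rules and thresholds are hypothetical placeholders for the trained XGBoost (coarse) and TabTransformer (fine) stages:

```python
def hierarchical_classify(samples, coarse, fine, hard_group=("BLACK",)):
    """HJC-style two-stage routing (illustrative, not the paper's code):
    the coarse stage labels easy samples directly; samples it routes into
    the hard group (spectrally similar black plastics) are re-classified
    by a fine-grained second stage."""
    labels = []
    for x in samples:
        c = coarse(x)
        labels.append(fine(x) if c in hard_group else c)
    return labels

# Hypothetical stand-ins for the two stages: the coarse rule keys on one
# strong feature, the fine rule on a subtler one that separates the
# hard black-plastic samples.
coarse = lambda x: "PP" if x[0] > 0.5 else "BLACK"
fine = lambda x: "ABS" if x[1] > 0.5 else "HIPS"

result = hierarchical_classify([(0.9, 0.1), (0.1, 0.8), (0.2, 0.3)], coarse, fine)
# → ['PP', 'ABS', 'HIPS']
```

Only the hard samples pay the cost of the second stage, which is what keeps the pipeline both accurate and cheap.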
From Touch to Semantics: A Cross-Modal Framework for Zero-Shot Spiking Tactile Object Recognition
CHI Wei, XU Jin
Available online  , doi: 10.11999/JEIT260158
Abstract:
  Objective  Tactile perception is essential for robots to understand object properties and enable dexterous interactions. However, tactile data acquisition is costly and difficult to scale, limiting the applicability of conventional supervised learning in open-world scenarios. Zero-shot learning (ZSL) offers a promising solution by transferring knowledge from seen to unseen categories via semantic representations. Yet existing tactile ZSL methods either rely on auxiliary visual information or depend on manually designed attributes, which are often subjective and lack generalization. Meanwhile, event-based tactile sensors produce sparse, asynchronous spiking signals with rich spatiotemporal dynamics, posing additional challenges for semantic modeling. Consequently, systematic studies on zero-shot recognition of such data remain limited. To address these issues, we propose a zero-shot object recognition framework for event-based spiking tactile perception, aiming to bridge low-level tactile dynamics with high-level semantics in a scalable manner.  Methods  The proposed framework consists of three key components (Fig. 1): spiking tactile feature extraction, semantic prototype construction, and cross-modal tactile–semantic alignment. First, a biomimetic spiking graph neural network is employed to model raw event-based tactile signals. By integrating leaky integrate-and-fire (LIF) neurons with graph-based message passing, the model captures both temporal firing dynamics and spatial relationships among tactile sensing units, producing discriminative and biologically interpretable high-level tactile embeddings. Second, instead of relying on manually annotated attributes, large language models (LLMs) are introduced to generate structured, fine-grained, and extensible tactile attribute descriptions for each object category.
These textual descriptions are further encoded into continuous semantic vectors, forming class-level semantic prototypes with consistent dimensionality across categories. This strategy enables flexible semantic expansion and avoids the labor-intensive process of attribute engineering. Third, a bidirectional tactile–semantic alignment mechanism is designed to enhance generalization to unseen categories. Specifically, a forward mapping projects tactile embeddings into the semantic space for classification, while a reverse mapping reconstructs tactile features from semantic representations. A cycle-consistency constraint is imposed between the two mappings to enforce structural coherence and semantic stability across modalities. The overall framework is trained on seen categories only, and zero-shot inference is performed by matching tactile embeddings of unseen samples with their corresponding semantic prototypes in the shared embedding space.  Results and Discussions  The proposed method is evaluated under a strict zero-shot setting on event-based spiking tactile datasets with disjoint seen and unseen sets. Performance is assessed using mean class accuracy, Top-k accuracy, and semantic alignment score. The framework consistently outperforms state-of-the-art tactile ZSL baselines across all metrics. Ablation studies validate each component: removing the spiking graph neural network leads to notable performance degradation, confirming the importance of explicitly modeling spatiotemporal tactile dynamics; replacing LLM-generated semantics with manually defined attributes reduces generalization, highlighting the advantage of structured and semantically rich language-driven representations. t-SNE visualization shows that cycle-consistent alignment produces more compact intra-class clusters and clearer inter-class boundaries for unseen categories. The bidirectional alignment mechanism also improves semantic stability and reduces projection bias. 
These results indicate that combining biologically inspired spiking models with language-extended semantics offers a robust solution for open-set tactile perception.  Conclusions  This paper presents a novel zero-shot object recognition framework for spiking tactile perception by integrating spiking graph neural networks with semantic representations. The proposed method addresses the limitations of existing tactile ZSL approaches by avoiding reliance on visual data and manual attribute design, while effectively modeling the spatiotemporal dynamics of spiking tactile signals. Experimental results demonstrate superior performance under strict zero-shot settings, confirming the effectiveness and robustness of the proposed approach. This work establishes a strong baseline for zero-shot spiking tactile recognition and offers a principled pathway toward open-world tactile cognition in robotic systems. Future work will explore multimodal extensions and real-world robotic deployment under noisy and dynamic sensing conditions.
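The LIF dynamics underlying the spiking graph network can be sketched in a few lines (parameter values are illustrative, not the authors' configuration):

```python
def lif_spikes(currents, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron dynamics (sketch).

    The membrane potential leaks toward rest while integrating the input
    current; crossing the threshold emits a spike and hard-resets it.
    """
    v, spikes = v_reset, []
    for i in currents:
        v += (dt / tau) * (-(v - v_reset) + i)  # leaky integration step
        if v >= v_th:
            spikes.append(1)
            v = v_reset                          # reset after firing
        else:
            spikes.append(0)
    return spikes
```

In the full model, each tactile sensing unit's membrane state would additionally receive graph-passed messages from neighbouring units before thresholding.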
Performance Optimization and Gate Oxide Electric Field Analysis of 1200V Trench SiC MOSFET Based on PCL-CSL Collaborative Design
FANG Shaoming, LI Hongda, GAO Yuan
Available online  , doi: 10.11999/JEIT260164
Abstract:
  Objective  SiC trench MOSFETs at the 1200 V rating are widely recognized as key components in modern medium- and high-voltage power conversion systems due to their superior switching performance, low conduction loss, and high-temperature stability. However, conventional trench-based device structures suffer from severe electric field crowding at the trench-corner and bottom gate oxide, which frequently causes the peak oxide electric field to exceed the safe operating limit of 3 MV/cm and seriously threatens long-term reliability. Moreover, significant performance trade-offs exist among breakdown voltage, specific on-resistance, threshold voltage, and gate oxide electric field, making it difficult to achieve high efficiency and high robustness simultaneously. To overcome these bottlenecks, a synergistic structure combining deep P-type columns (PCL), a Carrier Storage Layer (CSL), and locally thickened gate oxide is investigated in this work. The primary objective is to modulate the electric field distribution, suppress electric field concentration, improve carrier transport behavior, and realize well-balanced device performance. This study aims to provide a systematic and practical design methodology for the development of high-reliability, high-performance 1200 V SiC trench MOSFETs for industrial applications.  Methods  Numerical device simulations are systematically performed using the TCAD simulation platform to comprehensively analyze and optimize the electrical performance of 1200 V SiC trench MOSFETs. To ensure the fidelity of the simulation results, a full set of advanced physical models is rigorously implemented, including bandgap narrowing, Shockley-Read-Hall (SRH) recombination, Auger recombination, impact ionization for avalanche breakdown, incomplete ionization of dopants, and high-field saturation mobility models accounting for velocity saturation and scattering effects.
A device structure integrated with deep PCL, CSL, and locally thickened bottom gate oxide is constructed to suppress the gate oxide electric field and improve device reliability. Key structural and process parameters are systematically swept and quantitatively analyzed, including epitaxial layer thickness and doping concentration, trench width and depth, P-well implantation dose, PCL spacing, and CSL implantation dose. Static electrical characteristics, such as threshold voltage (Vth), specific on-resistance (Ron,sp), breakdown voltage (BV), and peak gate oxide electric field (Eox,max), are extracted and evaluated. The optimal parameter combination is finally determined through a comprehensive trade-off analysis between conduction performance and long-term device reliability, achieving balanced performance for high-power applications.  Results and Discussions  Simulation results demonstrate that the deep PCL structure effectively redirects electric field lines away from the trench-bottom gate oxide and significantly alleviates electric field crowding. Combined with the locally thickened bottom gate oxide, the peak gate oxide electric field is reduced to below 3 MV/cm, which satisfies industrial reliability standards. The introduction of the CSL broadens the vertical conduction path, relieves current crowding, and effectively reduces specific on-resistance. Parameter optimization reveals that epitaxial conditions, trench dimensions, P-well implantation doses, and CSL implantation doses directly determine the trade-off between breakdown voltage and conduction performance (Fig. 5, Fig. 6, Fig. 9, Fig. 10, Fig. 19), while PCL spacing exhibits a significant influence on electric field shielding (Fig. 16, Fig. 17).
After multi-parameter optimization, the device achieves a threshold voltage of 4.7 V, a high breakdown voltage of 1708 V, a low specific on-resistance of 1.57 mΩ·cm², and a safe gate oxide peak field of 2.5 MV/cm (Table 2), demonstrating excellent overall performance suitable for high-voltage power applications.  Conclusions  A synergistic PCL-CSL structural design for 1200 V SiC trench MOSFETs is investigated and validated through TCAD simulation. To address the inherent bottlenecks of conventional SiC trench MOSFETs, such as a high gate oxide electric field, restricted breakdown capability, and the contradictory trade-off between on-state conduction loss and device reliability, the modulation mechanism of the deep PCL and CSL is comprehensively explored. The influences of epitaxial layer thickness and doping level, trench dimensions, P-well implantation doses, PCL spacing, and CSL implantation doses on key electrical performance and gate oxide long-term reliability are systematically clarified and summarized through extensive parameter sweeping and comparative analysis. Under the collaborative optimization of multiple structural factors, the optimized device configuration achieves a superior comprehensive performance balance, including low specific on-resistance for reduced conduction loss, a high and stable breakdown voltage for high-voltage endurance, a reasonable threshold voltage for normal switching operation, and suppressed electric field concentration near the trench-bottom oxide.
Benefiting from the rational doping profile and structural arrangement, the peak gate oxide electric field is effectively limited and steadily controlled below the 3 MV/cm industrial safety threshold, which greatly prevents oxide degradation and guarantees stable and robust device operation under high-bias and harsh working conditions. The structural strategy and parameter optimization method provide valuable guidance for the design, simulation, and process development of high-voltage and high-reliability SiC power devices. This work contributes to the performance improvement of wide-bandgap semiconductor devices and promotes their industrial application in renewable energy systems, electric vehicles, and medium-voltage power supplies.
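The 3 MV/cm oxide budget discussed above follows from Gauss's law at the SiC/SiO₂ interface, where the permittivity step amplifies the field inside the oxide. A back-of-the-envelope check (textbook relation, not the paper's TCAD data):

```python
# Gauss's law at the SiC/SiO2 interface: eps_SiC * E_SiC = eps_ox * E_ox,
# so the oxide field is the semiconductor surface field amplified by
# eps_SiC / eps_SiO2 (~2.5x).
EPS_SIC, EPS_SIO2 = 9.7, 3.9  # relative permittivities

def oxide_field(e_semi):
    """Peak oxide field (MV/cm) for a given SiC surface field (MV/cm)."""
    return e_semi * EPS_SIC / EPS_SIO2

# Keeping E_ox under the 3 MV/cm reliability limit therefore requires the
# shielded surface field to stay below roughly 3 * 3.9 / 9.7 ≈ 1.2 MV/cm,
# which is what the PCL shielding and thickened bottom oxide accomplish.
print(round(oxide_field(1.0), 2))  # ≈ 2.49
```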
Dynamic Focus and Semantic Prompt Network for Fine-Grained Pest Classification
LIU Changyuan, ZHAO Haijian, WU Haibin
Available online  , doi: 10.11999/JEIT260044
Abstract:
  Objective  Agricultural pest images are commonly affected by severe challenges, including complex background interference, significant appearance differences across morphological stages, diverse shooting angles, and massive scale variations. These issues expose clear deficiencies in feature extraction and morphological adaptability in existing fine-grained classification models. To address these challenges, an Agricultural Pest Multi-dimensional Dataset (APMD) comprehensively covering multiple morphological stages, viewing angles, and object scales is constructed. Furthermore, a fine-grained pest classification network based on dynamic focus and semantic prompts (DFS-PestNet) is proposed. A decoupled parallel architecture combining a main feature stream and a prompt enhancement stream is designed. Through a Spatial Dependency Perception (SDP) module, crucial discriminative regions (e.g., pest spots and wing veins) are dynamically focused upon to enhance local subtle feature extraction under complex backgrounds. An Advanced Haptic-Visual Prompting (AHVP) module is introduced to explicitly integrate category semantics and spatial position information into shallow and middle-level features, substantially improving adaptability to morphological variations across developmental stages. Simultaneously, Dual-Branch Saliency Sampling (DSS) is adopted to adaptively aggregate critical features of essential pest body parts through learnable prototype components and dual-branch saliency fusion. This strategy enhances the precise recognition capability for small targets, including tiny pests and early-stage larvae. Experimental results demonstrate that the proposed model achieves superior classification performance compared to baseline and mainstream methods on both public and self-constructed datasets.
The effectiveness and application potential of the model in complex agricultural scenarios are fully validated, providing a reliable technical reference for intelligent pest monitoring and precise control in smart agriculture.  Methods  To tackle the problem of insufficient classification accuracy in existing models under complex background interference and multi-morphological conditions, the Agricultural Pest Multi-dimensional Dataset (APMD) is initially constructed. This comprehensive dataset encompasses extensive image data across various morphological stages of pests, multiple viewing angles, and different scales. Specifically, it contains a total of 15,680 images covering 58 distinct species, which are rigorously divided into training, validation, and testing sets with a standard ratio of 7:2:1 (Fig. 1, Table 1). This dataset provides crucial and high-quality resource support for further research on fine-grained pest classification. Subsequently, the Dynamic Focus and Semantic Prompt Network for Fine-Grained Pest Classification (DFS-PestNet) is formally proposed. Within this network architecture, the Spatial Dependency Perception (SDP) module is carefully designed to adaptively locate and structurally enhance the key discriminative regions of pests. By successfully overcoming pose variations and complex background interference, more accurate fine-grained pest feature extraction is achieved. In addition, the Advanced Haptic-Visual Prompting (AHVP) module is introduced into the network pipeline to embed deep category semantics and spatial position information. This module guides the network to consistently focus on crucial discriminative features across different morphological periods, thereby effectively improving the overall recognition robustness regarding dramatic morphological changes throughout the pest life cycle. Furthermore, Dual-Branch Saliency Sampling (DSS) is proposed to adaptively aggregate the features of essential pest body parts.
This strategy structurally strengthens the precise recognition capability for challenging small targets, effectively resolving the inherent difficulties of small target detection in fine-grained pest classification tasks.  Results and Discussions  The superior performance of the DFS-PestNet model in fine-grained pest classification tasks is comprehensively evaluated and verified through multi-dimensional experiments. Firstly, in terms of qualitative visualization analysis, Grad-CAM heatmaps intuitively indicate that compared to the baseline model, which is highly susceptible to severe interference from complex farmland backgrounds and plant stems, DFS-PestNet is capable of effectively suppressing background noise. It precisely focuses on fine-grained discriminative parts, such as pest heads and antennae (Fig. 6). Significant advantages are explicitly demonstrated in capturing features of tiny targets (e.g., leafhopper nymphs) and pests in different life stages (e.g., Chilo suppressalis hidden within stems). The t-SNE feature dimensionality reduction results further confirm that the proposed model effectively alleviates the feature confusion problem in multi-morphological scenarios, enabling high-dimensional features to exhibit clearer inter-class separation and tighter intra-class clustering within a two-dimensional visual space (Fig. 7). Secondly, regarding quantitative ablation and parameter optimization experiments, the ablation studies fully validate the powerful synergistic enhancement effect of the three major improved modules: SDP, AHVP, and DSS (Table 2). The organic combination of these three modules significantly increases the classification accuracy of the baseline model by 2.21%, successfully reaching 77.24%, with all core evaluation metrics achieving optimal values. Concurrently, hyperparameter optimization experiments explicitly determine the optimal number of prompt position tokens to be 6 and the optimal feature dropout rate to be 0.2 (Fig. 8). 
This specific configuration guarantees complete semantic expression while simultaneously achieving the best balance between simulating natural occlusion and enhancing overall model robustness. Finally, in comparative experiments with mainstream state-of-the-art models, DFS-PestNet achieves the highest accuracies of 77.24% and 98.01% on the large-scale public dataset IP102 and the highly challenging self-constructed multi-dimensional dataset APMD, respectively, when directly compared with existing frontier Convolutional Neural Network (CNN) and Transformer architectures, such as Gate-ViT and EST (Table 3, Table 4). These quantitative results show a comprehensive lead across the various fine-grained classification metrics. More importantly, while maintaining this high classification accuracy, the proposed model reaches inference speeds of 158 frames/s and 164 frames/s, respectively. In summary, DFS-PestNet unites top-tier classification accuracy with excellent inference efficiency for complex pest feature extraction across massive scales and multiple morphological stages, laying a solid operational foundation for efficient deployment and implementation in practical smart agriculture scenarios.  Conclusions  To address the challenges of multi-morphological variations and small target recognition in fine-grained pest classification, the multi-dimensional dataset APMD is initially constructed, and the DFS-PestNet model is proposed based on the MPSA baseline. Specifically, the SDP module is introduced to adaptively focus on pose- and morphology-invariant discriminative features; the AHVP module embeds robust category semantics and spatial position information into shallow and middle-level networks; and the DSS module adaptively aggregates crucial body part features to significantly enhance small target detection.
Experimental results consistently verify the superiority of DFS-PestNet over mainstream models on both the IP102 and APMD datasets across varying developmental stages, angles, and scales. Future work will focus on exploring lightweight model modifications for efficient edge deployment and investigating open-set recognition tasks to accurately issue early warnings for unknown pest categories in complex real-world environments.
Robust Optimization of Low-Altitude Communication and Computation Resources in Uncertain Environments
GONG Yucheng, LI Bin, WANG Xinyi, FEI Zesong
Available online  , doi: 10.11999/JEIT260090
Abstract:
  Objective  Low-altitude edge computing networks are utilized to provide flexible computing services and enhance coverage for user equipment. However, the quality of service is often compromised by the significant uncertainty in task data sizes and the inevitable position jitter of Unmanned Aerial Vehicles (UAVs) caused by environmental disturbances. Existing robust solutions typically rely on deterministic uncertainty sets, which are often too conservative to accurately capture the stochastic nature of task demands. To address these challenges, a robust energy minimization framework is proposed for multi-UAV Mobile Edge Computing (MEC) networks. The primary objective is to minimize the weighted sum of system energy consumption. This is achieved by establishing a joint optimization model that coordinates UAV flight trajectories, task splitting decisions, and the allocation of computation and communication resources, explicitly accounting for the dual uncertainties of task magnitude and flight positioning.  Methods  To address the non-convexity and high coupling of the optimization variables, the problem is first modeled as a Markov Decision Process (MDP). A comprehensive state space is defined to characterize real-time system dynamics, while a continuous action space is designed for trajectory control and resource management. A Distributionally Robust Optimization Soft Actor-Critic (DRO-SAC) algorithm is developed to solve this MDP. Within this framework, an ambiguity set is constructed based on the L1-norm distance to characterize the distributional uncertainty of task demands. A maximum entropy reinforcement learning mechanism is then employed to learn the optimal policy against the worst-case distribution within the ambiguity set. Through this approach, UAV trajectories and power allocation are jointly optimized to ensure system robustness against dynamic environmental fluctuations.  Results and Discussions  The performance of the proposed DRO-SAC algorithm is evaluated through simulation experiments.
It is observed that DRO-SAC achieves faster convergence and higher rewards compared to DDPG and PPO (Fig. 3). In terms of energy consumption, superior efficiency is consistently demonstrated by the proposed method under varying user densities (Fig. 4). Furthermore, the system's robustness against position errors is verified, with energy fluctuations maintained at a low level (Fig. 5). Finally, dynamic trajectory adjustments are visualized, confirming effective user coverage and energy minimization (Fig. 6).  Conclusions  A joint optimization framework based on DRO-SAC is proposed in this paper to address the uncertainties of task data size and UAV flight jitter in multi-UAV assisted MEC networks. By constructing an ambiguity set for task demand distribution and optimizing the worst-case expected objective, the limitations of traditional deterministic models in dynamic environments are effectively overcome. The weighted system energy consumption is minimized while satisfying latency and safety constraints. Finally, the superior convergence stability and energy efficiency of the proposed scheme are demonstrated through simulation results, even under conditions of limited resources and severe environmental fluctuations.
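The L1-norm ambiguity set described in the Methods above admits a compact numerical illustration: for a discrete empirical distribution of task sizes, the worst-case expected cost over an L1 ball around that distribution has a simple greedy solution, in which probability mass migrates from the cheapest outcomes to the costliest one. The sketch below is illustrative only and is not the paper's DRO-SAC implementation; the distribution, cost vector, and radius are hypothetical.

```python
import numpy as np

def worst_case_expectation(p, costs, eps):
    """Worst-case expected cost over the L1 ambiguity set
    {q : sum(q) = 1, q >= 0, ||q - p||_1 <= eps}.
    Greedy: move up to eps/2 of probability mass from the
    cheapest outcomes to the single most expensive outcome."""
    p = np.asarray(p, dtype=float)
    costs = np.asarray(costs, dtype=float)
    q = p.copy()
    budget = eps / 2.0            # total mass that may be moved
    worst = int(np.argmax(costs)) # receiver of the moved mass
    for i in np.argsort(costs):   # donors, cheapest first
        if i == worst or budget <= 0:
            continue
        move = min(q[i], budget)
        q[i] -= move
        q[worst] += move
        budget -= move
    return float(np.dot(q, costs)), q
```

For example, with nominal task-size distribution [0.5, 0.3, 0.2], costs [1, 2, 4], and radius 0.2, the nominal expectation 1.9 rises to a worst-case value of 2.2 as 0.1 of mass shifts to the largest task size; a DRO policy is trained against this shifted expectation rather than the nominal one.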
Secure Multi-Task Federated Panoptic Perception Algorithm for Connected Autonomous Vehicles
HUANG Xiaoge, CHEN Ming, TANG Yi, LIANG Chengchao, CHEN Qianbin
Available online  , doi: 10.11999/JEIT250749
Abstract:
With the rapid development of vehicular networks and deep learning, Connected Autonomous Vehicles (CAVs) are now capable of collecting image data from driving scenarios and leveraging Convolutional Neural Networks for feature extraction and processing, thereby enabling efficient perception of their surroundings. However, due to the inherent complexity of driving scenarios, single-task models struggle to address various perception demands. Moreover, the performance of deep learning models relies heavily on large-scale data, while the data collected by individual vehicles are insufficient for training models with generalization capabilities. Federated learning overcomes data silos by enabling CAVs to upload local model gradients instead of raw data to a central server for aggregation, which preserves data privacy. Therefore, we present a Secure Multi-Task Federated Panoptic Perception algorithm for vehicular network scenarios. First, a panoptic perception model is constructed to allow CAVs to execute multiple perception tasks simultaneously. In addition, a CAV selection strategy based on hybrid scoring is designed to select high-quality local models from vehicles. Finally, a global model aggregation scheme based on Shamir secret sharing is introduced to prevent data leakage in the event of server attacks or outages, employing secret sharing during the aggregation process. Simulation results validate the effectiveness of the proposed algorithm.
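The Shamir-based aggregation idea can be sketched with textbook secret sharing: each gradient value (quantized into a prime field) is split into polynomial shares, any threshold-sized subset reconstructs it, and because sharing is additive, shares can be summed so that only the aggregate is ever reconstructed. This is a minimal sketch, not the paper's protocol; the field prime, threshold, and secrets are illustrative.

```python
import random

P = 2**61 - 1  # Mersenne prime defining the share field (illustrative choice)

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):     # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The additive property is what makes this usable for federated aggregation: if two clients share secrets a and b at the same evaluation points, summing their shares coordinate-wise and reconstructing yields (a + b) mod P without revealing a or b individually.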
Multipath Scheduling Algorithm for UAV Video Streaming
CAO Changlong, LI Lingzhi, SHI Lianmin, ZHAO Qingyue
Available online  , doi: 10.11999/JEIT260002
Abstract:
  Objective   With the rapid development of the low-altitude economy, Unmanned Aerial Vehicle (UAV) technologies have been widely adopted in scenarios such as emergency rescue, disaster monitoring, and urban security. In these applications, achieving stable, low-latency, and high-fidelity video feedback is critical to mission success. Multipath transport protocols can leverage bandwidth aggregation to improve video Quality of Experience (QoE), thereby providing strong support for UAV video streaming. However, under dynamic and heterogeneous network conditions, the actual performance of multipath transport protocols is highly dependent on the design of multipath scheduling algorithms. To address the challenges posed by dynamic and heterogeneous networks, a variety of scheduling algorithms have been proposed. Heuristic-based schedulers employ carefully designed rules to mitigate head-of-line blocking and inter-path load imbalance to some extent, but their reliance on predefined strategies limits adaptability in highly dynamic environments. Learning-based schedulers, in contrast, continuously learn the mapping between network conditions and scheduling rewards from real-time feedback, enabling adaptive performance optimization. Nevertheless, most existing learning-based schedulers are designed for general network scenarios and are not specifically optimized for UAV networks, and their effectiveness in guaranteeing video QoE remains insufficiently validated. Therefore, there is a pressing need for a multipath scheduling algorithm tailored to UAV video streaming scenarios to fully exploit the performance potential of multipath transport protocols.  Methods   To address the dynamic and heterogeneous challenges faced by multipath transport protocols in UAV video streaming scenarios, this paper proposes NeuroFly, a multipath scheduling framework based on the NeuralUCB algorithm. 
In NeuroFly, multipath traffic scheduling is formulated as a Contextual Multi-Armed Bandit (CMAB) problem. A context space is constructed by integrating path state information, video encoding features, and UAV mobility parameters to accurately characterize the transmission environment. In the action space, a frame-priority-driven redundancy transmission mechanism is introduced, where video frames are assigned different transmission priorities according to decoding dependencies, and differentiated redundancy strategies are applied to improve the probability of successful frame delivery. Furthermore, a multi-objective reward function is designed to guide the learning of optimal scheduling policies, enabling adaptive optimization under dynamic and heterogeneous network conditions. In addition, to cope with abrupt environmental changes caused by high UAV mobility, a context monitoring mechanism is integrated into NeuroFly to detect network variations and trigger a two-stage restart strategy. Specifically, a soft restart is activated when gradual context drift is detected to remove outdated historical experience, while a hard restart is performed upon abrupt changes by clearing the replay buffer and reinitializing model parameters to restart learning under a new distribution.  Results and Discussions   The proposed NeuroFly framework is extensively evaluated in both simulation and real-world environments. First, Mininet-WiFi is employed to simulate realistic UAV network environments to evaluate the overall video QoE performance. The results (Fig. 4) indicate that, compared with state-of-the-art heuristic and learning-based schedulers, NeuroFly achieves comprehensive improvements by fully utilizing aggregated multipath bandwidth. 
Specifically, the 99th-percentile latency is reduced by 19.9-51.0%, the average video frame rate is increased by up to 24.6%, image structural similarity is improved by up to 49.2%, and the buffering time ratio is reduced by 13.4-77.6%, demonstrating its superior capability in guaranteeing video QoE. In addition, real-world experiments (Fig. 6) further confirm that, in real UAV operational scenarios, NeuroFly delivers favorable video QoE optimization compared to mature solutions already deployed at scale in production environments, demonstrating strong practical applicability and promise for large-scale deployment across diverse UAV operation scenarios.  Conclusions   This paper addresses the key challenges of dynamicity, heterogeneity, and high time variability faced by multipath transport protocols in UAV video streaming scenarios, and proposes an intelligent multipath scheduling framework based on the NeuralUCB algorithm, termed NeuroFly. In this framework, the multipath traffic scheduling problem is formulated as a CMAB problem. By carefully designing the context space, action space, and a multi-objective reward function, online learning and adaptive optimization of traffic allocation policies are achieved. In addition, to further enhance robustness under drastic environmental variations, a lightweight context monitoring mechanism is introduced to continuously detect context distribution drift and restart the learning process when necessary, thereby improving adaptability to abrupt environmental changes. Finally, systematic evaluations are conducted on both simulation platforms and real-world UAV operational environments to comprehensively assess the effectiveness of the proposed approach. Simulation results demonstrate that, compared with state-of-the-art heuristic and learning-based schedulers, NeuroFly achieves consistent improvements across video QoE metrics.
Real-world experimental results further indicate that, in real UAV operational scenarios, NeuroFly continues to provide favorable video QoE guarantees when compared with mature solutions that have been widely and long-term deployed in industrial practice. These results collectively validate the practicality, robustness, and engineering viability of NeuroFly, suggesting strong potential for large-scale deployment in UAV applications that are highly sensitive to real-time video quality, such as emergency response, power inspection, agricultural monitoring, and logistics delivery.
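NeuralUCB scores each candidate action with a neural network plus a gradient-based confidence bonus; its linear ancestor, LinUCB, conveys the same select-then-update loop in a few lines and may help fix ideas. The sketch below is illustrative of the CMAB principle only, not NeuroFly's scheduler; the arms, contexts, and rewards are hypothetical.

```python
import numpy as np

class LinUCB:
    """Linear special case of the UCB principle behind NeuralUCB:
    each arm (e.g., a candidate path/redundancy action) keeps a
    ridge-regression model and is scored by predicted reward plus
    an exploration bonus that shrinks with experience."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]   # ridge Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # per-arm reward model
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In a scheduler, `context` would gather path state, frame priority, and mobility features, while `reward` would be the multi-objective QoE signal; NeuralUCB replaces the linear model `theta @ context` with a network and the bonus with a neural-tangent-style confidence term.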
Optimal Weighted Subspace Fitting-based Direct Position Determination with HF/VHF Collaboration
YANG Gao-yuan, YIN Jie-xin, WANG Ding, YANG Bin
Available online  , doi: 10.11999/JEIT260001
Abstract:
  Objective   Passive localization is essential for target detection, navigation, and track tracking, particularly in military applications involving maritime and aerial targets. These targets often transmit across multiple frequency bands, including shortwave High Frequency (HF) and Very High Frequency (VHF). Existing localization methods largely rely on single-band approaches or two-step positioning techniques. Single-band methods underutilize the positional information available across different bands, while two-step methods lose information during intermediate parameter estimation (e.g., Direction-Of-Arrival (DOA) and Time-Difference-Of-Arrival (TDOA)), reducing localization accuracy. Collaborative fusion of HF signals (via ionospheric reflection) and VHF signals (via Doppler effects from moving arrays) has rarely been addressed. To overcome low positioning accuracy and limited spatial resolution in over-the-horizon multi-target scenarios, this study proposes a novel collaborative Direct Position Determination (DPD) method designed to integrate the complementary strengths of HF and VHF signals, enhancing localization precision and robustness in complex electromagnetic environments.  Methods  An Optimal Weighted Subspace Fitting (OWSF) DPD algorithm is proposed. Comprehensive signal propagation models are established for heterogeneous observation platforms (Fig. 1). HF signal propagation is modeled using a two-dimensional DOA framework based on ionospheric reflection, incorporating azimuth and elevation angles to handle nonlinear over-the-horizon propagation. VHF signals are modeled using a space-time extended signal framework for a moving Unmanned Aerial Vehicle (UAV), exploiting Doppler effects to create a virtual large-aperture array that captures both one-dimensional angle and Frequency-Of-Arrival (FOA) information.
Unlike traditional methods that process each band separately, the OWSF algorithm constructs a unified cost function that fuses the signal and noise subspaces of both HF and VHF data using optimal weighting matrices, balancing the contributions of different signal qualities. Target positions are then estimated by minimizing this cost function via grid search or Newton iteration. The Cramér-Rao Bound (CRB) under Earth-ellipsoid constraints is derived to provide the theoretical performance limit.  Results and Discussions   Simulations are conducted in a centralized processing scenario, where HF stations and UAV VHF signals are transmitted to a central station for joint processing (Fig. 2). The simulation involves three stationary targets and a collaborative system comprising HF stations and a UAV (Fig. 3, Table 2, Table 3). Performance comparisons demonstrate that the OWSF method consistently outperforms traditional two-step positioning methods and single-system DPD methods (DOA-only or FOA-only) in Root Mean Square Error (RMSE) (Fig. 4). When HF SNR is 5 dB lower than VHF SNR, OWSF exhibits superior robustness compared to Subspace Data Fusion (SDF) and Minimum Variance Distortionless Response (MVDR) methods, approaching the CRB at high SNR (Fig. 5). The impact of system parameters is further analyzed, showing that increasing the number of sampling points (Fig. 6) and array elements (Fig. 7) improves accuracy, particularly in low SNR regimes. Regarding spatial resolution, the OWSF algorithm generates sharper spectral peaks for distant targets and successfully resolves closely spaced targets that the SDF-DPD algorithm fails to distinguish (Fig. 8, Fig. 9).  Conclusions   The HF/VHF collaborative DPD method effectively integrates multidimensional observational information from ionospheric reflection and Doppler-based propagation. 
Simulation results demonstrate substantial improvements in localization accuracy, spatial resolution, and robustness, especially under low-SNR conditions or heterogeneous signal quality between bands. The derived CRB provides a solid theoretical benchmark, confirming that the method overcomes the limitations of single-band and two-step approaches. This approach offers a highly effective solution for over-the-horizon passive localization of multiple stationary targets.
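The subspace machinery underlying both the SDF baseline and the OWSF fusion can be illustrated with a single-band MUSIC pseudospectrum on a half-wavelength uniform linear array; the weighted fusion of HF and VHF subspaces into one cost function is the paper's contribution and is not reproduced here. The array size, source angle, and noise level below are hypothetical.

```python
import numpy as np

def music_spectrum(X, n_src, angles_deg):
    """MUSIC pseudospectrum for a half-wavelength ULA.
    X: (n_elements, n_snapshots) complex snapshot matrix."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : M - n_src]              # noise subspace
    spec = []
    for a in np.deg2rad(angles_deg):
        sv = np.exp(1j * np.pi * np.arange(M) * np.sin(a))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ sv) ** 2)
    return np.array(spec)
```

Sharp peaks of this spectrum mark candidate DOAs; a DPD formulation such as OWSF instead scans candidate positions directly, evaluating a fused subspace-fitting cost at each grid point so that no intermediate angle estimates are discarded.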
Recent Advances in Remote Sensing Image-Text Retrieval Driven by Vision–Language Foundation Models
WU Hui, ZHAO Yan, ZHANG Peirong, HOU Yingyan, QI Xiyu, WANG Lei
Available online  , doi: 10.11999/JEIT260189
Abstract:
  Significance   Remote sensing image–text retrieval (RS-TIR) connects massive Earth observation imagery with natural-language queries and has become an important interface for geospatial intelligence systems. Compared with conventional content-based retrieval, RS-TIR enables users to search scenes, objects, spatial layouts, and functional regions through semantic descriptions instead of handcrafted visual cues. This capability is increasingly needed in natural resource monitoring, urban governance, disaster response, environmental assessment, and on-demand retrieval from rapidly growing satellite archives. However, the task remains fundamentally challenging because remote sensing imagery is captured from a nadir or near-nadir perspective, exhibits strong rotation invariance, contains extreme scale variation from tiny vehicles to large airports, and often involves domain-specific semantic descriptions such as land-use attributes, spatial distributions, and geoscientific relations. Meanwhile, the amount of high-quality image–text annotation is still limited relative to the scale of remote sensing data. These properties enlarge the semantic gap between images and language and constrain the generalization ability of traditional cross-modal retrieval methods. Against this background, the review focuses on how vision–language foundation models (VLMs) reshape RS-TIR by introducing large-scale contrastive pre-training, stronger transferable representations, and more flexible multimodal interaction mechanisms. The review also clarifies why remote sensing adaptation is necessary and why a dedicated synthesis of architectures, datasets, alignment mechanisms, and future directions is timely for the field.  Progress   The technical development of RS-TIR is organized from three complementary perspectives. 
First, the review summarizes the domain-specific challenges that shape the task, including visually isotropic topology with extreme scale variation, professionalized and fine-grained textual semantics, and the compounded semantic gap between overhead imagery and natural-language descriptions (Fig. 3). The overall survey structure is then outlined to show the logical progression from task formulation to future challenges (Fig. 1). From the methodological timeline, RS-TIR evolves from handcrafted visual descriptors and shallow semantic mapping to deep representation learning, and then to VLM-driven paradigms with broader generalization and zero-shot transfer ability (Fig. 4, Table 2). Early methods rely on color, texture, shape, and hash-based retrieval, but they struggle to model high-level geospatial semantics and complex scene composition. Deep learning methods improve retrieval by learning joint embedding spaces, adopting dual-encoder or interaction-based architectures, and introducing multi-scale feature fusion and region-aware matching. These methods substantially enhance semantic consistency, yet they still depend heavily on labeled data and often suffer from limited robustness in open or cross-sensor scenarios. Second, the review summarizes the benchmark ecosystem used to evaluate these methods. Representative datasets span small-scale test sets such as Sydney-Caption and UCM-Caption, mainstream benchmarks such as RSICD and RSITMD, and recent large-scale training resources such as RS5M and SkyScript (Table 1). These datasets reveal a clear transition from small manually annotated corpora to web-scale or automatically generated image–text pairs, which in turn supports domain pre-training and larger model adaptation. Third, the review analyzes the core VLM techniques now driving progress in RS-TIR.
The model spectrum and representative architecture families, including contrastive dual-encoder models, multimodal interaction models, and remote sensing foundation models integrated with large language models, are summarized systematically (Fig. 5, Fig. 6, Table 3). Domain adaptation routes are further grouped into continued remote sensing pre-training, parameter-efficient transfer learning, adapter-based tuning, prompt learning, and instruction tuning. At the semantic alignment level, the review emphasizes contrastive joint embedding, fine-grained multi-scale alignment, and the incorporation of remote sensing priors such as spatial topology and geolocation. Performance comparisons on RSICD and RSITMD show that the introduction of remote sensing VLMs, especially RemoteCLIP, GeoRSCLIP, iEBAKER, and LRSCLIP, leads to consistent gains in mean Recall and overall retrieval robustness (Table 4). In parallel, the review also tracks the extension of retrieval capability into unified multi-task remote sensing models, where retrieval, grounding, segmentation, and reasoning begin to share a common multimodal representation space.  Conclusions  Several conclusions are drawn from the comparative analysis. First, VLMs establish a new dominant paradigm for RS-TIR because they significantly narrow the cross-modal semantic gap while improving transferability across datasets and scenes. Second, there is no universally optimal architecture: dual-encoder models remain attractive for large-scale retrieval because of their efficiency, whereas interaction-based or instruction-enhanced models offer finer semantic alignment at higher computational cost. Third, domain adaptation is indispensable.
Continued pre-training on remote sensing image–text corpora, parameter-efficient tuning, and prompt-based adaptation consistently outperform direct reuse of Internet-trained VLMs, indicating that remote sensing imagery differs too strongly from natural-image distributions to rely on generic pre-training alone. Fourth, the most effective recent methods do not improve performance through scale alone; they also exploit remote sensing-specific information, including multi-scale structures, foreground entities, explicit keyword reasoning, and spatial priors. Finally, the review shows that the field is shifting from isolated retrieval models toward more general geospatial multimodal systems. Retrieval is no longer treated only as a matching task, but also as a key capability supporting question answering, instruction following, knowledge augmentation, and coordinated reasoning in remote sensing applications.  Prospects   Future research is expected to move in four closely related directions. One direction is the unified representation of multi-source heterogeneous data, especially the integration of optical imagery with synthetic aperture radar, hyperspectral data, thermal infrared observations, and multi-temporal acquisitions. Another direction is knowledge-enhanced retrieval, where geospatial priors, land-use rules, remote sensing terminology, and external knowledge bases are incorporated into multimodal alignment and retrieval-augmented reasoning. A third direction is lifelong and open-world learning. Real deployment requires models to remain reliable under seasonal changes, sensor updates, regional domain shifts, cloud contamination, and newly emerging categories without catastrophic forgetting. The fourth direction concerns efficiency and deployability. 
Because practical remote sensing systems often operate under tight computational budgets, lightweight tuning, sparse computation, token reduction, model compression, and on-orbit or edge inference will become increasingly important. Interactive and explainable retrieval is also likely to grow in importance, allowing analysts to refine queries through dialogue and inspect the image regions or semantic cues that support retrieval decisions. Overall, continued progress in data construction, domain adaptation, semantic alignment, and efficient multimodal modeling is expected to make RS-TIR a more robust infrastructure capability for Earth observation applications.
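At inference time, the dual-encoder paradigm the review highlights reduces to cosine ranking between L2-normalized embeddings, with recall@K as the standard metric. The sketch below is generic CLIP-style retrieval, not any surveyed model; the toy embeddings are hypothetical.

```python
import numpy as np

def retrieve(img_emb, txt_emb, k=5):
    """Dual-encoder retrieval: L2-normalize both modalities, then rank
    images for each text query by cosine similarity."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sims = txt @ img.T                      # (n_text, n_image) similarities
    return np.argsort(-sims, axis=1)[:, :k] # top-k image indices per query

def recall_at_k(ranking, truth):
    """Fraction of queries whose ground-truth image appears in the top k."""
    return float(np.mean([t in row for row, t in zip(ranking, truth)]))
```

This separability is why dual encoders scale to archive-sized retrieval: image embeddings are precomputed once and each text query costs one matrix product, whereas interaction-based models must re-encode every image-text pair.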
Semi-passive Intelligent Reflecting Surface-assisted Integrated Sensing and Communication for Distributed and High-precision Joint Localization
HUANG Yi, XIONG Chaorui, TANG Xiaowei, SHI Yunmei
Available online  , doi: 10.11999/JEIT251039
Abstract:
  Objective   Integrated Sensing And Communication (ISAC) enables communication and sensing on a shared radio platform, supporting emerging applications such as autonomous driving and smart city infrastructure while improving spectral efficiency and reducing system cost. A key feature of ISAC systems is the reuse of communication signals for sensing and localization, which enables high-precision positioning without dedicated localization pilots. In semi-passive Intelligent Reflecting Surface (IRS)-aided ISAC systems, sensing performance is improved while low hardware complexity and power consumption are maintained. Compared with fully passive IRSs, semi-passive IRSs provide limited signal-processing capability for more flexible beam control, while avoiding the high hardware cost of fully active IRSs. In addition, a semi-passive IRS can cooperate with the sensing array at the Base Station (BS) to form a distributed sensing architecture. Through joint processing of the signals received at the BS and the IRS sensing arrays, the effective sensing aperture is enlarged, which improves the accuracy and robustness of channel-parameter estimation. However, existing studies mainly address fully passive or fully active IRSs in communication scenarios, whereas the sensing capability of semi-passive IRSs and their cooperation with BS arrays for high-precision localization remain insufficiently studied. Therefore, high-precision Three-Dimensional (3D) target localization under semi-passive IRS-assisted cooperative sensing is investigated.  Methods  A semi-passive IRS-assisted ISAC framework is proposed for cooperative 3D target localization. Sensing arrays are deployed at both the BS and IRS to jointly receive target-reflected Orthogonal Frequency Division Multiplexing (OFDM) signals, which are then delivered through reliable backhaul links to a central processor for joint processing. Two localization algorithms are proposed. 
The first is a parameter-decoupled two-step localization method. In this method, the Angle of Arrival (AoA) is first estimated by Fast Fourier Transform (FFT) with a refinement procedure, and the propagation delay is then estimated by the Spatial Smoothing MUltiple SIgnal Classification (MUSIC) algorithm. The target position is subsequently obtained by solving linear equations constructed from the estimated channel parameters and the geometric relationships among the arrays. The second is a Direct Position Determination (DPD) method, in which a maximum-likelihood optimization problem is formulated and a Newton-like algorithm is used to estimate the target position directly. By jointly using prior information, including spatial correlation among arrays, communication symbols, beamforming vectors, and IRS reflection coefficients, this method reduces the error propagation of the two-step localization method and improves localization accuracy and robustness. Furthermore, the Cramér-Rao Lower Bound (CRLB) for target-position estimation is derived under circularly symmetric complex Gaussian noise to provide a theoretical benchmark. Monte Carlo simulations are conducted to verify the proposed algorithms, examine the effect of the Rician K-factor on localization performance, and compare the proposed methods with conventional AoA/ToA-based localization methods.  Results and Discussions  Under the proposed semi-passive IRS-assisted ISAC framework, the two-step localization method achieves statistically efficient channel-parameter estimation, and its estimation error approaches the CRLB at high Signal-to-Noise Ratio (SNR) (Figs. 2-4). At low BS transmit power, severe path loss and noise distortion cause a clear gap between the Root Mean Square Error (RMSE) and the CRLB. As the transmit power increases, the sensing SNR increases and parameter-estimation accuracy is improved.
Because the target position in the two-step localization method is obtained from linear equations constructed from the estimated channel parameters and known array geometry, the final localization accuracy follows the same trend as the intermediate parameter-estimation performance. However, because of error propagation in the two-stage process, the localization error deviates more clearly from the CRLB (Fig. 5). Increasing the number of OFDM symbols improves localization accuracy, but also increases latency, which indicates a trade-off between accuracy and delay in practical systems. Compared with the two-step localization method, the DPD method achieves higher localization accuracy under the same number of OFDM symbols (Fig. 5). By jointly processing the signals received from all sensing arrays and directly optimizing the target position under the maximum-likelihood criterion, error propagation is effectively avoided. In addition, spatial correlation among arrays, communication symbols, beamforming vectors, and IRS reflection coefficients are fully used, which further improves estimation performance. For the same localization accuracy, the DPD method requires fewer OFDM symbols or lower transmit power than the two-step localization method, which shows clear advantages in latency and energy efficiency. Simulation results also show that both proposed methods benefit from a larger Rician K-factor (Fig. 6), because a stronger line-of-sight component suppresses multipath interference. This effect is more evident in the high-SNR region, where small-scale fading becomes the main factor limiting performance. Finally, compared with conventional AoA/ToA-based localization methods, the proposed methods provide better localization accuracy and robustness (Fig. 7).  Conclusions  A semi-passive IRS-assisted ISAC system is proposed for 3D cooperative localization with reduced localization pilot overhead. 
Two localization algorithms are developed: a low-complexity two-step localization method and a high-accuracy DPD method. The theoretical performance limit is established through derivation of the CRLB. Simulation results verify that the two-step localization method enables high-precision localization, whereas the DPD method provides better performance, and its RMSE approaches the CRLB at high SNR. Both methods also show good scalability and robustness. Future work will address multi-target scenarios and resource optimization.
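The linear-equation step of the two-step method, recovering a position from estimated angles and known array geometry, follows the classic least-squares intersection of bearing lines. The snippet below is a geometric sketch under ideal, noise-free bearings, not the paper's estimator, which additionally fuses delay estimates and handles measurement noise.

```python
import numpy as np

def locate_from_bearings(sensors, directions):
    """Least-squares intersection of bearing lines in 3D.
    Each sensor at p_i observes the target along unit vector u_i;
    stacking the orthogonal-projector equations
    sum_i (I - u_i u_i^T)(x - p_i) = 0 gives a linear system in x."""
    dim = sensors.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, u in zip(sensors, directions):
        u = u / np.linalg.norm(u)
        M = np.eye(dim) - np.outer(u, u)  # projector orthogonal to the bearing
        A += M
        b += M @ p
    return np.linalg.solve(A, b)
```

With noisy bearings the same system is solved in the least-squares sense, and the residual structure is exactly where error propagation enters the two-step method, which is what the DPD formulation avoids by optimizing the position directly.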
Cell-Free Joint Beamforming and AP–User/Target Association Optimization for Integrated Sensing and Communication
FANG Zhiyu, XIA Xiaochen, XU Kui, WEI Chen, XIE Wei, YE Zilü
Available online  , doi: 10.11999/JEIT250574
Abstract:
  Objective  Integrated Sensing And Communication (ISAC) is a key technology for Sixth-Generation (6G) networks. The cell-free architecture is a promising regional coverage paradigm for 6G. Cooperation among Access Points (APs) mitigates coverage imbalance, interference, and capacity limitations in conventional cellular systems, while enabling communication and sensing services for low-altitude targets with wide-area continuous coverage. However, existing studies on cell-free systems often rely on statistical channel models, which fail to capture realistic propagation characteristics in complex environments. The global Channel State Information (CSI) required for transmission optimization is difficult to obtain, and instantaneous CSI cannot be guaranteed due to the high mobility of low-altitude targets. To address these issues, a joint beamforming and AP–user/target association optimization method based on a Binary Radio Map (BRM) is proposed. The environmental information provided by the BRM is used to predict channels between APs and users/targets, thereby providing global channel information for joint optimization. On this basis, an ISAC satisfaction-based optimization model is constructed, and an iterative optimization algorithm for beamforming design and AP–user/target association is developed using a genetic algorithm.  Methods  First, the channels between APs and users/targets are predicted using environmental information derived from the BRM. An ISAC satisfaction-based optimization model is then established to unify communication and sensing performance. Due to the coupling between communication and sensing and the non-convex nature of the problem, the optimization problem is decomposed into two subproblems corresponding to communication and sensing beamforming. In each iteration, the beamforming design is reformulated as a Second-Order Cone Program (SOCP) to obtain beamforming matrices that maximize the satisfaction function. 
An iterative solution algorithm is applied to compute the communication and sensing beamforming matrices efficiently. Subsequently, based on the optimized satisfaction function, an AP–user/target association optimization method is designed using a genetic algorithm.  Results and Discussions  Simulation results verify the effectiveness of the BRM-assisted channel prediction and association optimization method. Compared with the conventional AP association method based on the shortest path, the proposed approach reduces the required transmission power by approximately 5 dBm while achieving higher user/target satisfaction (Fig. 7). As the transmission power increases, the satisfaction of users/targets gradually improves and approaches 1. In contrast, under the conventional scheme, a large gap remains between the maximum and minimum satisfaction values at the same transmission power (Fig. 8). When the transmission power is 40 dBm, the proposed method effectively reduces this disparity and balances performance among different users/targets. Although the null-space projection scheme leads to some degradation in sensing performance, the minimum received sensing power remains stable. This indicates that the overall system satisfaction is not affected and that sensing requirements are still satisfied (Fig. 9).  Conclusions  This study addresses the AP-user/target association problem in low-altitude airspace. The BRM is used to predict channels between APs and users/targets and to provide global channel information for joint optimization. By maximizing the minimum user/target satisfaction, ISAC beamforming is optimized, and AP-user/target association is iteratively refined using a genetic algorithm. Simulation results show that the proposed method effectively improves AP-user/target association and enhances integrated communication and sensing performance compared with existing approaches.
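As a rough illustration of the association step, the sketch below evolves binary AP-user/target association matrices with a genetic algorithm to maximize the minimum satisfaction. The gain matrix, the capped-sum satisfaction function, and all GA parameters (`gain`, `fitness`, `ga_associate`) are illustrative stand-ins, not the paper's BRM-derived model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for BRM-predicted channel quality between 4 APs and 3
# users/targets (hypothetical values; the paper derives these from the map).
gain = rng.uniform(0.1, 1.0, size=(4, 3))

def fitness(assoc):
    """Max-min satisfaction: each user's satisfaction is its summed serving
    gain, capped at 1; the objective is the worst user's value."""
    per_user = np.minimum(1.0, (assoc * gain).sum(axis=0))
    return float(per_user.min())

def ga_associate(pop_size=30, generations=60, p_mut=0.1):
    """Binary-coded genetic search over AP-user association matrices."""
    pop = rng.integers(0, 2, size=(pop_size, 4, 3))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        a, b = rng.integers(0, pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]
        # Uniform crossover with a shuffled partner, then bit-flip mutation.
        mask = rng.integers(0, 2, size=parents.shape).astype(bool)
        children = np.where(mask, parents, parents[rng.permutation(pop_size)])
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], float(fit.max())

best_assoc, best_fit = ga_associate()
```

In the paper the fitness evaluation of each candidate association would itself invoke the SOCP beamforming step; the fixed gain matrix here stands in for that inner optimization.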
Joint Optimization of Service Placement and Task Offloading for QoS Balancing in Satellite-Terrestrial Integrated Networks
DAI Cuiqin, WANG Hongyun, LIAO Rongpeng, CHEN Qianbin
Available online  , doi: 10.11999/JEIT251294
Abstract:
  Objective  Satellite-Terrestrial Integrated Networks (STIN) integrate multi-source and multi-dimensional services from terrestrial and satellite networks, providing wide coverage, large capacity, and flexible networking. These features support global coverage and ubiquitous access for diverse services. However, the dynamic topology and heterogeneous, resource-constrained nodes in STIN complicate service placement at satellite-terrestrial edge nodes. This further increases the difficulty of matching user service requests with edge computing resources during task offloading, making it difficult to satisfy Quality of Service (QoS) requirements. To address this issue, a joint optimization scheme for QoS-Balanced Service Placement and Task Offloading (BQSPTO) is proposed. The scheme integrates a Delay, Security, and Privacy-aware QoS (DSPQoS) evaluation model with satellite-terrestrial collaboration, inter-satellite cooperation, and service migration. It enables joint optimization of service placement and task offloading in a cloud-edge-end architecture, while satisfying task latency, security, and privacy requirements.  Methods  The proposed scheme integrates service placement, task offloading, and QoS evaluation into a unified framework. First, a cloud-edge-end collaborative STIN model is constructed, including terminal devices, terrestrial edge servers, satellite edge nodes, and cloud servers. Task security is quantified using the attack avoidance probability derived from key-cracking capability, and task privacy is characterized by usage-pattern privacy and location privacy. A DSPQoS evaluation model is established by combining task completion latency, attack avoidance probability, and privacy level. Second, a service placement strategy is designed based on task popularity prediction and service migration. 
A cloud-edge-end collaborative full offloading strategy is developed by determining offloading locations and multi-node cooperation modes according to QoS performance. Based on the service placement strategy and task offloading decisions, an optimization problem is formulated to maximize the total QoS performance under communication and computation resource constraints. Third, the joint optimization problem is decomposed into service placement and task offloading subproblems. A Non-dominated Sorting Genetic Algorithm II (NSGA-II) is applied to the service placement subproblem, while a hybrid Grey Wolf Optimization (GWO) and Whale Optimization Algorithm (WOA) is applied to the task offloading subproblem. Alternating optimization is employed to iteratively update both decisions and obtain the final solution.  Results and Discussions  The QoS performance of the proposed BQSPTO scheme is evaluated through MATLAB simulations. The cloud-edge-end collaborative task processing model (Fig. 2) and the overall BQSPTO framework (Fig. 3) are analyzed. The proposed scheme is compared with three baseline methods: GWOBQ (Grey Wolf Optimization Algorithm-based BQSPTO Scheme), BSSLM (BQSPTO Scheme Without Service Migration), and HWGWTO (Hybrid Grey Wolf Optimization with Whale Algorithm Fusion for Task Offloading). Results show that BQSPTO achieves faster convergence and better avoids local optima, resulting in higher QoS performance (Fig. 4). Compared with GWOBQ, HWGWTO, and BSSLM, the QoS performance is improved by approximately 2.1%, 5.4%, and 4.8%, respectively. As the number of tasks increases, QoS performance improves for all methods, while BQSPTO consistently achieves the highest performance (Fig. 5). Latency, security, and privacy metrics increase with task volume, and BQSPTO maintains superior performance across these metrics, although trade-offs appear due to multi-objective optimization (Fig. 6). 
QoS performance decreases as the number of malicious users increases, while BQSPTO shows stronger robustness and stability (Fig. 7). As satellite capacity increases, the number of deployable service types grows, and QoS performance improves for all methods. BQSPTO remains superior under different capacity settings (Fig. 8).  Conclusions  A joint optimization scheme for service placement and task offloading in STIN is proposed under multi-objective QoS constraints. The DSPQoS evaluation model integrates latency, security, and privacy into a unified evaluation framework. The joint optimization problem is decomposed and solved using alternating optimization, enabling effective coordination between service placement and task offloading. Simulation results demonstrate that the proposed scheme achieves higher QoS performance, better convergence stability, and improved multi-objective balance under varying task loads, malicious user scales, and satellite capacities.
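The DSPQoS idea of folding latency, security, and privacy into one score can be sketched as a weighted blend; the weights and the linear latency mapping below are illustrative assumptions, not the paper's evaluation model:

```python
def dspqos(latency, deadline, p_avoid, privacy, w=(0.4, 0.3, 0.3)):
    """Toy DSPQoS score: a weighted blend of a latency-satisfaction term,
    the attack-avoidance probability, and the privacy level (all in [0, 1]).
    The weights and the linear latency mapping are illustrative assumptions."""
    lat_score = max(0.0, 1.0 - latency / deadline)  # 1 if instant, 0 at deadline
    return w[0] * lat_score + w[1] * p_avoid + w[2] * privacy
```

A task finished instantly with certain attack avoidance and full privacy scores 1.0; missing the deadline with no protection scores 0.0.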
Context-Aware Fine-Grained Multimodal Emotion Recognition Based on Mamba
SUN Linhui, CHENG Leyang, YANG Xinyue, CHEN Shuaitong, LI Pingan, SHAO Xi
Available online  , doi: 10.11999/JEIT251307
Abstract:
  Objective  Multimodal Emotion Recognition (MER) aims to infer human emotional states by integrating speech and text signals. Existing MER methods often fail to use temporal and speaker context effectively and lack fine-grained intra- and inter-modal interaction modeling. These limitations reduce the ability to distinguish similar emotions. This study proposes a Context-Aware Fine-Grained Multimodal Emotion Recognition model based on the Mamba State Space Model (SSM), termed CA-FGMER-Mamba, to improve recognition accuracy in complex scenarios.  Methods  The CA-FGMER-Mamba model consists of five modules. First, text features are encoded using RoBERTa with explicit speaker identity injection and a three-segment contextual input. Audio features are extracted using OpenSMILE and reduced to 512 dimensions. Second, a Bidirectional Gated Recurrent Unit (Bi-GRU) integrates historical and future contextual dependencies. Third, intra-modal fine-grained filtering applies multi-head self-attention to emphasize key emotional cues and suppress redundancy. Fourth, inter-modal fine-grained fusion uses a Mamba SSM module to recalibrate features across time steps. This stage includes higher-order outer-product fusion, mean pooling, and a cross-modal interaction modulation module to adaptively adjust modality contributions. Finally, fused features are processed by a Bi-LSTM, followed by a self-attention layer and a fully connected network for classification. The model is optimized using a joint triplet loss and cross-entropy loss.  Results and Discussions  Experiments are conducted on the IEMOCAP and MELD datasets. On the IEMOCAP four-class task, CA-FGMER-Mamba achieves a Weighted Accuracy (WA) of 0.781 and an Unweighted Accuracy (UA) of 0.790, outperforming seven representative methods. On the six-class task, the model achieves a Weighted F1-score of 0.703 and shows strong performance in distinguishing similar emotions such as “happy” (0.646) and “excited” (0.803).
On the MELD dataset, the model achieves a Weighted F1-score of 0.665, indicating strong generalization. Ablation experiments confirm that combining intra-modal and inter-modal fusion improves performance.  Conclusions  The CA-FGMER-Mamba model addresses key limitations in existing MER methods by integrating context-aware modeling with fine-grained intra- and inter-modal fusion based on the Mamba SSM. The Bi-GRU with speaker identity enhances modeling of temporal and role-related context and alleviates recency bias. Intra-modal self-attention and Mamba-based inter-modal recalibration improve feature extraction and cross-modal interaction modeling, enabling accurate discrimination of similar emotions. The cross-modal interaction modulation module adaptively adjusts modality contributions and enhances robustness. Experimental results demonstrate strong performance in WA, UA, and Weighted F1-score, with good generalization. Future work will explore multi-scale interaction mechanisms, multi-task learning strategies, and noise-aware modeling to further improve fusion accuracy and robustness.
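A minimal sketch of the joint objective described above, assuming a Euclidean triplet margin and a fixed weighting `lam` (both illustrative; the paper's settings are not specified here):

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for one utterance (numerically stabilized)."""
    z = logits - np.max(logits)
    return float(np.log(np.exp(z).sum()) - z[label])

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-emotion embeddings together, push different ones apart."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return float(max(0.0, d_ap - d_an + margin))

def joint_loss(logits, label, anchor, pos, neg, lam=0.5):
    """Joint objective: classification loss plus a weighted triplet term."""
    return cross_entropy(logits, label) + lam * triplet_loss(anchor, pos, neg)
```

The triplet term vanishes once the negative is at least `margin` farther from the anchor than the positive, which is what sharpens the boundary between similar emotions such as "happy" and "excited".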
Near-Field Tomographic Imaging for Uplink Communication and Coordinate Reconstruction Algorithm
YIN Lannuo, WANG Yong
Available online  , doi: 10.11999/JEIT250715
Abstract:
  Objective  With the rapid evolution of 6G network technology, communication systems are advancing toward high bandwidth, low latency, and massive connectivity. Against this backdrop, integrated sensing and communications (ISAC), as a novel system architecture, enables wireless signals to perform dual functions—transmitting information while simultaneously sensing the environment—thereby providing more intelligent and efficient services for 6G networks. Environmental reconstruction, a core component of ISAC systems, aims to restore the true spatial structure of targets and scenes using echo signals. However, current environmental reconstruction techniques in practical applications still face the following three major challenges: First, in 6G communication systems, the dense deployment of base stations (BS) causes building targets to reside in the near-field region of the imaging system, leading to severe coupling among the range, azimuth, and elevation dimensions in tomographic imaging and resulting in significant discrepancies between the reconstructed target geometry and the actual shape. Second, because the positioning error of user equipment (UE) far exceeds the wavelength used by existing communication systems, traditional SAR imaging autofocus algorithms become ineffective, necessitating the development of new methods to circumvent the issues posed by positioning errors. Finally, conventional TomoSAR algorithms adopt a per-channel processing framework by independently generating Single-Look Complex (SLC) images for each channel; however, when each channel employs ISAR techniques to generate SLC images, inherent data discrepancies among the channels result in inconsistent translational compensation, which introduces phase errors during the elevation focusing process and ultimately leads to the occurrence of spurious targets in the imaging outcomes.
Methods  In this paper, we first propose applying the nonparametric translational compensation method originally developed for ISAR imaging to the generation of single-look complex (SLC) images, thereby effectively circumventing the adverse effects introduced by positioning errors. Existing ISAR-related literature typically assumes that the target adheres to a turntable model, yet the actual SAR imaging geometry diverges significantly from this idealized assumption. Based on the SAR imaging scenario, we rederive the mathematical mapping that links the ISAR tomographic imaging results to the target's true spatial coordinates. Leveraging this mapping, we formulate coordinate reconstruction as a system of nonlinear equations and propose a novel coordinate reconstruction method based on a particle swarm optimization (PSO) algorithm, ultimately achieving an accurate restoration of the target's true geometric shape. Furthermore, to address the inherent issue of inconsistent translational compensation among channels within traditional per-channel processing frameworks, we design a joint phase calibration tomographic imaging algorithm that employs a unified phase calibration strategy to eliminate inter-channel phase discrepancies, thereby markedly improving both the elevation focusing performance and the overall imaging quality.  Results and Discussions  We validate the proposed methods through simulation experiments on complex building targets under both ideal and non-ideal trajectory conditions, using the CD distance as the evaluation metric for coordinate reconstruction accuracy. The experimental results demonstrate that the CD distances under ideal and non-ideal trajectories are 1.34 and 1.54, respectively, indicating only a slight performance degradation under non-ideal conditions. Notably, imaging point clouds obtained under non-ideal trajectories exhibit evident point dropout.
A comparative analysis of the cumulative probability distribution curves of distance errors under the two trajectory conditions reveals that the overall distribution trends are very similar; significant differences in the probability distributions emerge only when the distance error exceeds 2 m. This observation indicates that, in terms of the CD distance evaluation metric, the primary discrepancies between imaging results obtained under ideal and non-ideal trajectories are concentrated in regions exhibiting point cloud dropout and in areas outside the main target. Hence, the influence of non-ideal trajectories is mainly manifested in the variation of scattering intensity distribution. Moreover, comparative experiments between the joint phase calibration framework and traditional algorithm frameworks show that conventional tomographic imaging methods exhibit marked stacking effects at different elevations, with false targets appearing at incorrect elevation levels. This behavior suggests that independently compensating for translational motion in each channel is prone to inducing inter-channel phase discrepancies, thereby severely impairing elevation focusing performance. In contrast, the incorporation of joint phase calibration yields a substantial improvement in imaging quality.  Conclusions  The experimental results validate the effectiveness of the proposed methods: by adopting the ISAR nonparametric translational compensation and the PSO-based coordinate reconstruction techniques, the true geometric shape of the target is successfully recovered. Moreover, the joint phase calibration strategy effectively eliminates the issue of false targets in elevation focusing that arises from conventional per-channel processing, thereby significantly enhancing both the elevation focusing capability and the overall image quality.
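In spirit, the coordinate reconstruction step reduces to finding a root of a nonlinear system by minimizing its squared residual with PSO. The sketch below uses a hypothetical two-equation system in place of the paper's ISAR-to-coordinate mapping; all solver parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(p, eqs):
    """Sum of squared residuals of the nonlinear system F(p) = 0."""
    return sum(f(p) ** 2 for f in eqs)

def pso_solve(eqs, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: fly particles toward personal/global bests."""
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([residual(p, eqs) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([residual(p, eqs) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Hypothetical stand-in system with a root at (x, y) = (1, 2):
eqs = [lambda p: p[0] ** 2 + p[1] - 3.0,   # x^2 + y = 3
       lambda p: p[0] + p[1] ** 2 - 5.0]   # x + y^2 = 5
coords = pso_solve(eqs, dim=2)
```

PSO is attractive here because the mapping need not be differentiable and the swarm tolerates the multiple local minima such systems exhibit.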
Research on Inverse QR Decomposition Optimization for Sparse Adaptive System Identification Algorithms
PENG Yi, ZHANG Pengfei, WANG Xiaoyong, GAO Junqi, LI Changlong, ZHANG Zhiyuan, SUN Tianxiang
Available online  , doi: 10.11999/JEIT250562
Abstract:
  Objective  The traditional sparse regularization recursive least squares algorithm, L1/L0 Norm Recursive Least Squares (L1/L0-RLS), demonstrates theoretical superiority in sparse parameter space estimation and has become a significant method in system identification and channel equalization. However, under limited numerical precision conditions, its covariance matrix iterative computation process can lead to successive accumulation of rounding errors, inducing divergence and instability in the least squares solution.  Methods  To address this issue, this paper proposes an improved algorithm based on the Inverse QR Decomposition (IQRD) framework. This framework not only effectively suppresses the accumulation of rounding errors in traditional regularized RLS algorithms, but also eliminates the back-substitution step required to extract the weight coefficients in traditional QR decomposition, thereby significantly improving the numerical robustness and system identification efficiency of the algorithm in finite precision environments. Specifically, this paper first systematically constructs the L1-IQRD-RLS and L0-IQRD-RLS algorithms under the L1/L0-constrained inverse QR decomposition architecture. Through theoretical derivation, a universal recursive expression for weight coefficients is obtained, and an innovative automatic parameter selection mechanism is introduced into the algorithm framework to solve the dynamic optimization problem of sparse regularization parameters.  Results and Discussions  To verify the effectiveness of the proposed algorithm in sparse constraints and robustness, Monte Carlo simulation experiments are used to quantitatively evaluate algorithm performance. The results show that L1-IQRD-RLS and L0-IQRD-RLS can maintain long-term numerical stability in a fixed-point computing environment with 11 decimal digits of precision.
Compared with traditional algorithms, they exhibit significant performance advantages in key indicators such as system sparse representation, parameter estimation variance, and covariance matrix condition number. Further verification on actual test data confirms that the improved algorithm can maintain numerical stability even in environments with limited accuracy, significantly improving its robustness compared to traditional methods. Results on measured data show that the regularized RLS algorithm improved by the inverse QR framework exhibits significant advantages in key indicators such as system sparsity representation, parameter estimation, and numerical stability. Its iterative convergence success rate is significantly improved compared to traditional methods.  Conclusions  This paper focuses on the issue of sparse system identification in the field of adaptive filtering. Currently, traditional sparse-regularized RLS algorithms still face challenges in numerical stability under limited numerical precision. To address this problem, this study proposes constructing an inverse QR decomposition framework to overcome the numerical ill-conditioning caused by successive rounding errors in sparse-regularized RLS algorithms. This approach significantly enhances the algorithm's numerical robustness in low-precision environments. Additionally, it innovatively introduces an automatic parameter selection mechanism into the algorithm framework, effectively eliminating the need for repeated parameter tuning and ensuring stable performance optimization through sparse constraints. In practical electromagnetic signal processing, tasks such as system identification and beamforming are constrained by the finite precision of hardware implementation and often face the inherent sparsity characteristics of the system itself.
This paper's algorithm provides targeted solutions: its enhanced finite word-length robustness effectively suppresses numerical divergence in adaptive weight updates, ensuring stable implementation on fixed-point processors; meanwhile, the introduced sparse constraints naturally align with the physical structure of sparse arrays, improving the accuracy of algorithm estimation results. This research offers a practical algorithmic approach for achieving high-performance, high-stability sparse-constrained systems on precision-limited hardware platforms.
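To convey why factor propagation resists rounding-error buildup, here is a minimal square-root RLS sketch in the QRD form. It is a simplified illustration, not the proposed IQRD algorithm, which propagates the inverse factor with Givens rotations and thereby also avoids the final back-substitution; the toy system and parameters are assumptions:

```python
import numpy as np

def qr_rls(X, d, lam=0.99, delta=1e-2):
    """Square-root RLS sketch: propagate the Cholesky factor R of the
    exponentially weighted correlation matrix via one QR step per sample,
    instead of recursing on the covariance itself (the source of rounding
    error accumulation in plain RLS)."""
    n = X.shape[1]
    R = np.sqrt(delta) * np.eye(n)        # initial factor (regularization)
    z = np.zeros(n)                       # rotated desired-response vector
    for x, dk in zip(X, d):
        stacked = np.vstack([
            np.sqrt(lam) * np.column_stack([R, z[:, None]]),
            np.append(x, dk)[None, :],
        ])
        # One QR step re-triangularizes the weighted data stack.
        Rz = np.linalg.qr(stacked, mode="r")
        R, z = Rz[:n, :n], Rz[:n, n]
    return np.linalg.solve(R, z)          # back-substitution -> weights

# Identify a sparse 6-tap system from noiseless toy data.
rng = np.random.default_rng(3)
w_true = np.array([0.0, 1.0, 0.0, 0.0, -0.5, 0.0])  # sparse impulse response
X = rng.standard_normal((400, 6))
w_hat = qr_rls(X, X @ w_true, lam=1.0, delta=1e-8)
```

Because R is updated orthogonally, its condition number is the square root of that of the correlation matrix, which is the property the IQRD framework exploits on fixed-point hardware.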
A Physics-Constrained Deep Learning Framework for High-Fidelity Sea Clutter Generation under Small-Sample Conditions
SUN Dianxing, LIU Xinliang, LIU Ningbo, DING Hao, YU Hengli, SONG Guanglei
Available online  , doi: 10.11999/JEIT250697
Abstract:
  Objective  The verification and validation of radar target detection algorithms, particularly in maritime surveillance, rely heavily on the availability of high-fidelity synthetic sea clutter data. However, generating realistic sea clutter under high sea-state conditions (e.g., Sea State 4 and above) is a significant challenge due to the non-stationary and non-Gaussian nature of the signal. Traditional statistical models often fail to capture the complex time-frequency characteristics of such data, especially when direct measurement is difficult or unavailable. A novel framework is proposed that combines a complex-valued generative adversarial network with physics-constrained learning and an adaptive transfer learning mechanism to address the issue of small-sample sea clutter generation. The primary goal is to develop a robust and efficient method for generating high-quality synthetic sea clutter data that closely mimics real-world conditions, thereby providing a reliable data foundation for the development and testing of advanced radar systems.  Methods  The proposed framework integrates a Complex Variational Autoencoder Wasserstein Generative Adversarial Network (CVAE-WGAN) with a transfer learning strategy to address the challenge of generating high-fidelity sea clutter data under small-sample conditions. The model operates in the complex domain to jointly process in-phase and quadrature components, preserving the orthogonality and phase relationships of the signal. An Amplitude-Phase Attention (APA) module is introduced to enhance the joint modeling of amplitude and phase, while complex residual blocks are designed to improve gradient propagation and training stability. A physics-constrained loss function system, comprising a time-frequency ridge loss and a Doppler band loss, is implemented to guide the generation process to align with the physical characteristics of sea clutter.
To handle data scarcity, an adaptive transfer learning mechanism based on Kullback-Leibler Divergence (KLD) is employed to dynamically adjust the model during fine-tuning in target domains, enabling efficient knowledge transfer across different sea-state scenarios.  Results and Discussions  The performance of the proposed CVAE-WGAN framework is evaluated using real-world sea clutter datasets, demonstrating its effectiveness in generating high-fidelity synthetic data. In the source domain (Sea State 4), the generated data closely matches real measurements in terms of amplitude statistics (PDF-CS = 0.872) (Fig. 5), temporal correlation (ACF-CS = 0.9382) (Fig. 7), and time-frequency characteristics (SPEC-RMSE = 4.5379 dB) (Fig. 6). The time-frequency ridge accuracy reaches 95.2% (|z| ≤ 1) (Fig. 10). The adaptive transfer learning mechanism is validated by applying the pre-trained model to a more challenging scenario (Sea State 5) with only 20% of the target domain samples. The generated clutter maintains a strong fit to the empirical amplitude distribution (PDF-CS = 0.8448) (Fig. 11, Table 2) and exhibits good autocorrelation properties (ACF-CS = 0.9557) (Fig. 12, Table 2), with time-frequency ridge accuracy at 95.24% (|z| ≤ 1) (Fig. 14, Table 2). Ablation studies reveal that the Amplitude-Phase Attention (APA) module is critical for joint amplitude and phase modeling, as its removal significantly degrades performance (e.g., PDF-CS drops 17.3%, SPEC-RMSE increases 35.0%) (Table 1). The method proves stable even with as little as 15% of the target data (PDF-CS > 0.6, ridge accuracy at |z| ≤ 1 above 82%) (Table 3), underscoring its suitability for data-scarce environments.  Conclusions  This study presents a novel framework for generating high-fidelity sea clutter data under small-sample conditions, combining a complex-valued generative adversarial network with physics-constrained learning and an adaptive transfer learning mechanism.
The proposed CVAE-WGAN model, guided by a sophisticated loss function system, demonstrates a strong capability to capture both the statistical and physical properties of high sea-state environments. The integration of the KLD-based transfer learning mechanism significantly enhances the model's adaptability, enabling high-quality data generation even with limited target domain samples. By addressing the challenge of small-sample sea clutter generation, this framework provides a reliable and robust data foundation for the development and testing of advanced radar anti-clutter and anti-jamming algorithms. Future work will focus on further optimizing the framework for extreme data scarcity and exploring its application in other non-stationary radar signal scenarios.
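A minimal sketch of a KLD-driven adaptation rule: the divergence between source and target amplitude histograms scales the fine-tuning step. The functional form and constants (`k`, `cap`) are illustrative assumptions, not the paper's mechanism:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Discrete KL divergence between two normalized histograms."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def finetune_scale(src_hist, tgt_hist, k=0.5, cap=4.0):
    """Hypothetical adaptation rule: scale the fine-tuning step with the
    source/target amplitude-histogram divergence, up to a cap."""
    return min(cap, 1.0 + k * kld(tgt_hist, src_hist))
```

When the target sea state resembles the source, the scale stays near 1 and the pre-trained weights are barely disturbed; a large divergence triggers a more aggressive update, capped for stability.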
A Multi-layer Resilient Control Framework for Networked Microgrids against False Data Injection Attacks
HUANG Yu, CAO Zhengyang, HU Songlin, YUE Dong, CHEN Yonghua, YAN Yunsong
Available online  , doi: 10.11999/JEIT250850
Abstract:
  Objective  With the increasing penetration of renewable distributed energy and the growing reliance on cyber-physical infrastructures, networked microgrids (NMGs) have become highly vulnerable to false data injection attacks (FDIAs), which threaten frequency stability and system security. Traditional secondary control methods, constrained by limited communication resources and rigid sampling mechanisms, struggle to ensure resilient operation when facing stealthy cyber-attacks and dynamic disturbances. To address these challenges, it is imperative to design control frameworks that can jointly optimize communication efficiency, enhance attack detection, and guarantee rapid stability recovery. This study therefore develops a multi-layer resilient control strategy that integrates event-triggered communication, data-driven observation, and deep reinforcement learning, aiming to provide an effective solution for securing the stability of NMGs against sophisticated cyber threats.  Methods  The proposed ETC–RBF–DRQL framework integrates an event-triggered communication mechanism, a radial basis function (RBF) observer, and a deep reinforcement Q-learning (DRQL) compensator to achieve resilient frequency control in networked microgrids under false data injection attacks (FDIAs). The event-triggered scheme reduces redundant data transmission while maintaining stability, and the RBF observer estimates system states and detects anomalous deviations. Upon detection, the DRQL module adaptively generates compensation signals to suppress attack impact and restore system stability. The framework is mathematically formulated within a modularized dynamic model of networked microgrids, ensuring provable stability under communication and attack constraints.
Simulation experiments are conducted on a 4-node distributed microgrid testbed in MATLAB/Simulink, including diverse renewable energy sources and realistic communication links, to validate the effectiveness and scalability of the proposed approach.  Results and Discussions  The proposed ETC–RBF–DRQL framework is validated on a 4-node networked microgrid under FDIA scenarios. Simulation results show that the method achieves superior overall performance in frequency regulation, communication efficiency, and attack resilience. Specifically, the frequency deviation peak is reduced from 0.0218 Hz under periodic PI control to 0.0121 Hz, while the steady-state average deviation and fluctuation standard deviation are reduced to 0.0097 Hz and 0.0074 Hz, respectively (Fig. 4, Table 2). Meanwhile, the average communication event rate decreases to 11.9 pkt·s⁻¹, corresponding to a 76.2% reduction compared with periodic sampling (Table 2). In addition, the proposed framework maintains reliable attack detection performance, with a detection rate of 91.5%, a false alarm rate of 4.8%, and an AUC of 0.968 (Table 2). These results demonstrate that the proposed method can effectively coordinate frequency recovery, communication saving, and FDIA mitigation in networked microgrids.  Conclusions  This paper investigates a multi-layer resilient control framework for networked microgrids under FDIAs and communication constraints. The proposed ETC–RBF–DRQL method integrates event-triggered communication, RBF-based attack detection, and dual Q-learning-based adaptive compensation, thereby achieving closed-loop coordination of anomaly detection, attack suppression, and frequency stability recovery. Simulation results on a 4-node networked microgrid demonstrate that, compared with traditional PI-based schemes, the proposed approach significantly reduces frequency deviation peaks and shortens recovery time, while effectively lowering communication overhead.
Theoretical analysis further confirms its feasibility and stability under bounded estimation errors. Nevertheless, this study focuses on sensor-side FDIAs and simplified communication conditions; future work will extend to more complex multi-type attacks and hardware-in-the-loop validation to advance engineering applications.
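The event-triggered idea can be sketched with a send-on-delta rule: transmit only when the measurement drifts beyond a threshold from the last transmitted value. The trace, threshold, and trigger form below are illustrative, not the paper's stability-preserving condition:

```python
import numpy as np

def event_triggered(signal, thresh):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than `thresh`; the receiver holds the last value."""
    sent, last = [], None
    recon = np.empty_like(signal)
    for i, x in enumerate(signal):
        if last is None or abs(x - last) > thresh:
            last = x
            sent.append(i)
        recon[i] = last
    return sent, recon

# Toy frequency-deviation trace (Hz); amplitude and threshold are illustrative.
t = np.linspace(0.0, 1.0, 500)
f = 0.02 * np.sin(2 * np.pi * 2 * t)
sent, recon = event_triggered(f, thresh=0.002)
saving = 1.0 - len(sent) / len(f)   # fraction of transmissions avoided
```

The reconstruction error at the controller is bounded by the trigger threshold, which is the property that lets the closed-loop stability analysis absorb the missed samples while cutting traffic sharply.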
Available online  , doi: 10.11999/JEIT251084
Abstract:
A Risk-modulated Learning Framework for Physical-layer RFID Authentication under Dynamic Interference
WU Haifeng, YU Wenbo, ZENG Yu, YANG Jiangfeng
Available online  , doi: 10.11999/JEIT251108
Abstract:
  Objective  Dynamic interference and metallic reflections severely affect the reliability of coupled Radio Frequency IDentification (RFID) authentication. Conventional static models cannot adapt to time-varying noise and multipath effects, which leads to unstable recognition. To address this problem, this paper proposes a Risk-Modulated Learning Identification Framework (RMLIF) that integrates stochastic channel modeling, adaptive risk regulation, and risk-regularized classification. The aim is to achieve stable and interpretable physical-layer authentication under nonstationary interference, thereby improving the anti-counterfeiting reliability of RFID systems.  Methods  A Stochastic Differential Equation (SDE)-based coupled channel model is first established to jointly characterize drift, diffusion, and impulsive interference (Eq.(1)), and the existence and uniqueness of its solution are proved. A Target-Driven Adaptive Risk (TDAR) algorithm is then designed to dynamically adjust physical-layer parameters based on the Recognition Risk Index (RRI). The RRI is derived from classification posterior probabilities (Eq.(3)), and its exponential mapping to the Signal-to-Interference-plus-Noise Ratio (SINR) is characterized analytically (Eq.(11), Fig. 3), which enables real-time risk estimation and closed-loop control. For feature representation, a difference-based compressive feature modeling method is used to capture the perturbation between normalized and reference signals (Fig. 1), and Theorem 1 establishes the stability of the compressed mapping. Parallel steady-state and perturbation feature paths are further designed (Table 1), and their joint robustness is proved in Corollary 4. In addition, the framework shows that TDAR regulation is equivalent to a risk-regularized classification process (Theorem 3), which effectively enlarges the classification margin without modifying the classifier structure.  
Results and Discussions  Theoretical analysis derives the generalization error bound, sample complexity, and robustness limits (Theorems 4–7), showing that filtering high-risk samples reduces redundancy and improves learning efficiency. The Asymptotic Real Risk Index (ARRI) is further defined to explain long-term convergence and structural self-consistency (Theorem 8). Experiments conducted on a USRP N2000 platform (Table 3) use six types of EPC C1 Gen2 tags under four interference conditions, namely no copper plate and small, medium, and large copper plates (Fig. 4). Compared with conventional methods, including Coupling_14, Hu_Fu, CNN_Vgg19, and PCFM, the corresponding RMLIF-enhanced versions achieve clear gains in classification accuracy (Fig. 5). In all no/small/medium/large copper-plate interference scenarios, the proposed framework achieves accuracy above 90%, with an average improvement of 10%–20% over traditional methods. PCFM_RMLIF achieves the best overall performance. PCA visualization confirms the stability of the compressed features (Fig. 6) and the clearer class separation after risk regulation (Fig. 7). The TDAR algorithm converges rapidly, generally within two iterations (Fig. 9). As the effective sample ratio and feature dimension increase, the RRI decreases monotonically (Fig. 10), in agreement with Theorem 6. Entropy analysis (Fig. 11) shows that risk regulation reduces system uncertainty and improves stability. Cross-condition tests further verify the robustness and generalization ability of the framework (Fig. 12).  Conclusions  This paper develops a unified risk-modulated learning framework for physical-layer RFID authentication under dynamic interference. The RMLIF framework combines SDE-based channel modeling, adaptive TDAR regulation, and compressive feature reconstruction into a closed-loop mechanism that links physical signals with recognition risk.
Both theoretical analysis and experimental results show that risk-driven regulation effectively suppresses disturbance, improves feature separability, and reduces generalization error. The proposed approach achieves high accuracy, rapid convergence, and strong robustness, and provides an effective solution for dynamic RFID anti-counterfeiting authentication.
Pearson Correlation Fusion Sensing Method for Noncircular Signals
LAI Huadong, LIN Cong, LUO Peng, XU Jinqiang, LIU Mingxin, XU Weichao
Available online  , doi: 10.11999/JEIT251247
Abstract:
  Objective  With the rapid growth of wireless devices and communication services, spectrum resources have become increasingly scarce. Spectrum sensing, as a fundamental function of cognitive radio, enables dynamic spectrum access and improves spectrum utilization efficiency. However, conventional spectrum sensing methods based on circular signal assumptions cannot effectively detect noncircular signals. In addition, some detectors designed for noncircular signals show degraded performance under low signal-to-noise ratio (SNR) or limited sample conditions. To address these limitations, a nonparametric spectrum sensing scheme based on the Weighted Pearson Correlation Coefficient (WPCC) is proposed. The scheme applies a linear fusion strategy to the real-valued composite coherence matrix, which captures the second-order statistical characteristics of noncircular signals.  Methods  The WPCC detector constructs a real-valued composite observation vector and computes the corresponding composite coherence matrix. Pearson Correlation Coefficients (PCCs) are extracted from this matrix to characterize the statistical properties of noncircular signals. The first two product moments of squared sample PCCs are derived, and optimal fusion weights are obtained based on the deflection coefficient. The true PCCs are approximated by their sample estimates to obtain data-driven fusion weights that do not require prior knowledge of sensing channels. These weights are then linearly combined with the squared sample PCCs to construct the WPCC test statistic, thereby exploiting the spatial diversity of sensing antennas. The final decision is made by comparing the WPCC statistic with a sensing threshold determined by the specified false alarm probability. Specifically, a WPCC value below the threshold indicates the null hypothesis of an idle frequency band, whereas a value above the threshold indicates the alternative hypothesis that the frequency band is occupied by primary users.  
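A minimal sketch of the WPCC construction described above, assuming self-normalized squared sample PCCs as the data-driven fusion weights (the paper derives its weights from the deflection coefficient, whose exact form is not reproduced here):

```python
import numpy as np

def wpcc_statistic(X):
    """Illustrative WPCC test statistic for an M-antenna, N-sample complex
    observation matrix X: stack real and imaginary parts into a real-valued
    composite vector, form the composite coherence (PCC) matrix, and
    linearly fuse the squared sample PCCs. Here each squared PCC is
    weighted by itself (an assumed stand-in for the deflection-based
    weights), so strong correlations are emphasized over weak ones."""
    M, N = X.shape
    Z = np.vstack([X.real, X.imag])            # real-valued composite observations (2M x N)
    R = np.corrcoef(Z)                         # composite coherence matrix of sample PCCs
    iu = np.triu_indices(2 * M, k=1)
    rho2 = R[iu] ** 2                          # squared sample PCCs
    w = rho2 / (rho2.sum() + 1e-12)            # data-driven fusion weights (assumed form)
    return float(np.sum(w * rho2))

rng = np.random.default_rng(0)
N = 80
noise = rng.standard_normal((4, N)) + 1j * rng.standard_normal((4, N))
s = rng.standard_normal(N)                     # real-valued (noncircular) primary signal
occupied = s[None, :] * np.ones((4, 1)) + 0.3 * noise
print("idle    :", wpcc_statistic(noise))      # small under the null hypothesis
print("occupied:", wpcc_statistic(occupied))   # large when the band is occupied
```

The decision rule then compares the statistic with a threshold set by the specified false alarm probability, exactly as described above.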
Results and Discussions  Simulation experiments evaluate the sensing performance of the proposed nonparametric WPCC-based method (Algorithm 1) in terms of sensing probability, deflection coefficient, Receiver Operating Characteristic (ROC) curve, and Area Under the Curve (AUC), with comparisons to NCLMPIT, NCAGM, NCHDM, and NCJT. The numerical results show that the proposed method outperforms the compared detectors under various simulation conditions. In particular, the WPCC detector achieves the highest sensing probability and exhibits superior performance at low false alarm probabilities of 0.05 (Fig. 2), 0.01 (Fig. 3(a)), and 0.005 (Fig. 3(b)), with sample sizes not exceeding 100. In addition, the proposed method shows clear advantages under different numbers of antennas (Fig. 4), different noise variance conditions (Fig. 5), and different levels of correlation strength (Fig. 6). The applicability of the WPCC method to circular signals is also demonstrated by its high sensing probability for QPSK and 16PSK signals (Fig. 7). The superior overall performance of the proposed detector is further confirmed by higher deflection coefficient curves and ROC curves (Figs. 8, 9). The largest AUC values quantitatively demonstrate its overall optimality among all considered methods (Table 1). These results indicate strong robustness under low SNR and small-sample conditions.  Conclusions  A Pearson correlation fusion sensing method for noncircular signals is proposed based on the real-valued composite covariance representation and the Locally Most Powerful Invariant Test (LMPIT) framework. By combining optimal fusion weights derived from sample PCCs with a linear weighting scheme, the method fully exploits second-order statistical information. It enhances strongly correlated components while suppressing weak correlations and noise interference. Analytical expressions for the false alarm probability and sensing threshold are derived. 
Both theoretical analysis and simulation results show that the proposed method achieves superior performance compared with existing noncircular signal sensing methods in terms of sensing probability, deflection coefficient, ROC curve, and AUC.
Entropy-driven Adaptive Fusion Network for Scene Classification of High-Resolution Remote Sensing Images
SONG Wanying, LIU Yuchen, WANG Jie, WANG Anyi
Available online  , doi: 10.11999/JEIT251147
Abstract:
  Objective  Remote sensing image scene classification is intended to assign semantic labels to aerial or satellite images. With the rapid development of Earth observation technologies, high-resolution remote sensing images provide abundant detail but also present major challenges, including complex spatial structures, large scale variations, high intra-class variance, and strong inter-class similarity. Traditional Convolutional Neural Networks (CNNs) have achieved notable success in local spatial modeling, but they cannot adequately capture long-range dependencies because of their fixed receptive fields. To address this limitation, CNN-Transformer hybrid architectures have been proposed to balance local detail and global semantics. However, these models usually adopt simple concatenation for multi-scale feature fusion, which introduces redundancy and reduces discriminability. In addition, although the Swin Transformer uses window-based self-attention to capture contextual information, it still shows clear limitations in the analysis of complex high-resolution images. Specifically, long-range dependency modeling across windows is constrained by the fixed window size. The extraction of fine-grained local features is also limited because deep networks tend to overlook crucial fine-texture information from low- and mid-level features. Moreover, existing multi-level feature fusion strategies lack semantic guidance and therefore readily introduce background noise. Therefore, a network that can balance global contextual modeling and local discriminability while enabling adaptive fusion is still needed.  Methods  To address limited cross-window interaction and the absence of semantic guidance in multi-level feature fusion, an Entropy-driven Adaptive Fusion Swin Transformer (E-AF-ST) network is proposed. 
The architecture uses a lightweight Swin-Tiny backbone and incorporates two key modules: the Attention-guided region Selection and feature Optimization module (ASO) and the Entropy-driven Gated Fusion module (EGF) (Fig. 1). The ASO module addresses weak cross-window interaction and insufficient fine-grained feature extraction in the Swin Transformer through three consecutive stages (Fig. 2a). First, cross-window sparse attention is computed to remove physical window boundaries. By enlarging the patch partition size, sparse attention is applied to the entire image sequence, allowing global contextual correlations across the whole image to be captured. Second, dynamic region selection is performed. On the basis of pixel-level entropy measurement, a multilayer perceptron maps entropy features to attention scores, and a Top-k masking strategy dynamically selects the most informative discriminative regions. Third, recursive feature optimization is performed. Multi-head self-attention and layer normalization are applied at the local scale to progressively enhance boundaries and microstructural information. The EGF module then integrates the Swin Transformer output features, the globally enhanced contextual features, and the locally optimized features to reduce semantic discrepancies (Fig. 2b). First, energy normalization is performed using the Frobenius norm to obtain a probabilistic energy distribution. Next, an entropy-driven gated fusion mechanism calculates the Shannon entropy for each branch. A learnable soft-normalization gating function then maps the entropy information to normalized fusion weights, automatically reducing the weight of branches with high entropy caused by cluttered backgrounds. Finally, the fused representations undergo lightweight recursive optimization using depthwise separable convolutions and GELU activation functions with residual connections to suppress redundant information. 
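The entropy-driven gating in the EGF module can be illustrated with a minimal NumPy sketch; the softmax-over-negative-entropy gate and its temperature `tau` are assumed forms standing in for the paper's learnable soft-normalization function:

```python
import numpy as np

def entropy_gated_fusion(branches, tau=1.0):
    """Illustrative EGF-style fusion of feature maps from several branches.
    Each branch is normalized by its Frobenius-norm energy into a
    probabilistic distribution, its Shannon entropy is computed, and a
    softmax over negative entropy (an assumed gating form) down-weights
    high-entropy, cluttered branches before the weighted sum."""
    entropies = []
    for F in branches:
        p = (F ** 2) / (np.linalg.norm(F) ** 2 + 1e-12)    # energy distribution
        entropies.append(-np.sum(p * np.log(p + 1e-12)))   # Shannon entropy
    e = np.array(entropies)
    w = np.exp(-e / tau)
    w = w / w.sum()                                        # normalized fusion weights
    fused = sum(wi * F for wi, F in zip(w, branches))
    return fused, w

peaked = np.zeros((4, 4)); peaked[1, 2] = 1.0   # low-entropy, informative branch
flat = np.ones((4, 4))                          # high-entropy, cluttered branch
fused, w = entropy_gated_fusion([peaked, flat])
print("fusion weights:", w)                     # the peaked branch dominates
```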
The forward propagation process is systematically summarized in Algorithm 1.  Results and Discussions  To validate the discriminative capability of the proposed network, extensive experiments were conducted on two widely used public datasets, AID and NWPU-RESISC45. The proposed E-AF-ST network shows superior classification performance compared with existing advanced methods (Table 1). On the AID dataset, the model achieves state-of-the-art overall accuracies of 95.56% and 97.21% at training ratios of 20% and 50%, respectively. On the challenging NWPU-RESISC45 dataset, it achieves the highest accuracies of 92.45% and 94.59% at training ratios of 10% and 20%, respectively. The confusion matrices show that the recognition accuracy of most categories exceeds 95% (Figs. 3, 4), and the misclassification proportions for classes with complex backgrounds are significantly lower than those of the baseline model (Table 2). Visual analysis based on Grad-CAM further confirms the advantages of the E-AF-ST network in global contextual modeling and critical region selection. Compared with the Swin-Tiny baseline, the proposed network demonstrates more precise semantic focus (Fig. 5). In “airport” and “port” scenes, background noise is effectively suppressed and key targets are accurately highlighted. In structurally complex scenes such as “viaducts” and “railway stations”, extension directions and texture characteristics are comprehensively captured. Ablation experiments confirm that the cross-window sparse attention in the ASO module and the dynamic weight allocation in the EGF module are highly complementary. Furthermore, this performance gain is achieved with only a minimal increase in model complexity, with a total of 30.45M parameters and 4.72 GFLOPs.  
Conclusions  An E-AF-ST network is proposed to address insufficient extraction of local discriminative information, cross-scale feature inconsistency, and semantic redundancy in high-resolution remote sensing image scene classification. With information entropy used as a guiding metric, the ASO module enables precise selection and recursive optimization of discriminative regions, whereas the EGF module achieves adaptive and redundancy-reduced integration of multi-source features. Experimental and visual results show that the proposed method effectively reduces interference from complex backgrounds and outperforms existing mainstream CNN-Transformer hybrid architectures. This study provides a new theoretical perspective and technical route for multi-scale target perception and feature semantic alignment.
A Closed-loop Feedback Adaptive Beam Alignment Algorithm for Shipborne Low Earth Orbit Satellite Communication Terminals
CHEN Haotian, MA Zixian, XIE Xinhong, LI Nayu, LI Baozhu, SONG Chunyi, XU Zhiwei
Available online  , doi: 10.11999/JEIT251324
Abstract:
  Objective  The 6G-based SATellite COMmunication (SATCOM) network has become a primary solution for ubiquitous and oceanic communications. Compared with traditional Geostationary Earth Orbit (GEO) satellites, the latest generation of Low Earth Orbit (LEO) satellites offers higher throughput, lower end-to-end latency, and lower deployment cost. Phased arrays are therefore widely used in LEO SATCOM because of their beam agility. However, maritime wind-wave disturbances cause nonlinear relative motion between shipborne terminals and LEO satellites, which creates major challenges for high-precision satellite acquisition and tracking. To address this issue, a new beam alignment algorithm is required for LEO SATCOM systems. Such an algorithm should first obtain the instantaneous target state and motion characteristics through target acquisition, and then use a multi-target tracking method to predict satellite trajectories on the basis of the target states, thereby compensating for estimation errors caused by severe coupled motions.  Methods  The proposed closed-loop feedback adaptive beam alignment algorithm consists of two tightly coupled components: target acquisition and target state updating. In the target acquisition stage, a RAnk Reduction Estimator (RARE) is first used to decompose the array factor matrix and convert the original two-dimensional Direction Of Arrival (DOA) estimation problem into two sequential one-dimensional estimation problems. This process greatly reduces the computational complexity of each Sparse Bayesian Learning (SBL) iteration. On the basis of the coarse grid generated by RARE, an Adaptive Newton Sparse Bayesian Learning (ANSBL) method is developed. ANSBL uses block-sparse Bayesian learning to achieve initial target acquisition on the coarse grid, and then performs two-stage Newton refinement to reduce off-grid mismatch. 
This strategy provides high-accuracy DOA estimation in both θ and φ and improves angular observation precision. In the target state updating stage, an Unscented Kalman Filter (UKF)-based ternary joint prediction mechanism is proposed. The UKF simultaneously predicts the target motion state, signal variance, and noise variance for the next target acquisition process. These predicted probability distributions are then used to update the initial grid and hyperparameters of the subsequent SBL acquisition stage, providing more consistent and comprehensive initial values. Through this closed-loop interaction, target acquisition and state tracking are deeply integrated, which substantially reduces the number of SBL iterations required for convergence. This advantage is particularly evident under high sea-state conditions, where reduced beam alignment time is critical.  Results and Discussions  The proposed closed-loop feedback adaptive beam alignment algorithm first uses on-grid DOA estimation to reduce array factor correlation and improve target acquisition efficiency, and then uses Newton iteration to achieve higher off-grid accuracy (Fig. 3). The proposed method is subsequently validated using real ship attitude data collected from a 28000-DWT bulk carrier under actual sea conditions (Fig. 4). The UKF refines the DOA results through state updating. Its predictions of signal position, signal variance, and noise variance provide accurate initial values for the hyperparameters, thereby reducing the number of iterations and enabling faster convergence than other algorithms (Fig. 5). Under low sea-state conditions, the proposed method not only achieves satellite alignment in less than 0.2 s, but also reduces the satellite position estimation error from ±1° to ±0.5° (Fig. 6(a)). 
Under high sea-state conditions, the UKF effectively predicts satellite positions and reduces the satellite position estimation error from ±2.5° to ±0.65°, which verifies the robust tracking accuracy and error mitigation capability of the proposed method in harsh marine environments (Fig. 6(b)).  Conclusions  To meet the performance requirements of beam alignment algorithms for LEO communication satellites, this paper proposes a closed-loop feedback adaptive beam alignment algorithm. The algorithm first uses a block-based SBL algorithm to obtain grid-based DOA estimation results, and then achieves super-resolution direction estimation under off-grid conditions through adaptive Newton iteration. Through the UKF, the estimation results are dynamically calibrated in real time. The UKF further predicts the target motion state, signal variance, and noise variance for the next target acquisition process, thereby improving tracking continuity and alignment accuracy. Numerical simulations show that the proposed algorithm outperforms traditional beam alignment methods in both numerical accuracy and robustness, and effectively mitigates severe terminal shaking under complex sea conditions.
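The closed-loop idea described above, in which state prediction seeds the next acquisition stage, can be illustrated with a simplified stand-in: a constant-velocity linear Kalman filter replaces the UKF (the paper's ternary prediction of motion state, signal variance, and noise variance is reduced here to the angle state alone), and each predicted azimuth recenters a coarse search grid for the next acquisition:

```python
import numpy as np

def closed_loop_tracking(measurements, dt=0.1, grid_halfwidth=2.0, npts=21):
    """Simplified closed loop: predict the next azimuth with a
    constant-velocity Kalman filter (a linear stand-in for the UKF),
    recenter the coarse acquisition grid on the prediction, then update
    the state with the DOA estimate returned by acquisition."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (angle, rate)
    H = np.array([[1.0, 0.0]])                # only the angle is observed
    Q = 1e-3 * np.eye(2); Rm = np.array([[0.05]])
    x = np.array([[measurements[0]], [0.0]]); P = np.eye(2)
    grids = []
    for z in measurements[1:]:
        x = F @ x; P = F @ P @ F.T + Q        # predict
        center = float(x[0, 0])               # prediction seeds the next search
        grids.append(np.linspace(center - grid_halfwidth,
                                 center + grid_halfwidth, npts))
        S = H @ P @ H.T + Rm                  # update with the new DOA estimate
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, grids

# Azimuth drifting at 1 deg/s, observed every 0.1 s with small noise.
true = 10.0 + 0.1 * np.arange(30)
rng = np.random.default_rng(1)
meas = true + 0.1 * rng.standard_normal(30)
x, grids = closed_loop_tracking(meas)
print("final angle estimate:", x[0, 0])       # tracks the drifting azimuth
```

Because each grid is already centered near the true direction, the acquisition stage needs far fewer refinement iterations, which is the mechanism behind the reported convergence speedup.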
Real-Time Sub-bottom Horizon Picking Based on Maximum Correlated Kurtosis Deconvolution Combined with Continuity Constraint
MENG Xinbao, ZHOU Tian, ZHU Jianjun, LI Tie, WANG Peihong, ZHAO Guoqing
Available online  , doi: 10.11999/JEIT250727
Abstract:
  Objective  Sub-bottom profiling is widely employed in seabed geological and resource exploration, pipeline route inspection, and port and channel safety assurance, and is regarded as a frontier in underwater acoustic detection research. Accurate extraction of sub-bottom horizons plays a critical role in the interpretation of sedimentary structures, analysis of seabed substrate characteristics, and identification of buried objects. However, existing horizon picking techniques often face difficulty in balancing picking quality, false-alarm control, and online real-time performance. To address this issue, a real-time sub-bottom horizon picking method integrating maximum correlated kurtosis deconvolution and continuity constraint is proposed.  Methods  The proposed method consists of three stages: preprocessing, coarse horizon extraction, and fine horizon extraction. In preprocessing, the raw echoes are enhanced via cascaded band-pass filtering and matched filtering, followed by a fixed delay correction to align picked positions with the pulse leading-edge arrivals. In coarse extraction, synthesized periodic signals are constructed under multiple slicing step lengths, and maximum correlated kurtosis deconvolution is applied to enhance impulsive horizon responses, yielding potential horizon sequences. These candidates are then screened and fused using a cross-step-length consistency criterion to suppress false alarms. In fine extraction, a continuity constraint is introduced within an online sliding window to filter isolated points, segment horizons, and perform curve fitting and correction, further reducing residual false alarms and improving continuity.  Results and Discussions  Simulation and field-data experiments were conducted to evaluate detection probability, false alarm probability, horizon positioning error, processing time, and extracted horizon profiles. 
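The coarse-extraction stage rests on maximizing correlated kurtosis, a standard objective that is large when a signal contains impulses repeating with a given period; a minimal sketch for the first-shift case (M = 1) follows, with the synthetic signal, period, and impulse amplitude below chosen purely for illustration:

```python
import numpy as np

def correlated_kurtosis(y, T, M=1):
    """M-shift correlated kurtosis for period T (in samples), the
    objective that maximum correlated kurtosis deconvolution maximizes:
    CK_M(T) = sum_n (prod_{m=0..M} y[n - mT])^2 / (sum_n y[n]^2)^(M+1).
    It peaks when y carries impulses spaced T samples apart, which is how
    periodic horizon echoes are enhanced in the coarse-extraction stage."""
    y = np.asarray(y, dtype=float)
    prod = y.copy()
    for m in range(1, M + 1):
        shifted = np.zeros_like(y)
        shifted[m * T:] = y[:-m * T]          # delay by m*T samples (zero-padded)
        prod = prod * shifted
    return float(np.sum(prod ** 2) / (np.sum(y ** 2) ** (M + 1)))

rng = np.random.default_rng(2)
n = 400
noise = 0.1 * rng.standard_normal(n)
periodic = noise.copy()
periodic[50::100] = 5.0                       # impulse train every 100 samples
print(correlated_kurtosis(periodic, T=100))   # large at the true period
print(correlated_kurtosis(periodic, T=73))    # small at a wrong period
```

Evaluating this objective across several slicing step lengths and fusing the consistent peaks is what yields the cross-step-length false-alarm suppression described above.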
Monte Carlo results show that the fine extraction stage further reduces false alarms and positioning errors while maintaining detection performance close to that of the coarse extraction stage (Figs. 5, 6). When the echo signal-to-noise ratio is higher than –15 dB, the detection probability exceeds 70.000% and the false alarm probability remains below 0.200%; when it is higher than –10 dB, the detection probability exceeds 99.000%, the false alarm probability falls below 0.100%, and the positioning error approaches one sample interval (Fig. 6). In sub-bottom survey simulation, the proposed method successfully extracts both the seabed surface and the buried sedimentary horizon under different noise conditions, with results more refined than those of the comparative algorithm based on fractional Fourier transform and overall comparable to manual interpretation (Figs. 7, 8). Field-data results further confirm its effectiveness: for the signal-based comparative algorithms, the proposed method achieves an average detection probability of 91.833%, an average false alarm probability of 0.004%, and an average positioning error of 10.15 samples, while the comparative algorithm based on fractional Fourier transform shows a much higher false alarm probability of 3.987% (Table 1). For the image-based comparative algorithms, although detection probabilities are above 95%, their false alarm probabilities and processing times remain markedly higher than those of the proposed method (Table 2). Qualitative results also show that the extracted horizons agree well with manual interpretation trends, with lower background noise, no obvious large-scale false layers, and good preservation of local fluctuations and interruptions (Figs. 9-12). 
Overall, the proposed method achieves a more favorable balance for online horizon extraction by combining acceptable detection probability and positioning accuracy with extremely low false alarm probability and real-time processing capability (Table 1, Table 2).  Conclusions  This study presents a real-time sub-bottom horizon picking method based on maximum correlated kurtosis deconvolution combined with continuity constraint, structured into three stages: preprocessing, coarse extraction, and fine extraction. The method effectively extracts the seabed surface and sedimentary horizons while meeting real-time processing requirements. Simulation results show that when the signal-to-noise ratio exceeds –10 dB, the method achieves a detection probability greater than 99.000%, a false alarm probability below 0.100%, and a positioning error near one sample. Field data processing results indicate an average detection probability of 91.833%, an average false alarm probability of 0.004%, and an average positioning error of 10.15 samples. These findings validate the effectiveness and practical value of the proposed approach for real-time extraction of shallow sub-bottom horizons. The method demonstrates the ability to maintain high detection accuracy while minimizing false alarms and ensuring millisecond-level processing times, making it highly suitable for online sub-bottom horizon extraction tasks in practical applications.
A Cross-Precision Motion Compensation Technique for Security Surveillance Video Coding
JIANG Wei, MA Wei, LU Jinghui, ZHANG Yue, ZHANG Yundong
Available online  , doi: 10.11999/JEIT251301
Abstract:
  Objective  In the field of modern security surveillance, high-altitude dome cameras are often deployed at critical locations such as bridges and tower tops that are susceptible to external interference, resulting in problems such as jitter and blurring in captured videos, which pose great challenges to video coding. In video compression coding, high-precision motion compensation is the key to improving coding efficiency. The existing Ultimate Motion Vector Expression (UMVE) technique suffers from insufficient precision and lack of flexibility in adaptive adjustment. Although high-precision coding tools such as Registration-Based Coding Mode (RCM) and Affine Motion Compensation Prediction (AFFINE) can improve compensation accuracy, they incur high computational complexity and hardware cost, making it difficult to meet the combined requirements of coding efficiency, power consumption, and real-time performance in high-altitude surveillance scenarios. Therefore, to address these core challenges in video coding for high-altitude dome cameras, designing an optimized UMVE scheme that combines high-precision motion compensation, low computational complexity, and scene adaptability is of clear academic and practical value, as it improves coding efficiency while balancing resource consumption.  Methods  This study proposes an Ultimate Motion Vector Expression technique supporting Cross-Precision Motion Compensation (UMVE_CPMC). Its core is to improve motion compensation accuracy by constructing an extended Up-Precision Motion Vector (UPMV), whose mathematical expression is UPMV = BaseMV + MMV(p, angle), where BaseMV is the basic motion vector obtained by the existing UMVE method, and MMV is the refined fine-tuning motion vector based on specific precision p and angle, with incremental candidates only provided at the 1/8 precision level to balance computational complexity and compression efficiency. 
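The UPMV construction can be illustrated with a small sketch; the four axis directions and single step magnitude used here are hypothetical design choices, since the abstract fixes only the relation UPMV = BaseMV + MMV(p, angle) and the 1/8-precision increment level:

```python
def upmv_candidates(base_mv, steps=(1,), precision=8):
    """Hypothetical candidate generation for UPMV = BaseMV + MMV(p, angle):
    refinement offsets of magnitude step/precision pel are added to the
    UMVE base vector along each direction in an angle set. The directions
    and step list below are illustrative assumptions, not the encoder's
    actual tables."""
    bx, by = base_mv
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # assumed angle set
    cands = []
    for step in steps:
        for dx, dy in directions:
            # Offsets at the 1/precision-pel level (1/8 pel here).
            cands.append((bx + dx * step / precision, by + dy * step / precision))
    return cands

print(upmv_candidates((2.0, -1.5)))   # four 1/8-pel refinements of the base MV
```

Keeping the incremental candidates at a single fine precision level, as here, is what bounds the extra rate-distortion search cost relative to full multi-precision expansion.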
For step-size adaptive adjustment, an improved scheme with six modes is proposed, covering enhanced UMVE, conventional UMVE and four precision-improved modes, allowing the encoder to switch flexibly according to scene characteristics. The average image gradient is adopted as an objective evaluation index; test scenes are divided into Class A (high-definition motion scenes) and Class B (low-definition scenes), and different coding configurations, sequences and parameters are set to compare coding gains and computational efficiency under different modes.  Results and Discussions  Experiments show that UMVE_CPMC achieves effective performance improvement in various scenes and modes. In Class A high-definition motion scenes, with both the adaptive strategy and RCM disabled, the average gains of Y, U and V components in Fusion Mode 1 reach -2.912%, -1.656% and -1.654% respectively, and the average coding time is reduced to 94.55% of the baseline; the average gain of the Y component in Independent Mode 1 reaches -2.925%, with coding time reduced to 91.91% of the baseline. Compared with traditional UMVE, when CPMC Independent Mode 1 is enabled under the scenario where RCM is enabled and other tools work collaboratively, the gain is improved from -0.276% to -1.310%, showing a significantly better performance-cost trade-off. In Class B low-definition scenes, after enabling adaptive adjustment, the gain losses of Fusion Mode 1 and Mode 0 are significantly reduced, with average gain losses controlled at 0.071% and 0.108% respectively, successfully maintaining the original coding gain. In multi-scene comprehensive tests, when RCM and AFFINE are disabled, 9 out of 10 test sequences in adaptive Fusion Mode 1 show positive gains, including a Y-component gain of -10.691% for the yuxuedaolu sequence and -11.400% for the BQTerrace sequence. 
When all existing coding tools are enabled, the Y-component gains of dianjing, yuxuedaolu and BQTerrace sequences reach -1.29%, -2.05% and -1.21% respectively, with coding time reduced to 94%–96% of the baseline. In addition, correlation analysis between average image gradient and gain reveals a significant positive correlation: images with high average gradient (high definition) achieve greater gains from UMVE_CPMC, while those with low average gradient (low definition) hardly benefit. Principle analysis indicates that pixel changes in low-definition images are gentle, and high-precision interpolation fails to generate effective pixel values, resulting in insignificant compensation effects. Performance differences among modes match computational complexity: the fusion mode balances gain and stability, while the independent mode further reduces computation. The six step-size adaptive modes can meet real-time and precision requirements of different scenes.  Conclusions  The proposed UMVE_CPMC technique, by integrating cross-precision motion compensation with the UMVE algorithm, effectively solves the core problems of insufficient precision in traditional UMVE and high computational complexity of high-precision coding tools, achieving a favorable balance among coding efficiency, computational complexity and scene adaptability. This technique delivers remarkable coding gains in Class A high-definition motion scenes, with gains exceeding 10% for some sequences without other high-precision compensation tools and 1%–2% when cooperating with other tools. In Class B low-definition scenes, the original coding gain can be maintained through frame-level adaptive adjustment interfaces. Meanwhile, the fusion mode does not increase hardware complexity, and the independent mode significantly reduces coding time, suitable for encoder designs with limited resources or simplified requirements. 
UMVE_CPMC provides a new effective approach to solving the low coding efficiency caused by jitter and blurring in high-altitude dome camera video coding, enriches the video coding toolset, and offers important practical guidance for the optimization of video coding technologies in the security surveillance field. Future work can further optimize the adaptive strategy, explore integration with other advanced coding technologies, develop personalized coding schemes, and improve performance in complex scenarios.
Modeling and Characterization of Broadband Earth-Moon-Earth Communication Channels
LI Chengqian, QIAN Xiaowei, HU Xiaoling
Available online  , doi: 10.11999/JEIT251028
Abstract:
  Objective  This paper presents a comprehensive channel model for wideband Earth-Moon-Earth (EME) communication, tackling the shortcomings of traditional simplified models that cannot accurately represent the Moon’s complex scattering behavior and terrain-induced effects. Existing approaches, which treat the Moon as a point reflector or depend on empirical scattering laws, are inadequate for broadband, high-capacity systems. To address this, a unified large-scale link model is proposed to statistically capture terrain-driven reflection characteristics, while a small-scale model systematically analyzes multipath and Doppler effects, decomposing the channel and quantifying dynamic impairments. Link-level simulations validate the model’s accuracy. This work fills a critical gap in broadband EME channel modeling, providing a necessary foundation for the design and optimization of future deep space communication systems.  Methods  A dual-scale modeling approach is proposed for wideband Earth-Moon-Earth (EME) channels. At the large scale, a unified integral path loss model is developed for both wide- and narrow-beam scenarios, with lunar terrain statistically represented by a Gaussian height distribution to capture shadowing and roughness effects. A distributed integration method is used to compute effective RCS under narrow-beam conditions. At the small scale, the channel is decomposed into quasi-specular and diffuse components, with delay-power profiles derived from surface roughness and scattering mechanisms. Doppler shift and spread are analytically modeled based on Earth-Moon orbital dynamics. Monte Carlo simulations and numerical integration verify the models, and system-level performance is evaluated in terms of BER under various channel conditions with different equalization and frequency offset correction schemes.  
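The two-way Doppler component of the small-scale model follows the standard reflection relation f_d = 2 v_r f_c / c, since the signal is Doppler shifted on both the uplink and the Moon-reflected downlink; the radial velocity below is a hypothetical value chosen so the result lands near the ~4.5 kHz figure reported at 1.296 GHz:

```python
C = 299_792_458.0  # speed of light, m/s

def eme_doppler_shift(v_radial, f_carrier):
    """Two-way EME Doppler shift in Hz: the shift accumulates once on the
    Earth-to-Moon leg and once on the reflected Moon-to-Earth leg, so
    f_d = 2 * v_r * f_c / c for net radial velocity v_r (m/s)."""
    return 2.0 * v_radial * f_carrier / C

# A net Earth-Moon radial velocity around 520 m/s (an illustrative value,
# on the order of Earth's equatorial rotation speed plus lunar orbital
# motion) at 1.296 GHz gives a two-way shift near the reported ~4.5 kHz.
shift = eme_doppler_shift(520.0, 1.296e9)
print(f"two-way Doppler shift: {shift:.0f} Hz")
```

The Doppler spread arises the same way, from the distribution of radial velocities across the illuminated lunar disk rather than a single value.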
Results and Discussions  A comprehensive channel model is developed to capture both large- and small-scale fading in wideband Earth-Moon-Earth (EME) communication. The large-scale model, validated by simulations, accurately represents the non-uniform power distribution across the lunar disk through an integrated RCS approach. At the small scale, quasi-specular and diffuse components characterize multipath delay spread, while the Doppler model quantifies effects from Earth’s rotation and lunar orbital motion, with a two-way shift of ~4.5 kHz and a spread of ±39.88 Hz at 1.296 GHz. Low-SNR simulations show that conventional equalizers (LMS, RLS, RAKE) stagnate near BER = 0.1, and frequency correction methods (FFT-based, MLE) degrade under large frequency offsets, highlighting the challenges of accurate compensation.  Conclusions  This paper develops and validates a comprehensive channel model for broadband Earth-Moon-Earth (EME) communication. The model more accurately predicts path loss, shadowing, multipath delay, and Doppler effects than conventional point-target or empirical methods. Results show that lunar terrain and surface properties cause severe signal degradation, which traditional equalization and frequency correction cannot effectively mitigate. Future work should integrate high-resolution lunar DEMs and measured RCS data to improve accuracy and explore adaptive methods, such as machine learning, to handle severe delay spread. This model offers a foundation for reliable EME links and future deep-space communication networks.
Slice Pricing and Access Control with QoS Guarantee for Vehicular Networks
CUI Yaping, ZHANG Feng, WU Dapeng, HE Peng, WANG Ruyan, WANG Pan
Available online  , doi: 10.11999/JEIT251219
Abstract:
  Objective  Vehicular applications have diverse Quality of Service (QoS) needs that traditional spectrum-focused networks struggle to meet. While network slicing over Mobile Edge Computing (MEC) offers customized provisioning, current approaches often overlook the holistic generation of slices and adaptive access control. To address these limitations, this paper proposes a two-stage vehicular network slicing framework that integrates resource-aware slice generation with intelligent pricing and access control. This framework enables efficient, dynamic resource allocation and access management, benefiting both the MEC-based Network Slice Provider (MEC-NSP) and vehicles by improving service quality, utilization, and adaptability through a Stackelberg game-based interaction mechanism.  Methods  The proposed solution features a two-layer coupled mechanism: “resource pre-allocation” and “Stackelberg game pricing and access control”. In the first stage, a 3D resource pre-allocation mechanism jointly optimizes communication, computation, and caching resources to satisfy vehicular latency and bandwidth requirements. This allocation is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem and decoupled into uplink and downlink sub-problems, solved via branch-and-bound and interior-point methods, respectively. In the second stage, a Stackelberg game balances the MEC-NSP’s profit and vehicles’ QoS. The MEC-NSP acts as the leader, setting dynamic slice prices, while the network controller (the follower) determines the optimal slice selection probabilities. This interaction is resolved using the Iterative Slice Pricing Algorithm (ISPA), which has been proven to converge to a Nash equilibrium.  Results and Discussions  Simulations demonstrate that the proposed framework consistently outperforms baseline algorithms (Fixed Slice Pricing, Average Resource Allocation, and Random Selection) under various network conditions. 
In bandwidth-constrained scenarios, it increases MEC-NSP profit by up to 20.77% compared to the Random Selection approach. With abundant resources (150% capacity), it maintains profit gains of 3–9% over other baselines. The ISPA algorithm exhibits fast convergence to equilibrium (approx. 175 iterations). The flexible pricing mechanism effectively balances network loads, improves cache hit rates, and reduces resource bottlenecks, ensuring high QoS satisfaction.  Conclusions  The proposed dual-layer framework successfully integrates slice generation and pricing to address resource-aware network slicing in vehicular MEC environments. By coupling 3D resource pre-allocation with a Stackelberg game-based pricing strategy, the system significantly improves MEC-NSP profit, resource utilization, and vehicle QoS. Future work will explore blockchain-based mechanisms to facilitate trust negotiation and decentralized resource orchestration for cross-domain cooperation in multi-operator, multi-vendor environments.
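The leader-follower interaction described above can be pictured with a small pricing loop: the leader raises slice prices by projected gradient ascent on its profit, while the follower responds with softmax selection probabilities over net utility. The utilities, costs, response model, and update rule below are hypothetical stand-ins for intuition only, not the paper's ISPA.

```python
import numpy as np

# Illustrative leader-follower pricing loop in the spirit of a Stackelberg
# game. Slice utilities, provider costs, the softmax follower response, and
# the projected-gradient price update are all assumed for illustration.
u = np.array([5.0, 4.0, 3.0])   # vehicles' valuation of each slice (assumed)
c = np.array([1.0, 0.8, 0.5])   # provider's unit cost per slice (assumed)
beta = 1.0                      # rationality of the follower's response

def follower_response(price):
    """Follower: slice-selection probabilities from net utility (softmax)."""
    z = beta * (u - price)
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def leader_profit(price):
    return float(follower_response(price) @ (price - c))

price = c.copy()                # leader starts at cost and raises prices
eps, step = 1e-4, 0.05
for _ in range(500):            # iterate toward the pricing equilibrium
    grad = np.array([
        (leader_profit(price + eps * e) - leader_profit(price)) / eps
        for e in np.eye(price.size)
    ])
    price = np.clip(price + step * grad, c, u)   # never price below cost
```

Clipping prices to the interval between cost and valuation mirrors the practical constraint that the provider neither sells at a loss nor prices a slice above what any vehicle would pay.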
An Ultra-Wideband Low-Profile Dipole Patch Antenna for VHF-Band Probing Radars
TIAN Yuxiao, ZHANG Feng, MA Zhangjun, WANG Jiacheng, JI Yicai
Available online  , doi: 10.11999/JEIT260105
Abstract:
  Objective  In radar systems, the limitations of traditional narrowband antennas in data transmission rate and resolution have become increasingly evident. Ultra-WideBand (UWB) antennas therefore receive broad attention because they provide high range resolution and strong interference suppression capability. However, at low frequencies, existing UWB antennas usually suffer from excessively large physical size, which makes installation on airborne or vehicle-mounted platforms difficult. By contrast, compact antennas that are easier to deploy often exhibit insufficient gain and cannot satisfy the penetration-depth requirement of deep subsurface detection. Thus, achieving a proper balance among antenna size, bandwidth, and gain over an ultra-wideband range remains a major challenge for VHF-band probing radars. To address this issue, a planar dipole antenna loaded with an Artificial Magnetic Conductor (AMC) structure and metallic shorting walls is proposed. The antenna maintains stable radiation performance over a wide frequency range while preserving a low-profile and structurally simple configuration.  Methods  The reflection-phase characteristics of AMC unit cells with different geometries are compared, and square unit cells are selected to construct a 9 × 7 AMC reflective layer. Owing to its in-phase reflection property, the AMC structure removes the conventional requirement for a quarter-wavelength spacing between the antenna and a metallic ground plane, thereby reducing the profile height. The dipole patch adopts an optimized meandered current-bending structure to reduce the lateral size. Metallic shorting walls are further loaded at both ends of the antenna. According to image theory, equivalent currents are generated on the outer surfaces of these metal walls during operation, which effectively extends the electrical length and improves low-frequency performance without increasing the physical size. 
In addition, two vertical metallic walls are connected to the ground plane on both sides of the antenna to form a reflective back cavity, which strengthens unidirectional radiation and improves antenna gain. As part of the overall co-design, four 125 Ω resistors are inserted between the feed region and the metallic sidewalls. This resistive loading suppresses strong low-frequency resonances and broadens the impedance bandwidth at the cost of acceptable Ohmic loss.  Results and Discussions  A prototype with favorable simulated performance is fabricated and measured in a microwave anechoic chamber. The measured impedance bandwidth for VSWR<2 is 50~400 MHz, which agrees well with the simulated range of 84~366 MHz. The measured impedance matching is slightly better than the simulated result, mainly because cable loss and power-divider loss in the feeding network reduce the reflected power. The measured gain follows the same trend as the simulated gain, with deviations within 1 dBi. Radiation-pattern measurements show that at 100, 200, and 300 MHz, the measured copolarization patterns agree well with the simulated results, and the maximum radiation direction remains normal to the antenna plane, which confirms the effectiveness of the proposed design. As shown in Fig. 5, the current on the radiating patch layer mainly flows along the +x direction and generates a radiated electric field along the +z direction. The current on the AMC unit can be represented by an equivalent current loop oriented along the +z direction. At this frequency, the x-direction current and the parasitic current loop on the AMC jointly enhance the antenna gain. This result explains the gain-improvement mechanism of the AMC structure. When the operating frequency increases to 400 MHz, the electrical size of the antenna reaches approximately \begin{document}$ 1.6\lambda $\end{document}, which causes main-lobe splitting and shifts the maximum radiation direction toward 90°. 
Although this high-frequency beam splitting introduces spatial clutter, it is an acceptable physical trade-off for achieving the ultra-low profile of 0.07 λL, while the overall UWB characteristic still supports high time-domain resolution in probing radar systems. At 400 MHz, the measured H-plane co-polarization level is slightly higher than the simulated value, possibly because of coupling between the feeding cable and the vertically mounted antenna.  Conclusions  A low-profile UWB planar dipole antenna is proposed for VHF-band probing radar applications. By combining the AMC layer, metallic shorting walls, and resistive loading, the proposed design improves impedance matching while preserving a compact size. The reflective back cavity further improves the realized gain. The fabricated prototype shows good agreement between measurement and simulation. The antenna operates over 100–366 MHz and exhibits a measured VSWR<2 bandwidth of 50~400 MHz. It maintains a compact electrical size of 0.38λL × 0.18λL × 0.07λL, and the maximum measured gain within the operating band reaches 6 dBi. The proposed co-design provides a practical solution for low-frequency probing radar antennas that require wide bandwidth, low profile, and relatively high gain.
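The electrical sizes quoted above are mutually consistent, which can be checked with a line of arithmetic; the sketch below assumes that λ_L denotes the free-space wavelength at the 100 MHz low edge of the operating band (our reading of the notation).

```python
# Quick arithmetic cross-check of the quoted electrical sizes, assuming
# lambda_L is the free-space wavelength at the 100 MHz low band edge.
C = 299_792_458.0            # speed of light, m/s

lam_L = C / 100e6            # ~3.0 m
length = 0.38 * lam_L        # longest dimension, ~1.14 m
profile = 0.07 * lam_L       # profile height, ~0.21 m

lam_400 = C / 400e6          # ~0.75 m
# The same physical length spans ~1.5 wavelengths at 400 MHz, consistent
# with the ~1.6-wavelength electrical size cited at the onset of
# main-lobe splitting.
electrical_size_400 = length / lam_400
```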
Semantic Relation-enhanced Adaptive Graph Representation Learning for Next POI Recommendation
WANG Zhuolu, XU Shenghua, WANG Yong, JIANG Shunshun
Available online  , doi: 10.11999/JEIT251357
Abstract:
  Objective  In recent years, next Point Of Interest (POI) recommendation has played an increasingly important role in Location-Based Social Networks (LBSNs). However, existing Graph Representation Learning (GRL)-based recommendation methods have struggled to balance node distributions across different domains (i.e., node types) effectively and have often overlooked feature differences among heterogeneous relations. Thus, complex semantic dependencies in contextual information cannot be fully captured when users’ temporal preference patterns are modeled.  Methods  To address these issues, a next POI recommendation method based on Semantic Relation-enhanced adaptive Graph Representation Learning (SR-GRL) is proposed. A heterogeneous transition graph is constructed to integrate three entity types, namely POIs, POI categories, and regions, and their complex interrelationships. An adaptive balanced random walk sampling strategy is designed to balance node distributions across different domains dynamically and to reduce information redundancy. A type-aware attention mechanism is then used to learn semantic associations among nodes through relation-specific transformation matrices, so that feature differences across node types can be identified effectively. The obtained disentangled POI representations are then used for spatiotemporal encoding of user check-in sequences, and a self-attention mechanism is applied to aggregate users’ temporal preference features. Finally, next POI recommendation is generated through a Softmax function.  
Results and Discussions  Experiments on the Foursquare datasets from Tokyo and New York and the Sina Weibo dataset from Shanghai show that, compared with state-of-the-art baselines, the SR-GRL method achieves Recall@10 improvements of 2.22%\begin{document}$ \sim $\end{document}24.16%, F1@10 improvements of 1.16%\begin{document}$ \sim $\end{document}10.48%, and NDCG@10 improvements of 3.01%\begin{document}$ \sim $\end{document}17.37%, indicating better recommendation performance.  Conclusions  Overall, the SR-GRL approach can balance the distributions of different node types dynamically and strengthen the modeling of complex semantic dependencies in heterogeneous contextual information.
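The idea of balancing node-type distributions during sampling can be sketched with a toy heterogeneous graph: at each step, candidate neighbors are weighted by the inverse frequency of their node type in the walk so far. The graph, the inverse-frequency rule, and the type labels below are illustrative; the paper's adaptive strategy may differ in detail.

```python
import random
from collections import Counter

# Hypothetical sketch of a domain-balanced random walk over a heterogeneous
# transition graph with three node types (POI, category, region).
random.seed(0)
# toy adjacency: node -> list of (neighbor, neighbor_type)
graph = {
    "p1": [("c1", "cat"), ("r1", "reg"), ("p2", "poi")],
    "p2": [("c1", "cat"), ("p1", "poi")],
    "c1": [("p1", "poi"), ("p2", "poi"), ("r1", "reg")],
    "r1": [("p1", "poi"), ("c1", "cat")],
}

def balanced_walk(start, length):
    walk, seen = [start], Counter({"poi": 1})  # start node assumed to be a POI
    node = start
    for _ in range(length - 1):
        nbrs = graph[node]
        # weight each candidate by the inverse frequency of its type so far,
        # so under-sampled domains become more likely to be visited next
        weights = [1.0 / (1 + seen[t]) for _, t in nbrs]
        node, ntype = random.choices(nbrs, weights=weights, k=1)[0]
        seen[ntype] += 1
        walk.append(node)
    return walk

walk = balanced_walk("p1", 20)
```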
Design of an Aerospace-grade Radiation-hardened SRAM Cell for High-speed Read/Write Applications
CAI Shuo, SHUAI Wei, HU Xing, LIANG Xinjie, HUANG Zhu, YU Fei
Available online  , doi: 10.11999/JEIT251287
Abstract:
  Objective  With the continued scaling of Complementary Metal-Oxide-Semiconductor (CMOS) technology nodes and the reduction in supply voltage, Static Random Access Memory (SRAM) in aerospace environments becomes increasingly sensitive to high-energy particle radiation and is prone to Single-Node Upset (SNU) and Double-Node Upset (DNU). This sensitivity poses a serious challenge to the reliability of spaceborne Systems-on-Chip (SoC). Existing Radiation-Hardened-By-Design (RHBD) structures, however, usually cannot balance strong radiation tolerance with high-speed access performance. This work therefore aims to design an aerospace-grade radiation-hardened SRAM cell for high-speed read/write applications that provides both strong radiation resistance and fast access performance.  Methods  The proposed Read Fast and Write Fast 16-Transistor (RFWF16T) SRAM is built on a dual-source isolation architecture composed of 16 transistors (8 PMOS and 8 NMOS) (Fig. 1, Fig. 2). By using a symmetric recovery mechanism, the RFWF16T reduces the number of key sensitive nodes to only two. Redundant transistors (P2 and P6) are used to establish a stable high-level isolation state, which isolates the storage nodes from potential disturbances during the non-access phase. To achieve high-speed operation, the RFWF16T combines a short feedback path with a low-impedance voltage discharge loop. Unlike conventional hardened cells that rely on stacked transistors, which increase resistance and delay, the RFWF16T adopts a parallel access topology connected to word lines and bit lines. This configuration forms a low-impedance path during write operations and significantly accelerates node voltage switching (Fig. 3). Performance verification confirms the self-recovery capability of the four data nodes. A comprehensive variation analysis is conducted, including Process-Voltage-Temperature (PVT) variations and 2,000-point Monte Carlo simulations. 
Additionally, an improved Electrical Quality Metric (EQM) is proposed to evaluate multidimensional performance quantitatively.  Results and Discussions  The RFWF16T exhibits strong overall performance, particularly in overcoming the speed bottleneck of hardened SRAM cells. In terms of access speed, the RFWF16T performs substantially better than typical models such as S8P8N, SAW16T, and RH20T. Under standard conditions (28 nm CMOS process, 1.0 V, 25 °C, TT corner), the RFWF16T achieves a Read Access Time (RAT) of 20.97 ps and a Write Access Time (WAT) of 2.72 ps. These values correspond to average speed improvements of 46.65% and 14.77%, respectively, over eight comparable hardened structures (Table 2). PVT analysis confirms that the RFWF16T maintains the lowest latency across voltages from 0.7 V to 1.1 V and temperatures from –25 °C to 125 °C (Fig. 6). This write-speed advantage is attributed to the removal of write contention through optimized discharge paths. In terms of noise margin and stability, the RFWF16T demonstrates strong robustness and achieves the highest Write Word-line Toggle Voltage (WWTV) among nine comparative structures. Its Hold Static Noise Margin (HSNM) and Read Static Noise Margin (RSNM) also rank among the best, which ensures stability under disturbances (Fig. 7). In radiation hardening, the RFWF16T achieves a 100% self-recovery rate for SNUs and an 83.3% recovery rate for DNUs, reaching the state-of-the-art level among DNU-recoverable units (Table 1). Monte Carlo simulations confirm that the average recovery times of the internal nodes range from 1.09 ns to 1.19 ns (Fig. 4, Fig. 5). In terms of overhead, the RFWF16T maintains a normalized area of 1.00× (4.3 μm × 1.9 μm) (Table 3, Fig. 2) and an average power consumption of 23.45 nW (Table 4). Although the power consumption is slightly higher, this increase is a reasonable trade-off for the substantial speed advantage. 
In the EQM evaluation, the RFWF16T obtains the highest score, which confirms its overall advantage in balancing reliability, speed, and stability (Fig. 9).  Conclusions  A radiation-hardened SRAM cell, RFWF16T, is proposed for aerospace-grade high-speed read/write applications. The cell contains only two sensitive nodes and achieves 100% self-recovery for SNUs and an 83.3% recovery rate for DNUs, which demonstrates strong radiation tolerance. Compared with eight other SRAM cells, the RFWF16T significantly reduces read and write delay with only a slight increase in area and power consumption, while maintaining good noise immunity and the best electrical quality metric. PVT and Monte Carlo simulations further confirm the stability and robustness of the proposed cell under different operating conditions. Future work will focus on array-level integration and tape-out verification, and on its application in satellite-borne high-speed data processing.
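A multidimensional score of the kind the abstract describes can be illustrated with a generic normalized figure of merit: cost-type metrics (delays, power) are normalized as best/value, benefit-type metrics as value/best, and the factors are combined by a geometric mean. The metric set and the "cellB" numbers below are illustrative; this is not the paper's improved EQM definition.

```python
# Generic normalized figure of merit in the spirit of an Electrical Quality
# Metric. "RFWF16T" uses values quoted in the abstract; "cellB" is a
# hypothetical comparison cell.
cells = {
    # (read delay ps, write delay ps, power nW, SNU self-recovery fraction)
    "RFWF16T": (20.97, 2.72, 23.45, 1.00),
    "cellB":   (35.00, 4.10, 20.00, 0.75),
}
higher_is_better = (False, False, False, True)

def eqm_like(values, all_values):
    score = 1.0
    for i, v in enumerate(values):
        col = [vals[i] for vals in all_values]
        # normalize so every factor lies in (0, 1], with 1 = best in class
        score *= (v / max(col)) if higher_is_better[i] else (min(col) / v)
    return score ** (1.0 / len(values))

scores = {name: eqm_like(v, list(cells.values())) for name, v in cells.items()}
```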
Construction of Maximum Distance Separable Codes and Near Maximum Distance Separable Codes Based on Cyclic Subgroup of \begin{document}$ \mathbb{F}_{{q}^{2}}^{*} $\end{document}
DU Xiaoni, XUE Jing, QIAO Xingbin, ZHAO Ziwei
Available online  , doi: 10.11999/JEIT251204
Abstract:
  Objective  The demand for higher performance and efficiency in error-correcting codes has increased with the rapid development of modern communication technologies. These codes detect and correct transmission errors. Because of their algebraic structure, straightforward encoding and decoding, and ease of implementation, linear codes are widely used in communication systems. Their parameters follow classical bounds such as the Singleton bound: for a linear code with length \begin{document}$ n $\end{document} and dimension \begin{document}$ k $\end{document}, the minimum distance \begin{document}$ d $\end{document} satisfies \begin{document}$ d\leq n-k+1 $\end{document}. When \begin{document}$ d=n-k+1 $\end{document}, the code is a Maximum Distance Separable (MDS) code. MDS codes are applied in distributed storage systems and random error channels. If \begin{document}$ d=n-k $\end{document}, the code is Almost MDS (AMDS); when both a code and its dual are AMDS, the code is Near MDS (NMDS). NMDS codes have geometric properties that are useful in cryptography and combinatorics. Extensive research has focused on constructing structurally simple, high-performance MDS and NMDS codes. This paper constructs several families of MDS and NMDS codes of length \begin{document}$ q+3 $\end{document} over the finite field \begin{document}$ {\mathbb{F}}_{{{q}^{2}}} $\end{document} of even characteristic using the cyclic subgroup \begin{document}$ {U}_{q+1} $\end{document}. Several families of optimal Locally Repairable Codes (LRCs) are also obtained. LRCs support efficient failure recovery by accessing a small set of local nodes, which reduces repair overhead and improves system availability in distributed and cloud-storage settings.  Methods  In 2021, Wang et al. constructed NMDS codes of dimension 3 using elliptic curves over \begin{document}$ {\mathbb{F}}_{q} $\end{document}. In 2023, Heng et al. 
obtained several classes of dimension-4 NMDS codes by appending appropriate column vectors to a base generator matrix. In 2024, Ding et al. presented four classes of dimension-4 NMDS codes, determined the locality of their dual codes, and constructed four classes of distance-optimal and dimension-optimal LRCs. Building on these works, this paper uses the unit circle \begin{document}$ {U}_{q+1} $\end{document} in \begin{document}$ {\mathbb{F}}_{{{q}^{2}}} $\end{document} and elliptic curves to construct generator matrices. By augmenting these matrices with two additional column vectors, several classes of MDS and NMDS codes of length \begin{document}$ q+3 $\end{document} are obtained. The locality of the constructed NMDS codes is also determined, yielding several classes of optimal LRCs.  Results and Discussions  In 2023, Heng et al. constructed generator matrices with second-row entries in \begin{document}$ \mathbb{F}_{q}^{*} $\end{document} and with the remaining entries given by nonconsecutive powers of the second-row elements. In 2025, Yin et al. extended this approach by constructing generator matrices using elements of \begin{document}$ {U}_{q+1} $\end{document} and obtained infinite families of MDS and NMDS codes. Following this direction, the present study expands these matrices by appending two column vectors whose elements lie in \begin{document}$ {\mathbb{F}}_{{{q}^{2}}} $\end{document}. The resulting matrices generate several classes of MDS and NMDS codes of length \begin{document}$ q+3 $\end{document}. Several classes of NMDS codes with identical parameters but different weight distributions are also obtained. Computing the minimum locality of the constructed NMDS codes shows that some are optimal LRCs satisfying the Singleton-like, Cadambe–Mazumdar, Plotkin-like, and Griesmer-like bounds. All constructed MDS codes are Griesmer codes, and the NMDS codes are near-Griesmer codes. 
These results show that the proposed constructions are more general and unified than earlier approaches.  Conclusions  This paper constructs several families of MDS and NMDS codes of length \begin{document}$ q+3 $\end{document} over \begin{document}$ {\mathbb{F}}_{{{q}^{2}}} $\end{document} using elements of the unit circle \begin{document}$ {U}_{q+1} $\end{document} and oval polynomials, and by appending two additional column vectors with entries in \begin{document}$ {\mathbb{F}}_{q} $\end{document}. The minimum locality of the constructed NMDS codes is analyzed, and some of these codes are shown to be optimal LRCs. The framework generalizes earlier constructions, and the resulting codes are optimal or near-optimal with respect to the Griesmer bound.
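The Singleton-bound cases discussed above (MDS when d = n−k+1, AMDS when d = n−k, NMDS when a code and its dual are both AMDS) reduce to a short classification routine; the parameters below are illustrative, with n = q+3 as in the construction.

```python
# Classifier for the Singleton-bound cases: MDS (defect 0), AMDS (defect 1),
# and NMDS (a code and its dual both AMDS).
def singleton_defect(n, k, d):
    """s = n - k + 1 - d; s = 0 for MDS, s = 1 for AMDS."""
    return n - k + 1 - d

def classify(n, k, d, d_dual=None):
    s = singleton_defect(n, k, d)
    if s == 0:
        return "MDS"
    if s == 1:
        # the dual of an [n, k] code is an [n, n-k] code
        if d_dual is not None and singleton_defect(n, n - k, d_dual) == 1:
            return "NMDS"
        return "AMDS"
    return "neither"

q = 8                                       # even characteristic, q a power of 2
n = q + 3                                   # length q + 3 as in the construction
example = classify(n, 4, n - 4, d_dual=4)   # illustrative dimension-4 parameters
```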
A Miniaturized Steady-State Visual Evoked Potential Brain-Computer Interface System
CAI Yu, WANG Junyang, JIANG Chuanli, LUO Ruixin, LÜ Zhengchao, YU Haiqing, HUANG Yongzhi, ZHONG Ziping, XU Minpeng
Available online  , doi: 10.11999/JEIT251223
Abstract:
  Objective  The practical use of Brain-Computer Interface (BCI) systems in daily settings is limited by bulky acquisition hardware and the cables required for stable performance. Although portable systems exist, achieving compact hardware, full mobility, and high decoding performance at the same time remains difficult. This study aims to design, implement, and validate a wearable Steady-State Visual Evoked Potential (SSVEP) BCI system. The goal is to create an integrated system with ultra-miniaturized and concealable acquisition hardware and a stable cable-free architecture, and to show that this approach provides online performance comparable with laboratory systems.  Methods  A system-level solution was developed based on a distributed architecture to support wearability and hardware simplification. The core component is an ultra-miniaturized acquisition node. Each node functions as an independent EEG acquisition unit and integrates a Bluetooth Low Energy (BLE) system-on-chip (CC2640R2F), a high-precision analog-to-digital converter (ADS1291), a battery, and an electrode in one encapsulated module. Through an optimized 6-layer PCB design and stacked assembly, the module size was reduced to 15.12 mm × 14.08 mm × 14.31 mm (3.05 cm3) with a weight of 3.7 g. Each node uses one active electrode, and all nodes share a common reference electrode connected by a thin short wire. This structure reduces scalp connections and allows concealed placement in hair using a hair-clip form factor. Multiple nodes form a star network coordinated by a master device that manages communication with a stimulus computer. A cable-free synchronization strategy was implemented to handle timing uncertainties in distributed wireless operation. Hardware-event detection and software-based clock management were combined to align stimulus markers with multi-channel EEG data without dedicated synchronization cables. 
The master device coordinates this process and streams synchronized data to the computer for real-time processing. System evaluation was conducted in two phases. Foundational performance metrics included physical characteristics, electrical parameters (input-referred noise: 3.91 μVpp; common-mode rejection ratio: 132.99 dB), and synchronization accuracy under different network scales. Application-level performance was assessed using a 40-command online SSVEP spelling task with six subjects in an unshielded room with common RF interference. Four nodes were placed at Pz, PO3, PO4, and Oz. EEG epochs (0.14\begin{document}$ \sim $\end{document}3.14 s post-stimulus) were analyzed using Canonical Correlation Analysis (CCA) and ensemble Task-Related Component Analysis (e-TRCA) to compute recognition accuracy and Information Transfer Rate (ITR).  Results and Discussions  The system met its design objectives. Each acquisition node achieved an ultra-compact form factor (3.05 cm3, 3.7 g) suitable for concealed wear and provided more than 5 hours of battery life at a 1 000 Hz sampling rate. Electrical performance supported high-quality SSVEP acquisition. The cable-free synchronization strategy ensured stable operation. More than 95% of event markers aligned with the EEG stream with less than 1 ms error (Fig. 4), meeting SSVEP-BCI requirements. This stability supported the quality of recorded neural signals. Grand-averaged SSVEP responses showed clear and stable waveforms with precise phase alignment (Fig. 5). The signal-to-noise ratio at the fundamental stimulation frequency exceeded 10 dB for all 40 commands (Fig. 6). In the online spelling experiment, the system showed strong decoding performance. With the e-TRCA algorithm and a 3-s window, the average accuracy was (95.00 ± 2.04)%. The system reached a peak ITR of (147.24 ± 30.52) bits/min with a 0.4-s data length (Fig. 7). 
Comparison with existing SSVEP-BCI systems (Table 1) indicates that, despite constraints of miniaturization, cable-free use, and four channels, the system achieved accuracy comparable with several cable-dependent laboratory systems while offering improved wearability.  Conclusions  This work presents a wearable SSVEP-BCI system that integrates ultra-miniaturized hardware with a distributed cable-free architecture. The results show that coordinated hardware and system design can overcome tradeoffs between device size, user mobility, and decoding capability. The acquisition node (3.7 g, 3.05 cm3) supports concealable wearability, and the synchronization strategy provides reliable cable-free operation. In a realistic environment, the system produced online performance comparable with many cable-dependent setups, achieving 95.00% accuracy and a peak ITR of 147.24 bits/min in a 40-target task. Therefore, this study provides a practical system-level solution that supports progress toward wearable high-performance BCIs.
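The bits/min figures quoted above are conventionally computed with the standard Wolpaw ITR formula for an N-target speller; the sketch below uses that formula, with the assumption (ours, not necessarily the paper's) that the selection time T includes both the data window and any gaze-shift interval.

```python
from math import log2

# Standard Wolpaw ITR for an N-target speller, in bits per minute.
def itr_bits_per_min(n_targets, accuracy, t_select_s):
    p, n = accuracy, n_targets
    bits = log2(n)
    if 0 < p < 1:
        # penalty terms vanish at p = 1 (perfect accuracy)
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / t_select_s

# e.g. 40 targets at 95% accuracy, one selection every 3 s
example = itr_bits_per_min(40, 0.95, 3.0)
```

With these illustrative timing assumptions the 3-s window yields roughly 95 bits/min; the higher peak ITR reported in the abstract comes from the much shorter 0.4-s data length.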
Model-Free Adaptive Resilient Control of Vehicle Platoons Against Hybrid Cyberattacks
HAN Qiaoni, MA Jianguo, LI Peng, ZUO Zhiqiang
Available online  , doi: 10.11999/JEIT251135
Abstract:
  Objective  Connected and automated vehicle platoons represent a key technology for improving traffic efficiency, driving safety, and fuel economy in intelligent transportation systems. Through inter-vehicle information exchange and cooperative control, vehicle platoons achieve safe and efficient car-following operations. However, the strong dependence on vehicular communication networks makes such systems vulnerable to cyberattacks, particularly hybrid threats that combine Denial-of-Service (DoS) and False Data Injection (FDI) attacks. These attacks may interrupt communication or tamper with transmitted information, which threatens the safety and stability of vehicle platoon systems. In addition, vehicle platoon control is affected by environmental disturbances, parametric uncertainties, and nonlinear vehicle dynamics. Existing model-based control methods often experience performance degradation under such complex conditions. Therefore, a resilient data-driven control strategy that does not rely on accurate mechanical models is required. This paper develops an attack-compensated Model-Free Adaptive Control (MFAC) framework to ensure secure and stable operation of heterogeneous nonlinear vehicle platoons under hybrid cyberattacks.  Methods  To address the resilient control problem of connected vehicle platoons under cyberattacks, an MFAC method with attack compensation is proposed for hybrid attacks that include both DoS and FDI attacks. First, a nonlinear longitudinal vehicle dynamics model of the platoon is established. Using the dynamic linearization technique, the model is converted into an equivalent compact-form dynamic linearized data model. This transformation decouples controller design from the specific mechanical model of the vehicle. An output tuning factor is further introduced to balance the tracking of position and velocity states. 
Second, a hybrid attack model is constructed to represent persistent FDI attacks that inject malicious data and aperiodic DoS attacks that interrupt communication. A pseudo-gradient estimator is then designed to capture system dynamics from real-time input-output data. The influence of hybrid attacks on this estimator is analyzed, and an adaptive update strategy is proposed for operation during DoS attacks. Finally, an intelligent attack compensation mechanism is designed. During DoS attack periods, the mechanism utilizes historical control input information to maintain controller operation. This design enables the system to operate continuously even when real-time vehicle state information is unavailable and further improves control performance under DoS attacks.  Results and Discussions  Rigorous theoretical analysis proves that the tracking error of the closed-loop system remains bounded under specific conditions on the frequency and duration of cyberattacks (Theorem 1). Extensive simulations verify the effectiveness of the proposed method. During cyberattacks, the MFAC method with the proposed compensation mechanism adaptively adjusts the attenuation rate of the control input and maintains system control performance (Fig. 3). Follower vehicles successfully track the leader’s velocity variations and maintain the desired inter-vehicle spacing (Fig. 4a, 4b). The tracking error exhibits satisfactory convergence behavior (Fig. 4d), which confirms the stability of the closed-loop system. Comparative studies highlight the role of the compensation mechanism. When the mechanism is disabled, the platoon experiences clear performance degradation during cyberattacks (Fig. 5). In contrast, the proposed method maintains higher tracking accuracy and faster error recovery. Additional simulations analyze the effect of FDI attack intensity. As attack intensity increases, the steady-state error bound expands (Fig. 6). 
This observation quantitatively supports the theoretical robustness analysis and provides useful guidance for determining security thresholds in applications.  Conclusions  This paper advances secure control of heterogeneous nonlinear connected vehicle platoons by proposing an attack-compensated MFAC framework. The framework addresses the combined challenges of hybrid cyberattacks (DoS and FDI attacks) and nonlinear system dynamics. Specifically, three key contributions are made: (1) A data-driven dynamic linearization framework is developed, and an output tuning factor is introduced to enable simultaneous position and velocity tracking based on the nonlinear longitudinal vehicle dynamics model and its equivalent data-based linearized model. (2) A hybrid attack model is established that includes aperiodic DoS attacks that interrupt communication and bounded additive FDI attacks that inject malicious data, capturing their essential characteristics. (3) An intelligent historical input-driven compensation mechanism is designed and integrated with a pseudo-gradient estimator to improve control performance during DoS-induced communication interruptions. Theoretical analysis and simulation results confirm the effectiveness of the proposed method. When attack parameters satisfy specific conditions, the system tracking error remains bounded, and follower vehicles accurately track the leader’s states. The proposed method also achieves better velocity tracking accuracy and faster error convergence than the compensation-free baseline scheme. By focusing on hybrid scenarios with aperiodic DoS and bounded additive FDI attacks, this study provides a practical model-free approach to improve cybersecurity in connected vehicle platoons. Future work will examine more stealthy hybrid attack modes, including non-additive FDI, spoofing, and DoS attacks, to analyze their coupling mechanisms and develop targeted defense strategies. 
In addition, a communication-efficient MFAC strategy that integrates an event-triggered mechanism will be investigated to reduce network load and improve scalability.
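The core loop described above (compact-form dynamic linearization, pseudo-gradient estimation, and hold-last-input compensation under DoS) can be sketched on a toy scalar plant. The plant, gains, and attack window are illustrative stand-ins, not the paper's vehicle dynamics, controller tuning, or attack model.

```python
# Minimal compact-form dynamic-linearization MFAC loop on a toy scalar
# nonlinear plant, with naive hold-last-input compensation during a
# simulated DoS dropout.
def plant(y, u):
    # unknown to the controller; only input-output data are used
    return 0.6 * y + 0.5 * u + 0.1 * y / (1.0 + y * y)

eta, mu = 0.8, 1.0        # pseudo-gradient estimator step and regularizer
rho, lam = 0.8, 1.0       # control-law step and regularizer
phi = 1.0                 # pseudo-gradient estimate
y = y_prev = u_prev = du_prev = 0.0
y_ref = 1.0               # desired output
dos_steps = {20, 21, 22}  # steps where the measurement link is jammed

for k in range(200):
    if k in dos_steps:
        u = u_prev        # compensation: reuse the last available control input
    else:
        dy = y - y_prev
        # pseudo-gradient update from measured input-output increments
        phi += eta * du_prev / (mu + du_prev ** 2) * (dy - phi * du_prev)
        phi = max(phi, 0.1)   # crude stand-in for the usual estimator reset
        u = u_prev + rho * phi / (lam + phi ** 2) * (y_ref - y)
    y_prev, y = y, plant(y, u)
    du_prev, u_prev = u - u_prev, u
```

Despite the three-step dropout, the integral-like control law drives the tracking error toward zero, which mirrors the bounded-error behavior the analysis establishes under limited attack duration.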
SCUNet-Based Decoding Algorithm for Rayleigh Fading Channels Integrating Feature Extraction and Recovery Mechanisms
WANG Leijun, WANG Kuan, XIE Jinfa, PENG Xidong, LI Jiawen, CHEN Rongjun
Available online  , doi: 10.11999/JEIT251138
Abstract:
  Objective  This study examines limitations of conventional Deep Neural Network (DNN) decoding algorithms in Rayleigh fading channels, including constrained performance, limited generalization, and weak fading resistance. To address these issues, a decoding algorithm based on the SCUNet (Swin Conv UNet) architecture, termed SCUNetDec, is proposed. In 6G communication scenarios, wireless channels exhibit strong dynamics and complexity, which restrict the ability of traditional decoding methods to meet requirements for high reliability, low latency, and robustness. Intelligent decoding methods with adaptive feature learning are therefore valuable. SCUNetDec integrates multi-dimensional feature extraction and recovery modules and uses a noise-level map to strengthen channel-state perception. These components enable the network to learn channel characteristics, reduce fading effects, and improve decoding performance. The study provides an approach for intelligent decoding in complex channel environments and supports the development of efficient 6G communication systems.  Methods  The SCUNetDec network combines three mechanisms—data preprocessing, feature extraction and recovery, and noise-level mapping—to enhance signal representation learning and decoding in Rayleigh fading channels. In the preprocessing stage, dimensionality expansion converts the one-dimensional received signal into a two-dimensional feature map, improving structural visibility and supporting spatial correlation learning. The feature extraction and recovery module uses multi-layer convolution and attention mechanisms to capture essential channel features, whereas deconvolution layers and residual connections suppress interference introduced during dimensionality transformation. This improves reconstruction quality and decoding accuracy. 
A noise-level map embeds SNR (Signal to Noise Ratio)-related information aligned with the feature maps, allowing the model to adjust to channel variation and adapt decoding strength. The combined effect of these mechanisms increases noise robustness, generalization, and decoding stability, offering a systematic decoding solution for complex 6G wireless environments.  Results and Discussions  SCUNetDec enhances signal learning and decoding in Rayleigh fading channels through its feature extraction–recovery module and noise-level map. Simulations under different coding schemes validate its effectiveness. For the (7,4) Hamming code, SCUNetDec outperforms conventional DNN decoding and approaches Maximum Likelihood (ML) performance; at BER (Bit Error Rate) = 10⁻⁴, the gap to ML is about 1.5 dB, and at FER (Frame Error Rate) = 10⁻³, the gap is about 2.0 dB (Fig. 4). This indicates that SCUNetDec captures complex signal relationships and learns associations between information and parity-check nodes. For the (2,1,3) convolutional code, SCUNetDec performs close to the Viterbi algorithm at BER = 10⁻³, with a gap of roughly 2.0 dB, while conventional DNN decoding degrades at high SNRs (Fig. 5). For Polar codes with a rate of 0.5, SCUNetDec shows a gain of about 4.0 dB over Successive Cancellation (SC) decoding at BER = 10⁻⁴ and maintains an advantage of about 1.0 dB at FER = 10⁻³, with SC performing slightly better only in the low-SNR region (Fig. 6). Decoding-time comparisons show that SCUNetDec reduces decoding latency relative to traditional methods (Table S1). Ablation experiments confirm that integrating the feature extraction and recovery modules into SCUNet improves decoding performance (Fig. 7). Overall, results show that SCUNetDec provides robust decoding performance across coding schemes and SNR levels.  Conclusions  This study proposes SCUNetDec to address performance limitations of DNN decoders in Rayleigh fading channels. 
The method enhances SCUNet using signal feature extraction and recovery modules. Simulations and ablation experiments on Hamming, convolutional, and Polar codes show strong generalization capability and effectiveness. Compared with traditional DNN models, SCUNetDec achieves decoding performance close to optimal decoding algorithms and reduces decoding time. These findings indicate that SCUNetDec has practical potential for complex channel environments. Future work will examine fusion of neural and traditional algorithms to balance performance and complexity through dynamic parameter optimization and explore intelligent decoding strategies for long codes. Research will also investigate joint modulation–decoding modeling and end-to-end architectures to improve adaptability under high-order modulation and complex channels.
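The preprocessing stage described above can be sketched in a few lines: the one-dimensional received signal is padded and reshaped into a two-dimensional feature map, and a constant noise-level map derived from the SNR is stacked as a second channel. The padding scheme, map layout, and normalization below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def preprocess(received, snr_db, side):
    """Reshape a 1-D received codeword into a 2-D feature map and stack
    a noise-level map as a second channel (sketch; layout is assumed)."""
    padded = np.zeros(side * side)
    padded[:received.size] = received
    feat = padded.reshape(side, side)
    # Noise-level map: same spatial size, filled with the noise std
    # implied by the SNR for unit signal power.
    sigma = 10.0 ** (-snr_db / 20.0)
    noise_map = np.full((side, side), sigma)
    return np.stack([feat, noise_map])   # shape (2, side, side)

# Example: a length-7 Hamming codeword in BPSK form at 6 dB SNR.
x = preprocess(np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0]),
               snr_db=6.0, side=3)
```

The resulting two-channel map can then be fed to a convolutional backbone such as SCUNet, which sees both the signal structure and an explicit channel-state hint.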
Multimodal Pedestrian Trajectory Prediction with Multi-Scale Spatio-Temporal Group Modeling and Diffusion
KONG Xiangyan, GAO YuLong, WANG Gang
Available online  , doi: 10.11999/JEIT250900
Abstract:
  Objective  The rapid development of autonomous driving and social robotics has increased the need for accurate pedestrian trajectory prediction to improve safety and interaction efficiency. Existing group-based methods mainly emphasize local spatial interaction and often overlook latent grouping characteristics across time. This study proposes a multi-scale spatiotemporal feature construction method that separates trajectory shape from absolute spatiotemporal coordinates. This enables the model to capture latent group associations across different temporal intervals. A spatiotemporal interaction three-element encoding mechanism is incorporated to extract dynamic relationships between individuals and groups. By integrating the reverse process length mechanism of diffusion models, the system progressively reduces prediction uncertainty. This approach provides an effective solution for multimodal trajectory prediction in complex, crowded scenes and offers theoretical support for improving the accuracy and stability of long-range trajectory forecasting.  Methods  The algorithm performs deep modeling of pedestrian trajectories through multi-scale spatiotemporal group modeling across three components: group construction, interaction modeling, and trajectory generation. First, to address the limitations of methods that focus on local spatiotemporal patterns but overlook cross-dimensional latent characteristics, a multiscale trajectory grouping model is developed. Its core design extracts trajectory offsets to represent trajectory shapes, separating motion features from absolute positions. This enables the system to identify latent group associations among agents who follow similar motion patterns across different periods. Second, a spatiotemporal interaction three-element encoding method is proposed. 
By defining neural interaction strength, interaction categories, and category functions, the method captures detailed individual interactions and the global dynamic evolution of collective behavior. Finally, a Diffusion Model is introduced for multimodal prediction. Through the reverse process length mechanism, the model converges gradually, reduces uncertainty, and transforms a diffuse prediction space into plausible future trajectories.  Results and Discussions  The proposed model was evaluated against 11 state-of-the-art baselines on the NBA dataset (Table 1). The results show clear advantages in minADE20. It achieves substantial gains over GroupNet+CVAE in long-term prediction tasks, improving minADE20 and minFDE20 by 0.18 and 0.36, respectively, at the 4-second horizon. Although it is slightly inferior to MID in long-term trend prediction, possibly because group dynamics shift rapidly and intensely in NBA scenarios, the model maintains strong instantaneous accuracy. This supports the effectiveness of the multi-scale grouping strategy, which uses historical trajectories to capture complex dynamic interactions. On the ETH/UCY datasets (Table 2), MSGD provides consistent improvements across all five sub-scenes. In the dense and highly interactive UNIV scene, the method exceeds all baselines by leveraging the strengths of multi-scale modeling. Although MSGD is marginally behind PPT in long-distance endpoint constraints, it maintains a lead in minADE20. It also outperforms Trajectron++ in velocity smoothness and directional coherence (std dev: 0.7012) (Table 3), indicating that the generated trajectories maintain natural smoothness aligned with human motion. Ablation studies verify the independent effects of the diffusion model, spatiotemporal feature extraction, and multi-scale grouping modules (Table 4). 
Grouping sensitivity analysis on the NBA dataset shows that full-court grouping (group size 11) enhances long-term stability, reducing minFDE20 by 0.026–0.03 at 4 seconds (Table 5). Configurations with group sizes of 5 or 2 further support the importance of team formations and “one-on-one” local offensive and defensive dynamics (Table 6). Diffusion-step and training-epoch sensitivity analysis reveals a complementary relationship: moderate diffusion steps (30–40) refine denoising and improve accuracy, whereas excessive steps may cause overfitting (Table 7). Qualitative visualization confirms that MSGD generates multimodal trajectories with high overlap with ground truth (Fig. 2).  Conclusions  This study presents a trajectory prediction algorithm that improves performance in two primary ways: (1) it captures pedestrian interactions by extracting spatiotemporal features, and (2) it strengthens collective behavior modeling through multi-scale grouping. Experiments show that the method achieves state-of-the-art performance on the NBA and ETH/UCY datasets, and ablation studies confirm the effectiveness of all modules. Two limitations remain. First, explicit environmental information, such as maps or obstacles, is not yet incorporated. Second, the diffusion model requires substantial computational cost during inference. Future research will address these issues.
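The core idea of separating trajectory shape from absolute spatiotemporal coordinates can be sketched directly: representing a trajectory by its per-step offsets makes two agents with the same motion pattern look identical regardless of where in the scene they move. This is a minimal illustration of that separation, not the paper's full multi-scale construction.

```python
import numpy as np

def trajectory_offsets(traj):
    """Per-step offsets (motion shape) of a (T, 2) trajectory,
    discarding absolute position."""
    return np.diff(traj, axis=0)

a = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
b = a + np.array([10.0, -3.0])   # same motion pattern, shifted in space
same_shape = np.allclose(trajectory_offsets(a), trajectory_offsets(b))
```

Because the offsets of `a` and `b` coincide, a grouping model built on offsets can associate agents that share a motion pattern even when they occupy different regions of the scene or different time windows.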
Design of Dynamic Resource Awareness and Task Offloading Schemes in Multi-Access Edge Computing Networks
ZHANG Bingxue, LI Xisheng, YOU Jia
Available online  , doi: 10.11999/JEIT250640
Abstract:
  Objective  With the growth of the industrial Internet of Things and the widespread use of multimode terminals, multi-access edge computing has become a key technology that supports low-latency and energy-efficient industrial applications. Task offloading is central to addressing the large volume and complex processing requirements of multimode terminals. In multi-access edge computing systems, end-user network selection strongly affects offloading and resource allocation. However, existing network-selection mechanisms emphasize user decisions while neglecting the effects of task execution, task-data transmission, and processing on network performance. Current studies on offloading design emphasize delay, energy optimization, and resource allocation, but overlook how collaborative computing across heterogeneous networks affects resource cost and dynamic resource balance. To address these issues, this study considers users’ diverse requirements and the differentiated capabilities of heterogeneous resource providers. It focuses on cost-efficient task-execution decisions and dynamic-resource allocation in multi-access heterogeneous networks to reduce system cost, improve service quality, and support cooperative use of heterogeneous resources.  Methods  Following the MEC network model, this study establishes cost-calculation models for task-execution time, energy consumption, and communication-resource consumption for different networks during end-user task selection. Using auction theory, it constructs a cost-effectiveness model for task evaluation and bidding between users and edge servers, and formulates the objective optimization problem based on combinatorial two-way auction theory. A dynamic resource-sensing and task offloading algorithm based on an auction mechanism is then proposed. 
Through two-way broadcasting of pending tasks and required resources, the algorithm performs network-selection assessment and dynamic allocation of computing and communication resources. Servers submit valid bids only when their available resources satisfy user constraints. Servers that issue valid bids compete for task-execution opportunities until the user obtains the optimal bid and corresponding server, which completes the auction-matching process.  Results and Discussions  The proposed dynamic-resource allocation and task offloading algorithm accounts for heterogeneous-network conditions and resource usage, and selects offloading locations based on resource availability. By setting simulation parameters, a heterogeneous wireless-network cooperation model is constructed. The effects of network size on offloading cost and offloaded data volume are analyzed. Simulation results show that the algorithm reduces system cost by at least 5% compared with benchmark algorithms (Fig. 3), with larger advantages when the number of end users increases. Changes in the number of servers influence users’ network-selection behavior (Figs. 4, 5, and 6). Across algorithms, the proposed method increases the amount of offloaded data by approximately 10% relative to benchmark schemes (Figs. 7 and 8). Finally, the study analyzes how variation in communication-resource cost parameters affects users’ preference for offloading via the 5G public network. Higher communication-cost parameters markedly reduce the data volume offloaded through the 5G network (Fig. 9).  Conclusions  To address complex data-processing demands from multimode terminals, this study develops a cooperative multi-access edge computing architecture for multimode devices. Flexible and intelligent wireless-network selection provides additional resources for end-user task offloading. 
A server-bidding and user-target bidding model is built using an auction framework, and a dynamic resource-perception and task offloading algorithm is proposed. The algorithm first adjusts and selects the offloading network and allocates computing and communication resources according to incoming tasks. It then determines the offloading location with minimum execution cost based on competition among edge servers. Results indicate that the proposed algorithm lowers system cost compared with benchmark approaches, increases the amount of data offloaded to multiple edge servers, improves utilization of edge-computing resources, and enhances system energy efficiency and operational efficiency.
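One round of the auction matching described above can be sketched as follows: servers submit bids only if their free resources satisfy the task's constraints, and the user accepts the lowest-cost valid bid. The cost model, resource fields, and server names here are illustrative assumptions, not the paper's cost-effectiveness model.

```python
def match_task(task, servers):
    """One auction round (sketch): filter servers whose free resources
    meet the task's constraints, then pick the lowest-cost valid bid."""
    valid_bids = [
        (s["cost_per_cycle"] * task["cycles"], s["name"])
        for s in servers
        if s["free_cpu"] >= task["cpu_req"] and s["free_bw"] >= task["bw_req"]
    ]
    if not valid_bids:
        return None                      # no server can host the task
    cost, winner = min(valid_bids)
    return winner, cost

servers = [
    {"name": "edge-A", "free_cpu": 4, "free_bw": 10, "cost_per_cycle": 5},
    {"name": "edge-B", "free_cpu": 8, "free_bw": 20, "cost_per_cycle": 3},
    {"name": "edge-C", "free_cpu": 2, "free_bw": 5,  "cost_per_cycle": 1},
]
task = {"cycles": 100, "cpu_req": 4, "bw_req": 8}
result = match_task(task, servers)       # edge-C is filtered out: too little CPU
```

In the full two-way scheme, both sides broadcast their requirements and bids iterate over many tasks; this sketch shows only the valid-bid filtering and winner selection for a single task.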
UAV-assisted Mobile Edge Computing based on Hybrid Hierarchical DRL in the Internet of Vehicles
YANG Miaoyan, FANG Xuming
Available online  , doi: 10.11999/JEIT250743
Abstract:
  Objective  In the Internet of Vehicles (IoV), the use of Unmanned Aerial Vehicles (UAVs) to address increasing edge computing demand has become a key direction in 6G research. However, when Deep Reinforcement Learning (DRL) is applied to optimize system latency, the action space grows exponentially with the number of vehicles and causes training difficulty and slow convergence. This study proposes a two-layer hybrid solution for UAV-assisted Mobile Edge Computing (MEC) based on DRL, termed Hybrid Hierarchical Deep Reinforcement Learning (HHDRL).  Methods  The HHDRL algorithm adopts a two-layer architecture to decompose complex optimization tasks. The upper layer uses an agent based on Proximal Policy Optimization (PPO) and a multi-head actor network to manage user offloading and UAV control policies. The N heads determine offloading decisions for N users, including local processing or offloading to associated CAPs or the UAV. A separate UAV flight-control head selects discrete acceleration actions to satisfy practical control constraints. The lower layer applies a computationally efficient greedy algorithm to prioritize resources based on task characteristics. This hybrid hierarchical design reduces the computational cost associated with DRL-only resource allocation.  Results and Discussions  The performance of the HHDRL scheme was evaluated through numerical simulations using a Rician fading channel model, a UAV flight energy consumption model, and system parameters such as mission data sizes of 9–18 Mbits and mission complexities of 2 000–3 000 cycles/bit. Figure 3 shows that HHDRL converges faster than standard DRL, although the final reward is slightly lower. Figure 4 indicates that HHDRL maintains the user delay fairness of DRL. The evaluation in Figure 5 shows that the proposed method reduces system latency by approximately 71–91% compared with a random baseline and by 1–12% compared with the original DRL algorithm. 
Figure 6 shows training time results for different numbers of users; HHDRL consistently achieves shorter training times, and its training time grows more slowly as the number of users increases. This results from the reduced DRL output action space. When the PPO-based upper layer is replaced with other DRL algorithms, the scheme still outperforms the random baseline and achieves performance comparable to non-hierarchical DRL, demonstrating the generality of the architecture. Figure 8 shows that computational resources have the strongest effect on latency because computation typically dominates total task processing time. Figure 9 presents UAV trajectory optimization. Figure 9(a) shows realistic velocity changes under discrete acceleration control. Figure 9(b) shows that the UAV adjusts its position to track dynamic user distribution while maintaining stable flight.  Conclusions  This study presents an HHDRL algorithm that integrates DRL with a greedy strategy in a hierarchical framework to address the training challenges of UAV-assisted MEC in IoV scenarios. The simulations show that (1) the proposed method accelerates convergence and reduces training time compared with standard DRL; (2) its latency performance is comparable to DRL and significantly better than heuristic and random baselines; and (3) the framework effectively manages task offloading, resource allocation, and UAV trajectory optimization under practical constraints. Future work will extend the framework to multi-UAV collaboration and more complex environments.
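The lower-layer greedy allocation described above can be sketched as a simple priority rule: offloaded tasks receive CPU cycles in descending order of computational demand until the server's budget is exhausted. The priority criterion and field names are illustrative assumptions; the paper's greedy algorithm prioritizes by its own task characteristics.

```python
def greedy_allocate(tasks, total_cpu):
    """Lower-layer greedy sketch: serve the most demanding offloaded
    tasks first until the CPU budget runs out."""
    alloc = {}
    remaining = total_cpu
    for t in sorted(tasks, key=lambda t: t["cycles"], reverse=True):
        share = min(t["cycles"], remaining)
        alloc[t["id"]] = share
        remaining -= share
    return alloc

tasks = [{"id": "u1", "cycles": 40},
         {"id": "u2", "cycles": 70},
         {"id": "u3", "cycles": 20}]
alloc = greedy_allocate(tasks, total_cpu=100)
```

Because this step is a cheap deterministic rule rather than part of the learned policy, the DRL agent's action space no longer grows with the resource-allocation dimension, which is the source of the training-time savings reported above.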
Multi-Agent Deep Reinforcement Learning Strategy for Multi-Spacecraft Long-Distance Orbital Game
DI Peng, YIN Zengshan, LIN Zheng, YAO Ye
Available online  , doi: 10.11999/JEIT251384
Abstract:
This paper introduces a novel research scenario for the multi-spacecraft Orbital Pursuit-Evasion Game (OPEG), which has not yet been systematically studied. To enhance the decision-making capabilities of spacecraft and enable them to formulate more robust policies in complex multi-agent games, this paper proposes a Multi-Agent Deep Reinforcement Learning (MADRL) algorithm based on a progressive adversarial training framework to solve the game policies of each spacecraft. Two sets of examples with different orbital characteristics and various simulation conditions are set up for simulation verification, and behavioral deviation analysis is conducted to verify the robustness of the policies. The impact of different orbital characteristics, simulation conditions, and behavioral deviations on the game policies is analyzed. Simulation results show that the proposed method enables each spacecraft to formulate an effective game policy that satisfies all set constraints and has good robustness.  Objective  As the space environment becomes increasingly complex, space security has become a hot research area. The existence of a large amount of space debris and failed spacecraft poses a serious threat to high-value spacecraft in orbit. Therefore, the study of Orbital Pursuit-Evasion Game (OPEG) for non-cooperative target spacecraft has attracted widespread attention. Existing research focuses on OPEG between two spacecraft and pays less attention to OPEG among multiple spacecraft. When there are more than two players in the game, zero-sum game design is not feasible, and the problem is difficult to solve using traditional methods. Furthermore, existing research ignores engineering dynamic constraints and simplifies the dynamics or restricts them to a two-dimensional scene when modeling the problem, which can cause considerable errors. To overcome the limitations of existing spacecraft game scenarios, this paper proposes a novel multi-spacecraft OPEG research scenario. 
The aim is to investigate the application of the MADRL algorithm in solving the approximate steady-state policies of each spacecraft in long-distance multi-spacecraft OPEG, highlighting the significant advantages of the MADRL algorithm in solving multi-spacecraft OPEG, and providing a feasible solution for truly realizing autonomous multi-spacecraft game play in the future.  Methods  The Multi-Agent Proximal Policy Optimization (MAPPO) algorithm based on the Progressive Adversarial Training Framework (PATF) is used to solve the optimal game policy for each spacecraft in the multi-spacecraft OPEG. First, a multi-constrained multi-spacecraft OPEG model is established based on actual engineering constraints, and the problem is transformed into a Decentralized Partially Observable Markov Decision Process (Dec-POMDP). Second, to improve the decision-making ability of agents in complex multi-agent game environments and to formulate more robust game policies, a novel PATF is introduced, with different reward functions designed for the specific missions of each spacecraft. Finally, two sets of simulation examples with different orbital characteristics are constructed under four different simulation conditions, and behavioral deviation analysis is performed.  Results and Discussions  The MAPPO algorithm based on the PATF proposed in this paper is compared with the original MAPPO (Fig. 3). The results show that the proposed method can learn effective policies more quickly, reduce ineffective exploration, and achieve a higher final convergence reward value with less fluctuation in the reward curve. This also demonstrates that the PATF can significantly enhance the decision-making ability of agents, enabling them to formulate robust policies more effectively. Simulation verification was performed using two sets of examples in four different settings (Figs. 4, 5, 6, and 7). 
Simulation results (Tables 3 and 4) show that the proposed method performs well in both sets of examples. Furthermore, it was verified that when the pursuer and the interceptor are on the same orbital plane, the pursuer is more likely to be intercepted. When the interceptor and the target are not on the same orbital plane, the interceptor has a relatively easier time carrying out the interception mission. This paper also analyzes the situation where both sides of the game have behavioral biases, and models this by adding control noise. Simulation results (Tables 5 and 6) show that both sides adopt relatively conservative policies to counter the control noise. The game policy formulated by the method in this paper is an approximate steady-state policy. Behavioral deviations will lead to a decrease in one’s own payoff and an increase in the opponent's payoff, and the game policy has good robustness.  Conclusions  The method proposed in this paper can be well applied to solving the long-distance OPEG problem involving multiple spacecraft in non-coplanar elliptical orbits, enabling each spacecraft to formulate excellent game policies. The PATF facilitates better decision-making by the spacecraft in complex multi-spacecraft dynamic systems, with robust control policies developed by the pursuer and interceptors. The results also demonstrate the accuracy and effectiveness of the reward function design. Through two sets of examples and simulation results with different settings, the impact on the policies of both parties when the pursuer and interceptor have different orbital characteristics is analyzed. When interceptors have different maximum thrusts, the decision-making of each spacecraft changes accordingly. The behavior deviation analysis proves that the game policies of each spacecraft have good robustness. 
When one party’s behavior deviates, the approximate steady-state policy balance will change, resulting in a decrease in its own benefits and an increase in the other party’s benefits. The research scenario formulated in this paper expands the scope of existing research on multi-spacecraft game problems.
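The progressive adversarial idea can be sketched as a training schedule in which the pursuer and evader sides are trained alternately, each stage updating one side while the opponent's policy is held fixed. The staging rule, side names, and callback signature below are illustrative assumptions; the paper's PATF and reward design are more elaborate.

```python
def progressive_adversarial_training(train_step, n_stages=3, steps_per_stage=2):
    """Skeleton of a progressive adversarial schedule (sketch).

    Each stage trains one side against the frozen opponent;
    `train_step(side, stage)` is a user-supplied update callback.
    """
    schedule = []
    for stage in range(n_stages):
        for side in ("pursuer", "evader"):
            for _ in range(steps_per_stage):
                train_step(side, stage)          # e.g. one MAPPO update
                schedule.append((side, stage))
    return schedule

# With a no-op callback, we can inspect the training order itself.
log = progressive_adversarial_training(lambda side, stage: None,
                                       n_stages=2, steps_per_stage=1)
```

In a real setup, `train_step` would run policy-gradient updates for the named side, and later stages would face progressively stronger frozen opponents from earlier stages.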
Joint Channel Estimation and Diagnosis for Blocked RIS-Assisted Multi-User Multipath Millimeter-Wave Systems
LI Shuangzhi, LIU Cong, WANG Ning, HAN Gangtao, GUO Xin
Available online  , doi: 10.11999/JEIT260093
Abstract:
  Objective  Although a Reconfigurable Intelligent Surface (RIS) can effectively modulate Millimeter-Wave (mmWave) signals to reshape wireless environments, its elements are susceptible to weather and physical obstructions in practice, causing unpredictable distortions that necessitate joint channel estimation and blockage diagnosis. While most existing work focuses on single-user systems, multi-user scenarios remain underexplored, presenting a key opportunity to leverage the commonality of RIS blockages and RIS-Base Station (BS) paths across users. This paper proposes a low-complexity framework exploiting the sparsity and correlation of multi-user cascaded channels for joint estimation and diagnosis.  Methods  Based on the premise that all User Equipments (UEs) share the same RIS-BS channel and a common RIS blockage, we decompose the problem into two stages. First, a target UE is selected, where we exploit the dual sparsity of the mmWave channel and blockage vector, along with linear dependencies among RIS-BS paths, to formulate a sparse recovery problem. This is solved via a hierarchical Bayesian model using an efficient sparse Bayesian learning algorithm for joint recovery. Second, partial Channel State Information (CSI) from the target UE constructs a common coupling matrix that integrates the RIS-BS channel and blockage, reformulating channel estimation for the remaining UEs as another sparse recovery problem.  Results and Discussions  This paper proposes a low-complexity strategy for cascaded channel estimation and blockage diagnosis by exploiting the sparsity and correlation of multi-user cascaded channels and leveraging RIS blockage commonality. Ideal estimation results serve as a theoretical lower bound, against which the proposed algorithm and two benchmark schemes are compared. Simulation results demonstrate that the proposed algorithm consistently outperforms the benchmarks (Fig. 1). 
Key findings include: higher target user SNR improves NMSE, highlighting selection importance (Fig. 2); strong convergence with increasing iterations (Fig. 3); closer approximation to the ideal case as time frames increase (Fig. 4); robustness under increased blockage (Fig. 5); performance gains from more base station antennas leveraging array orthogonality (Fig. 6); superior estimation with slightly lower runtime via path correlations (Table 3); and accuracy reduction with increasing path count due to higher model complexity (Figs. 7 and 8).  Conclusions  This paper proposes a joint channel estimation and blockage diagnosis framework for blocked RIS-assisted multi-user mmWave systems. Simulations show the method closely approaches the theoretical performance bound in complex multipath environments. It maintains performance advantages under high blockage rates while reducing pilot overhead and computational complexity via common channel structures. The work mitigates performance degradation in practical RIS deployments, clarifies key parameter impacts, and offers insights for system design. As practical blockages often exhibit block or structured sparsity, a promising direction is to incorporate structured priors (e.g., group sparsity, Markov random fields) into the SBL framework to capture spatial correlations and enhance diagnostic accuracy and robustness.
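The sparse Bayesian learning engine at the core of the first stage can be illustrated with a minimal EM-style implementation for a generic sparse recovery problem y = Φx + n. This is a textbook-style sketch of SBL, not the paper's hierarchical model with blockage priors; the noise level and iteration count are assumed.

```python
import numpy as np

def sbl_recover(Phi, y, noise_var=0.01, n_iter=50):
    """Minimal sparse Bayesian learning (EM) sketch for y = Phi @ x + n.

    Each coefficient x_i has a zero-mean Gaussian prior with its own
    variance gamma_i; EM updates drive most gamma_i toward zero,
    yielding a sparse posterior mean.
    """
    M, N = Phi.shape
    gamma = np.ones(N)                          # per-coefficient prior variances
    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        Sigma_y = noise_var * np.eye(M) + Phi @ Gamma @ Phi.T
        K = Gamma @ Phi.T @ np.linalg.inv(Sigma_y)
        mu = K @ y                              # posterior mean of x
        Sigma = Gamma - K @ Phi @ Gamma         # posterior covariance
        gamma = mu ** 2 + np.diag(Sigma)        # EM update of the variances
    return mu

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 40))             # underdetermined: 20 obs, 40 unknowns
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]                   # 2-sparse ground truth
y = Phi @ x_true + 0.01 * rng.standard_normal(20)
x_hat = sbl_recover(Phi, y)
```

The two-stage framework in the paper layers structure on top of this engine, tying the blockage vector and RIS-BS path dependencies into the prior rather than treating every coefficient independently.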
Multi-dimensional Resource Joint Optimization Algorithm for UAV Inspection of Collaborative Tasks of Perception and AI
LI Shiyang, ZHU Xiaorong
Available online  , doi: 10.11999/JEIT251284
Abstract:
  Objective  With increasing demand for aerial operations, the capabilities of various aircraft are steadily expanding across all airspace levels and multiple industries. The application of Unmanned Aerial Vehicles (UAVs) now spans multiple altitude layers, from low to high altitudes, and covers micro, medium, and large models. UAVs are widely used in public safety, transportation, emergency management, logistics and distribution, geographic surveying and mapping, and other fields, thereby promoting innovation and transformation in production and daily life. Compared with traditional manual inspection, UAV inspection, as an emerging operational approach, can acquire image information that is difficult for the human eye to capture. Labor costs are therefore significantly reduced, and the accuracy and efficiency of inspection operations are improved. However, UAV inspection also creates new challenges for multidimensional resource allocation and task scheduling. In power system inspection, for example, transmission lines are exposed to outdoor environments for long periods and are vulnerable to corrosion, aging, and even damage. Regular inspections are therefore required to ensure operational safety.   Methods  A four-stage multidimensional resource inspection and scheduling collaborative optimization algorithm is proposed. The original optimization problem is decomposed into four subproblems according to the inspection process. After mathematical analysis of each subproblem, a corresponding solution method is proposed. For the node selection problem, a dual-aided Mixed-Integer Linear Programming (MILP) transformation method is used. For the UAV data acquisition problem, a data-driven boundary learning method is adopted. For UAV communication resource allocation, a bandwidth-power joint optimization algorithm based on Successive Convex Approximation (SCA) is used. For node computing power allocation, a lower-bound analytical allocation method is adopted. 
Finally, the original problem is solved by an alternating optimization method across the subproblems, thereby forming the complete algorithm.  Results and Discussions  Simulation results show that the proposed algorithm reduces overall UAV energy consumption compared with the benchmark algorithms. Simulation training is conducted for visual positioning and fault detection services to examine the relationship among compression ratio, data volume, and service performance. Figures 2–5 show that fault detection accuracy reaches its optimum at 60% data volume and 60% compression ratio. Visual positioning accuracy reaches its optimum at 80% data volume and 80% compression ratio. Figure 6 shows that the proposed algorithm achieves higher accuracy than the benchmark algorithms for AI services. As shown in Figures 7 and 8, under varying bandwidth, computing power, and other resource conditions, the proposed algorithm consistently performs better than the benchmark algorithms in terms of energy consumption and effectively reduces total energy consumption.  Conclusions  A multidimensional resource joint optimization algorithm is proposed for intelligent UAV inspection with collaborative perception and AI tasks. An optimization problem is formulated with the objective of minimizing UAV energy consumption, using bandwidth, power, computing power, node selection, data volume, and actual compression ratio as variables. The algorithm jointly minimizes UAV energy consumption for two AI services, fault detection and visual localization. Simulation results show that the algorithm reduces total UAV energy consumption and improves model training accuracy. This study focuses on the application scenario of single-UAV inspection. More complex multi-UAV collaborative inspection scenarios can be examined in future work, and additional services can be incorporated for a more comprehensive analysis.
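The alternating optimization step that stitches the four subproblems together can be sketched as a generic block-coordinate loop: each sub-solver updates its own block of variables with the others held fixed, and the loop stops when the objective no longer improves. The toy quadratic objective and block solvers below stand in for the paper's energy model and its four subproblem methods.

```python
def alternating_optimization(steps, x0, objective, n_rounds=100, tol=1e-9):
    """Block alternating-optimization skeleton (sketch).

    `steps` maps each variable-block name to an update that minimizes
    the objective over that block with the other blocks fixed.
    """
    x = dict(x0)
    prev = float("inf")
    for _ in range(n_rounds):
        for name, update in steps.items():
            x[name] = update(x)              # update one block, others fixed
        obj = objective(x)
        if abs(prev - obj) < tol:            # converged: objective has settled
            break
        prev = obj
    return x

# Toy stand-in for the energy objective: f(a, b) = (a-1)^2 + (b-2)^2 + (a-b)^2.
f = lambda x: (x["a"] - 1) ** 2 + (x["b"] - 2) ** 2 + (x["a"] - x["b"]) ** 2
steps = {
    "a": lambda x: (1 + x["b"]) / 2,         # exact minimizer over a, b fixed
    "b": lambda x: (2 + x["a"]) / 2,         # exact minimizer over b, a fixed
}
sol = alternating_optimization(steps, {"a": 0.0, "b": 0.0}, f)
# The true minimum of f is at a = 4/3, b = 5/3.
```

In the paper's setting, the four blocks would be node selection, data acquisition, communication resources, and computing power, each solved by its dedicated method (MILP transformation, boundary learning, SCA, and the analytical lower bound) inside this outer loop.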
Aerial Spatio-Temporal Image Generation via Latent Diffusion Model
SHANG Yuying, HOU Yingyan, LIU Zinan, LU Wanxuan, HUANG Yuhong, WANG Yixiao, YU Hongfeng, FU Kun
Available online  , doi: 10.11999/JEIT260165
Abstract:
  Objective  Aerial Earth observation plays a pivotal role in environmental monitoring, disaster warning, and urban planning. However, constrained by flight platform endurance, mission window timeliness, and other operational limitations, the acquired aerial imagery often fails to fully characterize the long-term evolutionary processes of the Earth's surface. Although pre-trained diffusion models have demonstrated considerable potential in image generation, their application in the aerial domain remains challenging due to the scarcity of high-quality temporal annotation data and the semantic–visual misalignment arising from variable observation scales. To address these challenges, this paper proposes ASTIG, a training-free framework for Aerial Spatio-Temporal Image Generation. By leveraging the generative priors of pre-trained latent diffusion models and large language models, ASTIG establishes a novel paradigm for semantically controllable aerial temporal image generation.  Methods  ASTIG comprises three synergistically designed components. First, a dynamic semantic decomposition process is introduced to automatically parse complex aerial scene evolution descriptions into frame-level visual prompts, compensating for the lack of temporal semantic annotations in existing aerial image-text datasets. Second, a linguistic binding strategy is proposed to establish explicit associations between key ground objects and their corresponding visual attributes within the cross-attention mechanism of the diffusion model, thereby enhancing the semantic response precision of generated imagery. Third, a temporal anchor attention mechanism is integrated, which employs dual reference frames to enforce subject stability and background consistency across the generated temporal images, effectively suppressing inter-frame temporal drift under training-free conditions.  
Results and Discussions  ASTIG and baseline models are evaluated on 7,236 high-quality aerial spatio-temporal descriptions across six automated metrics, including subject consistency, background consistency, temporal flickering, motion smoothness, aesthetic quality, and imaging quality. Quantitative results (Tables 1 and 2) demonstrate the superiority of ASTIG in temporal image generation, with improvements of 3.91% and 4.57% in subject consistency and temporal smoothness over the frame-prompt baseline, respectively. Qualitative comparisons (Fig. 4) further highlight its robust capability in modeling long-term geospatial evolutionary imagery. Ablation studies validate the individual effectiveness of the linguistic binding strategy and the temporal anchor attention mechanism (Table 3 and Fig. 5). Sensitivity analyses of the intervention steps (Table 4 and Fig. 6) and binding strength (Table 5 and Fig. 7) provide systematic exploration of optimal parameter configurations. Furthermore, extensibility experiments under satellite perspectives (Figs. 8 and 9) reveal the potential of ASTIG to generalize beyond aerial platforms to broader Earth observation scenarios.  Conclusions  This paper proposes ASTIG, a training-free framework for aerial spatio-temporal image generation that addresses the scarcity of high-quality long-term temporal data and semantic–visual misalignment. By leveraging the generative priors of pre-trained latent diffusion models and large language models, ASTIG integrates the dynamic semantic decomposition process, the linguistic binding strategy, and the temporal anchor attention mechanism to jointly address temporal semantic construction, semantic response precision, and inter-frame consistency. Experimental results demonstrate that ASTIG outperforms existing baseline methods across multiple automated evaluation metrics, offering a novel paradigm for aerial spatio-temporal image generation. 
As a training-free method, the performance of ASTIG is inherently bounded by the prior knowledge of the backbone model. Future work will explore geometric correction and nadir-view prior constraints to better align generated results with the physical properties of satellite imagery.
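The linguistic binding idea can be illustrated in miniature: boost the cross-attention logit that ties an attribute token to its object token, then renormalize. The NumPy stand-in for the diffusion model's attention, the token indices, and the `strength` scale below are all assumptions for illustration, not the ASTIG implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last (key-token) axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bind_attention(attn_logits, object_idx, attribute_idx, strength=2.0):
    # Illustrative linguistic binding: amplify the attention of an attribute
    # token toward its object token before normalization, so the generated
    # attribute is more likely to land on the intended ground object.
    logits = attn_logits.copy()
    logits[:, attribute_idx, object_idx] += np.log(strength)
    return softmax(logits)

rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 4, 6))   # (head, query tokens, key tokens)
before = softmax(logits)
after = bind_attention(logits, object_idx=2, attribute_idx=1)
```

Adding `log(strength)` to a logit multiplies the corresponding post-softmax weight by roughly `strength` before renormalization, which is why binding is usually applied in logit space.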
Joint Power Allocation and AP On-Off Control for Long-Term Energy Efficient Cell-Free Massive MIMO Systems
WEI Siqi, GUO Fengqian, CHONG Baolin, CHENG Guo, LU Hancheng
Available online  , doi: 10.11999/JEIT260014
Abstract:
  Objective   With the rapid development of wireless communication technologies, Cell-Free Massive Multiple-Input Multiple-Output (CF-mMIMO) has emerged as an effective paradigm to overcome the limitations of traditional cell-centric networks, such as limited performance for edge users. By deploying a large number of distributed Access Points (APs) connected to a Central Processing Unit (CPU) to cooperatively serve users, CF-mMIMO improves spectral efficiency and macro-diversity gain. However, dense AP deployment also introduces a critical challenge: high energy consumption. In practical systems, if all APs remain continuously active, especially during periods of low traffic load, substantial and unnecessary energy consumption occurs. This behavior reduces network sustainability and conflicts with global “dual-carbon” goals. Existing studies on energy efficiency in CF-mMIMO systems mainly focus on short-term performance optimization. These short-term approaches often ignore long-term traffic dynamics and the requirement of queue stability. Therefore, they lack robustness under time-varying traffic conditions and may cause queue congestion and significant performance fluctuations, which are unacceptable for next-generation wireless networks with strict reliability requirements. Although several recent studies examine long-term energy efficiency optimization, most assume that all APs remain active at all times. Therefore, the energy-saving potential of adaptive AP on-off control is not fully utilized.  Methods   To address these issues, a joint power allocation and AP on-off control strategy is proposed for downlink CF-mMIMO systems. The optimization problem aims to maximize long-term energy efficiency subject to user queue stability and AP power constraints. 
Because the problem has stochastic and long-term characteristics, the Lyapunov optimization framework is applied to transform the original long-term fractional programming problem into a sequence of deterministic drift-plus-penalty minimization problems solved in each time slot. The resulting per-slot problems remain nonconvex. Therefore, each problem is decomposed into two subproblems: power allocation and AP on-off control. The Successive Convex Approximation (SCA) method is used to convert the nonconvex formulations into solvable convex problems. An alternating optimization algorithm is then developed to jointly solve the two subproblems, which enables adaptive resource configuration under dynamic network conditions and stochastic traffic arrivals.  Results and Discussions   The proposed algorithm is evaluated through extensive simulations. First, the convergence behavior is examined. Numerical results (Fig. 2) show that per-slot energy efficiency increases rapidly and stabilizes after several iterations, which verifies the convergence of the alternating optimization procedure. Second, the effect of the control parameter is analyzed. As the parameter increases, the algorithm places greater emphasis on energy efficiency. Average power consumption decreases and then stabilizes (Fig. 3), whereas long-term energy efficiency increases and eventually stabilizes (Fig. 4). These results confirm the trade-off between energy efficiency and queue stability. Third, the proposed scheme is compared with three baseline methods. The results (Fig. 5) show that the proposed joint optimization approach consistently achieves higher long-term energy efficiency than the baseline methods. Fourth, the necessity of long-term optimization is demonstrated by comparing queue lengths with a short-term baseline (Fig. 6). 
Under the same traffic arrival rate, the short-term method shows cumulative queue growth, whereas the Lyapunov-based approach maintains queue lengths within a stable range and ensures network stability. Finally, robustness under imperfect Channel State Information (CSI) is evaluated (Fig. 7). Although energy efficiency decreases as channel uncertainty increases, the proposed method consistently outperforms the baseline approaches, which demonstrates strong robustness to channel estimation errors.  Conclusions   A long-term energy efficiency optimization framework is proposed for CF-mMIMO systems with stochastic traffic arrivals. By applying Lyapunov optimization theory, the stochastic long-term problem is transformed into slot-level drift-plus-penalty problems based on queue states. This transformation enables per-slot resource scheduling decisions while maintaining queue stability. On this basis, an efficient joint resource scheduling algorithm that integrates power allocation and AP on-off control is developed. The original problem is decomposed into power allocation and AP on-off control subproblems and solved through alternating optimization. Simulation results show that the proposed method adapts to dynamic traffic conditions. By placing underutilized APs into sleep mode, the algorithm improves long-term system energy efficiency and maintains queue stability. These results provide guidance for the design of green and sustainable wireless networks.
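The drift-plus-penalty mechanism at the core of the Lyapunov approach can be sketched per slot: pick the action minimizing a weighted sum of power (penalty, scaled by a control parameter V) and negative queue-weighted service, then apply the queue update. The toy rate model, power set, and arrival process below are illustrative assumptions, not the paper's system model.

```python
import math, random

def rate(p):
    # toy Shannon-style service rate as a function of transmit power
    return math.log2(1 + 4.0 * p)

def drift_plus_penalty_policy(Q, V, powers):
    # per-slot decision: minimize  V * power - Q * rate(power)
    return min(powers, key=lambda p: V * p - Q * rate(p))

def simulate(T=2000, V=10.0, arrival=1.0, powers=(0.0, 0.5, 1.0, 2.0), seed=1):
    random.seed(seed)
    Q, used = 0.0, []
    for _ in range(T):
        p = drift_plus_penalty_policy(Q, V, powers)
        used.append(p)
        a = random.uniform(0.0, 2.0 * arrival)   # stochastic arrivals
        Q = max(Q + a - rate(p), 0.0)            # queue dynamics
    return Q, sum(used) / T

Q_final, avg_power = simulate()
```

Larger V saves more power at the cost of longer (but still bounded) queues, which is exactly the energy-efficiency versus queue-stability trade-off reported in Figs. 3 and 4.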
Research on Recognition Method in Mixture Scenarios of Ships and Floating Targets
DING Hao, LI Ao, CAO Zheng, LIU Ningbo, WANG Guoqing, SUN Dianxing
Available online  , doi: 10.11999/JEIT251119
Abstract:
  Objective  In radar maritime target detection scenarios, when two or more targets are located within the same range cell, mixture echoes are generated, such as echoes containing both ships and floating targets. Existing target recognition methods exhibit notable limitations in these scenarios because they typically focus on the Doppler channel with the strongest energy in the time-frequency domain. To address this issue, a target recognition method that integrates mode reconstruction and time-frequency features is proposed. The aim is to distinguish individual targets without prior knowledge of whether the received echoes contain mixture targets, thereby avoiding reliance on high range resolution or multipolarization information.  Methods  The core idea is to introduce Variational Mode Decomposition (VMD) to decompose radar echoes into multiple modal components, thereby enabling Doppler-channel separation. To address spurious modes and the fragmented representation of a single target across multiple modes after decomposition, an energy-constrained mode filtering method and a spectral-consistency-based mode clustering method are proposed for effective mode selection and reconstruction. Based on the reconstructed signals, time-frequency differences between ships and floating targets are analyzed in terms of micromotion and signal complexity. Features are extracted from two perspectives: motion stability and the disorder degree of energy distribution, referred to as VF and REDDC features, respectively. These features enable accurate identification of individual targets.  Results and Discussions  Experiments are conducted using X-band radar measured data under sea states 2–4 (Table 1 and Table 2). The results show that the proposed method achieves an average recognition accuracy of 97.32% in mixture scenarios. This performance significantly exceeds that of the existing four-feature recognition method (Table 3) and other advanced methods (Fig. 9). 
The effect of frequency separation between different targets is further examined. When the time-frequency ridge spacing exceeds 70 Hz, the recognition accuracy reaches 97.93% (Fig. 11). This result also provides empirical guidance for selecting an appropriate clustering threshold during the mode reconstruction stage. When mixture scenarios change to single-target scenarios due to relative motion, the proposed method achieves an average recognition accuracy of 93.34%. This value is 4.62% higher than that of the existing four-feature method (88.72%) (Table 4). Additional analysis indicates that the observation duration used for feature extraction should be no less than 0.25 s to maintain the expected recognition accuracy (Fig. 12).  Conclusions  This study examines recognition problems in maritime multi-target mixture scenarios. VMD is applied to separate the constituent components of mixture echoes. To address spurious modes and fragmented representation of target information across multiple modes, an energy-constrained mode filtering method and a spectral-consistency-based mode clustering method are proposed. VF and REDDC features are extracted from the perspectives of structural characteristics and signal complexity. A Support Vector Machine (SVM) classifier is then used for target recognition. Performance analysis confirms that the proposed method effectively identifies each constituent target in mixture echoes and maintains strong recognition performance in single-target scenarios. Future work will improve computational efficiency and real-time capability by optimizing the stopping criteria of VMD iterations and will further examine the application boundaries of the method using measured data under higher sea states.
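The energy-constrained mode filtering step can be sketched as a simple threshold on per-mode energy ratios: modes carrying a negligible fraction of the total echo energy are treated as spurious and discarded before clustering. The threshold value and the synthetic "modes" below are illustrative stand-ins for actual VMD output.

```python
import numpy as np

def filter_modes_by_energy(modes, min_energy_ratio=0.05):
    # Keep only modes whose energy exceeds a fraction of the total energy;
    # low-energy modes are assumed spurious decomposition products.
    energies = np.array([np.sum(np.abs(m) ** 2) for m in modes])
    ratios = energies / energies.sum()
    return [m for m, r in zip(modes, ratios) if r >= min_energy_ratio]

t = np.linspace(0, 1, 1000, endpoint=False)
modes = [
    np.cos(2 * np.pi * 50 * t),                         # strong component
    0.5 * np.cos(2 * np.pi * 120 * t),                  # weaker component
    0.01 * np.random.default_rng(0).normal(size=1000),  # spurious mode
]
kept = filter_modes_by_energy(modes)
```

After filtering, the surviving modes would be clustered by spectral consistency (e.g., by the spacing of their time-frequency ridges) so that one target fragmented across several modes is reconstructed as a single signal.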
A Long-Short Term Fusion Spiking Neural Network for Detecting Tiny Moving Targets in Dynamic Vision
LI Miao, ZHANG Heng, CHEN Nuo, SHI Yangsi, HE Shiman, AN Wei
Available online  , doi: 10.11999/JEIT250785
Abstract:
  Objective  Long-distance electro-optical surveillance systems are widely used for applications such as space debris monitoring and unauthorized drone flight warning. In such systems, targets appear randomly and move rapidly. Because of the long detection distance, targets appear extremely small in the optical sensor and lack obvious morphological or texture features; therefore, they are classified as tiny moving targets. Conventional tiny-target perception mechanisms adopt the “image frame imaging + artificial neural network processing” paradigm. This approach generates large data volumes and requires high computational power and energy consumption, which restricts system lightweight deployment. In recent years, inspired by bionic perception and brain-like processing, the paradigm of “dynamic vision detection + brain-like processing” has emerged as a new direction. Dynamic vision provides low redundancy and high temporal resolution. However, its output is not regular image frames but sparse event streams, which require new processing methods. The Spiking Neural Network (SNN) is regarded as the third-generation neural network. It uses sparse connections and spike-based representations and naturally matches the asynchronous event triggering and bright–dark pulse output of dynamic vision sensors. Existing SNN-based methods mainly focus on targets with clear shapes in scenarios such as autonomous driving and are not well suited for tiny moving targets in long-distance electro-optical surveillance systems. To address this problem, a Long-Short Term Fusion SNN is proposed to support the application of dynamic vision in tiny moving target detection.  Methods  The proposed network architecture contains four main components. First, a short-term feature extraction module, the Spiking Swin Transformer (SST), is designed to capture the morphological expansion characteristics of tiny targets. 
This module focuses on spatiotemporal correlations across adjacent time steps and spatial regions. It integrates a spiking self-attention mechanism to enhance the learning of irregular pixel correlations and temporal dependencies. Second, a long-term feature extraction module, the spiking ConvLSTM (SCL), is proposed to learn motion continuity embedded in long temporal sequences. A longer temporal range provides richer learnable motion features. The SCL is designed based on the ANN-style ConvLSTM architecture and takes advantage of the inherent temporal processing capability of spiking recurrent neural networks to strengthen long-term temporal memory. Third, features from the SST and SCL branches are aligned and integrated through tensor alignment and additive fusion, forming the Spiking Feature Pyramid Network (SFPN). This module performs spiking pyramid operations to fuse cross-scale spatiotemporal features across different network depths. Finally, a detection head is used to extract and identify tiny targets.  Results and Discussions  The proposed algorithm is validated using real dynamic vision data for drone detection. Experimental results show clear performance improvements across several evaluation metrics. Compared with methods that rely only on short-term temporal features, the proposed method increases recall by about 1.3% and improves accuracy by about 0.9%, which allows more reliable detection of tiny moving targets. Analysis of the F1-score further indicates that recall improves by 1.3% while false alarms are reduced. These results confirm that the dual-path spiking memory network for long-term feature extraction strengthens the ability of the model to identify subtle target characteristics. In particular, the integration of long-term temporal features improves discrimination between noise events and genuine tiny targets.  
Conclusions  This study addresses tiny moving target detection under dynamic vision and proposes a method based on Long-Short Term Fusion SNN. Considering the morphological expansion characteristics and motion continuity of tiny targets, the SST module and the SCL module are designed to extract short-term and long-term temporal features. Multi-scale dual-path features are fused through a spiking pyramid module. By learning high-dimensional features across different temporal windows, the method enables deeper mining and automatic learning of limited surface features of tiny targets. Experiments on real dynamic vision data verify the performance advantage of the proposed method, achieving a recall rate above 95% and outperforming comparison algorithms. Ablation experiments further demonstrate that long-term temporal feature learning and larger temporal data ranges improve tiny target detection performance. The proposed method enables natural integration between sparse event streams from dynamic vision sensors and spiking neural mechanisms. It provides algorithmic support for applying the “bionic detection + brain-like processing” perception paradigm in long-distance electro-optical surveillance systems.
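The spiking units underlying modules such as SST and SCL follow standard Leaky Integrate-and-Fire (LIF) dynamics. The discrete-time update below is a textbook sketch with illustrative parameters, not the paper's network: the membrane potential leaks toward the input, and a binary spike is emitted (followed by a reset) whenever the threshold is crossed.

```python
def lif_neuron(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    # Discrete-time Leaky Integrate-and-Fire neuron: the basic spiking unit
    # of an SNN layer, driven here by a raw input sequence.
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau          # leaky integration toward the input
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset                # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

spikes = lif_neuron([0.0, 2.5, 0.1, 2.5, 2.5, 0.0])
```

The sparse binary output is what lets SNN layers consume asynchronous dynamic-vision event streams directly, without converting them to dense image frames.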
Security Protection for Vessel Positioning in Smart Waterway Systems Based on Extended Kalman Filter–Based Dynamic Encoding
TANG Fengjian, YAN Xia, SUN Zeyi, ZHU Zhaowei, YANG Wen
Available online  , doi: 10.11999/JEIT250846
Abstract:
  Objective  With the rapid development of intelligent shipping systems, vessel positioning data face severe privacy leakage risks during wireless transmission. Traditional privacy-preserving methods, such as differential privacy and homomorphic encryption, suffer from data distortion, high computational overhead, or reliance on costly communication links, making it difficult to achieve both data integrity and efficient protection. This study addresses the characteristics of vessel stabilization systems and proposes a dynamic encoding scheme enhanced by time-varying perturbations. By integrating the Extended Kalman Filter (EKF) and introducing unstable temporal perturbations during encoding, the scheme uses receiver-side acknowledgments (ACK feedback) to achieve reference-time synchronization and independently generates synchronized perturbations through a shared random seed. Theoretical analysis and simulations show that the proposed method achieves nearly zero precision loss in state estimation for legitimate receivers, whereas decoding errors of eavesdroppers grow exponentially after a single packet loss, effectively countering both single- and multi-channel eavesdropping attacks. The shared-seed synchronization mechanism avoids complex key management and reduces communication and computational costs, making the scheme suitable for resource-constrained maritime wireless sensor networks.  Methods  The proposed dynamic encoding scheme introduces a time-varying perturbation term into the encoding process. The perturbation is governed by an unstable matrix to induce exponential error growth for eavesdroppers. The encoded signal is constructed from the difference between the current state estimate and a time-scaled reference state, combined with the perturbation term. A shared random seed between legitimate parties enables deterministic and synchronized generation of the perturbation sequence without online key exchange. 
At the legitimate receiver, the perturbation is canceled during decoding, enabling accurate state recovery. Local state estimation at each sensor node is performed using EKF, and the overall communication process is reinforced by acknowledgment-based synchronization to maintain consistency between the sender and receiver.  Results and Discussions  Simulations are conducted in a wireless sensor network with four sensors tracking vessel states, including position, velocity, and heading. The results indicate that legitimate receivers achieve nearly zero estimation error (Fig. 3), whereas eavesdroppers exhibit exponentially increasing errors after a single packet loss (Fig. 4). The error growth rate depends on the instability of the perturbation matrix, confirming the theoretical divergence. In multi-channel scenarios, independent perturbation sequences for each channel prevent cross-channel correlation attacks (Fig. 5). The scheme maintains low communication and computational overhead, making it practical for maritime environments. Furthermore, the method shows strong robustness to packet loss and channel variations, satisfying SOLAS requirements for data integrity and reliability.  Conclusions  A dynamic encoding scheme with time-varying perturbations is proposed for privacy-preserving vessel state estimation. By integrating EKF with an unstable perturbation mechanism, the method ensures high estimation precision for legitimate users and exponential error growth for eavesdroppers. The main contributions are as follows: (1) an encoding framework that achieves zero precision loss for legitimate receivers; (2) a lightweight synchronization mechanism based on shared random seeds, which removes complex key management; and (3) theoretical guarantees of exponential error divergence for eavesdroppers under single- or multi-channel attacks. 
The scheme is robust to packet loss and channel asynchrony, complies with SOLAS data integrity requirements, and is suitable for resource-limited maritime networks. Future work will extend the method to nonlinear vessel dynamics, adaptive perturbation optimization, and validation in real maritime communication environments.
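The shared-seed perturbation mechanism can be sketched as follows. A scalar `growth` factor stands in for the unstable perturbation matrix, the state values are arbitrary, and the sequence indexing plays the role of reference-time synchronization; none of this is the paper's exact construction.

```python
import numpy as np

def perturbations(seed, n, growth=1.5):
    # Deterministic perturbation sequence from a shared random seed; the
    # growth factor mimics the unstable matrix, so a receiver that loses
    # synchronization accumulates exponentially growing decoding error.
    rng = np.random.default_rng(seed)
    base = rng.normal(size=n)
    return np.array([base[k] * growth ** k for k in range(n)])

def encode(states, seed):
    # transmitted signal = true state + time-varying perturbation
    return states + perturbations(seed, len(states))

def decode(encoded, seed):
    # legitimate receiver regenerates the same sequence and cancels it
    return encoded - perturbations(seed, len(encoded))

states = np.array([10.0, 10.5, 11.2, 12.0, 12.9])   # toy vessel positions
tx = encode(states, seed=42)
rx = decode(tx, seed=42)              # correct seed: exact recovery
eve = tx - perturbations(7, 5)        # wrong seed: decoding fails
```

Because the seed is shared offline, legitimate parties need no online key exchange, while an eavesdropper without the seed (or one slot out of sync) cannot cancel the perturbation.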
Research on Time Slots Aggregation and Topology Aggregation Model for Unmanned Aerial Vehicle Swarm Overall Time Synchronization
WANG Zhenling, TAO Haihong, WEI Haitao, WANG Zhengyong
Available online  , doi: 10.11999/JEIT251274
Abstract:
  Objective  Unmanned Aerial Vehicle (UAV) swarms overcome the technical and performance limitations of individual UAVs and enable complex missions that cannot be accomplished by a single platform. High-precision time synchronization among swarm nodes serves as a fundamental requirement for key swarm operations, including resource scheduling, cooperative positioning, and multi-node data fusion. Existing research on UAV time synchronization mainly focuses on improving the accuracy of basic synchronization approaches. However, limitations remain in adapting to topological changes during swarm formation flights and in achieving global synchronization among multiple nodes. As the scale of UAV swarms increases, the connectivity of time-comparison links between nodes during formation flights exhibits clear time-varying characteristics. These characteristics create challenges for maintaining continuous, reliable, and precise overall time synchronization. To address stable formation flight and formation transformation scenarios in different mission stages of UAV swarms, an Observation Time Slots Aggregation (OTSA) model and a Time-Varying Topology Aggregation (TVTA) model are proposed to enhance the robustness of global time synchronization among swarm nodes and to improve Time Synchronization Accuracy (TSA). This study proposes an effective solution for Leader-Following Consistency Time Synchronization (LFCTS) in UAV swarms and provides references for time synchronization applications in heterogeneous and distributed systems.  Methods  Compared with the traditional Quasi Real-time Bidirectional Time Comparison (QRBTC) scheme, the time synchronization method based on the OTSA model fully uses all synchronization signal transmission and reception link resources within each time slot of the system synchronization period. 
Based on the “one transmission and multiple receptions” mechanism of all nodes, the Follower Node (FN) performs direct synchronization or single-hop indirect synchronization with the Leader Node (LN) in each time slot according to the OTSA model. This process produces tens of times more clock-skew observation samples than the traditional QRBTC scheme. The OTSA method improves the robustness of global time synchronization. It also enables secondary data processing using multi-slot synchronization samples, which further improves TSA compared with the QRBTC method. Based on the LFCTS results obtained during the system signal synchronization period, the TVTA model extends the direct comparison and single-hop indirect comparison mechanism of the OTSA model to cross-period multi-hop comparison. This extension addresses overall time synchronization instability caused by the time-varying characteristics of synchronization link relationships during UAV swarm takeoff, assembly, and formation transformation.  Results and Discussions  In the OTSA method, all time-comparison link resources of the total time slots are fully used during the synchronization period (Fig. 2). Based on the constructed error model and simulation analysis, for a UAV swarm with 50 nodes and a time slot allocation of 20 ms, time synchronization using the OTSA model achieves a single-slot TSA of 4.10–4.27 ns (Fig. 6). Within a complete time synchronization period, the overall TSA reaches 2.46–2.56 ns, which is better than the QRBTC scheme under the same conditions (Fig. 5(a)). The TVTA method uses cross-period synchronization comparison relationships to construct multi-hop time comparison links (Figs. 3 and 4). When the FN obtains external comparison relationships of other nodes through aggregation processing, one-way or two-way Dijkstra’s algorithm is applied to determine the multi-hop comparison link with optimal connectivity. 
Time tracing and comparison with the LN are then completed through edge computing. Error analysis indicates that during UAV swarm takeoff, assembly, and transitions to triangle or rhombus formations, time synchronization based on the TVTA model achieves an overall TSA better than 8.6 ns, which provides stronger global time synchronization capability.  Conclusions  This study addresses the robustness of time synchronization in UAV swarm formation flights. For stable formation flight and formation transformation scenarios during different mission stages, the OTSA and TVTA models are proposed. An error model is constructed and performance is analyzed. The results show the following. (1) The OTSA model improves the robustness of overall time synchronization through direct comparison and single-hop indirect comparison across multiple time slots within one synchronization period. The model achieves an overall TSA better than 2.56 ns and performs better than the traditional QRBTC method. (2) The TVTA model achieves overall UAV swarm time synchronization through multi-hop relay between nodes. Even when time-comparison links change, the model maintains global TSA better than 8.6 ns. (3) These two methods consider the time-varying characteristics of comparison links among UAV swarm nodes and have been verified through small-scale UAV swarm flight tests. They maintain synchronization robustness and performance and provide necessary support for coordinated UAV swarm operations. Future work will focus on practical flight verification, adaptation in complex scenarios, and further improvement of overall synchronization accuracy.
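The multi-hop comparison-link selection can be illustrated with a plain Dijkstra search over a toy link graph. The node numbering and the edge weights, which stand in for per-hop comparison uncertainty, are assumptions for illustration.

```python
import heapq

def dijkstra(graph, src):
    # Minimum-cost comparison path from a follower node; lazy-deletion
    # variant: stale heap entries are skipped when popped.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy swarm: LN is node 0; FN 3 has no direct comparison link to the LN
# in this period and must trace time over multiple hops.
links = {
    0: [(1, 1.0), (2, 4.0)],
    1: [(0, 1.0), (2, 1.5), (3, 5.0)],
    2: [(0, 4.0), (1, 1.5), (3, 1.0)],
    3: [(1, 5.0), (2, 1.0)],
}
dist = dijkstra(links, src=3)
```

Here the FN reaches the LN through 3 → 2 → 1 → 0 at total cost 3.5, cheaper than the direct-looking 3 → 1 → 0 route; in the TVTA model the analogous optimal-connectivity path is recomputed as the topology changes.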
Spherical Geometry-guided and Frequency-Enhanced Segment Anything Model for 360° Salient Object Detection
CHEN Xiaolei, SHEN Yujie, ZHONG Zhihua
Available online  , doi: 10.11999/JEIT251254
Abstract:
  Objective  With the rapid development of Virtual Reality (VR) and Augmented Reality (AR) technologies and the increasing demand for omnidirectional visual applications, accurate salient object detection in complex 360° scenes has become critical for system stability and intelligent decision-making. The Segment Anything Model (SAM) demonstrates strong transferability across two-dimensional vision tasks. However, it is primarily designed for planar images and lacks explicit modeling of spherical geometry, which limits its direct application to 360° Salient Object Detection (360° SOD). To address this limitation, this study integrates the generalization capability of SAM with spherical-aware multi-scale geometric modeling to improve 360° SOD. Specifically, a Multi-Cognitive Adapter (MCA), Spherical Geometry Guided Attention (SGGA), and Spatial-Frequency Joint Perception Module (SFJPM) are proposed to enhance multi-scale structural representation, mitigate projection-induced geometric distortions and boundary discontinuities, and strengthen joint global and local feature modeling.  Methods  The proposed 360° SOD framework is built on SAM and consists of an image encoder and a mask decoder. During encoding, spherical geometry modeling is incorporated into patch embedding by mapping image patches onto a unit sphere and explicitly modeling spatial relationships between patch centers. This strategy injects geometric priors into the attention mechanism, which improves sensitivity to non-uniform geometric characteristics and reduces information loss caused by omnidirectional projection distortion. The encoder adopts a partial freezing strategy and is organized into four stages, each containing three encoder blocks. Each block integrates the MCA for multi-scale contextual fusion and the SGGA to model long-range dependencies in spherical space. Multi-level features are concatenated along the channel dimension to form a unified representation. 
The representation is then refined by the SFJPM, which jointly captures spatial structures and frequency-domain global information. The fused features are subsequently fed into the SAM mask decoder. Saliency maps are optimized under ground-truth supervision to achieve accurate object localization and boundary refinement.  Results and Discussions  Experiments are conducted using the PyTorch framework on an RTX 3090 GPU with an input resolution of 512 × 512. Evaluations are performed on two public datasets, 360-SOD and 360-SSOD, and compared with 14 state-of-the-art methods. The proposed approach consistently achieves superior performance across six evaluation metrics. On the 360-SOD dataset, the model achieves a Mean Absolute Error (MAE) of 0.0152 and a maximum F-measure of 0.8492, outperforming representative methods such as MDSAM and DPNet. Qualitative results show that the proposed method produces saliency maps that are highly consistent with ground-truth annotations. The model handles challenging scenarios effectively, including projection distortion, boundary discontinuity, multi-object scenes, and complex backgrounds. Ablation studies further show that MCA, SGGA, and SFJPM each contribute to performance improvement and operate complementarily.  Conclusions  This study proposes an SAM-based framework for 360° salient object detection that jointly addresses multi-scale representation, spherical distortion awareness, and spatial-frequency feature modeling. The MCA improves multi-scale feature fusion, the SGGA compensates for Equirectangular Projection (ERP)-induced geometric distortion, and the SFJPM enhances long-range dependency modeling. Extensive experiments verify the effectiveness and feasibility of applying SAM to 360° SOD. Future research will extend this framework to omnidirectional video and multi-modal scenarios to further improve spatiotemporal modeling and scene understanding.
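The spherical patch-embedding idea rests on the standard equirectangular (ERP) parameterization, sketched below: each patch center is mapped to a point on the unit sphere, and those 3D coordinates supply the geometric prior for attention. Treating patch centers as pixel coordinates and the particular image dimensions are illustrative assumptions.

```python
import math

def erp_to_sphere(u, v, width, height):
    # Standard ERP mapping: longitude spans [-pi, pi) across the image
    # width, latitude spans [pi/2, -pi/2] down the image height; the result
    # is a point on the unit sphere.
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return x, y, z

center = erp_to_sphere(256, 128, 512, 256)   # image center -> lon = 0, lat = 0
```

Distances measured between such spherical coordinates, rather than between flat pixel grids, are what let the attention mechanism compensate for the severe stretching ERP introduces near the poles.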
A Complexity-Reduced Active Interference Cancellation Algorithm in f-OFDM
CHEN Hao, WEN Jiangang, ZOU Yuanping, HUA Jingyu, SHENG Bin
Available online  , doi: 10.11999/JEIT251172
Abstract:
  Objective  Due to spectrum scarcity and diverse communication requirements, a waveform technology with high spectral efficiency, flexible subband configuration, and support for asynchronous communication is required for Sixth Generation mobile communication (6G). Among the candidate waveforms, filtered Orthogonal Frequency Division Multiplexing (f-OFDM) is considered a promising solution that satisfies these requirements. By applying subband filtering, f-OFDM enables flexible subband configuration and asynchronous transmission. However, the filtering mechanism inevitably introduces intrinsic interference into the system. A dominant component of this interference is InTer-subBand Interference (ITBI), which is mainly caused by Out-Of-Band Emission (OOBE) leakage from adjacent subbands. Therefore, suppressing subband OOBE is essential for reducing ITBI and improving the performance of f-OFDM systems. Based on the structure of f-OFDM systems, a Complexity-Reduced Active Interference Cancellation (CRAIC) algorithm is proposed to suppress the OOBE of f-OFDM subbands and improve overall system performance.  Methods  First, based on the spectral structure of f-OFDM, a subset of data subcarriers in the target subband is used to generate Cancellation Carriers (CCs). A CRAIC optimization model for f-OFDM systems is then constructed under the constraint of CC power. The cost function is defined according to the superposed spectrum of data subcarriers and CCs at Desired Frequency Points (DFPs). Second, by introducing a real-complex domain transformation and reformulating the optimization model, the original complex-domain CRAIC programming problem is converted into a real-domain Second-Order Cone Programming (SOCP) problem, which enables efficient computation. 
Furthermore, computer simulations evaluate the effects of key parameters on CRAIC performance, including the number of CCs ($M$), the number of data subcarriers used to generate CCs ($K$), and the number of DFPs ($Q$). Based on these evaluations, practical recommendations are provided for configuring CRAIC parameters in f-OFDM systems.  Results and Discussions  Simulation results show that in the edge region of the adjacent subband, the proposed CRAIC algorithm produces the steepest Power Spectral Density (PSD) roll-off compared with the conventional ZP and Origin schemes. This result indicates that CRAIC provides the strongest ITBI suppression in this region and achieves the lowest Bit Error Rate (BER) for Edge Subcarriers (ESs) in the adjacent subband. Specifically, CRAIC achieves a maximum PSD reduction of 4 dB and 12 dB compared with ZP and Origin, respectively (Fig. 2a). This result occurs because the right $Q/2$ DFPs are largely located in the edge region of SB2, which leads to effective spectral suppression in this area. Therefore, the BER at the edge of SB2 is significantly lower for CRAIC than for Origin, and a visible performance improvement is also observed compared with ZP (Fig. 3a). Furthermore, the effects of key parameters $M$, $K$ and $Q$ are examined through simulations. The results show that increasing $M$ continuously improves OOBE suppression capability (Fig. 4a), although spectral efficiency gradually decreases. In contrast, increasing $K$ and $Q$ produces only limited performance improvement. When these parameters exceed certain values, further increases do not provide additional gains (Fig. 5a and Fig. 6a). 
Based on these observations, $M=4$, $K=8$, and $Q=4$ are selected as typical parameter settings for the scenario considered in this study. Under this configuration, CRAIC ($K=8$) achieves significant improvements in ES BER compared with Origin and ZP (Fig. 8a), whereas the BER of Internal Subcarriers (ISs) remains nearly the same as that of the two benchmark schemes (Fig. 8b). Compared with the full-scale CRAIC scheme ($K=20$), CRAIC ($K=8$) reduces the size of the data-subcarrier mapping matrix by 60% while causing only limited BER degradation (Fig. 8a). These results indicate that the proposed algorithm preserves the performance of the full-scale Active Interference Cancellation (AIC) scheme while substantially reducing computational complexity.  Conclusions  A CRAIC algorithm for filtered OFDM systems is studied. The CRAIC optimization model is constructed under the constraint of CC power, and the cost function is defined based on the superposed spectrum of selected data subcarriers and CCs at DFPs. Through real-imaginary domain conversion and model reformulation, the complex-domain optimization problem is converted into a real-domain SOCP problem. Simulation results show that the CRAIC algorithm effectively reduces the PSD of the target subband, particularly in the transition region of the adjacent subband, which leads to clear improvement in edge BER performance. The effects of key parameters are also evaluated. Increasing $M$ increases the performance gain of CRAIC compared with ZP, although spectral efficiency decreases. Increasing $K$ improves OOBE suppression, although the gain gradually decreases and computational complexity increases. 
Increasing $Q$ does not continuously reduce PSD. Overall, the CRAIC algorithm improves subband isolation in f-OFDM systems, reduces ITBI, and improves system performance.
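The core active-interference-cancellation idea behind CRAIC, choosing CC weights so that the superposed spectrum at the DFPs is minimized under a CC power budget, can be illustrated with a simplified least-squares analogue followed by a power projection. The Dirichlet-kernel leakage model and all names below are assumptions for illustration, not the paper's SOCP formulation:

```python
import numpy as np

def dirichlet_leakage(carrier_idx, freqs, n_fft=64):
    """Spectral leakage of rectangular-windowed subcarriers at normalized
    frequency points (simplified Dirichlet-kernel model)."""
    k = np.asarray(carrier_idx, dtype=float)[:, None]
    f = np.asarray(freqs, dtype=float)[None, :]
    x = f - k
    num = np.sin(np.pi * x)
    den = n_fft * np.sin(np.pi * x / n_fft)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    # At x = 0 (mod n_fft) the kernel's limiting value is 1.
    return np.where(np.abs(den) < 1e-12, 1.0, num / safe)

def cc_weights(data_idx, data_syms, cc_idx, dfp_freqs, power_limit=1.0):
    """Least-squares CC weights minimizing the superposed spectrum at the
    DFPs, then scaled to respect the CC power constraint."""
    A = dirichlet_leakage(cc_idx, dfp_freqs).T            # DFPs x CCs
    b = -dirichlet_leakage(data_idx, dfp_freqs).T @ data_syms
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.sum(np.abs(w) ** 2)
    if p > power_limit:                                    # power projection
        w *= np.sqrt(power_limit / p)
    return w
```

Scaling the unconstrained solution back onto the power ball never increases the residual beyond the no-CC case, which is why the constrained weights still suppress the DFP spectrum.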
Communication, Computation, and Caching Resource Collaboration for Heterogeneous Artificial Intelligence Generated Content Service Provisioning
WU Mengru, GAO Yu, ZHAO Bo, XU Bo, SUN Hao, GUO Lei
Available online  , doi: 10.11999/JEIT251300
Abstract:
  Objective  In the Artificial Intelligence of Things (AIoT), Edge Servers (ESs) provide intelligent content generation services to AIoT devices by utilizing cached Artificial Intelligence Generated Content (AIGC) models. However, the limited computing resources and caching capacity of ESs make it difficult to support the large-scale caching demands of heterogeneous AIGC services. To address this issue, a communication, computation, and caching resource collaboration scheme is proposed based on a combined cloud-edge and edge-edge collaborative framework. The scheme considers three representative AIGC services: lightweight AIGC services, computation-intensive AIGC services, and preprocessing-based AIGC services. The objective is to minimize the total AIGC service latency through joint optimization of transmit power, computing resource allocation, model caching strategies, and offloading decisions.  Methods  Communication, computation, and caching resource collaboration for heterogeneous AIGC services is investigated. First, an AIGC service-oriented AIoT system model is established to incorporate both cloud-edge and edge-edge collaboration. An optimization problem is then formulated to minimize the total latency of AIGC services through joint optimization of transmit power, computing resource allocation, model caching strategies, and offloading decisions. Because the formulated problem is non-convex, an Alternating Optimization (AO) algorithm is proposed. The original problem is decomposed into three subproblems. These subproblems are solved using the Successive Convex Approximation (SCA) method, Karush-Kuhn-Tucker (KKT) conditions, and an improved Harris Hawks Optimization (HHO) algorithm.  Results and Discussions  Simulation experiments compare the proposed joint optimization scheme with three baseline methods: Particle Swarm Optimization (PSO), fixed resource allocation, and random offloading and caching. 
First, the convergence of the proposed AO algorithm is verified (Fig. 2). The results show that the algorithm converges rapidly within a limited number of iterations across different subproblems. Second, increasing transmission bandwidth significantly reduces the total AIGC service latency (Fig. 3). This occurs because each device obtains more bandwidth resources for task transmission, and the ES can allocate more bandwidth to deliver generated content in the downlink. Furthermore, the total AIGC service latency decreases as the ES storage capacity increases for all schemes (Fig. 4). Greater storage capacity enables the ES to store more AIGC models, which reduces the transmission delay between the ES and the cloud server. Moreover, when the required floating-point operations per bit increase, the total AIGC service latency rises significantly across all schemes (Fig. 5). Finally, the total AIGC service latency decreases as the maximum transmit power of the Base Station (BS) increases (Fig. 6). This occurs because higher BS transmit power improves the downlink signal-to-noise ratio, which increases the downlink transmission rate and reduces overall service latency. The proposed scheme demonstrates better performance than the baseline schemes, particularly under high computational demand.  Conclusions  Communication, computation, and caching resource collaboration for heterogeneous AIGC services is investigated. The objective is to minimize total AIGC service latency through joint optimization of the transmit power of AIoT devices and BSs, computing resource allocation, AIGC model deployment, and service offloading decisions under computation and caching resource constraints. Because the formulated problem is a mixed-integer nonlinear programming problem, an efficient AO algorithm is developed. The original optimization problem is decomposed into three subproblems, which are solved using the SCA algorithm, KKT conditions, and the HHO algorithm, respectively. 
Simulation results show that the proposed algorithm reduces the total AIGC service latency compared with the baseline schemes.
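One of the three subproblems above is solved via KKT conditions. As a hedged illustration of how KKT conditions yield a closed-form computing-resource split, consider the toy problem of minimizing total latency $\sum_i c_i/f_i$ subject to $\sum_i f_i = F$; stationarity gives $f_i \propto \sqrt{c_i}$. This simplified model and the function name are assumptions, since the paper's subproblem includes further constraints:

```python
import numpy as np

def kkt_compute_allocation(cycles, f_total):
    """Closed-form split of a computing budget f_total across tasks with
    workloads `cycles`, minimizing sum_i cycles_i / f_i.

    KKT stationarity: d/df_i (c_i/f_i + lam * f_i) = 0  =>  f_i = sqrt(c_i/lam),
    and the budget constraint fixes lam, giving f_i ~ sqrt(c_i).
    """
    c = np.asarray(cycles, dtype=float)
    return f_total * np.sqrt(c) / np.sqrt(c).sum()
```

The square-root rule gives heavier tasks more resources, but sublinearly, which already beats an equal split whenever workloads are unequal.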
Multi-scale Frequency Adapter and Dual-path Attention for Time Series Forecasting
YANG Zhenzhen, XU Yi, WAN Chengye, YANG Yongpeng
Available online  , doi: 10.11999/JEIT251188
Abstract:
  Objective  With the rapid development of big data technology, time series data are increasingly used in meteorology, power systems, finance, and other fields. However, mainstream forecasting methods face challenges in multi-scale modeling and frequency-domain feature extraction, which limit the ability to capture dynamic properties and periodic patterns in complex datasets. Traditional statistical approaches, such as AutoRegressive Integrated Moving Average (ARIMA), rely on assumptions of linear relationships and therefore perform poorly when applied to nonlinear or high-dimensional time series data. Although deep learning methods, particularly those based on convolutional neural networks and Transformer architectures, improve forecasting accuracy through advanced feature extraction and long-range dependency modeling, limitations remain in efficiently extracting and integrating multi-scale features in both temporal and frequency domains. These limitations reduce stability and forecasting accuracy, especially in dynamic and heterogeneous applications. This study proposes an intelligent forecasting framework that models multi-scale information and improves prediction accuracy across different scenarios.  Methods  A Multi-scale Frequency Adapter and Dual-path Attention (MFADA) framework is proposed for time series forecasting. The framework integrates two key modules: the Multi-scale Frequency Adapter (MFA) and the Multi-scale Dual-path Attention (MDA). The MFA module captures multi-scale frequency features through adaptive pooling and deep convolution operations. This design improves sensitivity to different frequency components and supports modeling of both short-term and long-term dependencies. The MDA module applies a multi-scale attention mechanism to strengthen fine-grained modeling across temporal and feature dimensions. It enables effective extraction and fusion of comprehensive time-domain and frequency-domain information. 
The framework is designed with computational efficiency to ensure scalability. Experiments on eight public datasets verify the effectiveness and robustness of the proposed method compared with existing time series forecasting approaches.  Results and Discussions  Extensive experiments were conducted on eight publicly available multivariate datasets, including ECL, Weather, ETT (ETTm1, ETTm2, ETTh1, ETTh2), Solar-Energy, and Traffic. Evaluation metrics include Mean Absolute Error (MAE) and Mean Squared Error (MSE). Model complexity was assessed through parameter count, FLoating Point Operations (FLOPs), and training time. Comparisons were performed with state-of-the-art models, including Fredformer, Peri-midFormer, iTransformer, TFformer, PatchTST, MSGNet, TimesNet, and TCM. Results show that MFADA achieves superior forecasting performance on most datasets and forecasting horizons (Table 1). The model obtains the best average MSE and MAE of 0.163 and 0.261 on ECL, representing decreases of 13.2% and 17.3% compared with TimesNet for forecasting length 96. On the periodic ETTm1 dataset, the average MSE reaches 0.377, which is 5.3% lower than MSGNet. Ablation experiments (Table 2) confirm the contributions of the MFA and MDA modules. Removing MFA or replacing MDA with standard self-attention increases forecasting errors on ECL, Weather, ETTh1, and ETTh2. These results indicate the complementary roles of both modules in modeling complex temporal patterns. Complexity analysis (Fig. 2) shows that MFADA achieves a balanced trade-off among forecasting accuracy, parameter efficiency, and training time, outperforming Fredformer, MSGNet, and TimesNet. Visualization results for ECL and ETTh2 (Fig. 3, Fig. 4) demonstrate that MFADA effectively follows ground-truth trends, captures turning points, and improves prediction accuracy at both global and local levels. 
Performance on the Traffic dataset is relatively weaker because of strong spatial correlations in the data, which indicates potential directions for future research.  Conclusions  This paper proposes MFADA, a time series forecasting method that integrates multi-scale frequency adaptation and dual-path attention mechanisms. MFADA presents four main advantages: (1) The MFA module effectively extracts and integrates multi-scale frequency-domain features through pyramid pooling and channel gating, which improves representation across different temporal scales. (2) The MDA module captures multi-scale dependencies in both temporal and feature dimensions, enabling fine-grained dynamic modeling. (3) The architecture maintains computational efficiency through lightweight convolution and pooling operations. (4) Experimental results across eight datasets and multiple forecasting horizons demonstrate strong generalization ability, particularly for multivariate and long-term forecasting tasks. These results show that MFADA improves both accuracy and efficiency in time series forecasting and provides useful directions for research and practical applications. Future work will explore the integration of spatial correlation information to further improve model applicability.
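The MFA module's idea of extracting frequency-domain features at several resolutions can be sketched with a toy NumPy descriptor: adaptive average pooling over an FFT magnitude spectrum at multiple scales. This is a simplified stand-in, not the paper's convolutional module with channel gating:

```python
import numpy as np

def multiscale_frequency_features(x, scales=(4, 8, 16)):
    """Toy multi-scale frequency descriptor: the real-FFT magnitude
    spectrum is adaptively average-pooled to several resolutions and
    the pooled vectors are concatenated."""
    spec = np.abs(np.fft.rfft(x))
    feats = []
    for s in scales:
        # Adaptive average pooling: split the spectrum into s nearly
        # equal bins and average within each bin.
        bins = np.array_split(spec, s)
        feats.append(np.array([b.mean() for b in bins]))
    return np.concatenate(feats)
```

Coarse scales summarize broad periodic energy while fine scales retain narrow spectral peaks, which is the intuition behind fusing them.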
Research on Ultrasound Imaging Algorithm Fused with Diffusion Model
YUAN Ye, HUANG Minshang, YANG Weifeng
Available online  , doi: 10.11999/JEIT251083
Abstract:
  Objective   Medical ultrasound imaging uses ultrasonic waves to probe human tissues and forms images by processing returning echoes. It has become an essential clinical diagnostic tool because it is noninvasive, safe, and capable of real-time imaging. However, conventional ultrasound imaging remains fundamentally limited by factors such as the finite width of ultrasonic pulses, variations in tissue acoustic impedance, and the complexity of echo signals. These factors lead to persistent challenges, including limited spatial resolution, severe speckle noise, and off-axis artifacts. These limitations directly reduce lesion detectability and diagnostic accuracy. Traditional approaches based on hardware optimization and signal processing algorithms, such as adaptive beamforming, have provided only incremental improvement. Their performance is often constrained by physical laws, computational complexity, and dependence on manual parameter tuning. Recent deep learning methods, particularly those based on Generative Adversarial Networks (GANs), have shown promising performance, but they suffer from training instability and limited interpretability. The diffusion model, an emerging state-of-the-art generative framework, has shown strong robustness and generalization in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) reconstruction. However, its application in ultrasound imaging remains largely unexplored. This study aims to address this gap by developing a novel diffusion model-based framework for high-quality ultrasound image formation and to provide a stable, efficient, and interpretable solution for improving ultrasound image quality.  Methods   A novel ultrasound imaging method based on a Denoising Diffusion Probabilistic Model (DDPM) is proposed. 
The core of the method is a multi-scale diffusion network architecture designed to progressively refine a low-quality ultrasound image, such as one generated by a simple Delay-And-Sum (DAS) beamformer, into a high-quality image. The process includes forward and reverse stages. In the forward stage, Gaussian noise is gradually added to a high-quality ground-truth image over a series of time steps. In the reverse stage, the model is trained to learn the conditional denoising function. A custom denoising network takes a low-resolution DAS image as conditional input and fuses it with the noisy image at each denoising step through residual connections and feature-wise transformations at multiple scales. This deep fusion mechanism enables the network to incorporate the underlying anatomical structure from the low-quality input while iteratively removing noise and artifacts through the diffusion process. The model is trained on a dataset of paired low-quality and high-quality ultrasound images, in which the high-quality images serve as the training target. The training objective is to maximize the variational lower bound of the likelihood, thereby enabling the network to reverse the noising process. The proposed method is quantitatively compared with traditional DAS, Minimum Variance (MV) beamforming, and a representative GAN-based super-resolution method using Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity Index (SSIM).  Results and Discussions   The proposed diffusion model demonstrates superior performance in improving ultrasound image quality. Quantitatively, the method achieves a mean PSNR of 35.2 dB and an SSIM of 0.933, with a PSNR improvement of 4.5 dB over conventional beamforming methods, while maintaining excellent structural fidelity. The method also consistently outperforms adaptive MV beamforming and GAN-based methods across all evaluation metrics, including contrast-to-noise ratio. Visual assessment supports these quantitative results. 
The generated images show markedly reduced speckle noise and substantially improved boundary definition of anatomical structures. Notably, these improvements are achieved without the blurring or artificial textures commonly observed in other deep learning-based methods. The multi-scale architecture with conditional feature injection effectively preserves structural integrity, as shown by the clear and continuous edges in the output images. The progressive denoising nature of the method also provides inherent interpretability for the image refinement process. Unlike the opaque single-step generation used in many other deep learning models, this method provides a transparent, stepwise enhancement pathway from the initial input to the final output. In addition, the training process remains stable and convergent, avoiding the instability that frequently affects adversarial training methods. Ablation experiments confirm the critical role of the deep fusion mechanism, and resolution analysis verifies substantial improvement in both lateral and axial resolution compared with all baseline methods.  Conclusions   This study develops and validates a novel ultrasound imaging method based on a diffusion model. The proposed framework effectively addresses key limitations of conventional methods and existing deep learning-based approaches. It avoids the complex matrix computations and manual parameter tuning required by adaptive beamformers and provides a more stable training framework than GAN-based methods. The results show that the method can substantially improve image quality by increasing PSNR and maintaining excellent structural similarity, thereby producing images with suppressed noise, reduced artifacts, and improved resolution. The multi-scale diffusion process preserves anatomical structures and provides a degree of interpretability for the image generation process. 
This work establishes diffusion models as a promising new framework for advanced ultrasound imaging and provides a robust, high-performance technical route for overcoming current bottlenecks in ultrasound image quality, with broad potential clinical value.
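The forward-noising stage the method relies on has a standard DDPM closed form: with a linear beta schedule, $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\varepsilon$. The sketch below shows these standard equations only, not the authors' conditional denoising network:

```python
import numpy as np

def make_ddpm_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative-product alphas (standard DDPM)."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)
    return betas, alpha_bar

def forward_noise(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

During training, (x_t, t, conditioning DAS image) pairs produced this way supervise the network to predict eps, which is then inverted step by step at inference.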
Two-Channel Joint Coding Detection for Cyber-Physical Systems Against Integrity Attacks
MO Xiaolei, ZENG Weixin, FU Jiawei, DOU Keqin, WANG Yanwei, SUN Ximing, LIN Sida, SUI Tianju
Available online  , doi: 10.11999/JEIT250729
Abstract:
  Objective  Cyber-Physical Systems (CPS) are widely applied across infrastructure, aviation, energy, healthcare, manufacturing, and transportation, as computing, control, and sensing technologies advance. Due to the real-time interaction between information and physical processes, such systems are exposed to security risks during data exchange. Attacks on CPS can be grouped into availability, integrity, and reliability attacks based on information security properties. Integrity attacks manipulate data streams to disrupt the consistency between system inputs and outputs. Compared with the other two types, integrity attacks are more difficult to detect because of their covert and dynamic nature. Existing detection strategies generally modify control signals, sensing signals, or system models. Although these approaches can detect specific categories of attacks, they may reduce control performance and increase model complexity and response delay.  Methods  A joint additive and multiplicative coding detection scheme for the two-channel structure of control and output is proposed. Three representative integrity attacks are tested, including a control-channel bias attack, an output-channel replay attack, and a two-channel covert attack. These attacks remain stealthy by partially or fully obtaining system information and manipulating data so the residual-based χ² detector output stays below the detection threshold. The proposed method introduces paired additive watermarking signals with positive and negative patterns, together with paired multiplicative coding and decoding matrices on both channels. These additional unknown signals and parameters introduce information uncertainty to the attacker and cause the residual statistics to deviate from the expected values constructed using known system information. The watermarking pairs and matrix pairs operate through different mechanisms. One uses opposite-sign injection, while the other uses a mutually inverse transformation. 
Therefore, normal control performance is maintained when no attack is present. The time-varying structure also prevents attackers from reconstructing or bypassing the detection mechanism.  Results and Discussions  Simulation experiments on an aerial vehicle trajectory model are conducted to assess both the influence of integrity attacks on flight paths and the effectiveness of the proposed detection scheme. The trajectory is modeled using Newton’s equations of motion, and attitude dynamics and rotational motion are omitted to focus on positional behavior. Detection performance with and without the proposed method is compared under the three attack scenarios (Fig. 2, Fig. 3, Fig. 4). The results show that the proposed scheme enables effective identification of all attack types and maintains stable system behavior, demonstrating its practical applicability and improvement over existing approaches.  Conclusions  This study addresses the detection of integrity attacks in CPS. Three representative attack types (bias, replay, and covert attacks) are modeled, and the conditions required for their successful execution are analyzed. A detection approach combining additive watermarking and multiplicative encoding matrices is proposed and shown to detect all three attack types. The design uses paired positive–negative additive watermarks and paired encoding and decoding matrices to ensure accurate detection while maintaining normal control performance. A time-varying configuration is adopted to prevent attackers from reconstructing or bypassing the detection elements. Using an aerial vehicle trajectory simulation, the proposed approach is demonstrated to be effective and applicable to cyber-physical system security enhancement.
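A single-step illustration of the joint additive-plus-multiplicative coding idea follows. The paper uses time-varying paired watermarks and matrices; the sketch below fixes one coding matrix Θ and one watermark w, and its names are hypothetical:

```python
import numpy as np

def encode(u, theta, w):
    """Channel coding: multiplicative matrix theta plus additive watermark w."""
    return theta @ u + w

def decode(y, theta, w):
    """Inverse coding at the receiver; recovers u exactly when the channel
    is untampered, so nominal control performance is preserved."""
    return np.linalg.inv(theta) @ (y - w)

def chi2_statistic(residuals, sigma2):
    """Windowed chi-squared detector statistic for zero-mean Gaussian
    residuals with nominal variance sigma2."""
    r = np.asarray(residuals, dtype=float)
    return np.sum(r ** 2) / sigma2
```

An attacker who replays or biases the coded signal without knowing (Θ, w) leaves a decoding error that inflates the residuals and pushes the χ² statistic above its threshold.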
Peak-to-Average Power Ratio Reduction Theory and Method for Orthogonal Time Frequency Space Systems via Nonzero-Unitary Precoding
ZENG Junlong, JIANG Zhanjun, LIU Haoxiang, ZHANG Huawei, LI Cuiran
Available online  , doi: 10.11999/JEIT250888
Abstract:
  Objective  Orthogonal Time Frequency Space (OTFS) and its variants provide robust performance in high-mobility doubly selective channels. However, their inherently high Peak-to-Average Power Ratio (PAPR) limits power amplifier efficiency and practical implementation. Recent observations have revealed a mismatch between theory and practice. Some OTFS variants obtained by changing the orthogonal basis, such as DCT-based designs, reduce PAPR while maintaining an OTFS-like Bit Error Rate (BER). However, the prevailing explanation mainly attributes reliability to constant-modulus unitary transforms and does not directly account for such non-constant-modulus cases. Therefore, it remains unclear which unitary bases preserve the channel-hardening behavior that stabilizes effective gains and protects BER, and which unitary choices may degrade performance even though they are mathematically unitary. This paper aims to close this gap by establishing a verifiable and more general condition for BER-robust unitary precoding, and by developing a waveform and precoder design approach that suppresses PAPR without sacrificing reliability in OTFS and typical OTFS-like variants.  Methods  A waveform design framework based on nonzero-unitary precoding is established. An upper bound on effective channel-gain fluctuation is derived. It is shown that when the precoder satisfies a nonzero and near-uniform energy-spreading condition, the variance of the effective channel coefficients decreases as the time-frequency grid grows, indicating the emergence of a channel-hardening effect. On the basis of this result, waveform design is formulated as a peak-power minimization problem over the unitary precoder. The objective is to reduce the maximum instantaneous power while preserving the unitary structure required by the modulation framework. A CVX-based solver is used to provide a performance-reference benchmark for the formulated objective. 
For engineering implementation, an efficient algorithm is developed using the Alternating Direction Method of Multipliers (ADMM). In this method, the original nonconvex design is decomposed into low-cost sub-updates together with a unitary projection step, which enables scalable computation.  Results and Discussions  Simulation results under representative doubly selective channels with high terminal speeds show that the proposed precoder design achieves noticeable PAPR suppression while maintaining the BER close to that of conventional constant-modulus unitary precoding. In addition, the CVX-based benchmark reveals the attainable performance region, and the ADMM-based implementation approaches this reference with a favorable PAPR-BER trade-off. The computational advantage is also validated. Compared with general-purpose convex optimization, the ADMM solver reduces the overall runtime and complexity by roughly three orders of magnitude for typical OTFS parameter settings, which supports real-time or near-real-time deployment. The observed performance trends are consistent with the theoretical insight that near-uniform energy spreading stabilizes effective channel gains and prevents spiky basis vectors from degrading robustness. Furthermore, the framework is applicable to OTFS variants because basis selection and waveform shaping can be interpreted equivalently as unitary-precoder design within the same optimization architecture.  Conclusions  A theoretical and algorithmic solution for PAPR suppression in OTFS systems is presented through nonzero-unitary precoding. Channel hardening is established under a nonzero and near-uniform energy-spreading condition, which provides a principled justification for seeking low-PAPR solutions beyond constant-modulus transforms. A peak-power minimization formulation is adopted to translate this insight into waveform optimization, and a CVX benchmark is provided to quantify the achievable performance reference. 
A low-complexity ADMM algorithm is then constructed to enable scalable computation through simple sub-updates and unitary projection, while keeping BER performance essentially unchanged. The proposed approach provides a unified low-PAPR waveform design paradigm for OTFS and its variants, with theoretical generality, computational efficiency, and controllable performance under high-mobility doubly selective channels.
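The unitary projection step inside such an ADMM loop has a classical closed form via the polar decomposition: the nearest unitary matrix to M in Frobenius norm is $W V^{\mathrm H}$ from the SVD $M = W \Sigma V^{\mathrm H}$. The sketch below shows that step together with a PAPR helper; the full ADMM sub-updates are omitted and the names are illustrative:

```python
import numpy as np

def project_unitary(M):
    """Nearest unitary matrix in Frobenius norm (polar decomposition):
    for M = W @ diag(s) @ Vh, the projection is W @ Vh."""
    W, _, Vh = np.linalg.svd(M)
    return W @ Vh

def papr(x):
    """Peak-to-average power ratio of a discrete-time signal (linear scale)."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()
```

Because the projection only touches the singular values, it is cheap relative to a general-purpose convex solve, which is one reason the ADMM route scales to typical OTFS grid sizes.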
Wavelet Transform and Attentional Dual-Path EEG Model for Virtual Reality Motion Sickness Detection
CHEN Yuechi, HUA Chengcheng, DAI Zhian, FU Jingqi, ZHU Min, WANG Qiuyu, YAN Ying, LIU Jia
Available online  , doi: 10.11999/JEIT251233
Abstract:
  Objective  Virtual Reality Motion Sickness (VRMS) presents a barrier to the wider adoption of immersive Virtual Reality (VR). It is primarily caused by sensory conflict between the vestibular and visual systems. Existing assessments rely on subjective reports that disrupt immersion and do not provide real-time measurements. An objective detection method is therefore needed. This study proposes a dual-path fusion model, the Wavelet Transform ATtentional Network (WTATNet), which integrates wavelet transform and attention mechanisms. WTATNet is designed to classify resting-state ElectroEncephaloGraph (EEG) signals collected before and after VR motion stimulus exposure to support VRMS detection and research on the mechanisms and mitigation strategies.  Methods  WTATNet contains two parallel pathways for EEG feature extraction. The first applies a Two-Dimensional Discrete Wavelet Transform (2D-DWT) to both the time and electrode dimensions of the EEG, reshaping the signal into a two-dimensional matrix based on the spatial layout of the scalp electrodes in horizontal or vertical form. This decomposition captures multi-scale spatiotemporal features, which are then processed using Convolutional Neural Network (CNN) layers. The second pathway applies a one-dimensional CNN for initial filtering followed by a dual-attention structure consisting of a channel attention module and an electrode attention module. These modules recalibrate the importance of features across channels and electrodes to emphasize task-relevant information. Features from both pathways are fused and passed through fully connected layers to classify EEGs into pre-exposure (non-VRMS) and post-exposure (VRMS) states based on subjective questionnaire validation. EEG data were collected from 22 subjects exposed to VRMS using the game “Ultrawings2.” Ten-fold cross-validation was used for training and evaluation with accuracy, precision, recall, and F1-score as metrics.  
Results and Discussions  WTATNet achieved high VRMS-related EEG classification performance, with an average accuracy of 98.39%, F1-score of 98.39%, precision of 98.38%, and recall of 98.40%. It outperformed classical and state-of-the-art EEG models, including ShallowConvNet, EEGNet, Conformer, and FBCNet (Table 2). Ablation experiments (Tables 3 and 4) showed that removing the wavelet transform path, the electrode attention module, or the channel attention module reduced accuracy by 1.78%, 1.36%, and 1.01%, respectively. The 2D-DWT performed better than the one-dimensional DWT, supporting the value of joint spatiotemporal analysis. Experiments with randomized electrode ordering (Table 4) produced lower accuracy than spatially coherent layouts, indicating that 2D-DWT leverages inherent spatial correlations among electrodes. Feature visualizations using t-SNE (Figures 5 and 6) showed that WTATNet produced more discriminative features than baseline and ablated variants.  Conclusions  The dual-path WTATNet model integrates wavelet transform and attention mechanisms to achieve accurate VRMS detection using resting-state EEG. Its design combines interpretable, multi-scale spatiotemporal features from 2D-DWT with adaptive channel-level and electrode-level weighting. The experimental results confirm state-of-the-art performance and show that WTATNet offers an objective, robust, and non-intrusive VRMS detection method. It provides a technical foundation for studies on VRMS neural mechanisms and countermeasure development. WTATNet also shows potential for generalization to other EEG decoding tasks in neuroscience and clinical research.
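As a minimal illustration of the 2D-DWT decomposition underlying WTATNet, the sketch below applies a one-level 2D Haar transform to a toy electrode-by-time EEG matrix. The shapes, data, and function name are assumed for illustration only, not the paper's actual configuration:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands.
    Rows and columns must have even length; each step takes
    normalized sums (lowpass) and differences (highpass)."""
    # Decompose along the time axis (columns)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Decompose along the electrode axis (rows)
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 128))   # toy: 8 electrodes x 128 samples
ll, lh, hl, hh = haar_dwt2(eeg)
assert ll.shape == (4, 64)
# The orthonormal transform preserves total energy (Parseval)
energy = sum(np.sum(b**2) for b in (ll, lh, hl, hh))
assert np.isclose(energy, np.sum(eeg**2))
```

The four subbands separate coarse and fine structure jointly across the electrode (spatial) and time dimensions, which is the multi-scale spatiotemporal property the first pathway exploits.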
DGCN-MFW: A Lightweight Human Action Recognition Network for Millimeter-Wave Radar 3D Point Clouds
DING Xuanyu, JIN Biao, ZHANG Zhenkai
Available online  , doi: 10.11999/JEIT251087
Abstract:
  Objective  Millimeter-wave radar 3D point clouds provide important spatial cues for human action recognition. However, their inherent disorder complicates feature extraction, and actions rely on temporal correlations across multiple frames, which makes single-frame analysis prone to error. In this paper, a dynamic graph convolutional network is proposed for long 3D point-cloud sequences to improve recognition performance and efficiency through multi-scale feature fusion, adaptive frame weighting, and cross-attention.  Methods  A dynamic graph convolutional network solution, DGCN-MFW, is proposed with three core components: dynamic graph convolution feature extraction, multi-scale feature fusion, and adaptive temporal frame weighting. In Step 1, dynamic graph convolution is used to automatically construct spatial geometry through local directed neighborhood graphs, and the neighborhoods are updated online. This design avoids manual graph construction and improves feature robustness. In Step 2, multi-scale feature fusion is applied to jointly extract and integrate point-cloud features across spatial and temporal dimensions, thereby capturing local details and global semantics. In Step 3, adaptive frame weighting is introduced to learn the importance of each frame, emphasize discriminative key frames, and suppress noisy or unimportant frames. Cross-attention is further used to enable information exchange between the center frame and its context, compensating for the limitations of single-frame analysis caused by motion blur, occlusion, or pose ambiguity.  Results and Discussions  The proposed network extracts features through dynamic graph convolution, performs multi-scale feature fusion and adaptive frame weighting, and ultimately completes human action recognition. It achieves strong performance on the public TI and Vayyar millimeter-wave radar point-cloud datasets. With only 2.06M parameters and 4.51 GFLOPs, it outperforms existing methods (Tables 2, 3, and 4). 
Ablation experiments confirm that both core modules substantially improve recognition accuracy (Table 1). The confusion matrices indicate accuracy above 99% for most actions on the two datasets, demonstrating superior recognition performance (Figs. 10 and 11). However, its scalability, parameter efficiency, and processing efficiency for large-scale data still require improvement. Future work will therefore focus on further lightweight design and architectural optimization to improve efficiency.  Conclusions  To address the two main challenges in mmWave radar 3D point-cloud-based human action recognition, an action recognition algorithm based on a dynamic graph convolutional network and multi-feature fusion is proposed. A multi-scale feature fusion module and cross-scale interaction are used to extract local and global features, which improves spatial representation. An adaptive frame-weighting module and a cross-attention mechanism are adopted to capture the temporal evolution of actions. The method achieves accuracies of 98.32% and 99.48% on two datasets with 2.06M parameters and 4.51 GFLOPs, outperforming mainstream models. It provides a new solution for high-precision, low-resource mmWave radar action recognition and is suitable for real-time scenarios such as industrial human-machine interaction, intelligent security, and healthcare.
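The dynamic graph construction step described above can be sketched as a k-nearest-neighbor graph over one radar frame, with EdgeConv-style edge features. Point counts, k, and function names here are illustrative assumptions, not the DGCN-MFW implementation:

```python
import numpy as np

def knn_graph(points, k=4):
    """Build a directed k-nearest-neighbor graph over a 3D point cloud.
    points: (N, 3); returns (N, k) neighbor indices, self excluded."""
    diff = points[:, None, :] - points[None, :, :]   # (N, N, 3)
    dist = np.sum(diff**2, axis=-1)                  # squared distances
    np.fill_diagonal(dist, np.inf)                   # exclude self-loops
    return np.argsort(dist, axis=1)[:, :k]           # k closest per point

def edge_features(points, idx):
    """EdgeConv-style input: concat(center, neighbor - center)."""
    center = np.repeat(points[:, None, :], idx.shape[1], axis=1)  # (N, k, 3)
    return np.concatenate([center, points[idx] - center], axis=-1)

rng = np.random.default_rng(1)
pts = rng.standard_normal((32, 3))    # toy frame with 32 radar points
idx = knn_graph(pts, k=4)
feat = edge_features(pts, idx)
assert idx.shape == (32, 4)
assert feat.shape == (32, 4, 6)
```

Recomputing `idx` for every frame is what makes the graph "dynamic": the neighborhoods track the deforming point cloud without any manual graph design.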
Dynamic State Estimation of Distribution Network by Integrating High-degree Cubature Kalman Filter and Long Short-Term Memory Under False Data Injection Attack
XU Daxing, SU Lei, HAN Heqiao, WANG Hailun, ZHANG Heng, CHEN Bo
Available online  , doi: 10.11999/JEIT250805
Abstract:
  Objective  Dynamic state estimation of distribution networks is presented as a core technique for maintaining secure and stable operation in cyber-physical power systems. Its practical performance is limited by strong system nonlinearity, high-dimensional state characteristics, and the threat posed by False Data Injection Attack (FDIA). A method that integrates High-degree Cubature Kalman Filter (HCKF) with Long Short-Term Memory network (LSTM) is proposed. HCKF is applied to enhance estimation precision in nonlinear high-dimensional scenarios. The estimation outputs from HCKF and Weighted Least Squares (WLS) are combined for rapid FDIA identification using residual-based analysis. The LSTM model is then employed to reconstruct measurement data of compromised nodes and refine state estimation results. The approach is validated on the IEEE 33-bus distribution system, demonstrating reliable accuracy enhancement and effective attack resilience.  Methods   The strong nonlinearity of distribution networks limits the estimation accuracy of dynamic methods based on the Cubature Kalman Filter (CKF). A hybrid measurement state estimation model that combines data from Phasor Measurement Unit (PMU) and Supervisory Control And Data Acquisition (SCADA) is established. HCKF is applied to enhance estimation performance in nonlinear, high-dimensional scenarios by generating higher-order cubature points. Under FDIA, the estimation outputs from WLS and HCKF are jointly assessed, allowing rapid intrusion detection through residual evaluation and state consistency checking. Once an attack is identified, an LSTM model performs time-series prediction to reconstruct the measurement data of compromised nodes. The reconstructed data replace abnormal values, enabling correction of the final state estimation.  
Results and Discussions  Experiments on the IEEE 33-bus distribution system show that without FDIA, HCKF achieves higher estimation accuracy for voltage magnitude and phase angle than CKF. The Average voltage Relative Error (ARE) of voltage magnitude decreases by 57.9%, and the corresponding phase-angle error decreases by 28.9%, confirming the superiority of the method for strongly nonlinear and high-dimensional state estimation. Under FDIA, residual-based detection effectively identifies cyber attacks and avoids false alarms and missed detections. The prediction error of LSTM for the measurement data of compromised nodes and their associated branches remains on the order of 10⁻⁶, indicating high reconstruction fidelity. The combined HCKF and LSTM method maintains stable state tracking after intrusion, and its performance exceeds that of WLS and the adaptive Unscented Kalman Filter.  Conclusions  The dynamic state estimation method that integrates HCKF and LSTM enhances adaptability to strong nonlinearity and high-dimensional characteristics of distribution networks. Rapid and accurate FDIA identification is achieved through residual evaluation, and LSTM reconstructs the measurement data of compromised nodes with high reliability. The method maintains high estimation accuracy under normal operation and preserves stability and precision under cyber intrusion. It offers technical support for secure and stable operation of distribution networks in the presence of malicious attacks.
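The residual-based FDIA detection idea can be sketched with a static linear measurement model and a chi-square-style test on the normalized residual energy. The linear model, noise level, and threshold below are toy assumptions standing in for the nonlinear power-flow equations and the WLS/HCKF consistency check of the paper:

```python
import numpy as np

def residual_detect(z, H, x_hat, sigma, threshold):
    """Flag a measurement vector whose normalized residual energy
    exceeds a chi-square-style threshold (possible FDIA)."""
    r = z - H @ x_hat                  # innovation residual
    j = float(r @ r) / sigma**2        # normalized residual energy
    return j > threshold, j

rng = np.random.default_rng(2)
H = rng.standard_normal((20, 5))       # toy measurement Jacobian
x = rng.standard_normal(5)             # true state
sigma = 0.01
z_clean = H @ x + sigma * rng.standard_normal(20)
z_attacked = z_clean.copy()
z_attacked[3] += 0.5                   # inject false data on one channel

# A WLS estimate from the clean snapshot stands in for the filter output
x_hat = np.linalg.lstsq(H, z_clean, rcond=None)[0]
thr = 40.0                             # assumed chi-square tail for 15 dof
flag_clean, _ = residual_detect(z_clean, H, x_hat, sigma, thr)
flag_attack, _ = residual_detect(z_attacked, H, x_hat, sigma, thr)
assert not flag_clean and flag_attack
```

An injection that is inconsistent with the estimated state inflates the residual by orders of magnitude, which is why the test separates attacked from clean snapshots so cleanly in this toy setting.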
SAR Saturated Interference Suppression Method Guided by Precise Saturation Model
DUAN Lunhao, LU Xingyu, TAN Ke, LIU Yushuang, YANG Jianchao, YU Jing, GU Hong
Available online  , doi: 10.11999/JEIT251283
Abstract:
  Objective  With the increasing number of electromagnetic devices, Synthetic Aperture Radar (SAR) is highly susceptible to Radio Frequency Interference (RFI) within the same frequency band. RFI typically appears as bright streaks in SAR images and severely degrades image quality. Considerable research has been conducted on interference suppression, and many effective methods have been proposed. However, most existing approaches do not consider the nonlinear saturation of interfered echoes. In practical scenarios, the interference power is usually high, and the gain controller in the SAR receiver cannot effectively regulate the amplitude of interfered echoes. Therefore, the input signal amplitude of the Analog-to-Digital Converter (ADC) exceeds its dynamic range. This condition drives the SAR receiver into saturation and leads to nonlinear distortion in the interfered echoes. Such phenomena have been observed in multiple SAR systems. Documented cases include receiver saturation in the LuTan-1 satellite and several airborne SAR platforms. Analyses of SAR data further confirm the presence of saturated interference in systems such as Sentinel-1, Gaofen-3, and other spaceborne SAR platforms. After saturation occurs, the echo spectrum exhibits spurious components and spectral artifacts. These effects cause a mismatch between existing suppression methods and the actual characteristics of saturated interference. Therefore, many current methods cannot effectively mitigate this type of interference. Moreover, accurate models that precisely describe the output components of saturated interfered echoes remain limited. To address these issues, a precise analytical model for saturated interference is established, and an effective saturated interference suppression method is proposed based on this model.  
Methods  Based on the processing of the basic saturation model, a mathematical model is first developed to accurately characterize the output components of saturated interference. The accuracy of the model in describing amplitude and phase is validated through simulations. A detailed analysis of the output components of interfered echoes under saturation conditions is also conducted. Compared with the one-bit sampling model and the traditional tanh saturation model, the proposed model provides higher accuracy in describing amplitude information. In addition, the model is not limited by the sampling bit width of ADCs and can theoretically be extended to describe saturation outputs in other radar receivers. Based on the observation that harmonic phases can be expressed as a linear combination of the phases of the original signal components, and by exploiting the high-power characteristic of the interference fundamental harmonic, a saturated interference suppression method is proposed. First, because the interference fundamental harmonic has relatively high power, it is extracted using eigen-subspace decomposition. Then, based on harmonic phase relationships, the extracted interference fundamental harmonic, and the SAR transmitted signal, various interference harmonics are systematically constructed. These include higher-order interference harmonics, target harmonics, and intermodulation harmonics, which together form a complete dictionary. Finally, a sparse optimization problem is solved to achieve separation and suppression of saturated interference. The effectiveness of the proposed method is verified using measured Gaofen-3 data.  Results and Discussions  Experiments are conducted using both simulated and measured data to verify the effectiveness of the proposed method in suppressing saturated interference. For simulated data, the proposed method completely removes interference stripes in the SAR image (Fig. 7). 
Analysis of the time-frequency spectra of the processed echoes (Fig. 8 and Fig. 9) shows that traditional methods cannot effectively eliminate higher-order harmonics. In contrast, the proposed method improves the Target-to-Background Ratio (TBR) by 1.76 dB and achieves the lowest Root Mean Square Error (RMSE) of 0.0783 (Table 3). For the measured Gaofen-3 data, analysis of the processed images and the time-frequency spectra of echoes confirms that the proposed method effectively suppresses interference. Conventional methods still exhibit residual interference in the processed results (Fig. 10 and Fig. 11).  Conclusions  With the increasing deployment of electromagnetic devices, SAR systems are increasingly susceptible to in-band interference. High-power interference can drive the SAR receiver into saturation and cause nonlinear distortion, which reduces the effectiveness of traditional interference suppression methods. To address this issue, a model that precisely characterizes the saturated output components of interfered echoes is established. Based on this model, an interference suppression method for saturated interference is proposed. Simulation and experimental results show that the model accurately describes saturation behavior and that the proposed method effectively suppresses saturated interference.
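The harmonic structure of saturated echoes can be reproduced with a minimal hard-limiting receiver model: symmetric clipping of a strong interference tone generates odd harmonics in the spectrum. This is a toy stand-in for the paper's precise saturation model, with the clipping level and tone chosen for illustration:

```python
import numpy as np

def saturate(x, limit=0.3):
    """Hard-limiting ADC model: amplitudes beyond the dynamic range clip."""
    return np.clip(x, -limit, limit)

n = 1024
t = np.arange(n)
f0 = 32                                 # fundamental placed at FFT bin 32
echo = np.sin(2 * np.pi * f0 * t / n)   # strong interference tone (toy)
spec = np.abs(np.fft.rfft(saturate(echo)))

# Symmetric clipping creates odd harmonics (3*f0, 5*f0, ...) but no even ones
assert spec[3 * f0] > 10 * spec[2 * f0]
assert spec[5 * f0] > 10 * spec[4 * f0]
```

These spurious harmonic components are exactly the terms that mismatch linear-model suppression methods, which motivates constructing an explicit harmonic dictionary as described above.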
Genetic-algorithm-optimized All-metal Metasurface for Cross-band Stealth via Low-cost Computer Numerical Control Fabrication
ZHANG Ming, ZHANG Najiao, LI Jialei, LI Kang, Vazgen MELIKYAN, YANG Lin, HOU Weimin
Available online  , doi: 10.11999/JEIT251080
Abstract:
  Objective  Traditional electromagnetic stealth materials face the practical challenge of achieving both microwave absorption and infrared stealth. Conventional solutions, including geometric optimization and multilayer composite coatings, often suffer from narrow bandwidth, complex fabrication, and limited cross-band compatibility. This study proposes a genetic algorithm-optimized all-metal random coding metasurface that enables concurrent broadband Radar Cross Section (RCS) reduction and low infrared emissivity on a monolithic metallic platform, thereby addressing these practical limitations.  Methods  Monolithic all-metal C-shaped resonant units are employed. The design is based on the Pancharatnam-Berry geometric phase, in which the reflection phase is regulated by the rotation angle of the unit. Coding schemes of 2-bit, 3-bit, and 4-bit are implemented, corresponding to 4, 8, and 16 discrete phase states. A MATLAB-CST co-simulation framework is established. CST extracts unit responses using the Finite Element Method (FEM), whereas MATLAB applies a genetic algorithm to optimize the phase distribution for scattering energy diffusion. All-metal metasurface prototypes (150×150 mm², 10×10 array) are fabricated using Computer Numerical Control (CNC) cutting.  Results and Discussions  Genetic algorithm optimization converges within 6~8 generations. Increasing the number of coding bits enhances phase randomness. The 4-bit metasurface achieves an average 10 dB RCS reduction over 11~18.4 GHz. Simulation results agree with anechoic chamber measurements under oblique incidence angles from 0° to 60°. Infrared imaging confirms the low emissivity of the metallic surface. Compared with conventional composite or multilayer structures, the all-metal design simplifies fabrication, prevents interfacial mismatch, and improves structural stability. The metasurface demonstrates broadband, wide-angle, and cross-band stealth performance.
Conclusions  This study presents a genetic algorithm-optimized all-metal random coding metasurface that achieves cross-band stealth compatibility. The design addresses the persistent challenge of realizing both microwave performance and thermal management in conventional stealth materials. Three main technical contributions are demonstrated. (1) The monolithic copper structure provides greater than 99.9% infrared reflectivity in the 8~14 μm band, verified by FLIR imaging, and achieves an average 10 dB RCS reduction over 11~18.4 GHz. (2) The single-material configuration removes the risk of delamination. The CNC-fabricated prototype maintains structural integrity under 60° oblique incidence and reduces fabrication cost by approximately 78% compared with lithographic processing. (3) The co-simulation optimization framework converges within eight generations for 4-bit coding, enabling broadband scattering manipulation over 7.4 GHz. The proposed metasurface combines fabrication reliability, cost efficiency, and dual-band stealth capability. These characteristics provide a practical basis for large-scale deployment in military stealth systems and satellite platforms that require multispectral concealment and long-term structural durability.
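The genetic-algorithm phase optimization can be sketched on a 1D coding sequence, using a zero-padded FFT of the aperture field as a crude far-field proxy and minimizing its peak to diffuse scattering energy. Array size, population, rates, and the fitness proxy are all illustrative assumptions, not the MATLAB-CST setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N, BITS, POP, GEN = 16, 4, 40, 30
LEVELS = 2 ** BITS                      # 16 discrete phase states (4-bit)

def peak_scatter(code):
    """Peak array-factor magnitude for a 1D phase coding sequence;
    a lower peak means better scattering diffusion (lower specular RCS)."""
    field = np.exp(1j * 2 * np.pi * code / LEVELS)
    return np.max(np.abs(np.fft.fft(field, 256)))

def evolve():
    pop = rng.integers(0, LEVELS, size=(POP, N))
    for _ in range(GEN):
        fit = np.array([peak_scatter(c) for c in pop])
        elite = pop[np.argsort(fit)[: POP // 2]]   # keep the better half
        # Uniform crossover between randomly paired elite parents
        pa = elite[rng.integers(0, len(elite), POP)]
        pb = elite[rng.integers(0, len(elite), POP)]
        child = np.where(rng.random((POP, N)) < 0.5, pa, pb)
        # Random mutation of individual phase states
        mut = rng.random((POP, N)) < 0.05
        child[mut] = rng.integers(0, LEVELS, size=mut.sum())
        pop = child
        pop[0] = elite[0]               # elitism: best solution survives
    fit = np.array([peak_scatter(c) for c in pop])
    return pop[np.argmin(fit)], fit.min()

best, best_peak = evolve()
uniform_peak = peak_scatter(np.zeros(N, dtype=int))  # bare metal plate
assert best_peak < uniform_peak        # diffusion beats specular reflection
```

A uniform-phase aperture concentrates all energy into one specular lobe (peak = N), whereas the optimized random coding spreads it across angles, which is the mechanism behind the reported RCS reduction.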
Research on the Architecture of Dual-field Reconfigurable Polynomial Multiplication Unit for Lattice-based Post-quantum Cryptography
CHEN Tao, ZHAO Wangpeng, BIE Mengni, LI Wei, NAN Longmei, DU Yiran, FU Qiuxing
Available online  , doi: 10.11999/JEIT250929
Abstract:
  Objective  Polynomial multiplication accounts for more than 80% of the computational time in lattice cryptography algorithms. The Number Theoretic Transform (NTT) and Fast Fourier Transform (FFT) reduce the computational complexity of polynomial multiplication from quadratic to quasilinear order. However, mainstream lattice cryptography algorithms, including Kyber, Dilithium, and Falcon, differ considerably in their parameter sets and polynomial multiplication implementations. To support polynomial multiplication under multiple parameter configurations and improve resource utilization, a dual-field reconfigurable polynomial multiplication unit architecture is proposed.  Methods  First, the computational network for polynomial multiplication is extracted according to the parameter characteristics of Kyber, Dilithium, and Falcon. The internal dual-field multiplication operations are optimized at the algorithm level. Next, a dual-field reconfigurable polynomial multiplication unit architecture is designed for the polynomial multiplication network. The dual-field reconfigurable multiplication unit is further optimized to improve computational speed. Finally, a parallelism analysis is conducted to improve resource utilization of the computational architecture. The proposed architecture achieves the highest area efficiency when supporting 1-lane 64 bit, 2-lane 32 bit, or 4-lane 16 bit operations.  Results and Discussions  The architecture is experimentally validated on the Xilinx FPGA XC7V2000TFLG1925. It simultaneously supports one channel of complex-form floating-point operations or two channels of 17~32 bit internal NTT operations and four channels of 16 bit internal NTT operations. At an operating frequency of 169 MHz, the architecture reduces the area-time product by more than 50%.
Conclusions  The proposed dual-field reconfigurable processing unit architecture provides advantages in scalability, area efficiency, and core unit performance. Its configurable bit-width design adapts more easily to traditional cryptographic processors and provides a practical approach for migrating conventional public-key cryptosystems to post-quantum cryptography.
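The NTT-based polynomial multiplication that dominates these schemes can be illustrated with deliberately small toy parameters (q = 257, n = 16, cyclic rather than negacyclic convolution), not the actual Kyber/Dilithium/Falcon parameter sets:

```python
import numpy as np

Q, N, W = 257, 16, 2   # q=257 prime; 2 is a primitive 16th root of unity mod 257

def ntt(a, w=W):
    """Naive O(n^2) NTT: evaluate the polynomial at powers of w mod q."""
    return [sum(a[j] * pow(w, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(A):
    """Inverse NTT: evaluate at powers of w^-1 and scale by n^-1 mod q."""
    inv_n = pow(N, Q - 2, Q)             # n^-1 via Fermat's little theorem
    a = ntt(A, pow(W, Q - 2, Q))         # w^-1 mod q
    return [x * inv_n % Q for x in a]

def cyclic_mul(a, b):
    """Pointwise product in the NTT domain = cyclic convolution mod q."""
    A, B = ntt(a), ntt(b)
    return intt([x * y % Q for x, y in zip(A, B)])

def schoolbook_cyclic(a, b):
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % Q
    return c

rng = np.random.default_rng(4)
a = rng.integers(0, Q, N).tolist()
b = rng.integers(0, Q, N).tolist()
assert cyclic_mul(a, b) == schoolbook_cyclic(a, b)
```

The hardware architecture described above essentially reconfigures the bit widths and moduli of the butterfly units performing these modular multiply-accumulate operations, which is why a single datapath can serve several parameter sets.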
Reconfigurable Intelligent Surface Assisted Key Generation Resistant to Signal Injection Attacks
YANG Lijun, WANG Haomin, ZHU Tiancheng, WU Meng
Available online  , doi: 10.11999/JEIT251281
Abstract:
  Objective  This study examines the potential threat of signal injection attacks to Physical Layer Key Generation (PLKG) in Reconfigurable Intelligent Surface (RIS)-assisted wireless systems. The threat is especially pronounced in quasi-static channels, where the channel state remains highly correlated across multiple probing rounds. From both attack and defense perspectives, the study clarifies how spatial correlation between RIS reflection channels and eavesdropping channels can be exploited to improve key inference. A channel-randomization mechanism is designed that uses the controllability of RIS to suppress key leakage, reduce the eavesdropper’s key capacity, and improve the security of RIS-assisted PLKG in future 6G scenarios. Quantitative analysis further examines the relationships among injection power, Signal-to-Noise Ratio (SNR), and spatial correlation. These results provide reference guidance for robust RIS configuration and secure system design.  Methods  An RIS-assisted Time-Division Duplex (TDD) system is considered. Single-antenna Alice and Bob generate symmetric keys from a reciprocal channel, whereas a two-antenna active eavesdropper, Eve, injects signals using previously observed Channel State Information (CSI) (Fig. 1). The links follow quasi-static Rayleigh block fading. CSI for Alice, Bob, and Eve is defined for each time slot within a coherence interval. A conventional injection attack is first modeled. Eve estimates the eavesdropping channel in one slot, precodes an injected waveform, and contaminates the subsequent probing at Alice and Bob, partially steering their key source. A joint key inference strategy is then proposed. This strategy exploits the spatial correlation between RIS reflection channels and eavesdropping channels, as well as the common RIS-induced subchannel shared by legitimate and eavesdropping links (Table 1). As a defense, a channel-randomization PLKG scheme is proposed. 
Alice randomly reconfigures RIS coefficients at each probing round. Therefore, the effective channels of Alice-Bob, Alice-Eve, and Bob-Eve vary independently across rounds, whereas Alice-Bob reciprocity within a single round is preserved. Injection signals precoded with outdated CSI therefore appear as uncorrelated interference at the legitimate nodes. Mutual-information-based bounds on secret-key capacity are derived to obtain key capacities. The eavesdropper’s Key Recovery Rate (KRR) is defined for performance evaluation. The theoretical results are validated through MATLAB Monte Carlo simulations with 10,000 trials using an information-theoretic estimator toolbox. The simulations examine different SNR levels, injection power values, and spatial correlation conditions (Figs. 2~5, Table 2).  Results and Discussions  Analysis of the conventional injection attack without RIS defense shows that at high SNR, Alice and Bob observe nearly identical reciprocal channels due to channel reciprocity. Eve’s estimate, derived from injected signals, follows a similar trend but shows noticeable mismatch (Fig. 2). Eve can therefore recover some key bits, although errors remain, and the KRR remains moderate. When the proposed joint key inference strategy is applied, Eve’s reconstructed channel more closely matches the legitimate response (Fig. 3). This effect arises because RIS-assisted PLKG causes legitimate and eavesdropping links to share an RIS-induced subchannel. The resulting spatial correlation provides additional exploitable information beyond the known injected signal. Therefore, Eve’s key capacity and KRR increase significantly, which indicates a stronger RIS-specific security threat. At fixed SNR (Fig. 4), Eve’s key capacity without defense increases rapidly with injection power and may approach or exceed the legitimate key capacity.
Under RIS randomization, the legitimate capacity decreases slightly, whereas Eve’s capacity remains small and nearly constant. This result indicates that randomization converts structured injection signals into noise. Spatial-correlation analysis in Fig. 5 shows that Eve’s capacity without defense increases rapidly and becomes critical as correlation approaches one. In contrast, under RIS randomization the increase is gradual, and the capacity may remain near zero at moderate correlation levels. Table 2 confirms these trends in terms of KRR. The KRR is about 50% without correlation and injection. It increases to about 62.5% when injection is applied but spatial correlation is zero, whereas the defense keeps the value close to random guessing. When spatial correlation and injection power are higher, the KRR exceeds 80%. The proposed defense reduces this value to approximately 57%~66%.  Conclusions  This study examines the dual role of RIS in PLKG security. RIS can increase vulnerability but can also serve as an effective defensive mechanism. By exploiting the correlation between RIS reflection channels and eavesdropping channels, a joint key inference attack is developed that increases the eavesdropper’s key capacity and recovery rate compared with conventional injection attacks. This result reveals a new attack vector in RIS-assisted systems. A channel-randomization PLKG scheme is then proposed by exploiting the dynamic controllability of RIS. The scheme shortens the effective coherence time to a single probing round and decorrelates successive channel realizations from the attacker’s perspective. Theoretical analysis and Monte Carlo simulations show that the proposed scheme converts malicious injection signals into uncorrelated interference, reduces the eavesdropping key capacity, and pushes the eavesdropper’s KRR close to random guessing. This property remains effective even under high SNR, strong spatial correlation, and high injection power. 
The scheme achieves these security improvements with low hardware overhead compared with reconfigurable antenna-based solutions, because RIS devices are expected to serve as infrastructure elements in future 6G networks. The results provide guidance for the secure design of RIS-assisted PLKG systems and suggest that the controllable characteristics of RIS should be used for both performance improvement and security protection.
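The decorrelation mechanism behind RIS randomization can be sketched with a toy cascaded-channel model: with fixed RIS phases the effective channel is identical in every probing round (so outdated CSI stays useful to an injector), while per-round random phases make it vary. The channel model, element count, and variable names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 32                                  # RIS elements (toy)

# Quasi-static subchannels within one coherence block
h_ab = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
g_a = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g_b = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

def effective_channel(theta):
    """Direct path plus cascaded Alice-RIS-Bob path for RIS phases theta."""
    return h_ab + np.sum(g_a * np.exp(1j * theta) * g_b)

rounds = 200
fixed = np.zeros(M)
h_fixed = np.array([effective_channel(fixed) for _ in range(rounds)])
h_rand = np.array([effective_channel(rng.uniform(0, 2 * np.pi, M))
                   for _ in range(rounds)])

# Fixed RIS: the channel repeats every round, so an injection precoded on
# round-1 CSI stays matched. Random RIS: each round gives a fresh channel.
assert np.allclose(h_fixed, h_fixed[0])
assert np.std(np.abs(h_rand)) > 0
```

Within each round both directions see the same `effective_channel(theta)` under TDD, so legitimate reciprocity is untouched; only the round-to-round memory that the injector relies on is destroyed.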
Research on Monophonic Speech Separation Method Using Time-Frequency Domain Multi-scale Information Interaction Strategy
LAN Chaofeng, YANG Guotao, CHEN Yingqi, GUO Xiaoxia
Available online  , doi: 10.11999/JEIT251340
Abstract:
  Objective  Monaural speech separation aims to extract individual speaker signals from a single-channel mixture. It is a core technology for addressing the “cocktail party problem” and has substantial application value in low-resource, low-latency scenarios such as mobile voice assistants, teleconferencing, and hearing aids. However, the lack of spatial cues in single-channel signals, together with the substantial overlap of multiple speakers in both time-domain waveforms and frequency-domain spectra, makes accurate separation highly challenging, especially when the integrity and clarity of the target speech must be preserved. Current deep learning-based models often show limitations in three closely related aspects: effective coordination of multi-scale dependencies, efficient fusion of time-frequency information, and control of computational complexity. To address these challenges, a novel Multi-Scale Attention model integrating Time-Frequency domain information (MSA-TF) is proposed to improve separation performance, computational efficiency, and generalization capability.  Methods  The MSA-TF model contains three key components. First, a lightweight Time-Frequency fusion module is designed. The module first divides the frequency band into four subbands on the basis of speech priors, such as low-frequency energy concentration and high-frequency detail sensitivity, to extract spectral features efficiently. A dynamic gating mechanism with decomposed convolutions and SiLU activation is then applied to adaptively enhance speaker-discriminative features and suppress redundant channels associated with noise. Finally, a cross-attention mechanism is used to promote deep interaction between time-domain and frequency-domain features during the encoding stage. Global semantic information from the time domain guides the selection and weighting of useful frequency-domain features, allowing mutual correction and complementarity. This module adds only 0.8 M parameters. 
Second, a Multi-scale Interaction Separator is proposed to address the limitations of sequential or loosely coupled multi-scale processing in models such as SepFormer. Multi-granularity features, ranging from frame-level F1 to syllable-level semantic F4, are extracted through cascaded dilated convolutions. Its core is the “GF-LF Iterative Feedback” mechanism. The Global Flash module, based on efficient FLASH attention, captures long-range dependencies and syllable-level context. This global information is upsampled and injected into local features (Fk) through residual connections. Local Flash modules, also based on FLASH attention, then process the enhanced local features (F′k) to model fine-grained structures and suppress frame-level noise. The updated local features are subsequently fed back through adaptive pooling to refine the global representation in the next iteration. This closed-loop bidirectional flow enables deep synergy between global semantics and local details. A gated fusion mechanism at the end dynamically balances the contributions of different scales. Third, to control computational complexity, an efficient hierarchical grouped attention mechanism is adopted, reducing the complexity from quadratic to nearly linear with sequence length. The overall MSA-TF architecture is end-to-end and consists of a 1D convolutional encoder, the integrated time-frequency and multi-scale modules, a mask network, and a symmetric decoder.  Results and Discussions  Extensive experiments are conducted on the standard WSJ0-2mix and Libri-2mix datasets, with Scale-Invariant Signal-to-Noise Ratio (SI-SNR) and Signal-to-Distortion Ratio (SDR) used as evaluation metrics. Ablation studies (Table 1) confirm the individual and joint contributions of the proposed modules.
When only the time-frequency module is added to the TDAnet baseline, SI-SNR increases by 0.3 dB and SDR by 0.4 dB with only a small increase in parameters, confirming its contribution to signal structure modeling, particularly for high-frequency details. When only the multi-scale interaction module is incorporated, SI-SNR increases by 2.5 dB and SDR by 2.7 dB, highlighting its central role in modeling long-term dependencies. When the time-frequency and multi-scale modules are combined in the complete MSA-TF core, a synergistic effect is obtained, reaching 17.6 dB SI-SNR, which exceeds the sum of the individual gains. This result indicates that the dual-dimensional features provided by time-frequency fusion and the deep dependency modeling enabled by multi-scale interaction strengthen each other. Spectrogram analysis (Fig. 3) further shows that the time-frequency module effectively suppresses residual high-frequency noise and produces clearer spectral contours for the target speech. On the WSJ0-2mix test set (Table 2), MSA-TF achieves state-of-the-art performance, with 17.6 dB SI-SNR and 17.8 dB SDR. It matches the performance of SuperFormer and substantially outperforms strong baselines such as Conv-Tasnet by 2.3 dB SI-SNR, while maintaining a reasonable parameter count of 15.6 M. Compared with models with larger parameter sizes, such as SignPredictionNet at 55.2 M, MSA-TF shows more efficient modeling. For generalization evaluation on the completely unseen Libri-2mix dataset (Table 4), MSA-TF, trained only on WSJ0-2mix, achieves 14.2 dB SI-SNR and 14.7 dB SDR. Its performance is comparable to that of Conv-Tasnet models trained specifically on Libri-2mix, which achieve 14.4 dB SI-SNR, and it outperforms BLSTM-Tasnet trained on Libri-2mix. This strong cross-dataset adaptability indicates that the model captures universal time-frequency characteristics and multi-scale dependency structures in speech signals rather than overfitting to a specific dataset distribution.  
Conclusions  An MSA-TF model is presented to address key challenges in monaural speech separation through deep integration of multi-scale time-frequency information interaction. The proposed lightweight Time-Frequency fusion module efficiently supplements time-domain features with discriminative frequency-domain information. The Multi-scale Interaction Separator, with its iterative feedback mechanism, enables dynamic bidirectional information flow across scales and substantially improves the joint modeling of short-term details and long-term dependencies. Combined with an efficient attention design, the model achieves superior performance without excessive computational cost. Experimental results show that MSA-TF achieves leading separation performance on standard benchmarks and shows strong generalization ability on unseen data distributions, confirming the effectiveness of this comprehensive design. The model provides an efficient, robust, and generalizable solution for practical low-resource application scenarios. Future work may examine advanced cross-modal fusion techniques and dynamic scale adjustment strategies to further improve robustness and performance in more complex and variable acoustic environments.
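SI-SNR, the headline metric reported above, is a standard scale-invariant measure rather than anything specific to MSA-TF; a minimal NumPy sketch of its usual zero-mean definition (not the authors' evaluation code) is:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-Invariant Signal-to-Noise Ratio in dB (zero-mean convention)."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # project the estimate onto the reference to isolate the target component
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))
```

Because the target component is obtained by projection, rescaling the estimate leaves the score unchanged, which is why SI-SNR is preferred over plain SNR for separation systems whose output gain is arbitrary.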
Intelligent Sorting Algorithm for Multi-station Radar Signals Based on Federated Learning
YE Chengji, XIE Jian, ZHANG Zhaolin, WANG Ling
Available online  , doi: 10.11999/JEIT251355
Abstract:
  Objective  Radar signal sorting is a critical step in electronic reconnaissance and battlefield situational awareness. It is used to accurately separate interleaved pulse streams in complex electromagnetic environments. Although multi-station cooperative reconnaissance systems provide spatial diversity gains that can mitigate the parameter ambiguity and aliasing problems of single-station systems, their practical deployment faces major challenges. Traditional centralized processing architectures require massive volumes of raw Pulse Description Word (PDW) data to be transmitted to a central server. This requirement leads to prohibitive communication bandwidth costs and increases the risk of leakage of sensitive electromagnetic spectrum intelligence. In addition, because stations are geographically distributed and differ in antenna scanning patterns, the data collected at different stations often show significant Non-Independent and Identically Distributed (Non-IID) characteristics. Such heterogeneity reduces the generalization ability of local models trained on isolated data islands. To resolve the conflict between data isolation and the need for collaborative intelligence, a multi-station collaborative radar signal sorting method is proposed based on a Federated Learning (FL) framework. Collaborative model training is enabled without exchange of raw data, so that data privacy is preserved, communication overhead is reduced, and sorting robustness is improved in heterogeneous and noisy battlefield environments.  Methods  A centralized federated sorting framework is constructed to coordinate multiple reconnaissance stations. The method contains three main components: feature preprocessing, a lightweight local temporal model, and a heterogeneity-aware aggregation strategy. First, in data preprocessing, the raw PDW parameters, including Time Of Arrival (TOA), Carrier Frequency (CF), and Pulse Width (PW), are normalized to address substantial differences in scale.
Specifically, TOA is transformed into first-order differential values to extract Pulse Repetition Interval (PRI) information, which prevents numerical saturation and captures periodic patterns effectively (Fig. 3). Second, a local time-series sorting model is designed for the resource constraints of edge devices. A bidirectional Long Short-Term Memory (LSTM) network is used as the backbone to capture long-range dependencies and dynamic patterns in pulse sequences from both forward and backward directions. To accelerate convergence and prevent gradient vanishing, residual connections are added to fuse static and dynamic features. The extracted features are then mapped to the radiation source category space through a cascaded linear classification layer. Third, to address model drift caused by Non-IID data, including feature distribution shift and label distribution shift, a new aggregation strategy is proposed based on parameter decomposition and proximal regularization. Model parameters are decoupled into a feature extractor and a classifier. During federated aggregation, only the parameters of the generic feature extractor are uploaded and globally averaged, whereas the personalized classifier parameters are retained locally to adapt to the class distribution of each station. Furthermore, a proximal regularization term is added to the local loss function (Eq. 20). This constraint limits the deviation of local updates from the global model and ensures that the optimization direction does not diverge substantially because of local data heterogeneity, thereby improving the stability and convergence speed of the global model.  Results and Discussions  Extensive simulation experiments are conducted on core datasets with 3 stations and 5 radars, and on extended datasets with 9 stations and 12 radars, including complex modulation patterns such as jitter, sliding, and staggering. 
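The TOA-differencing step described in the Methods can be sketched as follows. The field layout and the min-max normalization choice are assumptions for illustration; the paper specifies only that TOA is first-order differenced to expose PRI structure and that all parameters are rescaled:

```python
import numpy as np

def preprocess_pdw(toa, cf, pw):
    """Sketch of PDW preprocessing: difference TOA into a PRI sequence,
    then min-max normalize each parameter channel to remove scale gaps."""
    pri = np.diff(toa)                       # first-order TOA difference ~ PRI sequence
    feats = np.stack([pri, cf[1:], pw[1:]])  # align CF/PW with the differenced TOA
    mins = feats.min(axis=1, keepdims=True)
    span = feats.max(axis=1, keepdims=True) - mins
    return (feats - mins) / np.where(span == 0, 1.0, span)
```

Differencing bounds the TOA channel regardless of how long the collection runs, which is what prevents the numerical saturation mentioned above.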
Quantitative analysis shows that the proposed method achieves sorting performance comparable to that of Centralized Learning (CL). On the core dataset, the Precision, Recall, and F1-score of the proposed method reach 96.51%, 96.35%, and 96.42%, respectively, exceeding those of FedAvg by approximately 0.67% in F1-score. On the more challenging extended dataset, the performance advantage becomes more significant, with an F1-score improvement of 3.86% over FedAvg (Table 4). These results indicate that the parameter decomposition strategy effectively balances common feature learning with personalized decision-making. Analysis by class further shows that, for categories that are difficult to distinguish, such as Radar 7 and Radar 10, the proposed method improves recognition accuracy by 15% and 6%, respectively, compared with FedAvg (Fig. 7 and Fig. 8). Robustness tests further demonstrate the adaptability of the method. When the number of participating stations increases from 3 to 9 (Fig. 9), the F1-score rises steadily from 73.53% to 83.75%. This result confirms that enlarging the node scale in the FL framework produces collaborative gains through more diverse samples and reduced geographic statistical heterogeneity, which substantially improve model generalization and robustness. Under severe class skew conditions, the method maintains an F1-score above 80% on the core dataset (Fig. 10 and Fig. 11). Furthermore, under extreme electromagnetic conditions characterized by high pulse loss rates of 70% and spurious pulse rates of 70%, the model maintains sorting performance above 75%, which demonstrates strong robustness against noise and interference (Fig. 12).  Conclusions  An FL-based framework is proposed for multi-station collaborative radar signal sorting to address data privacy and transmission constraints in distributed reconnaissance.
By integrating a lightweight LSTM with a heterogeneity-aware aggregation mechanism, the method effectively captures temporal pulse features and mitigates model drift caused by Non-IID data. Experimental results verify that the approach achieves accuracy comparable to that of centralized methods and shows superior robustness under label skew and severe data degradation, including high pulse loss and spurious pulse rates. This study provides a privacy-preserving and efficient solution for intelligent signal processing in distributed electronic warfare systems.
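The heterogeneity-aware strategy described in the Methods — averaging only the shared feature extractor while each station keeps its personalized classifier, and penalizing drift from the global model — can be sketched as below. The proximal term follows the generic FedProx form; the exact Eq. 20, the learning rate, and the parameter layout are assumptions, not the paper's values:

```python
import numpy as np

def local_update(local, global_, grads, lr=0.1, mu=0.01):
    """One proximal-regularized local step: the loss gains (mu/2)*||w - w_g||^2,
    so the applied gradient gains a mu*(w - w_g) pull toward the global model."""
    return {k: w - lr * (grads[k] + mu * (w - global_[k]))
            for k, w in local.items()}

def aggregate(clients, global_):
    """Parameter-decoupled aggregation: average only the generic feature
    extractor; personalized classifier parameters stay at each station."""
    new_global = dict(global_)
    new_global["extractor"] = np.mean([c["extractor"] for c in clients], axis=0)
    return new_global
```

The proximal pull is what keeps Non-IID local updates from diverging, while the split aggregation lets each station's classifier track its own label distribution.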
Dynamic Scale Perception-Driven Multi-UAV Collaborative 3D Object Detection Method
DUAN Shujing, WANG Zhirui, CHENG Peirui, FU Kun
Available online  , doi: 10.11999/JEIT251378
Abstract:
  Objective  Multi-UAV collaborative 3D object detection is a core technology for low-altitude intelligent perception, and the Bird’s-Eye View (BEV) feature representation paradigm provides support for global spatial consistency. However, in practical UAV remote-sensing scenarios, targets are extremely small, sparsely distributed, and embedded in a large proportion of background regions. Existing Transformer-based BEV perception methods adopt a homogeneous full-image feature-processing strategy. This strategy not only wastes computing resources because of excessive computation in large background areas, but also tends to dilute small-target features with background noise, making it difficult to balance computational efficiency and detection accuracy. Meanwhile, multi-UAV collaboration requires cross-device information interaction to achieve view complementarity and information gain, but this process is prone to redundant information and even feature conflicts. Traditional fixed-weight aggregation methods cannot accurately identify effective information or suppress redundancy, resulting in poor consistency of global BEV features and reduced collaborative detection accuracy. Therefore, the development of a detection network that is adaptive to multi-UAV aerial scenarios is of clear practical value.  Methods  A dynamic scale-aware detection network is proposed for efficient and accurate 3D object detection through two core modules: the Dynamic Scale-aware BEV Generation (DSBG) module and the Adaptive Collaborative BEV-Feature Aggregation (ACFA) module. The network establishes an end-to-end pipeline of “multi-view image input → dynamic scale-adaptive feature encoding → BEV-space 3D detection” (Fig. 1). First, the observed images collected by each UAV are processed independently by a parameter-sharing ResNet-50 backbone network to generate feature maps with a consistent structure.
The DSBG module then takes these feature maps as input, calculates the amplitude of feature responses in each spatial region through the Local Scale-Aware Unit, and estimates the target distribution. On this basis, differentiated BEV grid encoding is dynamically allocated: high-resolution dense grids are assigned to high-response target regions to preserve fine-grained features, whereas low-resolution sparse grids are assigned to low-response background regions to reduce invalid computation. At the same time, target query vectors with spatial position priors are generated. The ACFA module receives the multi-resolution BEV features generated by the DSBG module, concatenates the dual-resolution features from different UAVs in the channel dimension, upsamples the low-resolution features to align them with the high-resolution features, models the local correlations of two-scale features through 3×3 convolution, and obtains a globally consistent BEV feature map through element-wise weighted summation. Finally, the global BEV features are fed into the DETR decoder for 3D target prediction, with Focal Loss used for classification and Smooth L1 Loss used for regression (Eqs. 5 and 6).  Results and Discussions  Extensive experiments are conducted on two public multi-UAV collaborative simulation datasets, AeroCollab3D and Air-Co-Pred. The results show that the proposed method achieves strong performance on both datasets. Compared with current state-of-the-art methods and baseline models, it not only improves mean Average Precision (mAP) by up to 7.2 percentage points, but also substantially reduces key evaluation metrics, including mean size error by more than 48%, mean localization error, and mean orientation error. In particular, clear advantages are observed in small-target detection and fine-grained category recognition, with pedestrian detection accuracy improved by nearly 10 percentage points.
Ablation experiments verify the effectiveness of both the DSBG and ACFA modules. The proposed method steadily improves detection accuracy while significantly reducing computational cost by up to 41.6%, thereby achieving coordinated optimization of accuracy and efficiency. Visualization results (Fig. 3) show that the predicted bounding boxes have higher spatial alignment with the ground truth, effectively alleviating the common problems of target overlap and missed detection in traditional methods. Fig. 4 further illustrates the technical advantages of multi-UAV collaborative detection. Even for targets occluded by obstacles, the proposed method achieves efficient detection, thereby enhancing the comprehensive perception capability of the global region.  Conclusions  A dynamic scale-aware detection network is proposed for multi-UAV collaborative 3D object detection to address the core challenges of the efficiency-accuracy tradeoff and poor feature consistency in traditional methods. The DSBG module achieves dynamic matching between the BEV encoding scale and target distribution, thereby reducing redundant computation, whereas the ACFA module improves multi-scale and multi-view feature aggregation to ensure global feature consistency and accuracy. Experimental results on two datasets confirm that the proposed method outperforms existing advanced methods in detection accuracy, computational efficiency, and robustness. Future work will focus on optimizing dynamic scale-adjustment strategies with temporal information and exploring multi-sensor fusion with lightweight LiDAR data to improve detection stability in complex scenarios.
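The ACFA fusion path described in the Methods — upsample the low-resolution BEV features to the high-resolution grid, model local correlations with a 3×3 kernel, then combine by element-wise weighted sum — can be sketched with NumPy. The nearest-neighbor upsampling, the fixed box filter standing in for the learned 3×3 convolution, and the fusion weights are all illustrative assumptions:

```python
import numpy as np

def box3(x):
    """Per-channel 3x3 box filter: a fixed stand-in for the learned
    3x3 convolution that models local correlations (x has shape C,H,W)."""
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    h, w = x.shape[1], x.shape[2]
    return sum(p[:, i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def acfa_fuse(bev_hi, bev_lo, w_hi=0.6, w_lo=0.4):
    """Align the low-resolution BEV features with the high-resolution grid,
    smooth both scales locally, and combine by element-wise weighted sum."""
    lo_up = bev_lo.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbor x2
    return w_hi * box3(bev_hi) + w_lo * box3(lo_up)
```

In the actual network the kernel and the per-location weights are learned, which is what lets ACFA suppress redundant or conflicting views rather than blending them uniformly.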
Construction Methods of Two-Dimensional Golay-Zero Correlation Zone Array Sets with Flexible Parameters
WANG Meiyue, LIU Tao, CHEN Xiaoyu, LI Yubo
Available online  , doi: 10.11999/JEIT251360
Abstract:
  Objective  Sequences with good correlation properties are widely used in wireless communications, cryptography, and radar systems. However, a sequence set cannot simultaneously achieve ideal autocorrelation and ideal cross-correlation. This limitation has led to the study of two signal classes with ideal correlation properties: Zero Correlation Zone (ZCZ) sequences and Golay Complementary Sets (GCS). A Golay-ZCZ sequence set combines the advantages of both. Its constituent sequences exhibit ideal periodic autocorrelation and cross-correlation within the ZCZ, and the sums of their aperiodic autocorrelations are zero at all nonzero shifts. Therefore, a Golay-ZCZ set is both a ZCZ set and a GCS. It can thus be used in applications of both sequence classes. An array set is a two-dimensional extension of a sequence set. Although Golay-ZCZ sequence sets have been widely studied and constructed, research on Two-Dimensional (2D) Golay-ZCZ array sets remains limited. This study proposes three constructions of 2D Golay-ZCZ array sets based on 2D multivariable functions and the concatenation operator. These array sets can be used as precoding matrices for massive Multiple Input Multiple Output (MIMO) omnidirectional transmission.  Methods  Three construction methods for 2D Golay-ZCZ array sets are proposed, including one direct construction and two indirect constructions. The resulting parameters have not been reported in existing studies. In the first construction, a 2D Golay-ZCZ array set is generated using 2D multivariable functions, with parameters expressed as prime powers. This direct function-based approach enables efficient synthesis of the target arrays. The second and third constructions generate 2D Golay-ZCZ array sets through horizontal and vertical concatenation of Two-Dimensional Complete Complementary Codes (2D CCC), respectively. In these indirect constructions, the parameters are not restricted to prime powers.
This property broadens the applicability of the methods and increases parameter flexibility.  Results and Discussions  The first construction generates a 2D Golay-ZCZ array set with array size $p_1^{m_1}\times p_2^{m_2}$ and ZCZ size $(p_1-1)p_1^{\pi_1(2)-1}\times (p_2-1)p_2^{\sigma_1(2)-1}$ through a direct function-based method, where $p_1$ and $p_2$ are prime numbers. For clarity, the magnitudes of the 2D periodic cross-correlation function of the constructed array set are illustrated in Example 1 (Fig. 1). The second construction generates a ZCZ array set with array size $L_1\times N^2L_2$ and ZCZ size $(L_1-1)\times (N-1)L_2$ based on the horizontal concatenation of $(N,N,L_1,L_2)$ 2D CCC. The third construction generates a ZCZ array set with array size $N^2L_1\times L_2$ and ZCZ size $(N-1)L_1\times (L_2-1)$ based on the vertical concatenation of $(N,N,L_1,L_2)$ 2D CCC. An illustrative example of Construction 2 is provided, and the corresponding correlation magnitudes are shown in Figs. 2 and 3. As summarized in Table 1, the construction methods proposed in this paper generate parameter sets that have not been reported in the existing literature. The constructed array sets provide considerable flexibility in array dimensions and ZCZ sizes. This flexibility is valuable for the design of precoding matrices in MIMO omnidirectional transmission systems.
In practical implementations, the dimension of a precoding matrix is typically determined by the number of transmit antennas, whereas the ZCZ size must match the maximum multipath delay spread of the channel. Owing to this parameter flexibility, the proposed 2D Golay-ZCZ array sets support adaptive selection under different antenna configurations and channel conditions.  Conclusions  Three construction methods for 2D Golay-ZCZ array sets are proposed. These methods generate array sets with flexible array sizes and large ZCZ widths. The first construction is based on a 2D multivariable function and can include previous results as special cases without using kernels. The second and third constructions rely on the concatenation operator and provide greater parameter flexibility. The proposed 2D Golay-ZCZ arrays have potential applications in MIMO omnidirectional transmission. The parameter-flexible array sets can be selected according to different antenna configurations and channel conditions. This property suppresses multi-antenna interference within the zero-correlation zone and maintains uniform transmitted energy.
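The ZCZ properties discussed above are defined through the 2D periodic cross-correlation function. A small NumPy utility for computing it (useful for numerically checking a candidate array set's zero-correlation zone, not one of the paper's constructions) is:

```python
import numpy as np

def periodic_xcorr2(a, b):
    """2D periodic cross-correlation of equal-size arrays:
    R[u, v] = sum_{x, y} a[x, y] * conj(b[(x+u) mod L1, (y+v) mod L2])."""
    L1, L2 = a.shape
    R = np.empty((L1, L2), dtype=complex)
    for u in range(L1):
        for v in range(L2):
            # roll b so that index (x, y) reads b[(x+u) mod L1, (y+v) mod L2]
            R[u, v] = np.sum(a * np.conj(np.roll(b, (-u, -v), axis=(0, 1))))
    return R
```

For a Golay-ZCZ array set, |R| of every pair of member arrays should vanish at all shifts inside the ZCZ (and the autocorrelation peak equals the array energy at zero shift).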
PSAQNet: A Perceptual Structure Adaptive Quality Network for Authentic Distortion Oriented No-reference Image Quality Assessment
JIA Huizhen, ZHAO Yuxuan, FU Peng, WANG Tonghan
Available online  , doi: 10.11999/JEIT251220
Abstract:
  Objective  No-Reference Image Quality Assessment (NR-IQA) is critical for practical imaging systems when pristine reference images are unavailable. However, many existing methods face three major challenges: limited robustness under complex distortions, weak generalization when distortion distributions shift (e.g., from synthetic to real-world settings), and insufficient modeling of geometric or structural degradations such as spatially varying blur, misalignment, and texture-structure coupling. These limitations cause models to rely excessively on dataset-specific statistics and reduce their effectiveness when applied to diverse scenes with mixed degradations. To address these issues, the Perceptual Structure Adaptive Quality Network (PSAQNet) is proposed to improve the accuracy and adaptability of NR-IQA under complex distortion conditions.  Methods  PSAQNet is designed as a unified CNN-Transformer framework that preserves hierarchical perceptual cues and supports global context reasoning. Instead of relying on late-stage pooling, distortion evidence is progressively enhanced throughout the network. The architecture contains several key components. The Advanced Distortion Enhanced Module (ADEM) operates on multi-scale features extracted from a pre-trained backbone. It adopts multi-branch gating and a distortion-aware adapter to emphasize degradation-related signals and reduce interference from dominant image content. This mechanism dynamically selects feature branches that correspond to perceptual degradation patterns, which is beneficial for spatially non-uniform or mixed distortions. To model geometric degradations, PSAQNet integrates Spatial-Guided Convolution (SGC) and Channel-Aware Adaptive Kernel convolution (CA_AK). SGC improves spatial sensitivity by guiding convolutional responses with structure-aware cues and focusing on regions where geometric distortions are prominent. 
CA_AK further improves geometric modeling by adaptively adjusting receptive behavior and recalibrating channels to preserve distortion-sensitive components. Additionally, PSAQNet incorporates efficient feature fusion strategies. Group Convolutional Block Attention Module (GroupCBAM) enables lightweight attention-based fusion of multi-level CNN features, whereas AttInjector selectively injects local distortion cues into global Transformer representations. This design allows global semantic reasoning to be guided by localized degradation evidence without introducing redundancy or instability.  Results and Discussions  Extensive experiments on six benchmark datasets containing both synthetic and real-world distortions demonstrate that PSAQNet achieves strong performance and stable agreement with human subjective judgments. The proposed method outperforms several recent approaches, particularly on real-world distortion datasets. These results indicate that PSAQNet effectively enhances distortion evidence, models geometric degradation, and integrates local distortion cues with global semantic representations. Such capabilities improve robustness under distribution shifts and reduce reliance on narrow distortion priors. Ablation studies confirm the contribution of each module. ADEM increases distortion saliency, SGC and CA_AK improve sensitivity to geometric degradations, and GroupCBAM and AttInjector strengthen the interaction between local and global features. Cross-dataset evaluations further demonstrate the generalization capability of PSAQNet across different content categories and distortion types. Scalability experiments also show that the framework benefits from stronger pretrained backbones without compromising its modular design.  Conclusions  PSAQNet addresses several key limitations in NR-IQA by integrating local distortion enhancement, geometric-aware feature modeling, and global semantic fusion within a unified framework. 
The modular architecture improves robustness and generalization across diverse distortion conditions and supports practical deployment in real-world scenarios. Future work will explore vision–language pre-training to improve cross-scene adaptability.
LLM-based Data Compliance Checking for Internet of Things Scenarios
LI Chaohao, WANG Haoran, ZHOU Shaopeng, YAN Haonan, ZHANG Feng, LU Tianyang, XI Ning, WANG Bin
Available online  , doi: 10.11999/JEIT250704
Abstract:
  Objective  The implementation of regulations such as the Data Security Law of the People’s Republic of China, the Personal Information Protection Law of the People’s Republic of China, and the European Union General Data Protection Regulation (GDPR) has established data compliance checking as a central mechanism for regulating data processing activities, ensuring data security, and protecting the legitimate rights and interests of individuals and organizations. However, the characteristics of the Internet of Things (IoT), defined by large numbers of heterogeneous devices and the dynamic, extensive, and variable nature of transmitted data, increase the difficulty of compliance checking. Logs and traffic data generated by IoT devices are long, unstructured, and often ambiguous, which results in a high false-positive rate when traditional rule-matching methods are applied. In addition, the dynamic business environments and user-defined compliance requirements further increase the complexity of rule design, maintenance, and decision-making.  Methods  A large language model-driven data compliance checking method for IoT scenarios is proposed to address the identified challenges. In the first stage, a fast regular expression matching algorithm is employed to efficiently screen potential non-compliant data based on a comprehensive rule database. This process produces structured preliminary checking results that include the original non-compliant content and the corresponding violation type. The rule database incorporates current legislation and regulations, standard requirements, enterprise norms, and customized business requirements, and it maintains flexibility and expandability. By relying on the efficiency of regular expression matching and generating structured preliminary results, this stage addresses the difficulty of reviewing large volumes of long IoT text data and enhances the accuracy of the subsequent large language model review. 
In the second stage, a Large Language Model (LLM) is employed to evaluate the precision of the initial detection results. For different categories of violations, the LLM adaptively selects different prompts to perform differentiated classification checks.  Results and Discussions  Data are collected from 52 IoT devices operating in a real environment, including log and traffic data (Table 2). A compliance-checking rule library for IoT devices is established in accordance with the Cybersecurity Law, the Data Security Law, other relevant regulations, and internal enterprise information-security requirements. Based on this library, the collected data undergo a first-stage rule-matching process, yielding a false-positive rate of 64.3% and identifying 55 080 potential non-compliant data points. Three aspects are examined: benchmark models, prompt schemes, and role prompts. In the benchmark model comparison, eight mainstream large language models are used to evaluate detection performance (Table 5), including Qwen2.5-32B-Instruct, DeepSeek-R1-70B, and DeepSeek-R1-0528 with different parameter configurations. After review and testing by the large language model, the initial false-positive rate is reduced to 6.9%, which demonstrates a substantial improvement in the quality of compliance checking. The model’s own error rate remains below 0.01%. The prompt-engineering assessment shows that prompt design exerts a strong effect on review accuracy (Table 6). When general prompts are applied, the final false-positive rate remains high at 59%. When only chain-of-thought prompts or concise sample prompts are used, the false-positive rate is reduced to approximately 12% and 6%, respectively, and the model’s own error rate decreases to about 30% and 13%. Combining the two strategies further reduces the model’s own error rate under sample prompting to 0.01%. The effect of system-role prompts on review accuracy is also evaluated (Table 7).
Simple role prompts yield higher accuracy and F1 scores than no role prompts, and detailed role prompts provide a further overall advantage over simple ones. Ablation experiments (Table 8) further examine the contribution of rule classification and prompt engineering to compliance checking. Knowledge supplementation is applied to reduce interference and misjudgment among rules, lower prompt redundancy, and decrease the false-alarm rate during large language model review.  Conclusions  A large language model-driven data compliance checking method for IoT scenarios is presented. The method is designed to address the challenge of assessing compliance in large-scale unstructured device data. Its feasibility is verified through rationality analysis experiments, and the results indicate that false-positive rates are effectively reduced during compliance checking. The initial rule-based method yields a false-positive rate of 64.3%, which is reduced to 6.9% after review by the large language model. Additionally, the error introduced by the model itself is maintained below 0.01%.
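The two-stage pipeline described above — fast regex screening that emits structured candidates, followed by violation-type-specific LLM review — can be sketched as below. The rule patterns, violation-type names, and prompt wording are hypothetical placeholders, not the paper's rule database or prompt templates:

```python
import re

# Hypothetical rule database: violation type -> compiled pattern.
RULES = {
    "plaintext_phone": re.compile(r"\b1[3-9]\d{9}\b"),
    "plaintext_ip":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def prescreen(lines):
    """Stage 1: fast regex screening over raw log/traffic text, producing
    structured candidates (location, violation type, content) for review."""
    hits = []
    for i, line in enumerate(lines):
        for vtype, pat in RULES.items():
            m = pat.search(line)
            if m:
                hits.append({"line": i, "type": vtype, "content": m.group()})
    return hits

def build_review_prompt(hit):
    """Stage 2 (sketch): a violation-type-specific prompt handed to the
    LLM reviewer to confirm or reject the stage-1 candidate."""
    return (f"Check whether the following flagged snippet is a true "
            f"'{hit['type']}' violation. Snippet: {hit['content']}")
```

Structuring the stage-1 output this way is what lets the reviewer select a per-category prompt instead of re-reading entire long logs, which is the point of the adaptive prompting described above.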
A Lightweight and High-Reliability Challenge Generation Strategy for APUF
LAN Guohao, ZHANG Hui, DUO Bin, WANG Zibin, ZHOU Rang, LI Dongfen
Available online  , doi: 10.11999/JEIT251073
Abstract:
  Objective  The Arbiter Physical Unclonable Function (APUF) is a lightweight security primitive that has been widely adopted in identity authentication and key generation for resource-constrained devices. However, its response consistency is highly sensitive to environmental perturbations, leading to inconsistent responses for the same challenge under different conditions, severely undermining the reliability of APUF-based security systems. Existing reliability improvement schemes for APUF, which mainly rely on hardware modification or challenge screening, generally suffer from high resource overhead and low efficiency. To address the limitations of these existing solutions, a Delay-Constrained Challenge Generation Strategy (DCGS) is proposed to enhance APUF reliability without extra hardware overhead or screening-related inefficiencies.  Methods  The core of DCGS lies in modeling APUF path delay properties and constructing challenges with constrained delay differences to ensure response stability. First, a logistic regression (LR) model is established to characterize the relationship between APUF challenge bits and path delays. From the trained LR model, a delay weight vector is derived to quantify the contribution of each challenge bit to the overall path delay. Second, a two-stage challenge generation mechanism is designed to integrate delay constraint control: The first stage is prefix bit initialization, which generates distinct prefix sequences to establish a stable delay baseline for subsequent bit extension. The second stage is bit-wise extension, where each remaining challenge bit is dynamically determined based on the delay weight vector. During this extension process, the cumulative delay difference of the challenge is monitored in real time, ensuring it stays within a preset threshold range. 
Unlike traditional screening methods that post-process candidate challenges, DCGS directly generates stable challenges by design, eliminating the need for candidate pools and improving generation efficiency.  Results and Discussions  Performance evaluations of DCGS are conducted under varying noise intensities. At a noise intensity of 0.3 (maximum practical level), the reliability of DCGS-generated challenges remains at 100% (Fig. 2). In terms of generation efficiency, DCGS consumes only 0.017 seconds to generate 10,000 challenges (Table 4). For response uniformity, DCGS achieves a value of 50.02% (Table 4). For uniqueness, it reaches 50.46% (Table 4). These two key metrics are both close to the ideal theoretical value of 50%. Security analysis shows that the average bit entropy of DCGS-generated challenges is 0.9807 (Fig. 3), and the conditional entropy is 0.9878, only 0.0023 lower than that of random challenges (0.9901).  Conclusions  This paper proposes a delay-constrained challenge generation strategy for APUF, aiming to address the problems of inconsistent responses, low generation efficiency, and high hardware resource consumption of traditional schemes in high-noise environments. By modeling the path delay characteristics of APUF using LR and integrating a prefix initialization mechanism with a bit-wise extension mechanism, the strategy ensures that the generated challenges meet the preset delay difference threshold range. Through this method, the DCGS achieves high reliability, high efficiency, and good response uniformity without increasing hardware overhead. Experimental results show that DCGS can effectively enhance the reliability of APUF in complex environments, providing strong technical support for secure applications in resource-constrained devices.
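A sketch of the prefix-then-extension idea is given below under two assumptions: the delay difference follows the standard linear APUF model $\Delta = \boldsymbol{w}\cdot\boldsymbol{\varphi}$ with the usual parity transform $\varphi_j=\prod_{k\ge j}(1-2c_k)$, and the threshold rule is read as keeping $|\Delta|$ away from the unstable zero region (the paper's exact constraint is not reproduced here). The margin value and prefix length are illustrative:

```python
import numpy as np

def generate_challenge(w, margin, prefix_len=8, seed=0):
    """DCGS-style sketch: random prefix initialization, then bit-wise
    extension in the parity-feature domain phi in {-1,+1}^n so the
    cumulative delay difference w . phi grows away from zero, followed
    by inversion of the parity transform to recover challenge bits."""
    rng = np.random.default_rng(seed)
    n = len(w)
    phi = np.empty(n)
    phi[:prefix_len] = rng.choice([-1.0, 1.0], prefix_len)  # prefix initialization
    acc = phi[:prefix_len] @ w[:prefix_len]
    for i in range(prefix_len, n):
        # bit-wise extension: choose the sign that increases |acc| by |w[i]|
        phi[i] = 1.0 if acc * w[i] >= 0 else -1.0
        acc += phi[i] * w[i]
    assert abs(acc) >= margin, "margin infeasible for this weight vector"
    # invert phi_j = prod_{k>=j}(1 - 2 c_k):  (1 - 2 c_j) = phi_j / phi_{j+1}
    ratios = phi / np.append(phi[1:], 1.0)
    return ((1.0 - ratios) / 2.0).astype(int)
```

Working in the feature domain makes the delay constraint a simple running sum, and varying the random prefix yields distinct challenges that all satisfy the constraint, which mirrors why DCGS needs no candidate pool.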
Review of Non-invasive Brain–Computer Interfaces for Continuous Motor Control
XU Minpeng, JIA Leyi, ZHOU Xiaoyu, CHEN Enze, WANG Junyang, XIAO Xiaolin, MING Dong
Available online  , doi: 10.11999/JEIT260011
Abstract:
  Significance  Continuous motor control is a fundamental capability for brain–computer interface (BCI) systems aiming at natural and efficient interaction with external devices. Compared with discrete command-based control, continuous control enables real-time and smooth regulation of motion parameters such as position, velocity, and trajectory, which is essential for applications including assistive mobility, neurorehabilitation, robotic manipulation, and immersive human–machine interaction. Although invasive BCIs have demonstrated high-performance continuous control benefiting from high-quality neural recordings, their reliance on surgical implantation restricts long-term use and large-scale deployment. Therefore, a systematic review of non-invasive continuous motor control BCI technologies is necessary to clarify research progress, methodological characteristics, and remaining challenges.  Progress  This review summarizes advances in non-invasive continuous motor control BCIs from four closely related aspects: control paradigms, decoding algorithms, application scenarios, and performance evaluation. At the paradigm level, motor imagery, steady-state visual evoked potentials, event-related potentials, and hybrid paradigms have been investigated to support continuous control through sustained intention modulation, dynamic stimulus encoding, and hierarchical or shared-control strategies. Regarding decoding algorithms, two major frameworks are identified: motion parameter mapping methods and motion parameter regression methods. Motion parameter mapping methods achieve continuous output by temporally integrating discrete classification results or mapping them to velocity or state variables, whereas motion parameter regression methods directly establish relationships between EEG features and continuous kinematic parameters. 
In recent studies, nonlinear models and deep learning approaches have been increasingly incorporated to improve robustness under non-stationary EEG conditions. At the application level, non-invasive continuous control has evolved from two-dimensional cursor tasks to more practical scenarios such as wheelchair navigation, robotic arm manipulation, unmanned systems, and virtual or augmented reality environments. In addition, existing studies evaluate continuous control performance using both objective metrics (e.g., trajectory error, task success rate, and information transfer rate) and subjective measures (e.g., workload and user experience), reflecting diverse experimental designs and control objectives.  Conclusions  Overall, existing studies demonstrate that non-invasive BCIs are capable of supporting continuous motor control; however, current research remains at a stage where diverse methods coexist without a unified framework. At the paradigm level, different approaches vary in their ability to reliably elicit and sustain continuous motor intentions. In terms of decoding algorithms, both motion parameter mapping and regression methods face limitations in robustness, generalization, and long-term stability due to the non-stationary nature of EEG signals. At the application level, many studies are still constrained to specific tasks and controlled environments, and the transferability of continuous control strategies to complex real-world scenarios requires further validation. Moreover, the lack of standardized evaluation protocols hinders direct comparison and systematic optimization across studies.  Prospects   Future research should focus on improving the stability and reliability of continuous control paradigms, enhancing decoding robustness under realistic EEG conditions, and strengthening the alignment between control strategies and application requirements. 
Establishing unified evaluation frameworks that integrate both objective and subjective indicators will be critical for methodological convergence and fair comparison. With continued advances, non-invasive continuous motor control BCIs are expected to play an increasingly important role in assistive technologies, rehabilitation systems, and advanced human–machine interaction.
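The motion parameter regression framework identified in the review can be illustrated with a toy linear decoder. This sketch is not taken from any reviewed study: the "EEG band-power features" and "cursor velocity" targets are hypothetical, and ridge regression stands in for the broader family of regression decoders that map features directly to continuous kinematic parameters.

```python
def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression, w = (X^T X + lam*I)^-1 X^T y, solved
    with Gaussian elimination so the sketch needs only the stdlib. Rows of
    X are feature vectors (e.g. band powers); y holds kinematic targets."""
    d = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(d)]
    for col in range(d):                       # elimination with pivoting
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d                              # back-substitution
    for i in reversed(range(d)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, d))) / A[i][i]
    return w

def predict(w, x):
    """Continuous output: a velocity estimate for one feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

Mapping methods, by contrast, would first classify each window into a discrete command and then integrate those commands over time; the regression decoder above skips that intermediate step entirely.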
Rotatable-Antenna-Aided Near-Field Wideband Integrated Sensing and Communication Systems: Hybrid Beamforming Design
XU Hongbo, MO Minghui, XIN Wei, WANG Shuli, WANG Ji, LI Xingwang, ZHENG Le
Available online  , doi: 10.11999/JEIT260023
Abstract:
  Objective  With the rapid evolution of sixth-generation (6G) mobile communication systems, integrated sensing and communication (ISAC) has emerged as a key enabling paradigm for simultaneously supporting high-precision sensing and high-rate data transmission under limited spectrum resources. In near-field wideband scenarios, however, ISAC systems suffer from several fundamental challenges, including pronounced near-field effects and wideband beam splitting. These impairments significantly degrade both communication throughput and sensing reliability, especially when conventional fixed-orientation antenna arrays and phase-shifter-based beamforming architectures are employed. Due to their limited spatial adaptability and inherent frequency-independent characteristics, traditional architectures are unable to fully exploit the spatial–frequency degrees of freedom available in near-field wideband channels. Therefore, it is of great importance to develop a new antenna architecture and beamforming framework that can effectively mitigate beam splitting, enhance energy focusing capability, and maintain robustness across wide bandwidths. To address these challenges, a rotatable-antenna-assisted near-field wideband ISAC architecture is investigated, aiming to improve system sum-rate performance while satisfying sensing-related constraints.  Methods  A novel near-field wideband ISAC system architecture assisted by rotatable antennas (RAs) is proposed. By introducing mechanically or electronically adjustable antenna boresight directions, additional angular degrees of freedom are provided at the antenna element level, enabling flexible spatial coverage and adaptive energy focusing. 
Furthermore, a True Time Delay (TTD)-based hybrid beamforming architecture is adopted, which provides frequency-dependent phase shifts to compensate for the frequency-independent characteristics of conventional phase shifters, thereby ensuring consistent beam focusing across all subcarriers and effectively suppressing wideband beam splitting. The channel is described by a spherical-wave near-field model that explicitly incorporates propagation distance, angular information, and the orientation gain of the rotatable antennas, so that the array response depends jointly on angle and distance and the limitations of the planar-wave assumption are overcome. On this basis, a joint optimization problem is formulated to maximize the system sum rate while simultaneously considering transmit power constraints, sensing power thresholds, and physical limitations on antenna rotation angles. To address the formulated non-convex optimization problem, a penalty-based fully digital approximation (PBFDA) algorithm is developed. In each iteration, the orientations of the rotatable antennas are first optimized using a particle swarm optimization (PSO) method to enhance the weighted channel gain. Then, with the antenna orientations fixed, a reduced-dimensional formulation combined with successive convex approximation (SCA) is employed to solve the fully digital beamforming problem. Finally, a block coordinate descent (BCD) algorithm based on manifold optimization is adopted to jointly optimize the analog beamformer, digital beamformer, and TTD units, thereby progressively approximating the fully digital solution, with the three components iteratively updated until convergence is achieved (Algorithm 1–Algorithm 4).  Results and Discussions  Simulation results demonstrate the effectiveness and superiority of the proposed RA-assisted near-field wideband ISAC framework. 
The convergence behavior of the proposed PBFDA optimization algorithm indicates that the objective function monotonically increases and stabilizes within a limited number of iterations, confirming its numerical stability and efficiency (Fig. 2). Compared with conventional fixed-antenna architectures, the proposed RA-based scheme achieves a substantial improvement in system sum rate under the same transmit power constraints (Fig. 3). Furthermore, the impact of system bandwidth on spectral efficiency is investigated. As the system bandwidth increases, TTD-based hybrid beamforming schemes experience weakened frequency-dependent compensation capability due to the limited number of TTD units and the constrained maximum delay, which exacerbates wideband beam splitting and leads to a degradation in spectral efficiency. In contrast, the optimal fully digital beamforming approach enables accurate control over each subcarrier, so that its spectral efficiency remains essentially constant with bandwidth (Fig. 4). The trade-off between communication performance and sensing power is also evaluated. As the sensing power threshold increases, the achievable sum rate decreases for all schemes, while the proposed method consistently outperforms the others (Fig. 5). The effects of antenna array size, antenna directivity factor, and maximum rotation angle are further investigated. Increasing the number of antennas improves spectral efficiency due to higher array gain, with the RA-based system consistently outperforming benchmark schemes (Fig. 6). As the antenna directivity factor increases, the RA system leverages adaptive orientation to focus energy toward desired users, achieving continuous performance gains, whereas fixed-orientation and isotropic schemes degrade (Fig. 7). Moreover, enlarging the allowable rotation range provides greater spatial alignment flexibility and further improves system performance (Fig. 8). 
Overall, the results demonstrate that the proposed architecture enhances near-field energy focusing and achieves performance close to fully digital beamforming with lower hardware complexity.  Conclusions  A rotatable-antenna-assisted near-field wideband ISAC system with a TTD-based fully connected hybrid beamforming architecture is investigated. By jointly exploiting antenna rotation and true time delay, the proposed framework effectively mitigates near-field effects and wideband beam splitting. A penalty-based fully digital approximation (PBFDA) optimization algorithm is developed to address the resulting highly non-convex problem. Numerical results demonstrate that the proposed scheme significantly improves system sum rate under sensing constraints and approaches the performance of fully digital beamforming, validating its effectiveness for near-field wideband ISAC applications.
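The orientation-search step inside each PBFDA iteration can be sketched with a generic particle swarm optimizer. Everything below is illustrative: the inertia and acceleration coefficients are standard textbook values rather than the paper's settings, and the toy "orientation gain" objective (a sum of cosines of misalignment angles) merely stands in for the weighted channel gain of the actual system model.

```python
import random, math

def pso_maximize(f, dim, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    attraction toward both. Positions are clamped to the rotation bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

def toy_orientation_gain(angles, targets=(0.3, -0.5, 1.0)):
    # Hypothetical surrogate for the weighted channel gain: each antenna's
    # gain falls off with the misalignment from its best-serving direction.
    return sum(math.cos(a - t) for a, t in zip(angles, targets))
```

In the full algorithm this swarm search would be followed by the SCA-based fully digital beamforming step and the manifold-based BCD update of the analog, digital, and TTD components, repeated until convergence.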
A Triple Modular Redundancy Voter Insertion Algorithm Utilizing Stagnation-Aware Probabilistic Reordering
LIU Zhaoting, LIU Peng
Available online  , doi: 10.11999/JEIT250825
Abstract:
  Objective  With the rapid development of integrated circuit technology, performance degradation and failure of electronic devices in high-energy particle radiation environments become increasingly prominent. High reliability is required in applications such as aerospace, the nuclear industry, petroleum exploration, and deep-sea detection. Among the available reliability enhancement techniques, Triple Modular Redundancy (TMR) is widely regarded as one of the most effective methods. In TMR, three identical copies of a digital circuit operate in parallel with the same input, and the correct output is obtained through majority voting when one copy fails. Common implementation methods include fine-grained TMR, system-level partitioning, and state synchronization. State synchronization is a key step in TMR-based radiation hardening because it restores registers to the correct state after a fault. This process is achieved by inserting synchronization voters, but the resulting resource overhead is often high. This study proposes a new synchronization voter insertion algorithm to reduce hardware cost. The objective is to develop and validate an algorithm that avoids exponential runtime complexity and, relative to existing methods, reduces the number of required synchronization voters.  Methods  After circuit preprocessing, the synchronization voter insertion task is formulated as a Feedback Vertex Set Problem (FVSP). The memory circuit is first extracted from the digital circuit to exclude nodes outside the candidate range and reduce circuit size. A Feedback Vertex Set (FVS) is then solved to identify the flip-flop nodes at which synchronization voters should be inserted. By inserting voters at the outputs of these flip-flops, all cycles containing memory elements are broken, and state synchronization is ensured. In implementation, a Simulated Annealing (SA) algorithm is used. 
Topological ordering is adopted to avoid direct loop detection and to reduce the time complexity of cycle checking. To improve search efficiency and solution quality, a Stagnation-Aware Probabilistic Reordering (SAPR) scheme is incorporated into the SA framework. A priority-based mechanism is applied during topological reordering to reduce conflicts and false conflict judgments in critical search steps. The candidate-set update strategy is also refined so that insertion positions with the fewest conflicts are selected in the topological ordering. When the FVS is not improved over multiple iterations, reordering is triggered with a certain probability to balance computational cost and the ability to escape local optima.  Results and Discussions  The quality of the FVS obtained by the SAPR-SA-FVSP algorithm is evaluated by comparison with three other methods. The proposed method shows higher probabilities of achieving the minimum average, best, and worst values, which indicates better overall solution quality (Table 3). Furthermore, SAPR-SA-FVSP shows a smaller mean standard deviation, which indicates better stability. The average standard deviations over all test graphs are 0.596 34 for SA-FVSP, 0.667 55 for the Nonuniform Neighborhood Sampling (NNS)-based SA method, 0.651 93 for dynamic-threshold reordering, and 0.562 17 for SAPR-SA-FVSP, confirming the superior stability of the proposed method (Table 4). Using the ISCAS89 and ITC'99 benchmark circuits, the proposed voter insertion algorithm is further compared with the critical path-based voter insertion algorithm and the highest-fanout flip-flop algorithm. Across all test cases, SAPR-SA-FVSP yields the smallest number of synchronization voters. The maximum reduction reaches 78.88% relative to the critical path-based method and 74.05% relative to the highest-fanout flip-flop algorithm (Table 5). The proposed algorithm also shows better speed and robustness. It runs successfully on all test cases without failure. 
The average execution times on the circuits for which all three algorithms complete successfully are 9 880.19 ms for the critical path-based algorithm, 9 625.04 ms for the highest-fanout flip-flop algorithm, and 3 389.73 ms for the proposed algorithm.  Conclusions  The proposed SAPR strategy improves the conventional SA-FVSP method and yields better solution quality and greater stability. On this basis, a resource-efficient synchronization voter insertion algorithm is proposed for restoring correct register states in TMR-hardened digital circuits. The algorithm divides the task into memory-circuit extraction and FVSP solving. Its completeness and efficiency are demonstrated theoretically, and substantial reductions in synchronization voter insertion are verified on benchmark circuits relative to the critical path-based and highest-fanout flip-flop methods. The proposed method therefore provides an effective approach for reducing hardware overhead while maintaining high reliability in TMR hardening of digital circuits.
Privacy-preserving Computation in Trustworthy Face Recognition: A Comprehensive Survey
YUAN Lin, WU Yanshang, ZHANG Liyuan, ZHANG Yushu, WANG Nannan, GAO Xinbo
Available online  , doi: 10.11999/JEIT251063
Abstract:
  Significance   With the widespread deployment of face recognition in Cyber-Physical Systems (CPS), including smart cities, intelligent transportation, and public safety infrastructures, privacy leakage has become a central concern for both academia and industry. Unlike many biometric modalities, face recognition operates in highly visible and loosely controlled environments, such as public spaces, consumer devices, and online platforms, where facial image acquisition is easy and pervasive. This exposure makes facial data especially vulnerable to unauthorized collection and misuse. Insufficient protection may lead to identity theft, unauthorized tracking, and deepfake generation, which threaten individual rights and reduce trust in digital systems. Therefore, facial data protection is not only a technical issue but also a significant societal and ethical challenge. This work integrates fragmented research across computer vision, cryptography, and privacy-preserving computation. It provides a unified perspective that guides the development of trustworthy face recognition ecosystems that balance usability, regulatory compliance, and public trust.  Contributions   This paper systematically reviews recent advances in privacy-preserving computation for face recognition, covering both theoretical foundations and practical implementations. The architecture and application pipeline of face recognition systems are first examined, and privacy risks at each stage are identified. At the data collection stage, unauthorized or covert capture of facial images introduces immediate risks of misuse. During model training and deployment, gradient leakage, membership inference, and overfitting may expose sensitive information about individuals contained in training data. At the inference stage, adversaries may reconstruct facial images, perform unauthorized recognition, or associate identities across datasets, which compromises anonymity. 
To address these threats, existing approaches are classified into four major privacy-preserving paradigms: data transformation, distributed collaboration, image generation, and adversarial perturbation. Within these paradigms, ten representative techniques are analyzed. Cryptographic computation, including homomorphic encryption and secure multiparty computation, enables recognition without revealing raw data but often introduces substantial computational overhead. Frequency-domain learning converts images into spectral representations to suppress identifiable details while retaining discriminative features. Federated learning decentralizes model training and reduces centralized data exposure, although it remains vulnerable to gradient inversion attacks. Image generation techniques, such as face synthesis and virtual identity modeling, reduce reliance on real facial data during training and evaluation. Differential privacy introduces calibrated noise to provide statistical privacy guarantees, whereas face anonymization obscures identifiable visual traits. Template protection and anti-reconstruction mechanisms defend stored facial features against reverse engineering. Adversarial privacy protection introduces imperceptible perturbations that interfere with machine recognition yet preserve human visual perception. Several representative studies in each category are further examined. Commonly used evaluation datasets are summarized. A comparative analysis is conducted across multiple dimensions, including face recognition performance, privacy protection effectiveness, and practical usability. This analysis systematically identifies the strengths and limitations of different types of methods.   Prospects   Several research directions are identified for future work. A primary challenge is to achieve a dynamic balance between privacy protection and system utility. 
Excessive protection may degrade recognition accuracy, whereas insufficient safeguards expose users to unacceptable risks. Adaptive mechanisms that adjust privacy levels according to context, task requirements, and user consent are therefore required. Another promising direction is the development of inherently privacy-aware recognition paradigms, such as feature representations that minimize identity leakage by design. The establishment of standardized evaluation frameworks for privacy risk and usability is also essential. Such frameworks would enable reproducible benchmarking and facilitate real-world deployment. The emergence of generative foundation models, including diffusion models and large multimodal models, further changes the research landscape. These models enable synthetic data generation and controllable identity representations. However, they also enable more advanced attacks, such as high-fidelity face reconstruction and identity impersonation. Addressing these dual effects requires interdisciplinary collaboration across computer vision, cryptography, law, and ethics, supported by appropriate regulation and continued methodological development.  Conclusions  This paper provides a comprehensive reference for researchers and practitioners engaged in trustworthy face recognition. By integrating advances from multiple disciplines, it promotes the development of effective facial privacy protection technologies and supports the secure, reliable, and ethically responsible deployment of face recognition in practical scenarios. The long-term goal is to establish face recognition as a trustworthy component of CPS that balances functionality, privacy protection, and societal trust.
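Among the ten techniques surveyed, differential privacy has the simplest mechanical core, which a short sketch can make concrete. The Laplace mechanism below is a generic textbook construction, not a method from any reviewed paper: the face "embedding", the sensitivity bound, and the epsilon value are illustrative, and a practical template-protection scheme would also have to bound the embedding's sensitivity and manage the privacy budget across queries.

```python
import random, math

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize_template(embedding, epsilon, sensitivity=1.0, seed=0):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to each
    coordinate of a (bounded) face embedding. Smaller epsilon means a
    stronger privacy guarantee and larger distortion."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale, rng) for x in embedding]
```

The utility–privacy tension discussed above is visible directly in the `scale = sensitivity / epsilon` line: tightening the privacy budget inflates the noise and degrades the template's recognition utility.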
ReXNet: A Trustworthy Framework for Space-air Security Integrating Uncertainty Quantification and Explainability
LIU Zhuang, CHEN Yuran, ZHANG Jiatong, JIANG Yujing, WANG Xuhui
Available online  , doi: 10.11999/JEIT251159
Abstract:
  Objective  The Space-Air-Ground Integrated Network (SAGIN) has emerged as a strategic infrastructure for national development. However, its security vulnerabilities are increasingly evident. The physical, network, and application layers of SAGIN face different security challenges that require targeted protection strategies. Aerospace scenarios require both high predictive accuracy and transparent decision making. Therefore, more robust, reliable, and interpretable intelligent methods are needed to support network security and system trustworthiness.  Methods  A detection framework is proposed that integrates Uncertainty Quantification (UQ) and eXplainable Artificial Intelligence (XAI). In the front-end stage, a Bayesian deep learning method based on Monte Carlo Dropout is adopted to enable probabilistic prediction modeling. This approach separates and quantifies epistemic uncertainty and aleatoric uncertainty, which improves model reliability. In the back-end stage, SHAP and LIME are applied to provide feature attribution for each prediction, improving model interpretability and transparency. Moreover, the intermediate layer of the framework allows flexible replacement of deep learning backbones, enabling adaptation to different space and aerospace application scenarios.  Results and Discussions  Extensive experiments were conducted on representative space-air security datasets, including UAV swarm fault detection, ADS-B injection attacks, and network fraud detection. The experimental results show that the proposed framework achieves high-precision anomaly detection. It also evaluates prediction confidence and identifies unknown samples outside the model knowledge boundary. In addition, the framework generates logically consistent and traceable explanations for model decisions, which improves interpretability and operational reliability. 
The results indicate that the combined use of UQ and XAI improves the robustness and trustworthiness of intelligent models in aerospace security applications.  Conclusions  This study improves the reliability and transparency of anomaly detection models in the space-air domain. It reflects a transition in artificial intelligence applications from focusing only on prediction accuracy to emphasizing system trustworthiness. Future work will promote practical deployment of the framework. The focus will include real-time processing capability, lightweight implementation, and operation in resource-constrained environments such as onboard and on-orbit systems. These efforts support more secure, autonomous, and efficient operation of SAGIN and contribute to the sustainable development of future space-air information networks.
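The front-end uncertainty step of the framework, Monte Carlo Dropout, can be sketched in a few lines. This is a toy one-layer model, not the ReXNet backbone: the weights, dropout rate, and number of stochastic passes are illustrative, and only the epistemic component (the spread across passes) is shown.

```python
import random, statistics

def mc_dropout_predict(x, weights, p_drop=0.3, passes=100, seed=0):
    """Monte Carlo Dropout: keep dropout active at inference and run many
    stochastic forward passes. The mean of the outputs is the prediction;
    their spread approximates epistemic uncertainty (toy linear model with
    inverted-dropout scaling so the expectation is unchanged)."""
    rng = random.Random(seed)
    outs = []
    for _ in range(passes):
        y = 0.0
        for w in weights:
            if rng.random() >= p_drop:          # unit survives this pass
                y += w * x / (1.0 - p_drop)
        outs.append(y)
    mean = statistics.mean(outs)
    epistemic = statistics.pstdev(outs)         # spread across passes
    return mean, epistemic
```

A sample far from the training distribution would drive this spread up, which is how the framework flags unknown inputs outside the model's knowledge boundary before handing the prediction to the SHAP/LIME explanation stage.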
A Review of Causal Feature Learning in Deep Learning Image Classification Models
WANG Xiaodong, JIANG Ling, LI Huihui, WANG Buhong
Available online  , doi: 10.11999/JEIT250738
Abstract:
  Significance  Deep learning is built on statistical correlations rather than causal relationships. As a result, such models face major challenges in generalization, interpretability, and stability. Unlike human cognition, which depends mainly on discovering and using causal relationships, current deep learning models remain at the bottom of the Pearl Causal Hierarchy (PCH). Therefore, integrating causal inference into deep learning has become a major research goal. As a core branch of deep learning, image classification models, represented by Convolutional Neural Networks (CNNs), show these limitations particularly clearly. Thus, causal inference is urgently needed to address this bottleneck. Among the available approaches for incorporating causal inference into these models, Causal Feature Learning (CFL), a framework that combines unsupervised machine learning with causal inference, shows clear advantages. Previous studies have confirmed that causal relationships are implicitly embedded in the pixel information of input images in image classification tasks. According to the Causal Coarsening Theorem (CCT), causal knowledge can be obtained from observed image data at low experimental cost. In classification tasks, the optimal solution is given by the Markov Boundary (MB) of the causal Bayesian network for the class variable. These theories strongly support efforts to connect deep image classification models with causal inference through CFL. Overall, the importance of CFL has become increasingly evident, and it is regarded as a promising breakthrough direction for next-generation models.  Progress  This paper provides a comprehensive review of CFL in deep learning image classification models from three core aspects: statistical causal inference theory, correlation analysis methods, and CFL implementations. 
First, the relevant definitions of CFL and its two mainstream statistical implementation frameworks are introduced, including causal discovery based on the Structural Causal Model (SCM) and causal effect estimation based on the Rubin Causal Model (RCM). Second, correlation analysis methods for deep learning image classification models, which are located at the threshold of the PCH, are systematically summarized from three perspectives: forward, backward, and horizontal. Third, with these auxiliary tools as a foundation, progress in CFL for image classification is classified into four main directions: Causal Feature Discovery (CFD), Causal Feature Effect Estimation (CFEE), Causal Representation Learning (CRL), and Spurious Correlation Removal (SCR). CFD is based on the SCM framework and aims to derive confounding-free causal graphs through explicit or implicit causal intervention analysis of image data or models. Under the RCM framework, CFEE uses observed image data to quantitatively evaluate the causal effects of features, while addressing the lack of counterfactual samples and confounding bias. CRL focuses on selecting or extracting high-dimensional features from image data to learn causal relationships and identify low-dimensional cross-image representations. SCR removes non-causal features from images and preserves causal features through different methods. In addition, available toolkits, top conference resources, and academic organizations are listed. This paper also discusses key technical issues and future research directions.  Conclusions  This review summarizes the technological development of CFL. Overall, substantial progress has been made, although challenges remain in different research directions. CFD has the advantage of following the basic logic of causal theory, with clear and simple structures that are easy to understand. However, CFD still suffers from immature processing methods for high-dimensional image data and limited generalization ability. 
CFEE can effectively distinguish causal features from confounding features. Its evaluation results are closer to real decision-making logic and show strong general applicability. Common limitations of CFEE include the requirement for observable confounders, strong dependence on causal assumptions, and limited computational efficiency. CRL offers greater flexibility in representation dimensions and can identify causal factors that drive classification while excluding non-causal factors. Its main unresolved problems include generalization bias, factor coupling, prior dependence, weak evaluation, and high cost. SCR is highly targeted but has poor generalization ability. From a broader perspective, CFL should not be restricted to specific methods. Any method that aims to construct causal relationships from microvariables, such as image pixels, to causal macrovariables, such as global semantics, can be considered part of this field. Therefore, CFL remains an open research topic.  Prospects   The goal of causal inference is to move beyond correlation and clarify the causal relationships among variables by designing more rigorous experiments or using more advanced statistical methods. This requires deeper assumptions about feature relationships and broader exploration of underlying causal chains. Both remain highly challenging and are likely to become major focuses of future research in this field. 
To address the technical challenges in CFL, this paper proposes the following future directions: (1) unifying construction paradigms and establishing standards for image-based SCMs to improve the standardization and consistency of causal discovery; (2) developing RCM methods supported by generative artificial intelligence to address sample scarcity in causal effect estimation; (3) reforming models to learn new image causal representations, thereby fundamentally addressing the inherent limitations of CNNs in CFL; and (4) integrating spurious correlation analysis with reinforcement learning, and using reinforcement learning to equip deep learning image classification models with meta-learning capability for causal exploration. It can be expected that, once these key issues in CFL are resolved, the accuracy, generalization, interpretability, and stability of deep learning image classification models will improve substantially.
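The intervention-based reasoning behind CFEE can be made concrete with a deliberately tiny sketch. This is a generic do-style masking estimate, not a method from the reviewed literature: `model`, the pixel `region`, and the `baseline` value are all hypothetical, and confounding between regions is ignored entirely.

```python
def average_causal_effect(model, images, region, baseline=0.0):
    """Toy do-intervention: replace a pixel region with a baseline value
    (do(region := baseline)) and average the resulting change in the
    model's score. A crude stand-in for causal feature effect estimation;
    real CFEE methods must also handle confounders and missing
    counterfactual samples."""
    effects = []
    for img in images:
        patched = [row[:] for row in img]       # copy, then intervene
        for (r, c) in region:
            patched[r][c] = baseline
        effects.append(model(img) - model(patched))
    return sum(effects) / len(effects)
```

A region whose masking barely moves the score behaves like a non-causal (potentially spurious) feature, which is the intuition SCR methods exploit when removing such features while preserving causal ones.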
Optimized Implementation of Low-Depth Lightweight S-Boxes
FENG Zixi, LIU Yupeng, DOU Guowei, LIU Chengle
Available online  , doi: 10.11999/JEIT250690
Abstract:
  Objective  With the rapid development and widespread deployment of the Internet of Things (IoT), embedded systems, and mobile computing devices, secure communication and data protection on resource-constrained platforms have become a central focus in information security. These devices are typically characterized by severe limitations in computational capability, storage capacity, and energy consumption. These limitations make traditional cryptographic algorithms inefficient or even infeasible in such environments. In response, lightweight cryptographic algorithms have been proposed as an effective class of solutions. Their primary objective is to achieve security levels comparable to those of traditional algorithms while significantly reducing hardware and computational overhead through algorithmic simplification and structural optimization. These algorithms are designed to operate efficiently under tight resource constraints and are particularly suitable for applications such as sensor networks, smart cards, RFID systems, and wearable devices. From the perspective of hardware implementation, the design of lightweight cryptographic algorithms must consider multiple performance metrics, including throughput, latency, power efficiency, chip area, and circuit depth. Among these metrics, chip area and circuit depth are particularly critical because they directly affect production cost and computational speed. The Substitution-box (S-Box), as the core nonlinear component that provides confusion in most symmetric encryption schemes, plays a decisive role in determining both the security and implementation efficiency of the entire cipher. Therefore, efficient methods for realizing low-area and low-depth S-Boxes are of fundamental importance for the design of secure and practical lightweight cryptographic systems.  
Methods  In this work, a novel S-Box optimization algorithm based on Boolean satisfiability (SAT) solving is proposed to optimize two key hardware metrics simultaneously: logic area and circuit depth. A circuit model with depth k and width w is constructed for this purpose. Under a given area constraint, SAT-solving techniques are used to determine whether the circuit model can implement the target S-Box. By iteratively adjusting the circuit depth, width, and area parameters, an optimized S-Box implementation is obtained. The method is specifically developed for 4-bit S-Boxes, which are widely used in many lightweight block ciphers, and it provides implementations that are highly efficient in both structural compactness and computational depth. This dual optimization approach reduces hardware cost while maintaining low latency, making it particularly suitable for scenarios in which both performance and energy efficiency are critical. The proposed method begins by transforming the S-Box implementation problem into a formal SAT problem, which enables the use of powerful SAT solvers to exhaustively explore possible logic-level representations. In this transformation, a diverse set of logic gates, including 2-input, 3-input, and 4-input gates, is used to construct flexible logic networks. To enforce area and depth constraints, arithmetic operations such as binary addition and comparator logic are encoded into SAT-compatible Boolean constraints, which guide the solver toward low-area and low-depth solutions. To further accelerate the solving process and avoid redundant search paths, symmetry-breaking constraints are introduced. These constraints eliminate logically equivalent but structurally different representations, thereby significantly reducing the size of the solution space. The CaDiCaL SAT solver, known for its speed and efficiency in handling large-scale SAT problems, is used to compute optimized S-Box implementations that minimize both depth and area. 
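The two metrics bounded by the solver's constraints can be made concrete with a small evaluation routine. The sketch below (with an illustrative gate library and assumed Gate-Equivalent costs, not the paper's actual cell library or SAT encoding) checks whether a candidate gate netlist realizes a target 4-bit S-Box and reports its area and depth:

```python
# Illustrative gate library with assumed area costs in Gate Equivalents (GE);
# real standard-cell costs depend on the target technology library.
GE = {"xor": 2.33, "and": 1.33, "or": 1.33, "nand": 1.0, "nor": 1.0}
OPS = {"xor": lambda a, b: a ^ b, "and": lambda a, b: a & b,
       "or": lambda a, b: a | b, "nand": lambda a, b: 1 - (a & b),
       "nor": lambda a, b: 1 - (a | b)}

def evaluate(netlist, outputs, x):
    """Run a gate netlist on 4-bit input x; signals 0..3 are the input bits."""
    sig = [(x >> i) & 1 for i in range(4)]      # LSB-first input bits
    depth = [0, 0, 0, 0]                        # logic depth of each signal
    for op, a, b in netlist:                    # gates in topological order
        sig.append(OPS[op](sig[a], sig[b]))
        depth.append(max(depth[a], depth[b]) + 1)
    y = sum(sig[o] << i for i, o in enumerate(outputs))
    return y, max(depth[o] for o in outputs)

def check(netlist, outputs, sbox):
    """Return (area_GE, depth) if the netlist realizes sbox, else None."""
    area = sum(GE[op] for op, _, _ in netlist)
    depth = 0
    for x in range(16):
        y, depth = evaluate(netlist, outputs, x)
        if y != sbox[x]:
            return None
    return area, depth
```

In the actual method, equivalent checks are expressed as Boolean constraints so that the CaDiCaL solver searches all netlists within given depth and area bounds, rather than testing candidates one by one.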
The proposed approach not only generates efficient implementations but also provides a general modeling framework that can be extended to other logic-synthesis problems in cryptographic hardware design.  Results and Discussions  To validate the effectiveness of the proposed optimization method, a comprehensive set of experiments is conducted on 4-bit S-Boxes from several representative lightweight block ciphers, including Joltik, Piccolo, Rectangle, Skinny, Lblock, Lac, Midori, and Prøst. The results show that the method consistently produces high-quality implementations that are competitive with, or superior to, existing state-of-the-art results in both chip area and circuit depth. Specifically, for the S-Boxes of Joltik and Piccolo, as well as those used in Skinny and Rectangle, the generated implementations match the best known results in both metrics, indicating that the method can successfully reproduce optimal or near-optimal designs. In the cases of Lblock and Lac, although the logic area remains similar to previous results, the circuit depth is significantly reduced from 10 to 3, representing a substantial improvement in processing latency and suitability for real-time applications. For the inverse S-Box of the Rectangle cipher, the proposed implementation achieves the same circuit depth as previous designs but reduces the area from 24.33 Gate Equivalents (GE) to 17.66 GE, yielding a more compact and efficient realization. The optimization results for the Midori S-Box further confirm the effectiveness of the method: the depth is reduced from 4 to 3, and the area is reduced from 20.00 GE to 16.33 GE. For the Prøst cipher’s S-Box, two alternative implementations are presented to illustrate the trade-off between area and depth. The first achieves a depth of 4 with an area of 22.00 GE, matching the best known depth but at a higher area cost. The second increases the depth to 5 but reduces the area significantly to 13.00 GE. 
These results show that the method supports flexible optimization under different design constraints and also provides deeper insight into the complexity and trade-offs of S-Box implementation.  Conclusions  This paper presents a SAT-based method for jointly optimizing S-Box hardware implementations in terms of area and circuit depth. By modeling S-Box realization as a satisfiability problem and using advanced constraint encoding, multi-input logic gates, and symmetry-breaking techniques, the method effectively reduces hardware complexity while maintaining or improving depth performance. Extensive experiments on various 4-bit S-Boxes show that the proposed approach matches or outperforms existing results, particularly in reducing circuit depth and improving logic compactness. This makes it well suited to lightweight cryptographic systems operating under strict constraints on silicon area, speed, and energy consumption. Despite these advantages, the method still has limitations. Although it achieves optimal or near-optimal results for 4-bit S-Boxes, scalability to larger instances, such as 5-bit or 8-bit S-Boxes, remains challenging because of the exponential growth of the search space and solving time. As model complexity increases, solving becomes computationally expensive and may fail to converge in practice. Future work will focus on improving modeling efficiency and solver performance through refined constraint generation, stronger pruning strategies, and heuristic-guided search, with the goal of extending the method to more complex S-Boxes and other nonlinear components in lightweight and post-quantum cryptographic systems.
Vision-Guided and Force-Controlled Method for Robotic Screw Assembly
ZHANG Chunyun, MENG Xintong, TAO Tao, ZHOU Huaidong
Available online  , doi: 10.11999/JEIT251193
Abstract:
  Objective  With the rapid development of intelligent manufacturing and industrial automation, robots are increasingly applied to high-precision assembly tasks, especially screw assembly. However, current systems still face several challenges. The pose of assembly objects is often uncertain, which makes initial localization difficult. Small features such as threaded holes often appear blurred in images and are difficult to identify accurately. Conventional vision-based open-loop control may also cause assembly deviation or jamming. This study proposes a vision–force cooperative method for robotic screw assembly. The method establishes a closed-loop assembly system that covers coarse positioning and fine alignment. A semantic-enhanced 6D pose estimation algorithm and a lightweight hole detection model are used to improve perception accuracy. Force-feedback control then adjusts the end-effector posture dynamically. This approach improves the accuracy and stability of screw assembly.  Methods  The proposed screw-assembly method is based on a vision–force cooperative strategy that forms a closed-loop process. In the visual perception stage, a semantic-enhanced 6D pose estimation algorithm addresses disturbances and pose uncertainty in complex industrial environments. During initial pose estimation, Grounding DINO and SAM2 generate pixel-level masks that provide semantic priors for the FoundationPose module. In the continuous tracking stage, semantic cues from Grounding DINO support translational correction. To detect small threaded holes, an improved lightweight hole detection algorithm based on NanoDet is designed. It uses MobileNetV3 as the backbone and adds a CircleRefine module in the detection head to estimate hole centers precisely. In the assembly positioning stage, a hierarchical vision-guided strategy is used. The global camera performs coarse positioning for overall guidance, while the hand–eye camera conducts local correction using hole detection results. 
In the closed-loop assembly stage, force-feedback control adjusts the posture to achieve accurate alignment between the screw and the threaded hole.  Results and Discussions  The method is validated experimentally in robotic screw assembly scenarios. The improved 6D pose estimation algorithm reduces the average position error by 18% and the orientation error by 11.7% compared with the baseline (Table 1). The tracking success rate in dynamic sequences increases from 72% to 85% (Table 2). For threaded hole detection, the lightweight NanoDet-based algorithm is evaluated on a dataset collected from assembly environments. It achieves 98.3% precision, 99.2% recall, and 98.7% mAP (Table 3). The model size is 11.7 MB and the computational cost is 2.9 GFLOPs, which are both lower than most benchmark models while maintaining high accuracy. A circular branch is introduced to fit hole edges (Fig. 8), providing accurate center predictions for visual guidance. Under different inclination angles (Fig. 10), the assembly success rate remains above 91.6% (Table 4). For screws of different sizes (M4, M6, and M8), the success rate remains above 90% (Table 5). Under small external disturbances (Fig. 12), the success rates reach 93.3%, 90%, and 83.3% for translational, rotational, and mixed disturbances, respectively (Table 6). Force-feedback comparison experiments show that the success rate is 66.7% under visual guidance alone. With force-feedback control, the rate increases to 96.7% (Table 7). The system maintains stable performance throughout complete screw-assembly cycles and achieves an average cycle time of 9.53 s (Table 8), meeting industrial assembly requirements.  Conclusions  This study presents a vision–force cooperative method that addresses key challenges in robotic screw assembly. The approach enhances target localization accuracy through a semantic-enhanced 6D pose estimation algorithm and a lightweight threaded hole detection network. 
The integration of hierarchical vision guidance and force-feedback control enables precise alignment between screws and threaded holes. Experimental results show that the method ensures reliable assembly under varied conditions, providing a practical solution for intelligent robotic assembly. Future work will focus on adaptive force control, multimodal perception fusion, and intelligent task planning to further improve generalization and self-optimization in complex industrial environments.
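The closed-loop role of force feedback in the final alignment stage can be sketched as a simple compliance loop. The routine below is a generic admittance-style correction; the gains, thresholds, and the `read_wrench`/`move_relative` interfaces are hypothetical illustrations, not the controller reported here:

```python
# Minimal admittance-style correction loop for screw-hole alignment
# (illustrative only; gains and sensor/robot interfaces are assumptions).
def force_feedback_align(read_wrench, move_relative, k_f=0.0005, k_t=0.02,
                         f_insert=5.0, max_steps=100):
    """Nudge the end-effector laterally/angularly until lateral forces vanish.

    read_wrench()  -> (fx, fy, fz, tx, ty)  forces [N] and torques [N*m]
    move_relative(dx, dy, drx, dry)         small Cartesian/angular correction
    """
    for _ in range(max_steps):
        fx, fy, fz, tx, ty = read_wrench()
        if abs(fx) < 0.2 and abs(fy) < 0.2 and fz > f_insert:
            return True                      # aligned and seated: start driving
        # Comply with lateral contact forces and tilt against contact torques.
        move_relative(-k_f * fx, -k_f * fy, -k_t * ty, k_t * tx)
    return False
```

The design choice mirrors the abstract's finding that vision alone succeeds in 66.7% of trials: small residual pose errors after visual guidance are absorbed by compliance rather than re-estimation.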
Crosstalk-Free Frequency-Spin Multiplexed Multifunctional Device Realized by Nested Meta-Atoms
ZHANG Ming, DONG Peng, TAO En, YANG Lin, HAN Qi, HE Yuhang, HOU Weimin, LI Kang
Available online  , doi: 10.11999/JEIT251202
Abstract:
  Objective  To address high fabrication costs and signal crosstalk in existing multidimensional multiplexed metasurfaces, a crosstalk-free, frequency-spin multiplexed single-layer metasurface based on nested bi-spectral meta-atoms is proposed. Two C-shaped split-ring resonators are physically superimposed to target the Ku band (12.5 GHz) and the K band (22 GHz). This configuration enables four fully independent information channels, defined by two frequencies and two spin states, without spatial division or multilayer stacking. The objective is to demonstrate independent, high-performance vortex beam generation and holographic imaging, providing a simplified and cost-effective solution for advanced 6G communication and sensing systems.  Methods  A reflective metal–dielectric–metal metasurface architecture is adopted, in which each unit cell integrates an Outer C-Shaped Split-Ring Resonator (OCSRR) and an Inner C-Shaped Split-Ring Resonator (ICSRR). Parameter sweeps performed using CST Microwave Studio are used to select structures that provide high cross-polarization conversion at the target frequencies while maintaining negligible responses in non-target bands. Independent spin multiplexing is achieved through the combined use of transmission phase and geometric phase, controlled by resonator rotation. Two prototypes are fabricated using printed circuit board technology. MS1 is designed for focused vortex beam generation with topological charges l = +1, +2, +3, and +4, whereas MS2 is designed for holographic imaging of the letters “H”, “B”, “K”, and “D”. Device performance is validated by near-field scanning measurements under oblique incidence using a vector network analyzer.  Results and Discussions  Simulation and experimental results confirm strong frequency selectivity and effective spin decoupling enabled by the nested meta-atom design. 
The OCSRR and ICSRR dominate the electromagnetic responses at 12.5 GHz and 22 GHz, respectively, and exhibit linear superposition behavior with minimal crosstalk. MS1 generates four focused vortex beams with clearly separated topological charges, achieving an average mode purity of 88.25%. MS2 reconstructs four independent and well-defined holographic images with high channel isolation. The close agreement between measured and simulated results demonstrates the robustness of the device and validates the effectiveness of the crosstalk-free design strategy under practical illumination conditions.  Conclusions  A reliable approach for realizing crosstalk-free frequency-spin multiplexed metasurfaces using nested meta-atoms is demonstrated. Simultaneous and independent manipulation of electromagnetic waves across four channels is achieved on a single metasurface layer, substantially reducing design complexity and fabrication cost. The successful demonstration of multi-channel vortex beam generation and holographic imaging indicates strong potential for integrated multifunctional applications in next-generation wireless communication and optical systems.
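The spin decoupling exploited above follows the standard combination of propagation phase and geometric phase: rotating a resonator by θ adds +2θ to the phase imposed on one circular polarization and −2θ to the other. A minimal design sketch using this textbook relation (not the paper's exact synthesis procedure):

```python
import math

def meta_atom_parameters(phi_rcp, phi_lcp):
    """Decouple two spin channels on one anisotropic meta-atom.

    Reflected phases:  phi_rcp = phi_prop + 2*theta
                       phi_lcp = phi_prop - 2*theta
    """
    phi_prop = (phi_rcp + phi_lcp) / 2      # set by resonator geometry
    theta = (phi_rcp - phi_lcp) / 4         # set by resonator rotation
    return phi_prop, theta

def reflected_phases(phi_prop, theta):
    """Invert the relation: phases seen by RCP/LCP illumination."""
    return phi_prop + 2 * theta, phi_prop - 2 * theta
```

Because the OCSRR and ICSRR respond in separate bands and superpose linearly, this pair of degrees of freedom is available independently at 12.5 GHz and 22 GHz, yielding the four channels.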
Routing and Resource Scheduling Algorithm Driven by Mixture of Experts in Large-scale Heterogeneous Local Power Communication Network
JING Chuanfang, ZHU Xiaorong
Available online  , doi: 10.11999/JEIT251176
Abstract:
  Objective  Emerging power services, such as distributed energy consumption, place stringent performance requirements on Large-Scale Heterogeneous Local Power Communication Networks (LHLPCNs). Limited communication resources and increasing service demands make it challenging to provide on-demand services and improve network capacity while ensuring Quality of Service (QoS). Conventional routing and resource scheduling algorithms based on optimization or heuristics depend on precise mathematical models and parameters, and their computational cost increases as network size and variables grow. These limitations reduce their adaptability to expanding power application scenarios. Advances in Mixture-of-Experts (MoE) frameworks offer a promising direction because they reduce the need to train task-specific models by using an ensemble of specialized AI experts. Motivated by these challenges, this study proposes an MoE-based routing and resource scheduling algorithm (RASMoE) for LHLPCNs integrating High-Power Line Carrier (HPLC) and Radio Frequency (RF). RASMoE is designed to meet personalized QoS requirements and support more power services within limited resources.  Methods  An optimization problem that minimizes the difference between QoS supply and demand in LHLPCNs is formulated as a 0–1 integer linear programming model considering multimodal links, channels, and modulation methods. To solve this NP-hard problem, a new MoE framework comprising expert networks and gating networks is designed. The framework supports personalized service requirements in terms of data rate, delay, and reliability, while improving convergence. The expert networks include shared and QoS-specific experts that generate optimal next hops and compute allocation strategies for links, channels, and modulation modes between node pairs. The gating networks dynamically combine and reuse these experts to support known and unforeseen service types. 
Extensive comparative experiments are conducted, and RASMoE shows improved resource utilization, reduced delay, and higher reliability relative to multiple baselines.  Results and Discussions  The performance supply-demand differences of five algorithms under varying service numbers are compared (Fig. 3). RASMoE consistently achieves the smallest differences across scenarios due to its gating network, which combines QoS-specific experts to align resource allocation with service requirements. Because control and compute-intensive services have strict delay requirements, their average End-to-End (E2E) latency under different service numbers is evaluated (Fig. 4). The proposed algorithm achieves the lowest average E2E latency because its GAT-enhanced expert networks extract node load states and interact with the network environment in real time through a Multi-Armed Bandit (MAB) mechanism. This supports adaptive allocation strategies. The average reliability of E2E paths for different numbers of control, compute-intensive, and acquisition services is also illustrated (Fig. 5).  Conclusions  This study proposes an MoE-driven routing and resource scheduling algorithm for LHLPCNs. The framework integrates expert networks and a gating network. The expert networks include GAT-based shared experts for E2E path selection and MAB-based QoS-specific experts for adaptive allocation of links, channels, and modulation schemes according to QoS demands and link states. The gating network orchestrates and reuses these experts to support services with single or multiple QoS requirements, including previously unseen service types. Theoretical analysis shows that the method improves resource utilization in LHLPCNs, with notable advantages in multi-service scenarios characterized by diverse QoS demands. Future work will examine integrating the MoE framework with domain-specific models, including power load forecasting and predictive analytics, to enhance the use of renewable energy sources.
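The gating mechanism described above can be illustrated with a generic sparse MoE combination step, in which a softmax gate selects and weights a few experts per service request. The features, expert set, and top-k value here are illustrative assumptions, not the RASMoE networks themselves:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def moe_combine(gate_logits, expert_outputs, top_k=2):
    """Sparse MoE combination: keep the top-k experts for a request and mix
    their scheduling scores with renormalized softmax weights."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: -gate_logits[i])[:top_k]
    w = softmax([gate_logits[i] for i in ranked])
    dim = len(expert_outputs[0])
    mixed = [0.0] * dim
    for weight, i in zip(w, ranked):
        for d in range(dim):
            mixed[d] += weight * expert_outputs[i][d]
    return mixed
```

Reusing a small shared pool of experts in this way is what lets one framework serve known service types and unforeseen ones: a new QoS mix changes only the gate weights, not the experts.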
A Fast and Accurate Programming Strategy for Analog In-Memory Computing Validated With a Transposable RRAM Macro and 0.64% Fully-Parallel RMS Error
XIE Lifan, WEI Songtao, YAO Peng, WU Dong, TANG Jianshi, QIAN He, GAO Bin, WU Huaqiang
Available online  , doi: 10.11999/JEIT251174
Abstract:
  Objective  Non-Volatile Memory (NVM)-based Compute-in-Memory (CIM) is considered a promising candidate for next-generation artificial intelligence accelerators because of its high energy efficiency and instant wake-up capability. However, the conventional Write-and-Verify (W&V) scheme cannot satisfy the speed and precision requirements of highly parallel CIM macros. The main limitation arises from the inefficient verification stage. Cell-by-cell reading must be repeated for the entire array, which significantly increases programming time. In addition, switching from the verify state, where only one row is active, to the compute state, where all rows are active, introduces systematic errors such as reference drift and IR-drop-induced weight inaccuracy. Analog CIM macros with on-chip programming must also tolerate large and non-uniform offsets under massive parallel operation. This work proposes three techniques: (1) a Back-Propagation-Assisted Programming (BPAP) scheme that rapidly and accurately locates failing cells without full-array verification; (2) an Analog-domain Offset-Canceling Structure (AOSC) that compensates channel-wise offsets in situ; and (3) a transposable Resistive Random-Access Memory (RRAM) macro equipped with parallel Two-Channel current-domain Analog-to-Digital Converters (TC-ADC), which doubles the effective sampling rate with only 15% additional ADC area.  Methods  As shown in Fig. 2, the transposable RRAM macro contains two processing elements (PEs) and a shared backward-processing ADC (BP-ADC). Each PE includes an input loader (IL), a Digital-to-Analog Converter (DAC) array, a Bit-Line (BL) buffer and switch array, and 32 TC-ADCs. This configuration supports fully parallel forward computation. An Error Loader (EL) and a Source-Line (SL) buffer are also included to provide an error input vector for transposed matrix-vector multiplication (MVM). Fig. 3 illustrates the programming flow of the BPAP scheme. 
After AOSC calibration, a forward calculation is first executed. The differences between the expected outputs (yexp) and the measured outputs (yreal) are then computed on chip and used as inputs for the following back-propagation phase. The derivatives of the RRAM weights are calculated using several validation patterns. This training-like process adapts to the actual RRAM states and detects programming failures under the highly parallel computing condition. Weights with derivatives exceeding a predefined error threshold are selected for remapping. This approach enables accurate programming without performing cell-by-cell verification across the entire array. In the forward phase (Fig. 4a), each 2T2R cell is configured as a signed weight, and the SLs are clamped at VCM by the TC-ADCs. For each PE, a fully parallel 4b-IN/4b-W MVM operation is completed with 320 active rows of 2T2R cells, and 32 ADCs perform simultaneous conversions. In the backward phase (Fig. 4b), only the upper half of the reference voltages drives the SL buffers, and the weight is configured in 1T1R mode. Differential computation between the positive and negative 1T1R cells is performed by an external processor. Fig. 5 shows the operation of the AOSC scheme. Redundant rows in the RRAM array are programmed to compensate the analog computing offsets in situ. Offset currents are first measured by applying an all-zero input pattern to the regular weights. The redundant RRAM weights are then programmed to minimize the offset currents under a constant input voltage. During normal computation, these programmed redundancy rows receive the same input voltage to cancel the offsets. The macro supports this AOSC operation with only about 1% additional array area. Fig. 6 shows the TC-ADC architecture. A class-AB output stage, together with associated switches and capacitors, enables two-channel conversion and reduces the computation latency by half. 
This design increases the ADC area by only about 15% while achieving a 2× sampling rate.  Conclusions  Replacing the conventional W&V procedure with BPAP, together with AOSC calibration and TC-ADC acceleration, enables reliable and high-precision programming of analog RRAM-CIM macros under massive parallel operation. The measured results show 96.5% classification accuracy on MNIST and a 4.8% improvement on ImageNet. The proposed techniques are compatible with standard 2T2R and 1T1R RRAM bit cells and can be extended to larger arrays and deeper neural networks.
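The core idea of BPAP, locating mis-programmed cells from output errors instead of cell-by-cell reads, can be sketched for an idealized linear MVM. For y = Wx with squared error E, dE/dW[i][j] = (y_real[i] − y_exp[i])·x[j], so cells whose accumulated |derivative| exceeds a threshold are flagged for remapping. This is an illustrative software model; the on-chip transposed-MVM backward pass and thresholds are simplified assumptions:

```python
def locate_failing_cells(weights_target, mvm_forward, patterns, threshold):
    """Back-propagation-assisted localization of mis-programmed cells."""
    rows, cols = len(weights_target), len(weights_target[0])
    grad = [[0.0] * cols for _ in range(rows)]
    for x in patterns:
        # Expected outputs from target weights, measured outputs from the chip.
        y_exp = [sum(weights_target[i][j] * x[j] for j in range(cols))
                 for i in range(rows)]
        y_real = mvm_forward(x)
        for i in range(rows):
            err = y_real[i] - y_exp[i]
            for j in range(cols):
                grad[i][j] += err * x[j]
    # Flag cells whose accumulated derivative exceeds the error threshold.
    return {(i, j) for i in range(rows) for j in range(cols)
            if abs(grad[i][j]) > threshold}
```

Because the derivatives are computed under the fully parallel computing condition, the flagged cells reflect errors as they actually manifest during inference, including IR-drop effects invisible to row-by-row verification.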
Design of a Narrowband Energy-Selective Protective Antenna Integrating Electromagnetic Protection and Out-of-Band Interference Suppression
GAI Longjie, XU Yanlin, WANG Sijun, LIU Peiguo, HU Ning, HE Zhengwei
Available online  , doi: 10.11999/JEIT251363
Abstract:
  Objective  With the rapid development of wireless communication technologies, the Electromagnetic (EM) environment is becoming increasingly complex. Electronic information equipment is facing growing challenges from High-Intensity Radiation Fields (HIRFs) and out-of-band interference. This trend makes the co-design of EM protection and out-of-band interference suppression in electronic information systems an urgent issue. As the front end of the radio-frequency channel, antennas provide the main path by which EM waves in free space are converted into guided waves in microwave circuits. High-power EM waves can couple into a system through an antenna and cause EM damage. In single-frequency applications, if an antenna does not exhibit narrowband characteristics, out-of-band interference signals may also enter the system through the antenna and disrupt normal operation. A narrowband energy-selective protective antenna should therefore be developed to provide both out-of-band interference suppression and in-band EM protection against strong EM threats, thereby improving the operational stability and environmental adaptability of electronic information equipment in complex EM environments.  Methods  A coaxial-fed microstrip patch antenna is designed, and its structure is optimized through simulation for operation at 915 MHz. The antenna structure is designed to provide both narrowband behavior and EM protection, thereby achieving integrated EM protection and out-of-band interference suppression. A high dielectric constant is used to support both antenna miniaturization and narrowband operation. Accordingly, a TP-2 substrate with a dielectric constant of 20 is selected to obtain the required narrowband response. In a conventional coaxial-fed microstrip patch antenna, the probe passes directly through the dielectric substrate and connects to the radiating patch, which leaves insufficient space for the integration of a protective structure. 
To solve this problem, a layered-substrate design with a central hollow cavity is adopted. This configuration forms a layered cavity protective structure and enables the antenna itself to exhibit energy-selective protection characteristics.  Results and Discussions  To verify the performance of the proposed antenna, physical fabrication and experimental measurements are carried out (Fig. 14). The measured center frequency is 928.5 MHz, and the operating bandwidth is 927.0-930.0 MHz. Although the measured center frequency is shifted by 12.8 MHz from the simulated design value, the antenna still exhibits favorable narrowband characteristics (Fig. 15). The measured radiation pattern agrees well with the simulated result. In the Phi = 0 deg plane, a stable omnidirectional radiation pattern is observed, and the measured maximum gain reaches 2.5 dBi (Figs. 11 and 16). The Shielding Effectiveness (SE) is measured by a high-power injection method. As the injected power increases, the radiated power increases linearly. When the injected power reaches 22 dBm, the increase in radiated power begins to saturate, which indicates that the diodes in the protective structure start to conduct and that the energy-selective mechanism is activated. As the injected power increases further, the SE rises gradually. When the injected power reaches 48 dBm, the radiated power rises sharply to the level of the original linear radiation curve, and the SE drops abruptly, which indicates diode breakdown and failure of the protective structure. In summary, the activation threshold of the protection function is 26 dBm, and the device failure threshold is 48 dBm. Within this range, the maximum SE reaches 26 dB (Fig. 18).  Conclusions  Based on a coaxial-fed microstrip patch antenna, a narrowband energy-selective protective antenna with integrated EM protection and out-of-band interference suppression is designed and demonstrated. 
The complete process is covered, including theoretical analysis, structural simulation and optimization, prototype fabrication, and experimental verification. First, Characteristic Mode Analysis (CMA) is used to examine the potential operating modes of the microstrip patch antenna. By analyzing the electric- and magnetic-field modal distributions, the impedance-matching characteristics are clarified, and the optimal coaxial feed position is determined. Next, the use of a high-permittivity substrate enables both antenna miniaturization and narrowband performance, and an Interference Suppression Capability (ISC) better than 22.1 dB is achieved. A layered-substrate structure with a central hollow cavity is then proposed, and a cavity-based protective structure integrated into the feed-probe region is established. An equivalent-circuit model is also developed to explain the operating mechanisms of the antenna in both the normal and protective states. Finally, the antenna prototype is fabricated and tested. The measured results show favorable narrowband characteristics, good agreement between the measured and simulated radiation patterns, and a measured maximum gain of 2.5 dBi. In addition, by applying the reciprocity principle and using a high-power injection method for SE testing, a maximum SE of 26 dB is obtained, which confirms the excellent EM protection capability of the antenna. Compared with existing protective antennas, the proposed structure achieves both out-of-band interference suppression and EM protection within the antenna itself. This design advances the integration of frequency-domain interference suppression and energy-domain protection. It should also be noted that the deviation between the measured and simulated center frequencies is caused in part by nonuniform substrate permittivity and fabrication tolerances, which reflects the sensitivity of narrowband antennas to structural parameters. 
In future work, a tunable mechanism may be adopted to develop a frequency-reconfigurable narrowband energy-selective protective antenna, so that frequency deviations can be compensated dynamically and the design robustness and environmental adaptability can be improved.
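The miniaturization effect of the high-permittivity substrate can be checked with the standard transmission-line design equations for a rectangular patch. The sketch below compares the resonant length at 915 MHz for εr = 4.4 versus εr = 20 on a hypothetical 1.5 mm substrate (textbook formulas; the substrate height and comparison values are assumptions, not the optimized TP-2 geometry):

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def patch_dimensions(f, eps_r, h):
    """Transmission-line model for a rectangular microstrip patch:
    width, effective permittivity, fringing extension, resonant length."""
    w = C / (2 * f) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / w)
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)
                      / ((eps_eff - 0.258) * (w / h + 0.8)))
    l = C / (2 * f * math.sqrt(eps_eff)) - 2 * dl
    return w, l
```

Under these assumed parameters, raising the permittivity from 4.4 to 20 roughly halves the resonant length, which is the miniaturization effect the TP-2 substrate exploits; the same high permittivity also narrows the impedance bandwidth, supporting the narrowband response.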
One-pass Architectural Synthesis for Continuous-Flow Microfluidic Biochips Based on Deep Reinforcement Learning
LIU Genggeng, JIAO Xinyue, PAN Youlin, HUANG Xing
Available online  , doi: 10.11999/JEIT251058
Abstract:
Continuous-Flow Microfluidic Biochips (CFMBs) are widely applied in biomedical research because of miniaturization, high reliability, and low sample consumption. As integration density increases, design complexity significantly rises. Conventional stepwise design methods treat binding, scheduling, layout, and routing as separate stages, with limited information exchange across stages, which leads to reduced solution quality and extended design cycles. To address this limitation, a one-pass architectural synthesis method for CFMBs is proposed based on Deep Reinforcement Learning (DRL). Graph Convolutional Neural Networks (GCNs) are used to extract state features, capturing structural characteristics of operations and their relationships. Proximal Policy Optimization (PPO), combined with the A* algorithm and list scheduling, ensures rational layout and routing while providing accurate information for operation scheduling. A multiobjective reward function is constructed by normalizing and weighting biochemical reaction time, total channel length, and valve count, enabling efficient exploration of the decision space through policy gradient updates. Experimental results show that the proposed method achieves a 2.1% reduction in biochemical reaction time, a 21.3% reduction in total channel length, and a 65.0% reduction in valve count on benchmark test cases, while maintaining feasibility for larger-scale chips.  Objective  CFMBs have gained sustained attention in biomedical applications because of miniaturization, high reliability, and low sample consumption. With increasing integration density, design complexity escalates substantially. Traditional stepwise design methods often yield suboptimal solutions, extended design cycles, and feasibility limitations for large-scale chips. To address these challenges, a one-pass architectural synthesis framework is proposed that integrates DRL to achieve coordinated optimization of binding, scheduling, layout, and routing.  
Methods  All CFMB design tasks are integrated into a unified optimization framework formulated as a Markov decision process. The state space includes device binding information, device locations, operation priorities, and related parameters, whereas the action space adjusts device placement, operation-to-device binding, and operation priority. High-dimensional state features are extracted using GCNs. PPO is applied to iteratively update policies. The reward function accounts for biochemical reaction time, total flow-channel length, and the number of additional valves. These metrics are evaluated using the A* algorithm and list scheduling, normalized, and weighted to balance trade-offs among objectives.  Results and Discussions  Based on the current state and candidate actions, architectural solutions are generated iteratively through PPO-guided policy updates combined with the A* algorithm and list scheduling. The defined reward function enables the generation of CFMB architectures with improved overall quality. Experimental results show an average reduction of 2.1% in biochemical reaction time, an average reduction of 21.3% in total flow-channel length, with a maximum reduction of 57.1% in the ProteinSplit benchmark, and an average reduction of 65.0% in additional valve count compared with existing methods. These improvements reduce manufacturing cost and operational risk.  Conclusions  A one-pass architectural synthesis method for CFMBs based on DRL is proposed to address flow-layer design challenges. By applying GCN-based state feature extraction and PPO-based policy optimization, the multiobjective design problem is transformed into a sequential decision-making process that enables joint optimization of binding, scheduling, layout, and routing. 
Experimental results obtained from multiple benchmark test cases confirm improved performance in biochemical reaction completion time, total channel length, and valve count, while preserving scalability for larger chip designs.
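The normalize-and-weight reward construction described in the Methods above can be sketched in a few lines. This is a minimal illustration assuming min-max normalization; the function name `composite_reward`, the weights, and the normalization bounds are hypothetical, not values from the paper.

```python
def composite_reward(reaction_time, channel_length, valve_count,
                     bounds, weights=(0.4, 0.3, 0.3)):
    """Min-max normalize three minimization objectives and return the
    negative weighted sum as a scalar reward (higher is better).

    bounds: dict mapping metric name -> (min, max) used for normalization.
    """
    metrics = [("time", reaction_time),
               ("length", channel_length),
               ("valves", valve_count)]
    total = 0.0
    for w, (name, value) in zip(weights, metrics):
        lo, hi = bounds[name]
        norm = (value - lo) / (hi - lo) if hi > lo else 0.0
        total += w * norm
    return -total
```

Because all three metrics are minimized, the weighted sum is negated so that a policy-gradient learner such as PPO, which maximizes expected reward, pushes all three down simultaneously.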
Battery Pack Multi-fault Diagnosis Algorithm Based on Dual-Perspective Spectral Attention Fusion
LIU Mingjun, GU Shenyu, YIN Jingde, ZHANG Yifan, DONG Zhekang, JI Xiaoyue
Available online  , doi: 10.11999/JEIT251156
Abstract:
  Objective  With the rapid growth of electric vehicles and their widespread deployment, battery pack faults have become more frequent, creating an urgent need for efficient fault diagnosis methods. Although deep learning-based approaches have achieved notable progress, existing studies remain limited in addressing multiple fault types, such as Internal Short Circuit (ISC), sensor noise, sensor drift, and State-Of-Charge (SOC) inconsistency, and in modeling the coupling relationships among these faults. To address these limitations, a multi-fault diagnosis algorithm for battery packs based on dual-perspective spectral attention is proposed. A dual-perspective tokenization module is designed to extract spatiotemporal features from battery data, whereas a spectral attention mechanism addresses non-stationary time-series characteristics and captures long-term dependencies, thereby improving diagnostic performance.   Methods  To improve spatiotemporal feature extraction and fault diagnosis performance, a dual-perspective spectral attention fusion algorithm for battery pack multi-fault diagnosis is proposed. The overall architecture consists of four core modules (Fig. 3): a dual-perspective tokenization module, a spectral attention module, a feature fusion module, and an output module. The dual-perspective tokenization module applies positional encoding to jointly model temporal and spatial dimensions, enabling comprehensive spatiotemporal feature representation. When combined with the spectral attention mechanism, the capability of the model to handle non-stationary characteristics is strengthened, leading to improved diagnostic performance. In addition, to address the lack of comprehensive publicly available datasets for battery pack fault diagnosis, a new dataset is constructed, covering ISC, sensor noise, sensor drift, and SOC inconsistency faults. 
The dataset includes three operating conditions, FUDS, UDDS, and US06, which alleviates data scarcity in this research field.  Results and Discussions  Experimental results indicate that the proposed method improves average precision, recall, F1 score, and accuracy by 10.98%, 12.64%, 13.84%, and 13.45%, respectively, compared with existing optimal fault diagnosis methods. Comparison experiments under different operating conditions (Table 6) support this conclusion. Conventional convolutional neural network methods perform well in local feature extraction; however, fixed-size convolution kernels are not well suited to time features with varying frequencies, which limits long-term temporal dependency modeling and global feature capture. Recurrent neural network-based methods show reduced computational efficiency when large-scale datasets are processed. Transformer-based models face constraints in spatial feature extraction and in representing temporal variations. By contrast, the proposed algorithm addresses these limitations through an integrated architectural design. Ablation experiments demonstrate the contribution of each module to overall performance (Table 7), and the complete framework improves average F1 score and accuracy by 9.30% and 9.26%, respectively, compared with ablation variants. Robustness analysis under simulated noise conditions (Table 8) shows that the proposed method achieves accuracy improvements ranging from 49.95% to 124.34% over baseline methods at noise levels from –2 dB to –8 dB, indicating strong noise resistance.  Conclusions  A multi-fault diagnosis algorithm for battery packs is presented that integrates dual-perspective tokenization and spectral attention to combine spatiotemporal and spectral information. The dual-perspective tokenization module performs tokenization and positional encoding along temporal and spatial axes, which improves spatiotemporal representation. 
The spectral attention mechanism strengthens modeling of non-stationary signals and long-term dependencies. Experiments under FUDS, UDDS, and US06 driving cycles show that the proposed method outperforms existing multi-fault diagnosis approaches, with average gains of 13.84% in F1 score and 13.45% in accuracy. Ablation studies confirm that both modules contribute substantially and that their combination enables effective handling of complex time-series features. Under high-noise conditions (–2 dB, –4 dB, –6 dB, and –8 dB), the method also shows improved robustness, with accuracy gains of 49.95%, 90.39%, 112.01%, and 124.34%, respectively, compared with baseline methods. Several limitations remain. First, the data are mainly derived from laboratory simulations, and further validation under real-world operating conditions is required. Second, the effect of fault severity on battery management system hierarchical decision making has not been fully addressed, and future work will focus on establishing a fault severity grading strategy. Third, physical interpretability requires further improvement, and subsequent studies will explore the integration of equivalent circuit models or electrochemical mechanism models to balance diagnostic accuracy and interpretability.
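The dual-perspective tokenization with positional encoding along both axes can be illustrated with a small sketch. The sinusoidal encoding below is the standard Transformer formulation, and the `(cells × time)` input layout and function names are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def sinusoidal_pe(n_tokens, d_model):
    """Standard Transformer sinusoidal positional encoding,
    shape (n_tokens, d_model): sin on even dims, cos on odd dims."""
    pos = np.arange(n_tokens)[:, None].astype(float)
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def dual_perspective_tokens(v):
    """v: (n_cells, n_steps) voltage matrix from a battery pack.

    Temporal tokens index time steps (each token is a vector over cells);
    spatial tokens index cells (each token is a cell's time series).
    Each token set receives its own positional encoding.
    """
    temporal = v.T + sinusoidal_pe(*v.T.shape)   # (n_steps, n_cells)
    spatial = v + sinusoidal_pe(*v.shape)        # (n_cells, n_steps)
    return temporal, spatial
```

Tokenizing the same measurement matrix along both axes is what lets the downstream attention stages relate cells to each other at a fixed time and time steps to each other for a fixed cell.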
A Neural Network-Based Robust Direction Finding Algorithm for Mixed Circular and Non-Circular Signals Under Array Imperfections
YU Qi, YIN Jiexin, LIU Zhengwu, WANG Ding
Available online  , doi: 10.11999/JEIT250884
Abstract:
  Objective   Direction Of Arrival (DOA) estimation is affected by low Signal-to-Noise Ratios (SNR), the coexistence of Circular Signals (CSs) and Non-Circular Signals (NCSs), and multiple forms of array imperfections. Conventional subspace-based estimators exhibit model mismatch in such environments and show reduced accuracy. Although neural-network methods provide data-driven alternatives, the effective use of the distinctive statistical properties of NCSs and the maintenance of robustness against diverse array errors remain insufficiently addressed. The objective is to design a DOA estimation algorithm that operates reliably for mixed CSs and NCSs in the presence of array imperfections and provides improved estimation accuracy in challenging operating conditions.  Methods   A robust DOA estimation algorithm is proposed based on an improved Vision Transformer (ViT) model. A six-channel image-like input is first constructed by fusing features derived from the covariance matrix and pseudo-covariance matrix of the received signal. These channels include the real component, imaginary component, magnitude, phase, magnitude ratio reflecting the NCS characteristic, and the phase of the pseudo-covariance matrix. A gradient-masking mechanism is introduced to adaptively fuse core and auxiliary features. The ViT architecture is then modified: the standard patch-embedding module is replaced with a convolutional layer to extract local information, and a dual-class-token attention mechanism, placed at the sequence head and tail, is designed to enhance feature representation. A standard Transformer encoder is used for deep feature learning, and DOA estimation is performed through a multi-label classification head.  Results and Discussions   Extensive simulations are carried out to assess the proposed algorithm (6C-ViT) against MUSIC, NC-MUSIC, a Convolutional Neural Network (6C-CNN), a Residual Network (6C-ResNet), and a MultiLayer Perceptron (6C-MLP). 
Performance is evaluated using Root Mean Square Error (RMSE) and angular estimation error under different operating conditions. Under single-source scenarios with low SNR and no array errors, 6C-ViT achieves near-zero RMSE across most angles and shows minor edge deviations (Fig. 2). It maintains the lowest RMSE across the SNR range from –20 dB to 15 dB (Fig. 3), indicating good generalization to unseen SNR levels. In dual-source scenarios containing mixed CSs and NCSs under array errors, 6C-ViT shows clear advantages. Its estimation errors fluctuate slightly around zero, whereas competing techniques present larger errors and pronounced instabilities, especially near array edges (Fig. 4). Its RMSE decreases steadily as SNR increases and reaches below 0.1° at high SNR, while traditional approaches saturate around 0.4° (Fig. 5). Robust behavior is further observed across different numbers of signal sources (K = 1, 2, 3) and snapshot counts (100 to 2 000). 6C-ViT preserves high accuracy and stability under these variations, whereas other methods show marked degradation or instability, most evident at low snapshot counts or with multiple sources (Fig. 6). When evaluated using unknown modulation types, including UQPSK with a non-circularity rate of 0.6 and 64QAM, under array errors, 6C-ViT continues to produce the lowest RMSE across most angles (Fig. 7), demonstrating strong generalization capability. Ablation studies (Fig. 8) confirm the contributions of the six-channel input, the gradient-masking module, the convolutional embedding, and the dual-class-token mechanism. The complete configuration yields the highest accuracy and the most stable performance.  Conclusions   Strong robustness is demonstrated in complex scenarios that contain mixed CSs and NCSs, multiple array imperfections, low SNR, and closely spaced sources. 
By fusing multi-dimensional features of the received signal and using an enhanced Transformer architecture, the algorithm attains higher estimation accuracy and improved generalization across different signal types, error conditions, snapshot counts, and noise levels compared with subspace- and neural-network-based baselines. The method provides a reliable DOA estimation solution for demanding practical environments.
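The six-channel input described above maps directly onto the sample covariance and pseudo-covariance matrices of the array snapshots. A minimal sketch, assuming sample estimates over N snapshots and a small `eps` to stabilize the magnitude ratio (the exact scaling is an assumption):

```python
import numpy as np

def six_channel_features(X):
    """X: (M, N) complex snapshots from an M-element array.

    Builds the six image-like channels from the sample covariance
    R = X X^H / N and the pseudo-covariance C = X X^T / N, which is
    nonzero only for non-circular signals.
    """
    N = X.shape[1]
    R = X @ X.conj().T / N
    C = X @ X.T / N
    eps = 1e-12
    return np.stack([
        R.real,                        # 1: real part of covariance
        R.imag,                        # 2: imaginary part of covariance
        np.abs(R),                     # 3: magnitude
        np.angle(R),                   # 4: phase
        np.abs(C) / (np.abs(R) + eps), # 5: magnitude ratio (non-circularity)
        np.angle(C),                   # 6: phase of pseudo-covariance
    ])                                 # shape (6, M, M)
```

The resulting (6, M, M) tensor plays the role of a six-channel image, so the modified ViT can consume it with its convolutional embedding layer.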
Adversarial Attacks on 3D Target Recognition Driven by Gradient Adaptive Adjustment
LIU Weiquan, SHEN Xiaoying, LIU Dunqiang, SUN Yanwen, CAI Guorong, ZANG Yu, SHEN Siqi, WANG Cheng
Available online  , doi: 10.11999/JEIT251264
Abstract:
  Objective   Robust environmental perception is essential for intelligent driving systems. Light Detection And Ranging (LiDAR) provides high-resolution 3D point cloud data and serves as a core information source for object detection and recognition. However, deep learning models for 3D point cloud recognition show notable vulnerability to adversarial attacks. Small, imperceptible perturbations can cause severe classification errors and threaten system safety. Existing attack methods have improved the Attack Success Rate (ASR), but the perturbations they generate often lack concealment, create outliers, and show poor imperceptibility because they do not adequately preserve the geometric structure of point clouds. This reduces their suitability for realistic security evaluation of optoelectronic perception systems. Developing an attack method that maintains a high success rate while preserving geometric consistency and imperceptibility is therefore critical. This study addresses this need by proposing a framework that incorporates point cloud geometry into perturbation generation.  Methods   A Gradient Adaptive Adjustment (GAA) adversarial attack method for 3D point cloud recognition is proposed. The framework (Fig. 2) includes three coordinated modules. The 3D Point Cloud Salient Region Extraction module evaluates decision-level vulnerability using Shapley value analysis to identify and rank point subsets with the strongest influence on classifier output. Perturbations are then concentrated in these sensitive regions. A curvature-weighted gradient mechanism integrates local geometric priors. For each point in the salient region, a local covariance matrix is computed from its k-nearest neighbors. Principal component analysis generates eigenvalues and eigenvectors, which are used to compute a curvature measure. A Gaussian kernel function produces curvature-dependent weights that are applied to backpropagated gradients. 
This suppresses perturbations in high-curvature areas and encourages them in low-curvature regions to preserve local shape morphology. A principal-curvature-direction-constrained optimization module further refines the perturbation direction. The weighted gradient is projected onto the principal curvature directions, and the projection components are fused using coefficients derived from the corresponding eigenvalues. This aligns the perturbation with natural geometric trends and avoids unnatural deformation. An adaptive optimization algorithm then minimizes a multi-objective loss balancing attack success, geometric similarity (via Chamfer Distance and Hausdorff Distance), and perturbation sparsity. The adversarial point cloud is iteratively updated based on the saliency map, curvature-weighted gradients, and principal direction constraints.  Results and Discussions   Experiments on ModelNet40, ShapeNetPart, and KITTI were conducted using PointNet, DGCNN, and PointConv. The GAA method showed strong performance. On ModelNet40 with PointNet, it achieved a 97.69% ASR with an average of 28 perturbed points, outperforming ten baselines such as AL-Adv (92.92% ASR, 40 points) and Kim et al. (89.38% ASR, 36 points) (Table 1). It also produced lower geometric distortion, as indicated by smaller Chamfer Distance and Hausdorff Distance values. Visual results (Fig. 4) show that GAA produces fewer outliers and more natural adversarial point clouds compared with methods such as AL-Adv. The method generalized well across architectures, reaching 99.78% ASR on DGCNN and 96.91% on PointConv (Table 2), with similar performance on ShapeNetPart (Table 3). Ablation experiments on the number of salient regions (K) showed consistent improvements in ASR and reduced geometric distortion as K increased from 1 to 6 (Table 4, Fig. 5), confirming the advantage of targeting multiple critical regions. Tests on the KITTI dataset demonstrated strong performance in real-world, noisy environments. 
The method maintained high ASRs, such as 99.33% on PointNet, with limited perturbations (Table 5). An ablation study on K indicated that K=4 offers an effective balance between success rate and perturbation cost for PointNet (Table 6).  Conclusions   This study presents a GAA method for adversarial attacks on 3D point cloud recognition. By combining a Shapley value-based saliency analyzer, a curvature-weighted gradient mechanism, and a principal curvature direction constraint, the method generates adversarial examples that achieve high attack success while preserving geometric consistency. Experiments show that GAA minimizes perceptual distortion and perturbs fewer points across datasets and models. The method provides a practical tool for vulnerability analysis and supports the development of more robust and secure optoelectronic perception systems for intelligent driving. Future work will examine robustness under adverse conditions and assess physical-world implications.
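The curvature-weighted gradient mechanism rests on a per-point PCA of local neighborhoods. The sketch below uses the common surface-variation curvature λ₁/(λ₁+λ₂+λ₃) and a Gaussian kernel; the exact curvature definition and the `k` and `sigma` values are assumptions for illustration, and the brute-force O(N²) neighbor search is kept only for clarity.

```python
import numpy as np

def curvature_weights(points, k=10, sigma=0.1):
    """Per-point curvature and Gaussian gradient weights.

    points: (N, 3) array. For each point, PCA of its k nearest neighbors
    (plus the point itself) yields eigenvalues l1 <= l2 <= l3; the
    surface-variation curvature is c = l1 / (l1 + l2 + l3). Weights
    w = exp(-(c / sigma)^2) are near 1 on flat regions and shrink in
    high-curvature regions, suppressing perturbations there.
    """
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    curv = np.empty(n)
    for i in range(n):
        idx = np.argsort(d2[i])[:k + 1]          # neighbors incl. self
        cov = np.cov(points[idx].T)              # 3x3 local covariance
        ev = np.linalg.eigvalsh(cov)             # ascending eigenvalues
        curv[i] = ev[0] / max(ev.sum(), 1e-12)
    return curv, np.exp(-(curv / sigma) ** 2)
```

Applying these weights elementwise to the backpropagated gradient reproduces the behavior described above: flat regions absorb most of the perturbation budget while sharp features stay intact.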
A Channel Phase Self-compensation Method for Active-Integrated Arrays
SUN Liying, LU Yunlong, XU Jun, HU Yang
Available online  , doi: 10.11999/JEIT251325
Abstract:
The seamless integration of active circuitry and antennas can effectively improve link performance and system integration. At present, active-integrated antennas are mainly designed by adjusting the antenna impedance while maintaining the desired radiation characteristics to achieve direct matching with active transistors. However, the effect of the antenna’s complex impedance on the phase response of the active channel, as well as its potential application in active-integrated phased arrays, has not been thoroughly studied. This paper proposes a channel phase self-compensation method for active-integrated arrays. For each active channel, the active transistor is directly integrated with the radiating element, where the load impedance at the transistor drain is matched to the input impedance of the antenna element. Under a constant active gain, the required complex load impedance is solved to establish an explicit mapping between the phase response of each active channel and its corresponding load impedance. According to the phase-shift requirements among array channels, appropriate load impedances are selected as the input impedances of the corresponding radiating elements. This approach applies a predefined phase distribution to each channel without using external phase-shifting structures. It can control the initial beam direction or compensate for the path difference between elements in conformal arrays. An active-integrated phased-array antenna with a preset beam direction is designed as a demonstration example to verify the effectiveness of the proposed method. The method provides an efficient design approach for next-generation active-integrated arrays.  Objective  In the traditional design approach, active circuit channels and antenna arrays are matched to 50 Ω before interconnection. This configuration occupies considerable physical space and limits system-level integration. 
In addition, insertion loss in passive matching networks and mismatch loss at the interconnections reduce overall link performance. Direct co-integration of active circuitry and antenna elements can address these limitations. However, multi-channel active-integrated antenna arrays often require one or multiple superimposed phase distributions across the channels to satisfy different application requirements, such as initial beam offset in fuze systems, wavefront compensation in conformal active phased arrays, and wide-angle beam scanning. These phase gradients are typically realized through backend phase-shifting networks. In this work, the complex impedance characteristics of the antenna are adjusted when it is directly integrated with the active circuitry. The phase response of the active-integrated channels can therefore be tuned within a certain range without using complex matching networks or additional phase shifters. This strategy reduces the complexity and performance requirements of the backend phase-shifting network. The advantages are more evident in millimeter-wave, high-frequency, and terahertz systems, where the available phase-shift range of phase shifters is limited.  Methods  Phase self-compensation of the active channels is achieved through the direct integration of the active transistor and the radiating element. In this configuration, the drain output of the transistor is directly connected to the input of the radiating element, and impedance transformation is realized within the antenna element. The proposed method includes three main steps. (1) The active transistor is first modeled as a two-port network. By evaluating the antenna element’s complex impedance as the load on different constant-gain circles, the mapping between the phase response of the active channel and the load impedance is established. The achievable phase-shift range of the active channel is then determined. 
(2) According to the required phase-shift distribution among the array channels, suitable combinations of active gain and corresponding complex load impedances are selected; these combinations are not unique. (3) The realizability of the selected impedances is examined according to the characteristics of the radiating element. The impedance values with the highest feasibility are implemented by optimizing the radiating element, which includes fine adjustment of its geometry and feed position to meet the target impedance. When the radiating element is modified, particularly for circularly polarized elements, desirable radiation characteristics must also be preserved, including good axial ratio and beam-scanning performance.  Results and Discussions  The proposed phase self-compensation mechanism enables the array to achieve initial beam pointing and compensate for path-length differences caused by special array geometries, such as conformal or curved surfaces, without using additional phase-shifting structures. Therefore, the performance requirements of the backend phase-shifting network in active phased arrays can be reduced. To verify the effectiveness of the proposed method, a 1×4 circularly polarized active-integrated linear array (Fig. 9) is designed and demonstrated. Based on channel-level impedance calculations (Fig. 6) and an analysis of the antenna-element impedance characteristics (Fig. 8), a phase gradient of 38° between adjacent channels is synthesized and applied to the circularly polarized active-integrated array. Without degrading the circular polarization performance and without external phase-shifting circuitry, the initial beam direction of the active-integrated phased array is shifted to the desired angle of θ0 = 12° (Fig. 13). The phase self-compensation design does not degrade the beam-scanning capability of the array. After an additional phase gradient is applied for beam steering, the array achieves a scanning range of up to 50°. 
The gain reduction remains within 2 dB relative to the initial pointing direction, and the axial ratio remains below 4 dB throughout the scanning range.  Conclusions  Within the framework of active-integrated arrays, this work uses the phase-tuning effect produced by the complex impedance at the antenna port when the radiating element is directly matched to the active transistor. A desired phase-gradient distribution can therefore be synthesized among the channels of an active-integrated phased array within an achievable range. This capability enables compensation for required phase distributions, such as preset beam direction and path-length equalization in conformal-array applications, without relying on additional phase shifters. Therefore, the complexity and performance requirements of the backend phase-shifting circuitry are reduced. The effectiveness of the proposed method is validated through a multi-channel circularly polarized active-integrated phased-array prototype with a preset beam direction. Both full-wave simulations and experimental measurements confirm that the phase self-compensation mechanism provides the required initial beam pointing while preserving beam-scanning capability and polarization performance. This study provides a new approach for the design of high-efficiency next-generation active-integrated phased arrays.
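The mapping from antenna load impedance to channel phase can be illustrated with a simplified two-port sketch. Assuming a matched source and a unilateral transistor (s12 ≈ 0), the forward wave delivered to the load is s21/(1 − s22·Γ_L); this is a textbook simplification for illustration only, not the paper's full constant-gain-circle analysis.

```python
import numpy as np

def channel_phase(s21, s22, z_load, z0=50.0):
    """Phase of an active channel whose transistor drain drives a load of
    complex impedance z_load directly (simplified unilateral model).

    gamma_L = (z_load - z0) / (z_load + z0) is the load reflection
    coefficient; the channel transmission is s21 / (1 - s22 * gamma_L),
    so varying z_load shifts the channel's phase response.
    """
    gamma_l = (z_load - z0) / (z_load + z0)
    return np.angle(s21 / (1 - s22 * gamma_l))
```

Sweeping `z_load` along a constant-gain circle traces the achievable phase range of one channel, which is the mapping the design procedure exploits when assigning per-element input impedances.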
Mamba-YOWO: An Efficient Spatio-Temporal Representation Framework for Action Detection
MA Li, XIN Jiangbo, WANG Lu, DAI Xinguan, SONG Shuang
Available online  , doi: 10.11999/JEIT251124
Abstract:
  Objective  Spatio-temporal action detection aims to localize and recognize action instances in untrimmed videos. This task is essential for applications such as intelligent surveillance and human–computer interaction. Existing methods, particularly those based on 3D convolutional neural networks (3D CNNs) or Transformers, often face difficulty balancing computational cost and the ability to model long-range temporal dependencies. The YOWO series provides efficient detection but relies on 3D convolutions with limited receptive fields. The Mamba architecture, based on a Selective State Space Model (SSM) with linear computational complexity, has shown strong capability for long-sequence modeling. This study integrates Mamba into the YOWO framework to improve temporal modeling efficiency and representation ability while reducing computational cost, addressing the limited application of Mamba in spatio-temporal action detection.  Methods  The proposed Mamba-YOWO framework is built on the lightweight YOWOv3 architecture. It adopts a dual-branch heterogeneous design for feature extraction. The 2D branch, derived from YOLOv8 with CSPDarknet and PANet structures, processes keyframes to extract multi-scale spatial features. The temporal branch replaces conventional 3D convolutions with a hierarchical architecture composed of a Stem layer and three stages (Stage1–Stage3). Stage1 and Stage2 apply Patch Merging for spatial downsampling and stack Decomposed Bidirectionally Fractal Mamba (DBFM) blocks. The DBFM block employs a bidirectional Mamba structure to capture temporal dependencies in both past-to-future and future-to-past directions. A Spatio-Temporal Interleaved Scan (STIS) strategy is introduced within the DBFM block. This strategy combines bidirectional temporal scanning with spatial Hilbert quad-directional scanning, enabling serialized video representation while maintaining spatial locality and temporal consistency. 
Stage3 applies 3D average pooling to compress temporal features. An Efficient Multi-scale Spatio-Temporal Fusion (EMSTF) module is designed to integrate features from the 2D and temporal branches. This module applies group convolution–guided hierarchical interaction for preliminary fusion and a parallel dual-branch structure for refined fusion, generating an adaptive spatio-temporal attention map. A lightweight detection head with decoupled classification and regression subnetworks produces the final action tubes.  Results and Discussions  Extensive experiments were conducted on the UCF101-24 and JHMDB datasets. Compared with the YOWOv3/L baseline on UCF101-24, Mamba-YOWO achieved a Frame-mAP of 90.24% and a Video-mAP@0.5 of 60.32%, which correspond to improvements of 2.1% and 6.0%, respectively (Table 1). These improvements were obtained while reducing model parameters by 7.3% and computational cost (GFLOPs) by 5.4%. On JHMDB, Mamba-YOWO achieved a Frame-mAP of 83.2% and a Video-mAP@0.5 of 86.7% (Table 2). Ablation experiments verified the contribution of key components. The optimal number of DBFM blocks in Stage2 was four, whereas additional blocks reduced performance, likely due to overfitting (Table 3). The proposed STIS scanning strategy achieved higher accuracy than 1D-Scan, Selective 2D-Scan, and Continuous 2D-Scan (Table 4), which indicates that joint modeling of temporal consistency and spatial structure improves representation quality. The EMSTF module also outperformed other fusion methods, including CFAM, EAG, and EMA (Table 5), which shows its stronger ability to integrate heterogeneous features. These results indicate that the Mamba-based temporal branch effectively models long-range dependencies with linear complexity, whereas the EMSTF module improves multi-scale spatio-temporal feature integration.  
Conclusions  This study proposes Mamba-YOWO, an efficient spatio-temporal action detection framework that integrates the Mamba architecture into YOWOv3. The model replaces conventional 3D convolutions with a DBFM-based temporal branch that incorporates the STIS scanning strategy, which improves long-range temporal modeling with linear computational complexity. The EMSTF module further improves feature representation through group convolution and dynamic gating mechanisms. Experimental results on UCF101-24 and JHMDB show that Mamba-YOWO achieves higher detection accuracy, such as 90.24% Frame-mAP on UCF101-24, whereas model parameters and computational cost are reduced. Future work will examine the theoretical mechanism of Mamba for temporal modeling, extend its capability to longer video sequences, and support lightweight deployment on edge devices.
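The serialization idea behind the STIS strategy can be sketched with a simpler stand-in: a boustrophedon (snake) spatial scan per frame in place of the Hilbert scan, combined with forward and backward frame orders for the bidirectional branch. The `(T, H, W)` token layout and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def interleaved_scan(x):
    """Serialize a (T, H, W) token grid for bidirectional SSM processing.

    Within each frame, rows are traversed in alternating direction
    (snake scan) to keep spatially adjacent tokens adjacent in the
    sequence; the frame order is then taken forward (past-to-future)
    and backward (future-to-past) for the two Mamba directions.
    """
    t, h, w = x.shape
    rows = [x[:, i, ::-1] if i % 2 else x[:, i, :] for i in range(h)]
    per_frame = np.concatenate(rows, axis=1)   # (T, H*W) in snake order
    forward = per_frame.reshape(-1)            # frames 0 .. T-1
    backward = per_frame[::-1].reshape(-1)     # frames T-1 .. 0
    return forward, backward
```

The snake scan already shows the key property the Hilbert variant strengthens: consecutive sequence positions stay spatially close, so the state-space model's recurrence sees locally coherent context.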
Data-Driven Secure Control for Cyber-Physical Systems under Denial-of-Service Attacks: An Online Mode-Dependent Switching-Q-Learning Algorithm
ZHANG Ruifeng, YANG Rongni
Available online  , doi: 10.11999/JEIT250746
Abstract:
  Objective   The open network architecture of Cyber-Physical Systems (CPSs) enables flexibility and scalability, but also increases vulnerability to cyber-attacks. In particular, Denial-of-Service (DoS) attacks represent a predominant threat, causing packet loss and performance degradation by channel jamming. CPSs under dormant and active DoS attacks can be modeled as dual-mode switched systems with stable and unstable subsystems, respectively. Therefore, switched system theory provides a promising framework for secure control design with high degrees of freedom and reduced conservatism. However, exact modeling of practical CPSs remains difficult due to attacks and noise. Although Q-learning-based control shows potential for unknown CPSs, a critical gap persists for switched systems with unstable modes, especially in establishing an evaluable stability criterion. Hence, learning-based secure control design and an evaluable security criterion for unknown CPSs under DoS attacks remain open problems.  Methods   An online mode-dependent switching-Q-learning algorithm is proposed to study data-driven secure control and an evaluable criterion for unknown CPSs under DoS attacks. First, CPSs under dormant and active DoS attacks are transformed into switched systems with stable and unstable subsystems, respectively. Then, the optimal control problem of the value function is addressed for model-based switched systems by constructing a Generalized Switching Algebraic Riccati Equation (GSARE) and deriving the corresponding mode-dependent optimal security controller. The existence and uniqueness of the GSARE solution are proved. Based on these results, a data-driven optimal security control law is developed through a novel online mode-dependent switching-Q-learning algorithm. 
Finally, by using the learned control gains and parameter matrices, a data-driven evaluable security criterion related to attack frequency and duration is established under switching and subsystem constraints.  Results and Discussions   Comparative experiments using a wheeled robot are conducted to verify the efficiency and advantages of the proposed methods. First, comparison between the model-based result (Theorem 1) and the data-driven result (Algorithm 1) shows that the optimal control gains and parameter matrices under threshold errors are successfully obtained from both the GSARE and the proposed learning algorithm, as indicated by the iterative curves (Fig. 2 and Fig. 3). Meanwhile, the tracking errors of the CPS converge to zero under the proposed data-driven controller (Fig. 5), ensuring exponential stability and verifying algorithm effectiveness. Second, the learning process curves (Fig. 4) show that although the initial learned control gain is not stabilizing, Algorithm 1 still converges to an optimal stabilizing gain. This result reduces conservatism compared with existing Q-learning approaches that require stabilizing initial gains. Third, comparison between the proposed data-driven evaluable security criterion (Theorem 2) and existing criteria shows that, even when the learned switching parameters do not satisfy conventional dwell-time constraints, the proposed criterion yields attack frequency and duration bounds under new switching and subsystem constraints. As shown in Table 1, the proposed criterion is less conservative than existing evaluable criteria. Finally, applying the learned controller and obtained DoS constraints to robot tracking control demonstrates faster and more accurate trajectory tracking compared with existing Q-learning controllers (Fig. 6 and Fig. 7), confirming the advantages of the proposed approach.  
Conclusions   Based on switched system theory and learning-based control, an online mode-dependent switching-Q-learning algorithm and a corresponding evaluable security criterion are presented for unknown CPSs under DoS attacks. (1) By representing CPSs under dormant and active DoS attacks as switched systems with stable and unstable subsystems, respectively, the security problem is transformed into a stabilization problem with increased design freedom and reduced conservatism. (2) A novel online mode-dependent switching-Q-learning algorithm is developed for unknown switched systems with unstable modes, and comparative experiments show reduced conservatism relative to existing Q-learning methods. (3) A data-driven evaluable security criterion is established to characterize attack frequency and duration under switching and subsystem constraints, demonstrating lower conservatism than existing criteria based on single-subsystem or dwell-time constraints.
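As a model-based reference point for the learning step described above, the sketch below computes a mode-dependent LQR gain for each subsystem by iterating the discrete-time Riccati recursion independently per mode. The paper's coupled GSARE and its data-driven Q-learning counterpart are replaced here by this simpler per-mode computation, purely for illustration.

```python
import numpy as np

def mode_dependent_gains(modes, Q, R, iters=200):
    """Per-mode discrete-time Riccati iteration.

    modes: list of (A_i, B_i) pairs, one per subsystem (a mode may be
    open-loop unstable, as under an active DoS attack, provided
    (A_i, B_i) is stabilizable). Returns one state-feedback gain K_i
    per mode such that u = -K_i x stabilizes that subsystem.
    """
    gains = []
    for A, B in modes:
        P = np.eye(A.shape[0])
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)        # Riccati recursion
        gains.append(K)
    return gains
```

In the data-driven setting of the paper, the role of (A, B) is taken over by Q-function parameter matrices estimated online from trajectory data, but the fixed point being approximated is the same Riccati-type equation.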
Aperiodic Total Squared Ambiguity Function: Theoretical Bounds for Binary Sequence Sets and Optimal Constructions
WEI Wenbo, SHEN Bingsheng, YANG Yang, ZHOU Zhengchun
Available online  , doi: 10.11999/JEIT251327
Abstract:
  Objective  In direct-sequence code division multiple access systems, the performance of spreading sequence sets is commonly evaluated using the total squared correlation metric. However, traditional metrics are of limited scope: total squared correlation applies only to synchronous communication systems, and aperiodic total squared correlation only to asynchronous systems with time shifts alone. In modern high-speed mobile and satellite communications, the Doppler effect becomes significant, causing both time and Doppler shifts in the received signal and leading to severe signal distortion. When only time shifts are present, the one-dimensional correlation function suffices to measure system interference; once Doppler shifts arise during transmission, time shift and Doppler shift must be considered jointly, and the two-dimensional ambiguity function should replace the one-dimensional correlation function. To mitigate Doppler effects, recent studies have focused on the design of Doppler-resilient sequences for mobile channels. Existing work mainly studies theoretical bounds on the ambiguity function, particularly the maximum ambiguity magnitude, and then constructs sequence sets that achieve or asymptotically approach these bounds. This study instead examines the overall ambiguity-function performance of binary sequence sets in asynchronous communication, namely the Aperiodic Total Squared Ambiguity Function (ATSAF). The objectives are twofold. First, the theoretical lower bound for the ATSAF of binary sequence sets is derived. Second, several classes of optimal binary sequence sets that achieve this bound are constructed based on the derived ATSAF bound.  
Methods  The aperiodic time-phase cycling extension matrix \begin{document}$ {\boldsymbol{S}}_{a} $\end{document} is defined for a binary sequence set \begin{document}$ \boldsymbol{S} $\end{document} consisting of \begin{document}$ K $\end{document} sequences of length \begin{document}$ L $\end{document} to account for both time shifts and Doppler shifts. This definition converts the computation of the ATSAF for the sequence set \begin{document}$ \boldsymbol{S} $\end{document} into the calculation of the total squared correlation of the matrix \begin{document}$ {\boldsymbol{S}}_{a} $\end{document}. The theoretical lower bounds for the ATSAF of the binary sequence set \begin{document}$ \boldsymbol{S} $\end{document} are then derived for different combinations of the set size \begin{document}$ K $\end{document}, sequence length \begin{document}$ L $\end{document}, and Doppler shift \begin{document}$ V $\end{document}. To design binary sequence sets that achieve these ATSAF lower bounds, it is first proven that binary aperiodic complementary sets form ATSAF-optimal binary sequence sets. Furthermore, two additional classes of optimal binary sequence sets are constructed using Hadamard matrices and specific sequences. These sets are proven to achieve the theoretical ATSAF lower bound.  Results and Discussions  Existing studies mainly examine the maximum ambiguity magnitude of sequence sets, whereas this study analyzes the overall ambiguity function performance. The one-dimensional aperiodic total squared correlation analysis for asynchronous communication with delay only, studied by Ganapathy et al., is extended to the two-dimensional ATSAF, which considers both time delay and Doppler shift. First, the aperiodic time-phase cycling extension matrix \begin{document}$ {\boldsymbol{S}}_{a} $\end{document} is defined for a binary sequence set \begin{document}$ \boldsymbol{S} $\end{document} (Definition 3). 
The theoretical lower bounds for the ATSAF of the binary sequence set \begin{document}$ \boldsymbol{S} $\end{document} are then derived for different parameters, including set size \begin{document}$ K $\end{document}, sequence length \begin{document}$ L $\end{document}, and Doppler shift \begin{document}$ V $\end{document} (Theorem 1). When the Doppler shift \begin{document}$ V=1 $\end{document}, the derived ATSAF bound reduces to the aperiodic total squared correlation bound. Binary sequence sets that achieve these ATSAF bounds maintain the overall cross-interference energy in the two-dimensional delay-Doppler domain at its theoretical minimum. To construct such sequence sets, it is first proven that binary aperiodic complementary sets are ATSAF-optimal binary sequence sets (Theorem 2). In addition, two further classes of ATSAF-optimal binary sequence sets are constructed using Hadamard matrices and specific sequences (Theorems 3 and 4). Finally, an example demonstrates that the sequence set constructed in Theorem 4 is ATSAF-optimal (Example 1).  Conclusions  In high-speed mobile communication scenarios, Doppler effects cause distortion in received signals. By defining the aperiodic time-phase cycling extension matrix \begin{document}$ {\boldsymbol{S}}_{a} $\end{document} for a binary sequence set \begin{document}$ \boldsymbol{S} $\end{document}, the theoretical lower bound for the ATSAF is derived. This bound specifies the minimum theoretical value of the total energy of the binary sequence set \begin{document}$ \boldsymbol{S} $\end{document} in the two-dimensional delay-Doppler domain. When Doppler shifts are not considered, the derived ATSAF bound reduces to the aperiodic total squared correlation bound. Furthermore, three classes of ATSAF-optimal binary sequence sets that achieve this theoretical bound are constructed using binary aperiodic complementary sets, Hadamard matrices, and specific sequences. 
These sequence sets maintain the overall cross-interference energy at the theoretical minimum in the two-dimensional delay-Doppler domain.
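The ATSAF described above can be sketched numerically. The brute-force computation below assumes one common convention for the aperiodic ambiguity function (the sign of the Doppler phase term and the normalization may differ from the paper's exact definition); binary sequences are represented as lists of ±1.

```python
import cmath

def aperiodic_af(u, v, tau, f, V):
    """Aperiodic cross-ambiguity of sequences u, v at delay tau and
    Doppler bin f out of V (one common convention; normalization and
    sign conventions are assumptions, not the paper's definition)."""
    L = len(u)
    acc = 0j
    for t in range(L):
        if 0 <= t + tau < L:
            acc += u[t] * v[t + tau].conjugate() * cmath.exp(2j * cmath.pi * f * t / V)
    return acc

def atsaf(seqs, V):
    """Total squared ambiguity: sum of |ambiguity|^2 over all ordered
    sequence pairs, all aperiodic delays, and all V Doppler bins."""
    L = len(seqs[0])
    return sum(abs(aperiodic_af(u, v, tau, f, V)) ** 2
               for u in seqs for v in seqs
               for tau in range(-(L - 1), L) for f in range(V))
```

With `V = 1` the Doppler loop collapses and the computation reduces to the aperiodic total squared correlation, matching the reduction noted in the abstract.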
A Testability Evaluation Method Based on Reconvergent Fan-Out
WU Wenjun, LIANG Huaguo, YOU Chang, DOU Xianrui, XIAO Jiahui, LU Yingchun
Available online  , doi: 10.11999/JEIT251286
Abstract:
  Objective  As the scale and structural complexity of integrated circuits continue to increase, accurate testability evaluation becomes essential for Trojan detection, fault diagnosis, and test-point optimization in modern Design-for-Testability (DFT) flows. Metrics such as controllability, observability, and fault coverage depend on reliable probabilistic modeling of signal propagation. However, existing analytical and learning-based approaches often lose accuracy in circuits with dense Reconvergent Fan-Out (RFO) structures, where strong signal correlation invalidates classical independence assumptions and causes substantial estimation bias. Although several enhanced techniques attempt to incorporate structural information, many have high computational cost or limited scalability in deeper or highly reconvergent logic networks. This work addresses these limitations by proposing a testability evaluation method that incorporates RFO structural characteristics to improve modeling accuracy while maintaining practical computational efficiency.  Methods  The proposed approach starts with a structural analysis algorithm that identifies RFO regions through topological traversal of the circuit. A dedicated RFO recognition mechanism maps each root fan-out node to its corresponding RFO nodes, capturing the structural dependencies that govern correlated signal behavior and providing the basis for accurate probabilistic modeling. Building on this structural extraction, a weighted conditional probability model is formulated to correct testability distortion in reconvergent regions. Unlike previous optimization schemes, the weighting strategy assigns influence-based weights derived from the contribution of each root node to the target node, yielding probability estimates that more accurately reflect actual testability behavior. 
An efficient computational framework is also developed to integrate conditional probability propagation and weight selection into a single topological traversal process, thereby maintaining low algorithmic complexity while improving accuracy.  Results and Discussions  The proposed method is evaluated on representative benchmark circuits from the ISCAS-85, ISCAS-89, ITC’99, and EPFL suites. Performance is assessed in terms of controllability accuracy, ordering consistency, fault coverage estimation, and runtime efficiency. For controllability prediction, the method achieves an average RMSE of 0.0568, which corresponds to an average reduction of 25% relative to existing techniques, as reported in Table 2. Ordering consistency also improves, with the average Spearman correlation coefficient reaching 0.935, outperforming existing techniques. Fault coverage estimation shows similarly strong performance, with an average relative error of 3.64%, which is lower than that of previously reported methods, as shown in Table 1. Runtime analysis further indicates that the proposed framework maintains practical computational efficiency. Across all benchmark circuits, the method achieves an average speedup of 7× while preserving high accuracy, as illustrated in Figure 5.  Conclusions  This work addresses the degradation in testability evaluation accuracy caused by RFO structures in integrated circuits by proposing a reconvergent-fan-out-aware testability analysis method. The presented RFO structure identification algorithm extracts reconvergent information at the topological level and establishes explicit mappings between root nodes and RFO nodes. On this structural basis, a weighted conditional probability model is constructed to mitigate probability distortion induced by signal correlation in RFO regions. An efficient computational framework is further developed to integrate the full computation into a streamlined traversal-based process. 
Experimental results show that the proposed technique closely matches simulation-based ground truth in both controllability (low RMSE) and ordering consistency. In fault coverage estimation, the predicted values likewise agree closely with simulation results. While maintaining high accuracy, the method also incurs low computational overhead.
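The bias that reconvergent fan-out introduces into independence-based probability propagation can be seen on a minimal invented circuit (not taken from the paper, and far simpler than its weighted conditional model):

```python
# Minimal reconvergent-fan-out (RFO) example, invented for illustration:
# signal a fans out into two branches, a and NOT(a), which reconverge at
# an AND gate, so y = a AND NOT(a) is constantly 0. A propagation rule
# that treats the branches as independent predicts p(y) = p(a)*(1-p(a)).

def independent_estimate(p_a):
    """Signal-probability propagation ignoring branch correlation."""
    return p_a * (1.0 - p_a)

def exact_probability():
    """Exact p(y = 1) by enumerating the single input a."""
    return sum(a & (1 - a) for a in (0, 1)) / 2.0

biased = independent_estimate(0.5)   # predicts 0.25
exact = exact_probability()          # true value is 0.0
```

The gap between the two values is exactly the kind of estimation bias that the RFO-aware weighted conditional probability model above is designed to correct.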
Construction of Entanglement-Assisted Quantum MDS Codes
QU Yuanyue, GAO Jian
Available online  , doi: 10.11999/JEIT251251
Abstract:
  Objective  Entanglement-assisted quantum error-correcting codes (EAQECCs) provide a powerful mechanism for protecting quantum information through the use of pre-shared entanglement between sender and receiver. Traditional constructions of EAQECCs mainly rely on classical cyclic or constacyclic codes and often require strong algebraic constraints that limit the range of achievable parameters. This paper aims to develop a general and systematic framework for constructing new families of EAQECCs derived from twisted Reed–Solomon (TRS) codes over finite fields. The motivation is twofold: first, to extend the classical Reed–Solomon-based code design to its twisted form so as to capture richer algebraic structures; and second, to determine the exact number of maximally entangled pairs required for achieving the quantum Singleton bound. The ultimate goal is to produce maximum-distance separable (MDS) EAQECCs that outperform existing constructions in flexibility and parameter diversity.  Methods  The proposed method begins with the definition of TRS codes over finite fields, which introduce a “twist” parameter into the generator matrix, thereby altering the structure of their parity-check matrices. By systematically analyzing the associated coset-sum matrices corresponding to the twisted and untwisted cases, the rank of their product is determined. This rank directly equals the number of required entangled states, which forms the theoretical basis of the EAQECC design. A detailed algebraic analysis shows that this product contains a submatrix with entries \begin{document}$ {M}_{l,j}=\displaystyle\sum\nolimits_{y\in W}{\left({\xi }^{j}y\right)}^{tl} $\end{document}, which simplifies under certain group-theoretic conditions. The resulting matrix is a Vandermonde matrix, which ensures full rank and thus provides an explicit characterization of the entanglement structure. This establishes the rank-preserving property crucial to constructing MDS EAQECCs. 
Based on these results, we derive two families of EAQECCs characterized by the number of entangled pairs. The corresponding parameters are tabulated and shown to satisfy the quantum Singleton bound with equality, confirming the MDS nature of the constructed codes.  Results and Discussions  Comprehensive parameter analyses and explicit examples verify the theoretical findings. Comparative studies further demonstrate the flexibility of the proposed framework. Unlike previous constructions that require divisibility conditions such as \begin{document}$ a\mid (q+1) $\end{document} and \begin{document}$ a\mid (q-1) $\end{document}, our approach remains valid under broader algebraic configurations, thereby significantly extending the feasible range of code parameters. This difference is conceptually summarized in the remark section and verified numerically. A systematic comparison of our results with existing MDS EAQECCs (Table 4) reveals several new parameter regimes previously inaccessible to classical or cyclic-code-based constructions. In particular, our method yields larger code lengths and more adaptable entanglement consumption rates \begin{document}$ \dfrac{c}{n} $\end{document}, improving both the efficiency and generality of EAQECCs. The algebraic consistency across all tested cases confirms the correctness and universality of the TRS-based framework.  Conclusions  This study establishes a comprehensive algebraic framework for constructing MDS EAQECCs derived from twisted Reed–Solomon codes. By rigorously analyzing the rank properties of coset-sum matrices, we precisely determine the entanglement requirement and identify conditions under which the constructed codes achieve the quantum Singleton bound. 
Two broad classes of MDS EAQECCs are obtained, corresponding to \begin{document}$ a\mid \left(q+1\right) $\end{document} and \begin{document}$ a\mid \left(q-1\right) $\end{document}, respectively, both verified through explicit examples and tabulated results. Compared with existing work, the proposed approach not only generalizes prior constructions but also extends the achievable parameter space to cases not covered by Reed–Solomon or cyclic-code frameworks. The derived codes exhibit improved structural flexibility, theoretical clarity, and potential applicability to high-performance quantum information systems. This work thus provides a novel and unified perspective for developing algebraically optimized EAQECCs, laying the foundation for future research on TRS-based quantum code families and their efficient encoding implementations.
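The full-rank argument above rests on the relevant submatrix being Vandermonde with distinct evaluation points. A small exact-arithmetic check with illustrative integer nodes (not the paper's actual field elements) sketches why distinct nodes force a nonzero determinant and hence full rank:

```python
from fractions import Fraction

def vandermonde(nodes):
    """Square Vandermonde matrix with entries V[l][j] = nodes[j] ** l."""
    n = len(nodes)
    return [[Fraction(x) ** l for x in nodes] for l in range(n)]

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= factor * M[i][c]
    return d

# Distinct nodes -> nonzero determinant -> full rank.
assert det(vandermonde([1, 2, 3, 4])) != 0
```

With a repeated node the determinant vanishes, which is why distinctness of the evaluation points is the operative condition in the rank analysis.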
Multi-Scale Deformable Alignment-Aware Bidirectional Gated Feature Aggregation for Stereoscopic Image Generation from a Single Image
ZHANG Chunlan, QU Yuwei, NIE Lang, LIN Chunyu
Available online  , doi: 10.11999/JEIT250760
Abstract:
  Objective  The generation of stereoscopic images from a single image usually relies on depth as a prior, which often leads to geometric misalignment, occlusion artifacts, and texture blurring. Recent studies have therefore shifted toward end-to-end learning of alignment transformation and rendering within the image or feature domain. By adopting a content-based feature transformation and alignment mechanism, high-quality novel images can be generated without explicit geometric information. However, three main challenges remain. First, fixed convolution has limited ability to model large-scale geometric and disparity changes, which restricts feature alignment performance. Second, texture and structural information are tightly coupled in network representations, and hierarchical modeling and dynamic fusion mechanisms are often absent. This limitation makes it difficult to preserve fine details while maintaining semantic consistency. Third, existing supervision strategies mainly focus on reconstruction errors and provide limited constraints on the intermediate alignment process, which reduces the efficiency of cross-view feature consistency learning. To address these challenges, a Multi-Scale Deformable Alignment-Aware Bidirectional Gated Feature Aggregation network is proposed for stereoscopic image generation from a single image.  Methods  First, to address image misalignment and distortion caused by the inability of fixed convolution to adapt to geometric deformation and disparity changes, a Multi-Scale Deformable Alignment (MSDA) module is proposed. This module employs multi-scale deformable convolution to adaptively adjust sampling positions based on image content, enabling effective alignment between source and target features across different scales. 
Second, to address texture blurring and structural distortion in synthesized images, a feature decoupling strategy is adopted to guide shallow layers to learn texture information and deeper layers to model structural information. A Texture-Structure Bidirectional Gating Feature Aggregation (Bi-GFA) module is designed to achieve dynamic complementarity and efficient fusion of texture and structural features. Third, to improve cross-view feature alignment accuracy, a Learnable Alignment-Guided Loss (LAG) function is proposed. This loss guides the alignment network to adaptively refine the offset field at the feature level, thereby improving the fidelity and semantic consistency of the synthesized images.  Results and Discussions  This study focuses on scene-level image synthesis from a single image. Quantitative results show that the proposed method performs better than all compared methods in terms of PSNR, SSIM, and LPIPS. The method also maintains stable performance across different dataset sizes and scene complexities, indicating strong generalization ability and robustness (Tab. 1 and Tab. 2). Qualitative comparisons indicate that the generated images are visually closest to the ground-truth images and exhibit high overall sharpness and detail fidelity. In the outdoor KITTI dataset, pixel alignment errors of foreground objects are effectively reduced (Fig. 4). In indoor scenes, facial and hair textures are clearly reconstructed. High-frequency regions, such as champagne towers and balloon edges, present sharp contours and accurate color reproduction without visible artifacts or blurring. Both global illumination and local structural details are well preserved, producing high perceptual quality (Fig. 5). Ablation experiments further confirm the effectiveness of the proposed MSDA, Bi-GFA, and LAG modules (Tab. 3).  
Conclusions  A Multi-Scale Deformable Alignment-Aware Bidirectional Gated Feature Aggregation network is proposed to address strong dependence on ground-truth depth, geometric misalignment and distortion, texture blurring, and structural distortion in stereoscopic image generation from a monocular image. The MSDA module improves the flexibility and accuracy of cross-view feature alignment. The Texture-Structure Bi-GFA module enables complementary fusion of texture details and structural information. The LAG further refines offset field estimation and improves the fidelity and semantic consistency of the synthesized images. Experimental results show that the proposed method performs better than existing advanced methods in structural reconstruction, texture clarity, and viewpoint consistency, while maintaining strong generalization ability and robustness. Future work will examine the effect of different depth estimation strategies on system performance and investigate more efficient network architectures and model compression methods to reduce computational cost and support real-time stereoscopic image generation.
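A scalar toy version of bidirectional gated aggregation sketches how each stream can gate the other before fusion. The gate form, weights, and inputs below are invented for illustration; this is not the paper's Bi-GFA module.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bi_gated_fusion(texture, structure, w_t=1.0, w_s=1.0):
    """Toy scalar sketch of bidirectional gated aggregation: each stream
    produces a gate for the other, then the gated features are summed.
    Illustrative stand-in, not the paper's Bi-GFA design."""
    gate_for_texture = [sigmoid(w_t * s) for s in structure]    # structure gates texture
    gate_for_structure = [sigmoid(w_s * t) for t in texture]    # texture gates structure
    return [gt * t + gs * s
            for gt, gs, t, s in zip(gate_for_texture, gate_for_structure,
                                    texture, structure)]

fused = bi_gated_fusion([1.0, -2.0], [0.5, 0.5])
```

The point of the bidirectional form is that neither stream is fixed as dominant: the mixing ratio at each position is computed dynamically from both the texture and the structure features.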
TTSPD: A Multimodal Traffic Scene Perception Dataset Integrating Tire Data
YING Zongchen, GUI Lin, YANG Jiahan, ZHANG Fangwei, WANG Junfan, DONG Zhekang
Available online  , doi: 10.11999/JEIT260022
Abstract:
  Objective  With the rapid development of Intelligent Transportation Systems (ITS) and autonomous driving technologies, accurate traffic environment perception is a fundamental prerequisite for vehicle safety and decision making. Current perception frameworks primarily rely on high-resolution cameras and LiDAR sensors. Although these sensors provide rich information, they create severe challenges across the Perception-Storage-Calculation pipeline. High acquisition costs limit large-scale deployment. In addition, the massive data volume produced by high-dimensional sensors places heavy pressure on onboard storage and computational resources, often exceeding the power and thermal budgets of vehicle-grade edge platforms. These constraints motivate the exploration of alternative sensing paradigms that are cost-effective, compact, and computationally efficient while maintaining reliable perception accuracy. In response, the present study shifts the perception perspective from conventional external sensors to the tire-road contact interface, where abundant physical interaction information naturally exists. The objective is to construct a novel multimodal dataset, termed the Tire-integrated Traffic Scene Perception Dataset (TTSPD), which combines internal tire dynamics with external visual observations. This dataset is used to examine whether low-dimensional tire sensing data can complement or partially substitute high-dimensional visual data for accurate road surface classification. The study also aims to establish a new data morphology that balances perception performance and system efficiency for future intelligent vehicles.  Methods  To construct a high-quality and practically usable multimodal dataset, an integrated hardware-software acquisition framework is developed. From a hardware perspective, a specialized sensing system is designed by coupling tire-mounted multi-parameter sensors with a vehicle-mounted camera. 
To ensure reliable operation under the harsh mechanical conditions of a rotating tire, sensing nodes are encapsulated using a rubber-based composite material that provides mechanical protection and long-term stability. Wireless transmission is implemented using Bluetooth Low Energy (BLE) 5.0 with an adaptive frequency-hopping mechanism, enabling low-power and reliable communication during high-speed rotation. During data acquisition, the system synchronously collects six types of internal tire signals, including radial acceleration, tire temperature, and tire pressure, producing approximately 1.8 million sampling points. In parallel, a dashboard-mounted camera records high-resolution traffic scene images totaling 309 GB across four representative road surface conditions. To address the heterogeneity between high-frequency one-dimensional tire signals and two-dimensional visual data, a timestamp-based association strategy is adopted to achieve scene-level temporal alignment rather than strict frame-by-frame correspondence. Sensor sequences and image segments are grouped according to shared temporal windows and driving scenarios. This approach ensures semantic and temporal consistency at the scene level. The alignment strategy reflects practical deployment conditions and forms the basis of the final TTSPD dataset for multimodal fusion research.  Results and Discussions  The effectiveness of the proposed TTSPD is evaluated through comprehensive road surface classification experiments using mainstream deep learning models. Initial experiments based solely on visual data demonstrate strong baseline performance, with classification accuracies ranging from 87.25% to 93.75% (Table 7). These results confirm the quality and diversity of the visual modality in the dataset. The primary contribution of this study is the quantification of efficiency gains enabled by tire-based sensing. 
Comparative experiments progressively reduce the amount of visual data while integrating low-dimensional tire signals, particularly radial acceleration (Table 9). The results show that the multimodal model achieves approximately 95% of the full-data baseline accuracy while using only about 38.75% of the original data volume. This reduction in data dependency produces significant system-level benefits. Storage requirements decrease by approximately 61.25%, and overall model training time decreases by about 54.10% (Fig. 8). These findings indicate that tire dynamics encode high-value physical features related to road texture and surface conditions that complement visual cues. The proposed dataset therefore supports the development of lighter perception pipelines without reducing recognition performance.  Conclusions  This study addresses the long-standing Perception-Storage-Calculation bottleneck in vision-dominated autonomous driving systems by proposing the TTSPD. Multi-parameter sensors are embedded within tires using rubber-based encapsulation, and stable wireless communication is achieved through BLE 5.0. A robust tire-camera data acquisition system is therefore established. The resulting dataset covers four common and safety-critical road surface types: cement, asphalt, damaged, and water-covered roads. It provides a comprehensive foundation for multimodal perception research. Experimental results show that combining low-dimensional tire sensing data with visual information significantly improves perception efficiency. Approximately 95% of peak classification accuracy is achieved using only about 38.75% of the original data volume. This result effectively reduces storage pressure and computational cost, reflected in a 61.25% reduction in data storage and a 54.10% reduction in training time. The TTSPD dataset therefore establishes a practical data morphology that supports efficient and high-performance perception under vehicle-grade computational constraints. 
It also provides valuable resources for the future development of ITS.
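The timestamp-based, scene-level association described above can be sketched as follows. Sampling rates, window length, and field names are illustrative assumptions, not the dataset's actual parameters.

```python
def align_by_window(sensor_ts, frame_ts, window=1.0):
    """Group high-rate tire-sensor timestamps and camera-frame timestamps
    into shared temporal windows (scene-level alignment rather than strict
    frame-by-frame correspondence). Timestamps in seconds; the window
    length and structure are illustrative."""
    buckets = {}
    for t in sensor_ts:
        buckets.setdefault(int(t // window), {"sensor": [], "frames": []})["sensor"].append(t)
    for t in frame_ts:
        buckets.setdefault(int(t // window), {"sensor": [], "frames": []})["frames"].append(t)
    # Keep only windows observed by both modalities.
    return {k: v for k, v in buckets.items() if v["sensor"] and v["frames"]}

sensor = [0.01 * k for k in range(300)]   # ~100 Hz tire samples over 3 s
frames = [0.5 * k for k in range(6)]      # 2 Hz camera frames over 2.5 s
aligned = align_by_window(sensor, frames)
```

Each retained window then carries a many-to-few pairing of sensor samples and image frames that share a driving scenario, which is the unit of association used for multimodal fusion.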
Image Deraining Driven by CLIP Visual Embedding
SUN Jin, CUI Yuntong, TIAN Hongwei, HUANG Changcheng, WANG Jigang
Available online  , doi: 10.11999/JEIT251066
Abstract:
  Objective  Rain streaks introduce visual distortions that degrade image quality and significantly impair downstream vision tasks such as feature extraction and object detection. This work addresses the problem of single-image rain streak removal. Existing methods often rely heavily on restrictive priors or synthetic datasets. This dependence limits robustness and generalization because such data differ from complex and unstructured real-world scenarios. Contrastive Language-Image Pre-training (CLIP) demonstrates strong zero-shot generalization through large-scale image-text contrastive learning. Motivated by this property, this study proposes FCLIP-UNet, a visual-semantic-driven deraining architecture designed to improve rain removal and generalization in real-world rainy environments.  Methods  FCLIP-UNet adopts a U-Net encoder-decoder architecture and formulates deraining as pixel-level detail regression guided by high-level semantic features. During the encoding stage, textual queries are omitted. Instead, the first four layers of a frozen CLIP-RN50 are employed to extract robust features that are decoupled from rain distribution. These features exploit the semantic representation capability of CLIP to suppress diverse rain patterns. To guide accurate image restoration, a collaborative decoding architecture that integrates ConvNeXt-T and an Upsampling DepthWise Convolution Block (UpDWBlock) is adopted. The decoder employs ConvNeXt-T in place of conventional convolution modules to expand the receptive field and capture global contextual information. It parses rain streak patterns by using semantic priors extracted from the encoder. Under the constraint of these priors, UpDWBlock reduces information loss during upsampling and reconstructs fine-grained image details. Multi-level skip connections compensate for information loss introduced during encoding. 
In addition, a Layer-wise Differentiated Feature Perturbation Strategy (LDFPS) is incorporated to enhance robustness and adaptability in complex real-world rainy scenes.  Results and Discussions  Comprehensive evaluations are conducted on the Rain13K composite dataset by comparing the proposed model with ten state-of-the-art deraining algorithms. FCLIP-UNet shows consistently superior performance across all five testing subsets of Rain13K. In particular, the method outperforms the second-best approach on both datasets: on Test100 by 0.32 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.06 in Structural Similarity Index Measure (SSIM); on Test2800 by 0.14 dB and 0.002, respectively. On Rain100H and Rain100L, FCLIP-UNet achieves competitive results, including the best SSIM on Rain100H and comparable results on other metrics (Table 3). To evaluate model generalization, the Rain13K-pretrained FCLIP-UNet is further tested on three datasets with different rainfall distribution characteristics: SPA-Data, HQ-RAIN, and MPID (Table 4, Fig. 7). Qualitative and quantitative evaluations are also conducted on the real-world NTURain-R dataset (Table 5, Figs. 8\begin{document}$ \sim $\end{document}10). These results consistently demonstrate the strong generalization capability of FCLIP-UNet. Ablation experiments on Rain100H validate the proposed encoder design and confirm the effectiveness of both UpDWBlock and LDFPS (Tables 6\begin{document}$ \sim $\end{document}8). Additional ablation studies show that the use of LDFPS, combined with a 1:1 weighting ratio between L1 loss and perceptual loss, provides the best performance for FCLIP-UNet (Tables 9\begin{document}$ \sim $\end{document}11).  Conclusions  This study proposes FCLIP-UNet, a deraining network designed for real-world generalization by leveraging the CLIP paradigm. Three main contributions are presented. 
First, image deraining is formulated as a pixel-level regression task that reconstructs rain-free images from high-level semantic features. A frozen CLIP image encoder extracts representations that remain stable across different rain distributions, thereby reducing domain shifts caused by diverse rain models. Second, a decoder that integrates ConvNeXt-T with an UpDWBlock is designed, and an LDFPS is proposed to improve robustness to unseen rain distributions. Third, a composite loss function jointly optimizes pixel-level accuracy and perceptual consistency. Experiments on both synthetic and real-world rainy datasets show that FCLIP-UNet effectively removes rain streaks, preserves fine image details, and achieves strong deraining performance with reliable generalization capability.
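The 1:1 pixel/perceptual weighting found best in the ablation can be sketched with a stand-in feature extractor. Real perceptual losses compare features from a pretrained network; the first-difference features below are only a placeholder, and all values are illustrative.

```python
def l1_loss(pred, target):
    """Mean absolute error between two equal-length lists."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def gradient_features(x):
    """Stand-in for deep perceptual features: first differences."""
    return [b - a for a, b in zip(x, x[1:])]

def composite_loss(pred, target, w_pixel=1.0, w_perceptual=1.0):
    """Pixel-level L1 plus a feature-space L1, weighted 1:1 by default,
    mirroring the weighting reported best in the ablation."""
    perceptual = l1_loss(gradient_features(pred), gradient_features(target))
    return w_pixel * l1_loss(pred, target) + w_perceptual * perceptual

loss = composite_loss([0.0, 0.5, 1.0], [0.0, 0.6, 1.0])
```

The pixel term drives exact reconstruction while the feature term penalizes structural deviations that a pointwise loss alone under-weights; the two weights are the knobs varied in the ablation.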
Optimal Federated Average Fusion of Gaussian Mixture–Probability Hypothesis Density Filters
XUE Yu, XU Lei
Available online  , doi: 10.11999/JEIT250759
Abstract:
  Objective  To realize optimal decentralized fusion tracking of uncertain targets, this study proposes a federated average fusion algorithm for Gaussian Mixture–Probability Hypothesis Density (GM-PHD) filters, designed with a hierarchical structure. Each sensor node operates a local GM-PHD filter to extract multi-target state estimates from sensor measurements. The fusion node performs three key tasks: (1) maintaining a master filter that predicts the fusion result from the previous iteration; (2) associating and merging the GM-PHDs of all filters; and (3) distributing the fused result and several parameters to each filter. The association step decomposes multi-target density fusion into four categories of single-target estimate fusion. We derive the optimal single-target estimate fusion both in the absence and presence of missed detections. Information assignment applies the covariance upper-bounding theory to eliminate correlation among all filters, enabling the proposed algorithm to achieve the accuracy of Bayesian fusion. Simulation results show that the federated fusion algorithm achieves optimal tracking accuracy and consistently outperforms the conventional Arithmetic Average (AA) fusion method. Moreover, the relative reliability of each filter can be flexibly adjusted.  Methods  The multi-sensor multi-target density fusion is decomposed into multiple groups of single-target component merging through the association operation. Federated filtering is employed as the merging strategy, which achieves the Bayesian optimum owing to its inherent decorrelation capability. Section 3 rigorously extends this approach to scenarios with missed detections. To satisfy federated filtering’s requirement for prior estimates, a master filter is designed to compute the predicted multi-target density, thereby establishing a hierarchical architecture for the proposed algorithm. 
In addition, auxiliary measures are incorporated to compensate for the observed underestimation of cardinality.  Results and Discussions  Components belonging to the same target are accurately associated via the modified Mahalanobis distance (Fig. 3). The precise association and the single-target decorrelation capability together ensure the theoretical optimality of the proposed algorithm, as illustrated in Fig. 2. Compared with conventional density fusion, the Optimal Sub-Pattern Assignment (OSPA) error is reduced by 8.17% (Fig. 4). The advantage of adopting a small average factor for the master filter is demonstrated in Figs. 5 and 6. The effectiveness of the measures for achieving cardinality consensus is also validated (Fig. 7). Another competitive strength of the algorithm lies in the flexibility of adjusting the average factors (Fig. 8). Furthermore, the algorithm consistently outperforms AA fusion across all missed detection probabilities (Fig. 9).  Conclusions  This paper achieves theoretically optimal multi-target density fusion by employing federated filtering as the merging method for single-target components. The proposed algorithm inherits the decorrelation capability and single-target optimality of federated filtering. A hierarchical fusion architecture is designed to satisfy the requirement for prior estimates. Extensive simulations demonstrate that: (1) the algorithm can accurately associate filtered components belonging to the same target, thereby extending single-target optimality to multi-target fusion tracking; (2) the algorithm supports flexible adjustment of average factors, with smaller values for the master filter consistently preferred; and (3) the superiority of the algorithm persists even under sensor malfunctions and high missed detection rates. Nonetheless, this study is limited to GM-PHD filters with overlapping Fields Of View (FOVs). Future work will investigate its applicability to other filter types and spatially non-overlapping FOVs.
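The information-form merging at the heart of federated filtering can be sketched in the scalar case as follows. This is an illustrative reconstruction, not the paper's algorithm (which fuses full GM-PHD components): each local variance is inflated by its average factor (the decorrelation step), and the inflated estimates are then fused in information form.

```python
def federated_fuse(estimates, factors=None):
    """Information-form fusion of scalar Gaussian estimates.

    estimates: list of (mean, variance) pairs from the local filters.
    factors:   average factors (must sum to 1) used to inflate each
               local variance, as in federated filtering.
    """
    n = len(estimates)
    if factors is None:
        factors = [1.0 / n] * n              # equal average factors
    assert abs(sum(factors) - 1.0) < 1e-9
    # information of each inflated estimate: (p / beta)^-1 = beta / p
    infos = [beta / p for (_, p), beta in zip(estimates, factors)]
    fused_var = 1.0 / sum(infos)
    fused_mean = fused_var * sum(i * m for i, (m, _) in zip(infos, estimates))
    return fused_mean, fused_var
```

Because the inflation already accounts for the information shared among filters, fusing the inflated estimates recovers the global (Bayesian-optimal) estimate rather than over-counting common information.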
Multi-path Resource Allocation for Confidential Services Based on Network Coding and Fragmentation Awareness in EONs
LIU Huanlin, AN Dongxin, CHEN Yong, CHEN Haonan, MA Bing, ZOU Jiachen
Available online  , doi: 10.11999/JEIT251222
Abstract:
  Objective  Each fiber in Elastic Optical Networks (EONs) provides enormous bandwidth capacity and carries a large volume of services and data. If any element in EONs is eavesdropped on or attacked, even for a short period, a large amount of data may be leaked or lost, which significantly reduces network performance. Moreover, confidential services are increasingly sensitive to data leakage and loss during transmission. Network attacks may therefore compromise a large number of confidential services. Network Coding (NC) combines data from different services using the XOR operation and transmits the coded data through EONs. Decoding is then performed at the receiver to recover the original information, providing a potential method to mitigate data eavesdropping during transmission. However, NC requires encryption constraints in EONs. Specifically, the routing and Frequency Slot (FS) allocation of other services must overlap with those of the confidential service to be encrypted. Therefore, routing and spectrum allocation for confidential services should consider both NC constraints and the efficiency of resource allocation.  Methods  A Multi-path Resource Allocation based on Network Coding and Fragmentation Awareness (MRA-NCFA) method is proposed to support secure and reliable transmission of confidential services under eavesdropping attacks. First, the proposed method applies NC to encrypt service data and adopts multi-path protection to improve transmission reliability. Second, in the routing stage, different strategies are designed for confidential and non-confidential services. For non-confidential services, the objective is to balance network load and improve resource utilization. A path weight function based on path load is designed. This function considers path hop count, the maximum idle spectrum block on the path, and the required FS of the service. The path with the largest function value is selected as the transmission path. 
For confidential services, routing selection focuses on preventing information leakage while considering path resource availability. Therefore, a path cost function based on eavesdropping probability is designed, and a routing strategy that considers this probability is adopted. Finally, different resource allocation strategies are applied. For non-confidential services, the objective is to maximize spectrum efficiency. Spectrum fragmentation should be minimized to maintain resource continuity and consistency. Therefore, a fragmentation-aware spectrum allocation strategy is designed. A fragmentation measurement formula evaluates the effect of service allocation on link resources. For confidential services, encryption constraints and FS matching must be satisfied. Therefore, a spectrum allocation strategy based on FS and fragmentation sensing is designed. This strategy considers both the effect of spectrum fragments and the effect of established service resources, which improves transmission security for confidential services.  Results and Discussions  The proposed MRA-NCFA algorithm achieves the lowest service blocking probability (Fig. 2). During routing selection, both confidential and non-confidential services consider path resource conditions. During resource allocation, fragmentation effects are also considered, which preserves idle resources for subsequent services as much as possible. In addition, confidential services adopt a multi-path transmission method. Large services can be divided into multiple sub-services, which improves spectrum resource utilization. As the number of services increases, the spectrum utilization of the MRA-NCFA algorithm improves significantly. This improvement results from the multi-path transmission mechanism, which divides large services into smaller ones and allows efficient use of small spectrum fragments. 
In addition, both confidential and non-confidential services consider path resource quantity during routing and prefer paths with lower spectrum consumption. During resource allocation, fragmentation effects are considered to avoid generating new fragments, which improves spectrum utilization (Fig. 3). As the number of services increases, the proposed MRA-NCFA algorithm shows the slowest and smallest increase in spectrum fragmentation ratio compared with the other two algorithms. This result occurs because the algorithm combines multi-path transmission with fragmentation-aware resource allocation, which improves the utilization of small spectrum fragments and reduces fragmentation in EONs. Moreover, both confidential and non-confidential services consider fragmentation effects during resource allocation and apply strategies to reduce fragmentation. Therefore, the proposed algorithm performs better than the Survivable Multipath Fragmentation-Sensitive Fragmentation-Aware Routing and Spectrum Assignment (SM-FSFA-RSA) algorithm and the Network Coding-based Routing and Spectrum Allocation (NC-RSA) algorithm (Fig. 4).  Conclusions  This study examines resource allocation for services that require protection against eavesdropping attacks in elastic optical networks. The objective is to satisfy the security requirements of confidential services and reduce spectrum fragmentation. The proposed MRA-NCFA algorithm applies NC to encrypt confidential services and adopts multi-path protection to improve transmission reliability. For non-confidential services, a path weight function based on path resources is designed for routing selection, and fragmentation-aware spectrum metrics are used for resource allocation. For confidential services, a path cost function that considers both path resources and eavesdropping probability is designed for routing selection. 
A bandwidth segmentation strategy based on eavesdropping probability supports multi-path transmission, and an FS and fragmentation sensing function based on encryption constraints is used for spectrum allocation. These mechanisms improve both reliability and security for confidential services. As the number of security-sensitive services on the Internet increases, the proposed MRA-NCFA algorithm can effectively reduce traffic blocking probability and improve spectrum resource utilization.
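The NC encryption described above rests on the XOR operation: data from an overlapping service is XOR-combined with the confidential payload, and the receiver XORs again to decode. A minimal sketch (function names are illustrative, not from the paper):

```python
def nc_encode(confidential: bytes, partner: bytes) -> bytes:
    """XOR-combine a confidential payload with an overlapping service's
    data. Decoding is the same operation: XOR-ing the coded stream with
    the partner data recovers the confidential payload at the receiver.
    """
    if len(confidential) != len(partner):
        raise ValueError("NC requires equal-length, overlapping allocations")
    return bytes(a ^ b for a, b in zip(confidential, partner))
```

This is why the routing and FS allocation of the partner service must overlap with those of the confidential service: an eavesdropper on a single link observes only the XOR of the two streams.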
Phase Shift-Based Covert Backdoor Attack Strategy in Deep Neural Networks
ZHANG Heng, XIA Yu, REN Yan, DU Linkang, ZHANG Zhikun
Available online  , doi: 10.11999/JEIT251145
Abstract:
  Objective  The proliferation of Deep Neural Networks (DNNs) in safety-critical domains such as autonomous driving and biomedical diagnostics has heightened concerns about their vulnerability to adversarial threats, particularly backdoor attacks. These attacks embed hidden triggers during training, causing models to behave normally on clean inputs while executing malicious actions when specific triggers are present. Existing backdoor methods predominantly operate in the spatial domain or frequency domain, but they face a fundamental trade-off between Attack Success Rate (ASR) and stealthiness. Spatial triggers often introduce visible artifacts, while frequency-based amplitude perturbations disrupt energy distribution, making them detectable by advanced defenses such as spectral anomaly detection. This work addresses the critical need for a backdoor paradigm that simultaneously achieves high attack performance, minimal perceptual distortion, and robustness against state-of-the-art defenses. Our objective is to develop a frequency-domain backdoor attack leveraging phase manipulation, which inherently aligns with human visual perception and structural coherence, thereby overcoming the limitations of existing methods.  Methods  FDPS integrates frequency-domain phase manipulation with perceptual similarity screening and standard data poisoning. The method begins by converting input images from the RGB to the YCrCb color space, which isolates the chrominance channels while preserving luminance information. Next, the system applies the Discrete Fourier Transform to the chrominance components, producing complex frequency spectra. The method computes phase information using the atan2 function and selectively shifts high-frequency components. Image reconstruction is performed through the inverse Fourier transform. The framework incorporates Learned Perceptual Image Patch Similarity (LPIPS) filtering. 
This filter discards generated instances that fall below similarity thresholds. The screening ensures all retained triggers maintain visual imperceptibility. Accepted poisoned samples receive target class labels. These samples are combined with clean training data following standard protocols.  Results and Discussions  FDPS achieves near-perfect 99% attack success rates while maintaining Benign Accuracy (BA) across three datasets and two network architectures (Table 1). The method operates by manipulating phase information in chrominance channels via Fourier transforms, with LPIPS filtering ensuring visual stealth. Experimental results show poisoned images retain semantic focus, as confirmed by Grad-CAM visualizations aligning with clean patterns (Fig. 4). The approach demonstrates strong defense evasion, scoring an anomaly index of 1.73 against Neural Cleanse, below the detection threshold of 2 (Figs. 3-5). Ablation studies validate that high-frequency phase perturbations achieve over 90% attack success with just 2% poisoning while minimizing impact on model utility (Fig. 6; Table 3).  Conclusions  An end-to-end frequency-domain strategy was developed to embed covert triggers in image classifiers while maintaining clean-data fidelity. By shifting selected phase components in chrominance and filtering with LPIPS, FDPS achieves 99% ASR with negligible BA loss and produces minimal visible artifacts. It also evades leading detection tools, including Grad-CAM, Neural Cleanse, ANP, and STRIP. The findings indicate that phase-centric, high-frequency perturbations constitute an especially potent and stealthy backdoor mechanism. Future work should explore broader modality coverage and develop frequency-domain anomaly detectors as principled countermeasures.
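The phase-shift step on one chrominance channel can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the radial band threshold `radius_frac` and the constant phase offset `delta` are assumed parameters, and a real attack would operate on the Cr/Cb planes of a YCrCb image.

```python
import numpy as np

def phase_shift_trigger(channel, radius_frac=0.6, delta=0.5):
    """Shift the phase of high-frequency DFT components of one chroma
    channel, leaving magnitudes untouched (illustrative sketch)."""
    f = np.fft.fftshift(np.fft.fft2(channel))
    mag, phase = np.abs(f), np.angle(f)            # angle() wraps atan2
    h, w = channel.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)           # distance from DC bin
    high = r > radius_frac * min(h, w) / 2         # high-frequency band mask
    phase = np.where(high, phase + delta, phase)   # constant phase offset
    back = np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase)))
    return np.real(back)                           # poisoned channel
```

Because only phase is perturbed, the magnitude spectrum (the energy distribution that spectral anomaly detectors inspect) is left intact before reconstruction.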
Blind Parameter Estimation Method for PSK Modulated Frequency-Hopping Signals Based on Improved Maximum Likelihood
ZHANG Tianhao, ZHANG Yushu, XU Zhongqiu, TANG Xinyi, DANG Wenhua, LI Guangzuo
Available online  , doi: 10.11999/JEIT260005
Abstract:
  Objective  Blind parameter estimation of non-cooperative Frequency-Hopping (FH) signals is a critical task in electronic reconnaissance and countermeasures. Estimation methods based on time-frequency analysis typically suffer from limited resolution or high computational complexity. Furthermore, methods based on compressive sensing rely heavily on the consistency between the predefined dictionary and the actual signal characteristics, and the estimation precision will be significantly compromised by grid mismatch or modulation-induced energy dispersion. Maximum Likelihood (ML)-based methods offer the advantage of high theoretical estimation accuracy with relatively low computational complexity. However, existing studies typically assume an ideal unmodulated signal model with a single frequency transition. Consequently, these ML-based methods suffer from severe model mismatch when processing FH signals with digital modulation, such as Phase Shift Keying (PSK), or multi-hop signals. Moreover, the conventional iterative solution of ML-based methods is prone to divergence or trapping in local optima. To address these limitations, this paper proposes an improved ML-based method for the blind parameter estimation of PSK-modulated FH signals.  Methods  To handle received multi-hop signals, a signal slicing technique based on the Short-Time Fourier Transform (STFT) is proposed to extract slices containing individual frequency transitions. Subsequently, to mitigate the model mismatch caused by digital modulation in conventional ML-based methods, a model-matching signal extraction approach based on the ML objective function is developed for PSK-modulated FH signals. Furthermore, a weighted iterative solving algorithm for ML estimation is designed to enhance convergence, thereby achieving robust and accurate estimation of frequency-hopping parameters.  
Results and Discussions  To validate the effectiveness of the model-matching signal extraction approach, ablation experiments were carried out under various modulation schemes, including binary PSK (BPSK), quadrature PSK (QPSK), and 8-ary PSK (8PSK). The results indicate that the proposed approach (Group D) significantly reduces the Mean Square Error (MSE) of hopping frequency estimation compared to that without the proposed extraction (Group ND). These results demonstrate that the proposed method effectively mitigates the model mismatch (Fig. 5). Simulation results also illustrate that the designed weighted iterative algorithm achieves superior convergence performance compared with linear weighting and non-weighting schemes (Fig. 6). Moreover, the experiments verify the algorithm's insensitivity to initial frequency offsets, showing that it tolerates offsets of up to 2 MHz at SNR of -10 dB with little performance degradation (Fig. 7). Finally, comparative analysis with representative existing methods indicates that the proposed method outperforms the others in terms of estimation accuracy (Fig. 8).  Conclusions  To achieve blind parameter estimation for PSK-modulated FH signals, this paper proposes an improved ML-based method. By utilizing a signal slicing technique based on the STFT, the proposed method successfully extends the applicability of the ML-based estimator to continuous multi-hop signals. To mitigate the model mismatch induced by PSK modulation, a model-matching signal extraction approach is developed to isolate valid signal segments that conform to the ML model. Furthermore, a weighted iterative algorithm incorporating a dynamic weighting function is introduced to address the instability of the conventional iterative ML solver. Simulation results confirm that the proposed method effectively eliminates model mismatch and ensures superior convergence performance with insensitivity to initial frequency offsets. 
Moreover, it is shown to achieve high estimation precision for both hopping frequencies and hopping times.
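The STFT-based slicing idea can be illustrated with a crude single-transition detector: track the dominant spectral bin per frame and mark the frame where it changes. This is a noise-free sketch under assumed parameters (`win`, dominant-bin tracking), not the paper's estimator, which further applies model-matching extraction and weighted iterative ML refinement.

```python
import numpy as np

def detect_hop(signal, win=64, fs=1.0):
    """Locate a single frequency transition by tracking the dominant
    STFT bin per frame (illustrative sketch; real FH slicing must also
    handle noise, modulation sidelobes, and multiple hops)."""
    nfrm = len(signal) // win
    frames = signal[:nfrm * win].reshape(nfrm, win)
    spec = np.abs(np.fft.fft(frames * np.hanning(win), axis=1))
    peaks = np.argmax(spec[:, :win // 2], axis=1)   # dominant bin per frame
    jumps = np.flatnonzero(np.diff(peaks) != 0)
    hop_frame = int(jumps[0]) + 1 if jumps.size else None
    return hop_frame, peaks * fs / win              # frame index, frequencies
```

A slice centered on `hop_frame` then contains exactly one frequency transition, which is the signal model the ML estimator assumes.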
A Semantic-Enhanced Cybersecurity Named Entity Recognition Approach Oriented to Lightweight Adaptation of Large Language Models
HU Ze, XU Tongwu, YANG Hongyu
Available online  , doi: 10.11999/JEIT251260
Abstract:
  Objective  Named Entity Recognition (NER) in the field of cybersecurity is a fundamental technology supporting threat intelligence analysis, vulnerability management, and security incident response. However, this field generally faces challenges such as dense technical terms, scarce labeled data, dynamic changes in entity categories, and highly complex semantic features, which leave traditional deep learning models and existing Large Language Models (LLMs) significantly inadequate in domain adaptability and semantic fusion capability. To address these key issues while also considering the need for lightweight model deployment, this paper aims to construct a cybersecurity NER approach that can enhance domain semantic representation, improve the ability to identify rare entities, and remain applicable in low-resource environments, providing a reliable technical path for intelligent threat analysis in cybersecurity scenarios.  Methods  To address the complex semantic features of cybersecurity texts, this paper proposes a semantically enhanced cybersecurity NER approach based on lightweight adaptation of LLMs. The proposed approach uses LLM2Vec to achieve bidirectional semantic reconstruction of large model decoders and combines Low-Rank Adaptation (LoRA) for low-rank fine-tuning, so as to maintain deep semantic encoding capability while significantly reducing the number of updated parameters. To address the challenges of sparse keywords and severe noise interference in cybersecurity texts, a sparse gated attention mechanism is introduced to strengthen keyword-focused feature extraction by dynamically selecting high-contribution cybersecurity terms through global gating and sparse inference. 
A SecRoBERTa-based semantic enhancement component is introduced, which utilizes a domain-pre-trained model to generate similar word embeddings, optimizes feature robustness in small-sample scenarios, and alleviates the challenges of identifying out-of-vocabulary words and low-frequency terms. Finally, a masked conditional random field is employed to constrain label transitions and guarantee BIO-compliant output sequences, achieving robust and consistent entity boundary prediction.  Results and Discussions  Extensive experiments were conducted on two public cybersecurity datasets, DNRTI and APTNER. The proposed approach achieved an F1 score of 91.91% on DNRTI, surpassing the previous state-of-the-art model by 2.14%. On APTNER, it reached an F1 score of 80.37%, outperforming the best baseline by 2.97%. Ablation studies confirmed the contribution of each key component: the Sparse Gated Attention mechanism improved F1 by 3.57% over standard Multi-Head Attention on DNRTI; the semantic enhancement module contributed a 2.32% F1 gain; and the MCRF (Masked Conditional Random Field) layer provided a 10.63% F1 improvement over traditional CRF (Conditional Random Field). The model also demonstrated efficient training and inference characteristics, aligning with its lightweight design goals.  Conclusions  This paper proposes a lightweight adaptation approach based on LLMs for NER in the cybersecurity domain, which effectively addresses the limitations of existing LLMs-based NER methods in domain adaptation and rare entity recognition. By integrating LLM2Vec and LoRA for lightweight fine-tuning, a sparse gated attention mechanism for domain feature fusion, and a SecRoBERTa-based semantic enhancement component for similar word precomputation, the proposed approach achieves high performance on DNRTI and APTNER datasets. 
The research provides an efficient technical path for NER tasks in low-resource cybersecurity scenarios and offers strong support for downstream tasks such as automated threat intelligence analysis.
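The BIO-compliance constraint enforced by the masked CRF can be illustrated by the transition mask it builds: an I- label may only follow a B- or I- label of the same entity type. A minimal sketch (label names are illustrative, not from the DNRTI/APTNER tag sets):

```python
def bio_transition_mask(labels):
    """Boolean matrix: mask[i][j] is True iff the transition
    labels[i] -> labels[j] is legal under the BIO scheme."""
    def ok(prev, nxt):
        if nxt.startswith("I-"):
            t = nxt[2:]
            # I-X may only follow B-X or I-X of the same entity type
            return prev in (f"B-{t}", f"I-{t}")
        return True          # O and B-X may follow anything
    return [[ok(p, n) for n in labels] for p in labels]
```

In a masked CRF, illegal transitions receive a score of negative infinity, so Viterbi decoding can never emit a non-BIO-compliant sequence.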
A High-Performance Eye Tracking Method Based on Event Camera and Dual-Channel Differential Illumination
SONG Sishun, FENG Junchi, PU Chengyu, GUO Yu, LIU Shijie, HE Xin, CHENG Yuwei
Available online  , doi: 10.11999/JEIT251162
Abstract:
  Objective  Eye tracking has become an essential technology in human–computer interaction, medical diagnostics, cognitive neuroscience, and augmented/virtual reality applications. However, traditional eye tracking systems often suffer from two major limitations: low spatial accuracy and restricted temporal resolution, particularly in high-speed eye movement scenarios. These limitations hinder precise gaze estimation and reduce the reliability of real-time interactive systems. To address these challenges, this research integrates an event camera with the dual-channel differential illumination strategy to enhance the signal-to-noise ratio of corneal reflection events. By introducing the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, accurate localization of corneal reflection points is achieved. On this basis, the corneal reflection point coordinates are utilized in combination with Singular Value Decomposition (SVD) and the least-squares method to determine the corneal curvature center, thereby significantly improving the accuracy of gaze direction estimation. This research provides an efficient technical pathway for next-generation eye tracking systems and offers theoretical support for their deployment in complex interactive environments.  Methods  The proposed event-camera-based gaze tracking method integrates asynchronous eye movement event data through a dual-channel differential illumination framework, thereby enhancing gaze direction estimation accuracy under high-speed and dynamic conditions. Firstly, the event camera asynchronously captures brightness-change events with microsecond-level temporal resolution, enabling precise tracking of rapid eye movements, while the dual-channel differential illumination mechanism suppresses redundant reflections and enhances the contrast of corneal reflection points. 
Secondly, the DBSCAN algorithm is employed to process event data, effectively removing noise and optimizing the spatial localization accuracy of corneal reflection features. Finally, a ray-tracing model is reconstructed using SVD and least-squares fitting to determine the corneal curvature center, thereby achieving robust and high-precision gaze direction estimation. Experimental results on a biomimetic eye movement dataset demonstrate that the proposed method achieves high temporal resolution, localization accuracy, and robustness in dynamic tracking scenarios.  Results and Discussions  Experiments demonstrate that the proposed method achieves a temporal resolution of 25 kHz (Fig. 6), far exceeding conventional cameras. Differential illumination significantly improves the signal-to-noise ratio of corneal reflection events. The DBSCAN algorithm localizes corneal reflection points more efficiently than K-Means, Agglomerative Clustering, Mean Shift, and OPTICS, achieving accurate results within 10 ms without requiring predefined clusters (Fig. 8, Table 3). For gaze estimation, the proposed method maintains stable accuracy across sampling frequencies from 2 kHz to 25 kHz. At a 15° cone angle, the Mean Error (ME) and Root Mean Square Error (RMSE) are approximately 0.66° and 0.67°, respectively, while at 25° they increase slightly to 0.87° and 0.90° (Table 4). Compared with existing state-of-the-art (SOTA) gaze tracking methods, the proposed approach demonstrates superior overall performance in terms of both temporal resolution and accuracy (Table 5). Trajectory results (Fig. 9) show close alignment between estimated and ground truth gaze paths, and distribution analyses (Fig. 10) confirm concentrated error ranges below 1°.  Conclusions  This paper presents a novel eye tracking method integrating event cameras with dual-channel differential illumination. 
The method achieves high temporal resolution (25 kHz), enhances event signal quality, and reduces localization errors, yielding gaze estimation errors of less than 1°. The proposed approach provides a reliable technical pathway for next-generation high-performance eye tracking systems. Future work should consider sensor noise modeling and computational optimization to further improve real-world applicability.
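The least-squares step for the corneal curvature center can be sketched as finding the point closest to a bundle of reflection rays, solved from the stacked normal equations with an SVD-based solver. This is an illustrative geometric sketch under simplified assumptions (rays given directly in 3D; the paper's full ray-tracing model also accounts for refraction and camera geometry):

```python
import numpy as np

def nearest_point_to_rays(origins, dirs):
    """Least-squares point minimizing the summed squared distance to a
    set of 3D rays (origin + direction), via stacked normal equations."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in zip(origins, dirs):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    # lstsq uses an SVD-based solve, robust to near-singular geometry
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With two or more non-parallel glint rays, this point estimates the corneal curvature center used for gaze direction estimation.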
Multi-Projection Plane InISAR 3D Reconstruction Method for Complex Moving Ship Targets
LI Ning, NIU Jinfa, WANG Weibin, HU Xingwang, WU Lin
Available online  , doi: 10.11999/JEIT251268
Abstract:
  Objective  Interferometric Inverse Synthetic Aperture Radar (InISAR) is a Three-Dimensional (3D) reconstruction technique for non-cooperative targets. However, the complex 3D rotational motion of a ship target causes unstable Doppler frequency variations, and Inverse Synthetic Aperture Radar (ISAR) imaging inevitably suffers from overlap and occlusion of scattering points, making high-precision, complete 3D reconstruction difficult under a single projection plane. Thus, a multi-projection-plane InISAR 3D reconstruction method for complex moving ship targets based on point cloud fusion is proposed. Efficient, high-precision point cloud registration and fusion supplement the target's 3D information, significantly improving reconstruction quality.  Methods  The method fully exploits the multi-plane observations produced by the severe motion of ship targets. First, it extracts the ship's centerline and estimates the vertical rotation vector via Principal Component Analysis (PCA) to select the optimal imaging times corresponding to different Imaging Projection Planes (IPPs), and then completes ISAR imaging and InISAR 3D reconstruction. Second, a point cloud fusion algorithm combining weighted Random Sample Consensus (RANSAC) and hierarchical Iterative Closest Point (ICP) is proposed. The random sampling process is optimized through a feature-stability weighting strategy, efficiently extracting and matching corresponding feature points in InISAR images and achieving high-precision multi-IPP point cloud fusion.  Results and Discussions  Experimental results demonstrate that the proposed method significantly enhances reconstruction accuracy and target completeness. For simulated ship point-target data, Fig. 7 shows excellent results, with a significant reduction in reconstruction error. 
Signal-to-Noise Ratio (SNR) analysis reveals that 3D fusion imaging quality improves continuously as the SNR increases from –10 dB to 10 dB, and fusion performance remains robust even under low-SNR conditions. For simulated destroyer Radar Cross Section (RCS) data, the method achieves accurate registration, and the detail recovery and structural integrity of the fused image are significantly improved, effectively solving the problem of incomplete 3D information caused by overlapping and occlusion of scattering points.  Conclusions  To address the low reconstruction accuracy and information loss caused by target rotation, overlap, and occlusion in traditional InISAR 3D reconstruction of complex moving ship targets, this paper proposes a multi-IPP InISAR 3D reconstruction method based on point cloud fusion. The method employs a PCA-based optimal imaging time selection strategy and uses weighted RANSAC and hierarchical ICP algorithms to achieve efficient, high-precision registration and fusion of InISAR point clouds under multiple IPPs, obtaining high-quality 3D reconstruction results. Multi-scenario experiments on a ship model with ideal scattering points and an electromagnetic-simulation RCS model with occlusion effects verify the accuracy of the proposed method under ideal conditions and its applicability in complex real-world scenarios.
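The PCA centerline-extraction step can be sketched as finding the dominant axis of the reconstructed point cloud via an SVD of the centered coordinates. This is an illustrative sketch; the paper additionally uses the estimated vertical rotation vector to select optimal imaging times.

```python
import numpy as np

def principal_axis(points):
    """Centroid and dominant direction of a 3D point cloud via PCA
    (SVD of the centered coordinates); a sketch of the centerline step."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[0]   # vt[0]: unit vector along the main axis
```

For an elongated target such as a ship hull, the first right singular vector aligns with the centerline, up to a sign ambiguity inherent to SVD.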
Delay Deterministic Routing Algorithm Based on Inter-controller Cooperation for Multi-layer Low Earth Orbit Satellite Networks
HUANG Longhui, DING Xiaojin, ZHANG Gengxin
Available online  , doi: 10.11999/JEIT251100
Abstract:
  Objective  The massive scale and large number of satellites in multi-layer Low Earth Orbit (LEO) constellations produce highly dynamic network topologies. Coupled with time-varying traffic loads, this condition causes temporal fluctuations in satellite network resources, such as available link queue size and link bandwidth. These variations make it difficult to establish stable end-to-end transmission paths and guarantee Quality of Service (QoS). To address this problem, Software-Defined Networking (SDN) is applied to multi-layer LEO constellations. SDN controllers collect network state information and enable unified management of network resources. The constellation is divided into multiple regions, with a controller deployed in each region to coordinate the operation of the constellation. A deterministic delay routing algorithm is designed within the SDN controller to compute inter-region transmission paths for traffic and satisfy deterministic delay requirements.  Methods  A Deterministic Delay Routing Algorithm based on Inter-Controller Cooperation (DDRA-ICC) is proposed for multi-layer LEO constellations. First, a regional division strategy and controller deployment scheme are designed. The satellite network is partitioned into multiple regions, each managed by a designated controller. Second, criteria are defined for Inter-Satellite Links (ISLs) between satellites within the same layer and across different layers to characterize link communication states. Third, a Time-Varying Graph (TVG) model represents the network topology and link resource attributes, including bandwidth, queue size, and link duration. This model is combined with a multi-destination Lagrange relaxation method to optimize path selection. The resulting paths satisfy both delay and delay jitter constraints. Adjacent regional controllers exchange network state information to support cooperative computation of feasible inter-region transmission paths. 
  Results and Discussions  To evaluate the proposed method, a simulation system for multi-layer LEO constellations was developed. The performance of the algorithm was tested under different data transmission rates. Compared with IUDR, the proposed method improves network performance by reducing end-to-end delay, delay jitter, and packet loss rate, and by increasing throughput. At a data transmission rate of 3 Mbit/(s·Hz), the average end-to-end delay is reduced by 16.0% (Fig. 3(a)), delay jitter by 37.9% (Fig. 3(b)), and packet loss rate by 37.2% (Fig. 3(c)). Throughput increases by approximately 2% (Fig. 3(d)). In terms of signaling overhead, the proposed algorithm achieves a higher Reduction-Improvement Gain Ratio, which increases by approximately 111.8% compared with IUDR. This result indicates the superior overall performance of the proposed DDRA-ICC algorithm. Additionally, the proposed method shows lower time complexity for route computation than IUDR.  Conclusions  To address deterministic delay requirements for traffic transmission in multi-layer LEO constellations, a controller cooperation-based deterministic delay routing algorithm is proposed. Performance evaluation under different load conditions shows that: (1) Compared with IUDR, the proposed algorithm reduces the average end-to-end delay, delay jitter, and packet loss rate by 16.0%, 37.9%, and 37.2%, respectively, and increases the average throughput by approximately 2%. (2) Although the additional overhead of DDRA-ICC is comparable to that of IUDR, the packet loss rate decreases further to 2.96%, representing a reduction of 52.49%, and the Reduction-Improvement Gain Ratio reaches 1.97. These results indicate lower packet loss, a higher Reduction-Improvement Gain Ratio, and a better balance between signaling overhead and reliability. Therefore, the proposed method provides advantages in ensuring deterministic traffic transmission. 
Future work may consider additional practical factors, such as satellite node failures and their effects on network performance, to further improve system capability.
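The multi-destination Lagrange relaxation used for path selection above can be illustrated with a minimal single-destination sketch: the delay constraint is relaxed into the edge weight via a multiplier lambda, a shortest-path search runs on the combined weight cost + lambda * delay, and lambda is adjusted by bisection until the delay bound is met. All graph values, the bisection bounds, and the cost/delay model are illustrative assumptions, not the paper's TVG formulation.

```python
import heapq

def dijkstra(graph, src, dst, lam):
    """Shortest path under the combined weight cost + lam * delay."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, (cost, delay) in graph.get(u, {}).items():
            nd = d + cost + lam * delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_metrics(graph, path):
    cost = sum(graph[u][v][0] for u, v in zip(path, path[1:]))
    delay = sum(graph[u][v][1] for u, v in zip(path, path[1:]))
    return cost, delay

def lagrangian_route(graph, src, dst, delay_bound, iters=30):
    """Bisection on the multiplier lam: larger lam penalizes delay more."""
    lo, hi = 0.0, 100.0
    best = None
    for _ in range(iters):
        lam = (lo + hi) / 2
        path = dijkstra(graph, src, dst, lam)
        if path is None:
            return None
        _, delay = path_metrics(graph, path)
        if delay <= delay_bound:
            best = path      # feasible: try relaxing the delay penalty
            hi = lam
        else:
            lo = lam         # infeasible: penalize delay harder
    return best
```

With a tight delay bound the search is steered away from the cheap but slow path; with a loose bound the unconstrained cheapest path is recovered.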
A Large-Scale Multimodal Instruction Dataset for Remote Sensing Agents
WANG Peijin, HU Huiyang, FENG Yingchao, DIAO Wenhui, SUN Xian
Available online  , doi: 10.11999/JEIT250818
Abstract:
  Objective   The rapid advancement of Remote Sensing (RS) technology has reshaped Earth observation research, shifting the field from static image analysis to intelligent, goal-oriented cognitive decision-making. Modern RS systems are expected to perceive complex scenes, reason over heterogeneous information, decompose high-level objectives into executable subtasks, and make decisions under uncertainty. These requirements motivate the development of RS agents, which extend perception models to include reasoning, planning, and interaction functions. However, existing RS datasets remain task-centric and fragmented, as they are usually designed for single-purpose supervised learning such as object detection or land-cover classification. They seldom support multimodal reasoning, instruction following, or multi-step decision-making, all of which are essential for agentic workflows. Current RS vision-language datasets also have limited scale, constrained modality coverage, and simplified text annotations, with insufficient use of non-optical data such as Synthetic Aperture Radar (SAR) and infrared imagery. They further lack instruction-driven interactions that reflect real human-agent collaboration. This study constructs a large-scale multimodal image-text instruction dataset tailored for RS agents. The objective is to establish a unified data foundation that supports perception, reasoning, planning, and decision-making. By training models on structured instructions across diverse modalities and task categories, the dataset supports the development and evaluation of next-generation RS foundation models with agentic capability.  Methods   The dataset is built through a systematic and extensible framework that integrates multi-source RS imagery with instruction-oriented textual supervision. A unified input-output paradigm is defined to ensure compatibility across heterogeneous tasks and model architectures. 
This paradigm formalizes interactions between visual inputs and language instructions, allowing models to process image pixels, text descriptions, spatial coordinates, region references, and action-oriented outputs. A standardized instruction schema encodes task objectives, constraints, and expected responses in a consistent format. The construction process includes three stages. (1) Data collection and integration: multimodal RS imagery is aggregated from authoritative sources, covering optical, SAR, and infrared modalities with different spatial resolutions, scene types, and geographic distributions. (2) Instruction generation: a hybrid strategy combines rule-based templates with refinement by Large Language Models (LLMs). Template-based generation ensures task completeness and structural consistency, whereas LLM rewriting improves linguistic diversity and instruction complexity. (3) Task categorization and organization: the dataset is organized into nine core task categories and 21 sub-datasets that span low-level perception, mid-level reasoning, and high-level decision-making. A validation pipeline performs automated syntax and format checks, cross-modal consistency verification, and manual review of representative samples to ensure semantic alignment between images and instructions.  Results and Discussions   The dataset contains more than 2 million multimodal instruction samples, making it one of the largest and most comprehensive instruction resources in the RS domain. The inclusion of optical, SAR, and infrared imagery supports cross-modal learning and reasoning across heterogeneous sensing mechanisms. Compared with existing RS datasets, this dataset emphasizes instruction diversity, task compositionality, and agent-oriented interaction rather than isolated perception tasks. 
Baseline experiments conducted using state-of-the-art multimodal LLMs and RS foundation models show that the dataset supports evaluation across the full spectrum of agentic capabilities, from visual grounding and reasoning to high-level decision-making. The experiments also highlight challenges inherent to RS data, including extreme scale variation, dense object distributions, and long-range spatial dependencies. These challenges indicate important research directions for improving multimodal reasoning and planning in complex RS environments.  Conclusions   This work presents a large-scale multimodal image-text instruction dataset designed for RS agents. By organizing data across nine task categories and 21 sub-datasets, it provides a unified and extensible benchmark for agent-centric RS research. The contributions include: (1) a unified multimodal instruction paradigm for RS agents; (2) a 2-million-sample dataset covering optical, SAR, and infrared modalities; (3) empirical validation demonstrating support for end-to-end agentic workflows from perception to decision-making; and (4) a comprehensive evaluation benchmark based on baseline experiments. Future work will extend the dataset to temporal and video-based RS scenarios, integrate dynamic decision-making processes, and further improve reasoning and planning capability in real-world, time-varying environments.
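The standardized instruction schema and the automated syntax/format checks in the validation pipeline described above can be sketched as a minimal validator. The field names and modality list here are hypothetical placeholders chosen for illustration, not the dataset's actual schema.

```python
# Hypothetical instruction-sample schema; field names and the modality set
# are illustrative, not the dataset's actual format.
REQUIRED_FIELDS = {"image_id", "modality", "task", "instruction", "response"}
MODALITIES = {"optical", "sar", "infrared"}

def validate_sample(sample: dict) -> list:
    """Return a list of format errors (an empty list means the sample passes)."""
    errors = []
    missing = REQUIRED_FIELDS - sample.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if sample.get("modality") not in MODALITIES:
        errors.append(f"unknown modality: {sample.get('modality')!r}")
    for key in ("instruction", "response"):
        value = sample.get(key)
        if not isinstance(value, str) or not value.strip():
            errors.append(f"{key} must be non-empty text")
    return errors
```

A cross-modal consistency check or manual review stage would sit downstream of this purely syntactic pass.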
A Clipped NMS List Decoding Algorithm of LDPC Codes for 5G URLLC
ZHANG Xiaojun, SONG Xin, GAO Jian, MI Yonghao, NIU Kai
Available online  , doi: 10.11999/JEIT250853
Abstract:
  Objective  As one of the coding schemes in the fifth-generation (5G) wireless communication systems, Low-Density Parity-Check (LDPC) codes can achieve performance close to the Shannon limit through iterative decoding. However, in practical wireless transmission environments, the decoding performance of LDPC codes is susceptible to burst interference in wireless channels. The Normalized Min-Sum (NMS) decoding algorithm is highly sensitive to the distribution characteristics of input Log-Likelihood Ratios (LLRs): burst interference causes LLRs to deviate from the Gaussian distribution, degrading decoding performance. Meanwhile, 5G LDPC decoders are often equipped with a fixed number of Processing Elements (PEs) according to the maximum lifting size to cover the full code length range. In Ultra-Reliable Low-Latency Communications (URLLC) short-code transmission scenarios, the lifting size is much smaller than the maximum, leaving a large number of PEs idle for long periods and hardware resources underutilized. To address these issues, this paper proposes a Clipped Normalized Min-Sum List (CNMSL) decoding algorithm. By co-designing burst interference smoothing and idle resource reuse, it improves hardware resource utilization while enhancing decoding performance.  Methods  The statistical characteristics of LLRs over AWGN and interference channels are first analyzed, and the degradation caused by burst interference is shown to stem from the increased proportion of saturated LLRs induced by such interference. Next, the dependence of the optimal clipping threshold on the channel noise variance, burst interference variance, and burst probability is verified; when the channel parameters vary within limited ranges, the optimal threshold converges to a finite interval, termed the optimal threshold interval. On this basis, the CNMSL decoding algorithm is proposed. 
This algorithm constructs a list decoding architecture by reusing idle processing units in 5G LDPC decoders, where each decoding path performs independent and synchronous decoding to generate candidate codewords, and the optimal decoding result is selected via CRC check. Meanwhile, an independent clipper is configured for each path with parameters set according to the optimal threshold interval, thereby effectively suppressing the adverse effects of burst interference.  Results and Discussions  Experimental results show that, without a clipping mechanism, the layered NMS algorithm almost completely fails to decode over interference channels. With a single clipping threshold, the algorithm works normally, and its BLER first decreases and then increases as the clipping threshold is reduced. Under various channel conditions for both short and long codes, the single-clipping layered NMS algorithm with a clipping threshold of 3.5 achieves a gain of about 1 dB at a BLER of 10⁻² compared with a threshold of 10, and the CNMSL algorithm yields an additional gain of about 0.5 dB relative to the single-clipping NMS algorithm. In terms of hardware efficiency, when the lifting factor is less than 192, the PE utilization of the CNMSL algorithm is significantly higher than that of the layered NMS algorithm, with more marked improvement as the lifting factor decreases; the average PE utilization of the CNMSL algorithm is 69% higher than that of the layered NMS algorithm.  Conclusions  The CNMSL decoding algorithm is proposed in this paper to improve the error correction performance of the traditional layered NMS decoding algorithm over interference channels. By reusing idle PEs for list decoding to generate multiple candidate paths, the algorithm incurs no additional hardware overhead. 
In addition, an optimal threshold interval is defined to configure the clipper for each decoding path, which limits the proportion of saturated LLRs and makes the input LLRs follow a Gaussian or near-Gaussian distribution. Experimental results show that compared with the layered NMS decoding algorithm with a single clipper, the proposed CNMSL algorithm achieves a gain of approximately 0.5 dB for both short and long codes. Meanwhile, it increases the PE utilization by an average of 69%.
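As a rough illustration of the two ideas above, the sketch below clips input LLRs to a threshold and then performs one normalized min-sum check-node update: for each edge, the output sign is the product of the other edges' signs and the magnitude is the minimum of the other edges' magnitudes, scaled by a normalization factor. The factor 0.75 and the test values are illustrative; the paper's layered schedule, list paths, and CRC-based selection are omitted.

```python
def clip_llrs(llrs, threshold):
    """Limit LLR magnitudes so burst-saturated values cannot dominate."""
    return [max(-threshold, min(threshold, l)) for l in llrs]

def nms_check_update(llrs, alpha=0.75):
    """One normalized min-sum check-node update over the LLRs of the
    variable nodes connected to a single check node."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for l in others:
            if l < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(l) for l in others))
    return out
```

In the full decoder the clipper would be applied to the channel LLRs before each path's iterative layered updates.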
Drug Response Prediction Based on Graph Topology Attention Network
XU Peng, XU Hao, BAO Zhenshen, ZHOU Chi, LIU Wenbin
Available online  , doi: 10.11999/JEIT251099
Abstract:
  Objective  A core goal in modern cancer research is to explain why patients respond differently to the same therapy. Achieving this requires computational tools that combine genetic information and drug properties to forecast treatment outcomes, which is essential for advancing personalized oncology. Although existing methods have made progress in predicting cancer drug responses, effectively extracting drug features and integrating multi-omics data from cell lines remain challenging. To address these challenges, employing Graph Neural Networks (GNNs) to process drug molecular graphs has become a promising strategy. This research proposes a model that uses a graph topology attention network to capture features from drug molecular graphs, while an attention mechanism integrates multi-omics data.  Methods  In this study, a drug response prediction method based on a Graph Topology Attention Network (GTAT) is proposed. The model integrates topological graph information to predict drug responses in cell lines. It uses drug SMILES strings to generate two distinct drug representations and incorporates multi-omics data for cell line characterization (Fig. 1). For drug feature extraction, SMILES strings are first parsed to construct molecular graphs, which are then processed by the GTAT. This network captures both graph-level topological information and atom-level features of the molecule, thereby producing structured molecular representations. Simultaneously, Extended Connectivity Fingerprints are computed from the same SMILES strings and transformed into continuous feature vectors via a Multi-Layer Perceptron (MLP). The graph-based drug representation and the fingerprint-based representation are subsequently concatenated to form a comprehensive drug feature vector. For cell line representation, multi-omics data are processed through omics-specific neural networks. 
The resulting features are fused using multi-head self-attention mechanisms, enabling the model to capture contextual interactions across omics modalities and generate an integrated cell line representation. Finally, the drug and cell line features are combined and fed into an MLP classifier to predict drug response outcomes. The proposed model effectively integrates heterogeneous biological data sources and significantly enhances prediction accuracy through multi-modal learning and attention-based feature fusion.  Results and Discussions  The proposed method achieves competitive performance on both GDSC and CCLE benchmark datasets (Table 2). Specifically, on the GDSC dataset, our approach outperforms all competing methods across all four metrics—AUC, AUPR, F1-score, and Accuracy. Notably, it improves the AUPR by approximately 1.92% over the second-best method, MOFGCN, demonstrating its advantage in handling class imbalance. On the CCLE dataset, our method still achieves the best performance in terms of AUC and Accuracy. Although it is marginally lower than GADRP in AUPR and F1-score, the gap is minimal, and our approach exhibits more robust overall discriminative ability (as reflected by AUC). These results collectively validate the effectiveness and strong generalizability of our method in drug sensitivity prediction tasks. The observed variation in AUPR and F1-score performance between datasets can be attributed to inherent differences in sample size and class distribution characteristics. The limited scale of the CCLE dataset, combined with its specific class imbalance (approximately 4:1 ratio of resistant to sensitive samples), may constrain the model's capacity to fully learn the underlying data distribution, particularly for minority classes. 
In contrast, the GDSC dataset exhibits greater heterogeneity and a more pronounced class imbalance (approximately 8:1), which collectively contribute to increased prediction difficulty and consequently lower performance on certain metrics.  Conclusions  Accurately predicting drug response in cell lines remains a central challenge in precision medicine, with significant implications for accelerating drug development and advancing personalized treatment. However, constructing a high-accuracy predictive model capable of effectively integrating multi-source biological information is difficult due to the complexity of drug molecular structures and inherent heterogeneity of cell lines. To address this, a cell line drug response prediction model based on Graph Topology Attention Network is proposed. This model employs the graph topology attention network to extract molecular graph features of drugs, which are then fused with molecular fingerprint features. Meanwhile, multi-omics features of cell lines are integrated using an attention mechanism. Experimental results demonstrate that the proposed model achieves superior performance over existing state-of-the-art benchmarks on the employed dataset. This study provides a new perspective for predicting cell line drug response. Certain limitations are acknowledged, such as the use of only three types of omics features for cell line representation and the influence of sample size on predictive outcomes. The integration of more diverse omics features, the application of pre-trained large-scale models, and the clinical translation for personalized medicine will be the primary focus of future work.
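The attention-based fusion of omics-specific features described in the Methods can be sketched, in a simplified single-head form, as scaled dot-product attention over per-modality feature tokens followed by mean pooling. Dimensions, the single-head simplification, and the pooling choice are illustrative assumptions.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def fuse_omics(tokens):
    """Self-attention fusion: each omics token attends to all modalities,
    then the attended tokens are mean-pooled into one cell-line vector."""
    x = np.stack(tokens)                 # (n_modalities, d)
    fused = scaled_dot_attention(x, x, x)
    return fused.mean(axis=0)
```

A multi-head version would apply this with several learned projections in parallel and concatenate the results before pooling.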
Multi-dimensional Spatio-temporal Features Enhancement for Lip reading
MA JinLin, ZHONG YaoWei, MA RuiShi
Available online  , doi: 10.11999/JEIT251111
Abstract:
  Objective  Lip reading is a challenging yet vital frontier in computer vision, dedicated to decoding spoken language solely from visual lip movements. The difficulty arises primarily from inherent ambiguities in the visual speech signal. On one hand, articulatory movements for different visemes can be extremely subtle; for instance, lip displacement differs by as little as 0.3–0.7 mm for confusable pairs such as /p/–/b/ and /m/–/n/. These fine-grained spatial variations often lie below the effective resolution limits of conventional 3D convolutional neural networks. On the other hand, the natural co-articulation in speech introduces temporal ambiguity, where mouth shapes transiently blend multiple phonemes, making it difficult to isolate distinct visual units. These challenges are further compounded by real-world variables such as uneven lighting and significant inter-speaker articulation differences. As a result, current lip reading models frequently fail to capture discriminative spatiotemporal features, leading to suboptimal performance, especially for phonemes with minimal visual distinctions. Motivated by these issues, this work aims to develop a robust lip reading framework capable of effectively capturing and leveraging fine-grained spatiotemporal dependencies to improve recognition accuracy under diverse and realistic conditions.  Methods  To address these limitations, this study proposes a novel lip reading framework, the Multi-dimensional Spatio-Temporal Enhancement Network (MSTEN), which is systematically designed to enhance spatial and temporal representations through integrated attention mechanisms and advanced residual learning. The framework incorporates three core components that collaboratively model the interdependencies between spatial and temporal features, an aspect often underutilized in conventional architectures. 
The first component, the Self-adjusting Spatio-temporal Attention (SaSTA) module, employs a self-adjusting mechanism operating concurrently across height, width, and temporal dimensions. It generates query, key, and value tensors via 1×1×1 3D convolutions, flattens them across spatial and temporal dimensions, and computes attention weights by multiplying the query with the transposed key, followed by softmax normalization. The resulting attention map is multiplied with the value vector and then combined with the original input via learnable parameters and a residual connection to preserve contextual information, yielding globally enhanced features. The second component, the Three-dimensional Enhanced Residual Block (TE-ResBlock), augments spatiotemporal feature extraction through temporal shift, multi-scale convolution, and channel shuffle. The temporal shift operation moves a quarter of the feature channels along the time axis to fuse adjacent frame information parameter-free, while multi-scale convolution uses parallel branches with kernel sizes of 3×3, 3×1, 1×3, and 1×1 to capture diverse receptive fields. Outputs are concatenated and processed via channel shuffle to improve cross-group information flow, with four TE-ResBlocks stacked for progressive feature refinement. The third component, the Multi-dimensional Adaptive Fusion (MDAF) module, deeply integrates spatial, temporal, and channel dimensions through three sub-modules: a Channel Enhancement Module (CEM) that recalibrates features using max pooling, temporal convolution, and sigmoid activation; a Spatial Enhancement Module (SEM) that expands the receptive field via identity mapping, standard and dilated convolution; and an Adaptive Temporal Capture Module (ATCM) that emphasizes dynamic movements using frame difference features and temporal weight maps. MDAF modules are inserted between TE-ResBlock stacks for iterative refinement. 
Finally, features from the MSTEN front-end are fed into a Densely Connected Temporal Convolutional Network (DC-TCN) back-end, which comprises four blocks, each containing three temporal convolutional layers with dense connections, to effectively model long-range phonological dependencies.  Results and Discussions  The proposed framework is comprehensively evaluated on the widely used LRW and GRID datasets: LRW comprises over 500,000 video clips from more than 1,000 speakers, while GRID consists of video clips from 34 speakers, each contributing 1,000 utterances, for a total duration of 28 hours. Our model achieves an accuracy of 91.18% on LRW, an absolute improvement of 2.82 percentage points over a strong ResNet18 baseline, which underscores its effectiveness. Ablation studies are conducted to dissect the contribution of each key component. The results clearly demonstrate that every proposed module brings a significant performance gain. Specifically, the introduction of the SaSTA module alone leads to an accuracy improvement of 2.09%, highlighting the crucial role of global spatiotemporal attention. The TE-ResBlock contributes a 1.73% increase, confirming its efficacy in multi-scale local feature extraction and inter-frame information fusion. Moreover, the MDAF module further enhances performance by 1.74%, emphasizing the benefit of adaptive multi-dimensional feature fusion, as detailed in Table 2.  Conclusions  This study presents a significant advancement in lip reading via the introduction of the MSTEN front-end network. The work is built upon three core contributions. First, the SaSTA module introduces an innovative mechanism for global context aggregation, effectively performing multi-dimensional feature weighting across height, width, and temporal sequences. 
Second, the TE-ResBlock tackles fundamental challenges in spatio-temporal modeling through a unique combination of temporal displacement, multi-scale convolution, and enhanced channel-wise interaction. Third, the MDAF module facilitates deep and synergistic integration of information from spatial, temporal, and channel dimensions. Together, these components work in concert to achieve state-of-the-art performance, reaching an accuracy of 91.18% on the challenging LRW dataset and 97.82% on the GRID dataset. Ablation studies further validate the individual and collective efficacy of each proposed innovation. Looking forward, future work will explore the extension of this framework to audio-visual speech recognition under noisy conditions, as well as the development of domain adaptation strategies to enhance robustness in low-resolution or resource-constrained scenarios.
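The parameter-free temporal shift in the TE-ResBlock, which moves a quarter of the feature channels along the time axis to fuse adjacent-frame information, can be sketched as follows. Plain Python lists stand in for tensors, and splitting the shifted quarter evenly into backward and forward halves is an assumption in the spirit of temporal shift modules, not necessarily the paper's exact split.

```python
def temporal_shift(frames, shift_fraction=0.25):
    """Parameter-free temporal shift over frames[t][c]: a fraction of the
    channels is taken from the previous frame, an equal fraction from the
    next frame, with zero-padding at the sequence ends; the remaining
    channels are left untouched."""
    T = len(frames)
    n = int(len(frames[0]) * shift_fraction) // 2  # channels per direction
    out = [list(f) for f in frames]
    for t in range(T):
        for c in range(n):                     # filled from the previous frame
            out[t][c] = frames[t - 1][c] if t > 0 else 0
        for c in range(n, 2 * n):              # filled from the next frame
            out[t][c] = frames[t + 1][c] if t < T - 1 else 0
    return out
```

Because the shift only re-indexes existing activations, it adds temporal mixing at zero parameter and near-zero compute cost.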
FPGA Hybrid PLB Architecture for Highly Efficient Resource Utilization
WANG Yanlin, GAO Lijiang, YANG Haigang
Available online  , doi: 10.11999/JEIT260108
Abstract:
6-input Look-Up Tables (LUTs) are frequently used in commercial Field-Programmable Gate Arrays (FPGAs) to build programmable logic blocks, yet related experiments reveal that their average utilization in circuits is less than 30%, resulting in a significant waste of programmable resources. In this paper, 6-input LUTs are fractured based on fracturable factors and recombined with different granularities to construct several new Hybrid Basic Logic Elements (HBLEs). Based on the HBLE, several novel Hybrid Programmable Logic Block (HPLB) architectures are proposed. The Programmable Logic Blocks (PLBs) of Xilinx are then replaced by these HPLB architectures. Concurrently, a statistical evaluation algorithm for the mapped netlist is proposed. Finally, the HPLB architectures are experimentally verified and evaluated. Experimental evaluations of the three enhanced architectures show that the HPLBs achieve an average area reduction of more than 30% compared to Xilinx's PLBs without adding more input ports. The hybrid HPLB architecture constructed with a fracturable factor N=3 produces the best optimization results when both HPLB utilization and area optimization are taken into account. Based on the MCNC and VTR benchmarks, resource utilization increased by an average of 8.27% and 27.64%, respectively, thereby improving FPGA logic efficiency.  Objective  Modern commercial FPGA architectures employ 6-LUTs as the fundamental building blocks for Basic Logic Elements (BLEs). According to experimental results, only about 30% of the Logic Elements (LEs) in a circuit are ultimately mapped to full 6-input functions when using 6-LUT BLEs. Moreover, more than half of the logic resources are wasted when 6-LUTs implement functions with fewer than 6 inputs. Programmable resources are thus inevitably and significantly wasted. 
According to experimental data from 6-LUT mapping studies, a circuit design mapped to 100 4-LUTs can be mapped to 78 6-LUTs, with the {6,5,4,3,2}-LUT function distribution being {23,32,17,9,13}. The findings indicate that only around 25% of the 6-LUTs are ultimately mapped to 6-input functions, with the remaining 6-LUTs being underutilized. This further illustrates how inefficient technology mapping is for LUTs with a large input size K.  Methods  The fracturable factor N, which is the number of sub-LUTs that may be obtained from a single LUT, characterizes the fracturable and reconfigurable nature of LUT architectures in FPGAs. Motivated by this, we decompose a 6-LUT into several granularities according to the fracturable factor in order to address the previously described problem of low resource utilization. Three novel hybrid-granularity Hybrid Basic Logic Element (HBLE) structures are created by connecting and reconfiguring the resulting sub-LUTs with additional input ports and multiplexer modules. We now examine how these three HBLE topologies optimize FPGA performance. The HBLE2 structure comprises one undivided 6-LUT and one divisible 6-LUT split into two 5-LUTs, with a fracturable factor N=2. The HBLE3 structure comprises one undivided 6-LUT and one divisible 6-LUT split into one 5-LUT and two 4-LUTs, with a fracturable factor N=3. The HBLE4 structure comprises one undivided 6-LUT and one divisible 6-LUT split into four 4-LUTs, with a fracturable factor N=4. All three HBLE structures support adder units, allowing both latched and direct combinational logic output. Additionally, they allow direct latched output bypassing combinational logic. A Hybrid Programmable Logic Block (HPLB) is a novel structure created by merging several HBLEs. 
The MCNC circuit set and the VTR circuit set, the two most well-known academic circuit benchmarks (BMs), are chosen for experimental assessment. A Xilinx Virtex-7 FPGA is used to map each circuit set. The mapped netlist is then used to tally the kinds and numbers of LUTs utilized. The minimum number of CLBs needed is found once the data has been arranged using the corresponding greedy algorithms. Since each Xilinx CLB has eight 6-LUTs, the greedy approach uses ⌈Total LUT Number / 8⌉ to determine the smallest number of CLBs needed following BM mapping. To guarantee comparable conditions, each structure is likewise sorted using the greedy algorithm after Xilinx's CLB structure is replaced with the HPLB structure proposed in this research, yielding the minimum number of HPLBs needed. During actual packing, it is not possible to use every LUT in the mapped CLBs owing to routing constraints. As a result, the optimized result obtained after greedy-algorithm restructuring represents the smallest value achievable in a theoretical optimization scenario.  Results and Discussions  When CLB structures are replaced with HPLBs to map the MCNC circuit set, the average number of HPLBs needed for both the HPLB2 and HPLB3 structures drops by about 8%. However, the HPLB4 structure increases the number of HPLBs needed by more than 30% on average. When HPLBs are used in place of CLBs for mapping the VTR circuit set, the required count is smaller: on average, the HPLB2 and HPLB4 counts drop by less than 10%, whereas the HPLB3 count drops by around 30%. This enables SRAM scheduling and complete input pin use. In contrast, because of resource waste, the uniform CLB structure requires more CLBs when implementing functions with a small LUT input K. According to post-mapping HPLB counts, the HPLB4 structure performs worse than the HPLB3 structure. 
According to analysis of post-mapping area optimization, both the MCNC and VTR circuit sets achieve average area reduction ratios over 30%. All three HPLB structures attained area optimization ratios of about 31% on the MCNC test set. Different optimization effects were seen in the VTR test circuit set: HPLB2 produced an average area reduction of 30.63%, whereas HPLB4 produced an average reduction of 51.21%. The HPLB3 structure produced a 45.22% area reduction, even though its optimization effect was marginally less than that of HPLB4. A thorough examination of the area optimization results showed that a higher fracturable factor N produces more noticeable benefits for integrating small-scale LUTs in circuits, resulting in higher area reduction ratios from the enhanced architectures.  Conclusions  To solve the issue of low resource utilization in 6-LUTs, this research proposes three HPLB enhancement architectures based on split granularity. In addition to establishing an assessment procedure and matching algorithms for the enhanced structures, these HPLBs replace Xilinx's CLB structure in order to examine the new structures' benefits in resource utilization. Evaluation experiments using the MCNC and VTR circuit test suites, based on the proportion differences of different LUTs in the post-mapping netlist, show that although HPLB4 achieves significant area optimization, it requires additional HPLBs, resulting in increased interconnect area. While both the HPLB2 and HPLB3 structures obtain average area optimizations over 30%, HPLB3 produces a significantly greater reduction in HPLB count and greater area optimization than HPLB2 as the test circuit scale grows. Thus, after replacing the CLB structure, the HPLB3 structure provides a more balanced optimization impact, greatly improving the utilization of programmable resources when the combined aspects of HPLB usage count and area optimization are taken into account.
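The greedy lower bound used in the evaluation, the total LUT count divided by 8 and rounded up for Xilinx-style CLBs with eight 6-LUT slots, can be sketched directly; the histogram below reuses the {6,5,4,3,2}-LUT distribution {23,32,17,9,13} quoted in the Objective. The HPLB-specific packing rules depend on each structure's sub-LUT capacities and are omitted here.

```python
import math

def min_clbs(lut_histogram, slots_per_clb=8):
    """Greedy lower bound: each mapped LUT, regardless of its input width,
    occupies one 6-LUT slot, so the minimum CLB count is the total number
    of LUTs divided by the slots per CLB, rounded up."""
    total = sum(lut_histogram.values())
    return math.ceil(total / slots_per_clb)

# {input width: count} reusing the example distribution from the Objective
histogram = {6: 23, 5: 32, 4: 17, 3: 9, 2: 13}
print(min_clbs(histogram))  # 94 LUTs -> ceil(94 / 8) = 12 CLBs
```

An HPLB variant of this bound would replace the fixed slot count with the structure's mix of undivided and fracturable slots.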
Efficient and Verifiable Ciphertext Retrieval Scheme Based on Trusted Execution Environment
WU Axin, FENG Dengguo, ZHANG Min, CHI Jialin, YI Yuling
Available online  , doi: 10.11999/JEIT251358
Abstract:
The ciphertext retrieval mechanism enables retrieval functionality over encrypted data, and Symmetric Searchable Encryption (SSE) is a critical branch of ciphertext retrieval. However, for reasons such as saving computing power, cloud servers may return incorrect or incomplete results. Moreover, attackers can exploit the information leaked through search and access patterns to reconstruct keyword details. It is therefore necessary and meaningful to protect the privacy of search and access patterns while achieving result verifiability. Nevertheless, existing verifiable SSE schemes that support search and access pattern privacy typically rely on keyword traversal mechanisms, and their verification mechanisms are inefficient, imposing high computational and communication overheads on users. To address these performance bottlenecks, this paper introduces an efficient and verifiable ciphertext retrieval scheme based on a Trusted Execution Environment (TEE). To improve retrieval efficiency, the scheme combines hardware-level security isolation with oblivious data rearrangement so that the keyword trapdoor size is independent of the size of the keyword dictionary. Meanwhile, the correctness of the returned results is verified by embedding random numbers and blinding polynomial constant terms. These designs yield significant efficiency improvements. First, the scheme ensures that the size of keyword trapdoors depends solely on the number of query keywords rather than the global dictionary size, effectively reducing communication and computational costs. Second, the scheme requires storing only two random numbers to enable verifiability, substantially reducing local storage overhead for users. 
Third, techniques such as single-server, single-round retrieval for data users and symmetric homomorphic encryption further enhance operational efficiency. Additionally, performing confidential computing within the TEE weakens the security assumptions on, and the level of trust required in, the TEE. After formally proving the security of the proposed scheme using simulation-based methods, this paper conducts a comprehensive performance evaluation. The evaluation results confirm that the scheme is significantly more efficient than other schemes with the same functionalities.
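The dictionary-independent trapdoor property can be illustrated with a toy sketch. This is not the paper's actual construction: the PRF-style `gen_trapdoor` helper and the HMAC-SHA256 choice are illustrative assumptions, meant only to show how a trapdoor can grow with the query rather than with the dictionary.

```python
import hmac
import hashlib

def gen_trapdoor(key: bytes, query_keywords: list) -> list:
    """Derive one fixed-size PRF tag per queried keyword.

    The trapdoor size depends only on how many keywords are queried,
    never on the size of the global keyword dictionary.
    """
    return [hmac.new(key, w.encode(), hashlib.sha256).digest()
            for w in query_keywords]

# A two-keyword query yields a two-tag trapdoor regardless of dictionary size.
trapdoor = gen_trapdoor(b"shared-secret-key", ["engine", "fault"])
```

Because the tags are deterministic for a fixed key, the server can match them against stored indices without learning the underlying keywords.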
Physical Layer Security Game for Large Language Model-Based Inference in the Maritime Network
CHEN Haoyu, XIAO Liang, XU Xiaoyu, LI Jieling, WANG Zicheng, LIU Huanhuan, CHEN Hongyi
Available online  , doi: 10.11999/JEIT251269
Abstract:
  Objective  The physical-layer security game reveals the interaction between User Equipment (UE) and attackers, and its equilibria provide performance bounds for anti-jamming transmission and physical-layer authentication schemes. However, existing game models overlook smart attackers that send jamming or spoofing signals, fail to account for maritime wireless channels affected by evaporation ducts and sea-wave fluctuations, and cannot readily evaluate the performance of Large Language Model (LLM)-based inference tasks such as vessel traffic monitoring.  Methods  The anti-jamming maritime communication game for LLM inference is formulated, where the jammer first selects the jamming power and channel to reduce the signal-to-interference-plus-noise ratio at the server with less jamming cost, and the UEs then choose the transmit power, channel, LLM sparsity ratio, and control center to send sensing data (e.g., images, temperature, and humidity) to enhance the inference accuracy with less latency. The physical-layer authentication game for maritime wireless networks with LLM inference is further formulated. The spoofing attacker first selects the number of spoofing packets to degrade authentication accuracy with less cost. The control center then selects the fast authentication mode based on channel state or the safe authentication mode based on the received signal strength and the packet arrival interval from multiple ambient transmitters, together with the test threshold, to increase accuracy with less cost.  Results and Discussions  Based on the Stackelberg Equilibrium (SE) under an LLM with 7 billion parameters, the performance bounds of the Reinforcement Learning (RL)-based anti-jamming inference scheme are provided to reveal the impact of evaporation duct height, wave height, maximum LLM sparsity ratio, and quantization level on inference accuracy and latency. 
In addition, the performance bounds of the RL-based maritime spoofing detection scheme are provided based on the SE of the physical-layer authentication game to show the impact of the maximum number of spoofing packets on the authentication accuracy. Simulations are carried out with five UEs, each with an antenna height of 3 m, offloading images, temperature, and humidity using transmit power up to 200 mW at 5.8 GHz with a bandwidth of 20 MHz to five control centers with antenna heights of 6 m. The jammer applies a Deep Q-Network to choose the jamming power, with a maximum transmit power of 200 mW on each 5.8 GHz channel, and the spoofing attacker applies a Deep Q-Network to select the number of spoofing packets, up to 100. The results show that the inference accuracy and latency of the RL-based anti-jamming maritime communication scheme for LLM inference converge to the performance bounds with gaps of less than 0.6% after 2500 time slots. In addition, the RL-based authentication scheme converges after 1000 time slots with a gap of less than 1.6%.  Conclusions  In this paper, we formulate the maritime physical-layer security game for LLM inference, addressing scenarios such as anti-jamming sensing-data transmission and spoofing detection, to investigate how UEs determine transmit power and channel, and how the control center selects authentication modes and test thresholds, to enhance physical-layer security mechanisms. The attacker chooses attack modes and parameters to degrade the inference accuracy, increase latency, and even cause denial of service. Based on the SE and its existence conditions, the performance bounds show that the inference accuracy increases with the maximum transmit power and decreases linearly with the sparsity ratio. Furthermore, the impact of the maximum number of spoofing packets on the inference accuracy is provided. 
Simulation results show that the RL-based maritime physical-layer security schemes converge to the performance bounds, thereby validating the accuracy and effectiveness of the game model.
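The leader-follower structure of such games can be sketched by backward induction over discretized strategies. The utilities below are invented toy stand-ins for the paper's SINR and cost objectives (the power grids and cost coefficients are arbitrary assumptions), purely to show how a Stackelberg Equilibrium is located.

```python
def stackelberg_equilibrium(leader_actions, follower_actions,
                            leader_utility, follower_utility):
    """Backward induction on a discrete leader-follower game: the leader
    anticipates the follower's best response and commits accordingly."""
    def best_response(a):
        return max(follower_actions, key=lambda b: follower_utility(a, b))
    a_star = max(leader_actions,
                 key=lambda a: leader_utility(a, best_response(a)))
    return a_star, best_response(a_star)

# Toy jamming game (powers in mW): the jammer degrades an SINR-like term
# p / (noise + j) at a jamming cost; the UE pays a transmit-power cost.
noise = 10.0
jam_powers = [0, 50, 100, 200]
tx_powers = [50, 100, 200]
jammer_u = lambda j, p: -(p / (noise + j)) - 0.002 * j
ue_u = lambda j, p: p / (noise + j) - 0.01 * p

se = stackelberg_equilibrium(jam_powers, tx_powers, jammer_u, ue_u)
```

With these toy costs, strong jamming makes high transmit power unprofitable, so the UE's best response collapses to the lowest power once the jammer commits to its maximum.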
A Method for Parallel Testing of Interlayer Vias in Monolithic 3D Integrated Circuits
CHEN Tian, CHEN Weikun, LIU Jun, LIANG Huaguo, LU Yingchun
Available online  , doi: 10.11999/JEIT251375
Abstract:
  Objective  As device dimensions in conventional two-dimensional integrated circuits approach fundamental physical limits, further improvements in performance and integration density face significant challenges. Monolithic three-dimensional integrated circuits (M3D ICs), which sequentially stack multiple active device layers on a single wafer, provide an effective solution to overcome these limitations. In M3D ICs, monolithic inter-tier vias (MIVs) are employed to realize vertical interconnections between device tiers. Compared with through-silicon vias (TSVs), MIVs feature much smaller dimensions, lower parasitic capacitance, and shorter interconnect delay. However, their small electrical variations and massive quantity cause defects to manifest mainly as subtle delay shifts, posing stringent requirements on test accuracy, efficiency, and robustness against Process, Voltage, and Temperature (PVT) variations. Existing MIV testing approaches suffer from limited scalability, strong PVT sensitivity, and difficulty in simultaneously achieving small-delay defect detection and fault localization in large-scale arrays. To address these challenges, a parallel MIV testing method based on a time-to-digital converter (TDC) is presented to enable efficient and reliable testing of large MIV arrays with low area and time overhead.  Methods  Large-scale MIVs are logically organized into a two-dimensional array structure. Each basic test cell consists of a device-under-test MIV, a tri-state buffer, and a D flip-flop, and multiple cells are cascaded to form row test chains and column test chains. By systematically exploiting the inherent input capacitance mismatch between the data and clock terminals of the D flip-flop, an embedded TDC structure incorporating the MIV under test is constructed. 
Test stimuli are generated by a digitally controlled delay line (DCDL), which produces START and STOP pulse signals with multiplicatively adjustable phase differences and injects them into different propagation paths of the test chains, enabling time quantization through a signal chasing mechanism. Structural symmetry between the test chains is employed to mitigate the influence of PVT variations. As the START and STOP phase difference is progressively amplified, multiple TDC readings are collected to characterize defect-induced small delay variations and to distinguish them from measurement noise and PVT-induced fluctuations. After fault information is obtained for individual test chains, cross-analysis of row and column test results enables fault localization within the two-dimensional MIV array.  Results and Discussions  Simulation results based on the Nangate 45 nm standard cell library demonstrate that, under fault-free conditions, TDC readings obtained at different phase difference settings exhibit a stable linear proportional relationship (Fig. 7). Extensive Monte Carlo simulations are performed to determine a robust deviation tolerance threshold of 2, which effectively separates normal variations caused by PVT fluctuations from abnormal shifts induced by defects. Fault injection experiments verify that small delay defects occurring on both the START chain and the STOP chain can be effectively detected and distinguished (Fig. 8). In terms of quantitative detection capability, the minimum detectable resistive open defect is approximately 8.4 kΩ, while the maximum detectable leakage defect and resistive short defect are about 67 kΩ and 32 kΩ, respectively, outperforming existing methods (Fig. 9). Moreover, the row–column decomposition architecture effectively alleviates the growth of test time as the MIV array size increases, resulting in a substantial reduction in overall test overhead. 
Area evaluation indicates that the average area overhead of the embedded built-in self-test structure is only 5.594 µm² per MIV, making it suitable for high-density M3D integration.  Conclusions  A parallel TDC-based testing approach for large-scale MIV arrays is presented, which combines row–column decomposition, phase-difference multiplication, and proportional deviation-based decision mechanisms to achieve efficient detection and accurate localization of both hard faults and small delay defects. Structural symmetry within the test chains effectively enhances robustness against PVT variations. Simulation results confirm that the proposed method can reliably detect resistive open, leakage, and short defects while maintaining low area and time overhead. Compared with existing techniques, a favorable balance among test accuracy, PVT robustness, test efficiency, and hardware cost is achieved. Owing to its scalability and practical feasibility, the proposed approach provides an effective and reliable solution for MIV testing in advanced monolithic three-dimensional integrated circuits.
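The proportionality-based decision rule can be sketched in a few lines. This is a simplified stand-in for the paper's scheme: calibrating `base` from the first reading and using a unit-free deviation tolerance are illustrative assumptions, echoing the reported tolerance threshold of 2.

```python
def detect_delay_defect(readings, multipliers, tolerance=2):
    """Flag a defect when TDC readings stop scaling with the phase-difference
    multiplier: fault-free readings satisfy readings[i] ~ base * multipliers[i],
    while a small delay defect adds an offset that breaks the proportion."""
    base = readings[0] / multipliers[0]
    deviations = [abs(r - base * m) for r, m in zip(readings, multipliers)]
    return max(deviations) > tolerance, deviations

# Fault-free chain: readings grow in proportion to the amplified phase difference.
ok, _ = detect_delay_defect([4, 8, 16, 32], [1, 2, 4, 8])
# Defective chain: a constant extra delay leaks into every amplified reading.
bad, _ = detect_delay_defect([4, 11, 19, 35], [1, 2, 4, 8])
```

Collecting readings at several multipliers, as the abstract describes, is what lets a constant defect offset stand out against PVT-induced noise.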
Physical Layer Key Generation Method for Integrated Sensing and Communication Systems
LIU Kexin, HUANG Kaizhi, PEI Xinglong, JIN Liang, CHEN Yajun
Available online  , doi: 10.11999/JEIT251034
Abstract:
  Objective  Integrated Sensing And Communication (ISAC) has become a central technology in Sixth-Generation (6G) wireless networks, enabling simultaneous data transmission and environmental sensing. However, the characteristics of ISAC systems, including highly directional sensing signals and the risk of sensitive information leakage to malicious sensing targets, create specific security challenges. Physical layer security provides lightweight methods to enhance confidentiality. In secure transmission, approaches such as artificial noise injection and beamforming can partially improve secrecy, although they may reduce sensing accuracy or communication efficiency. Their effect also depends on the quality advantage of legitimate channels over eavesdropping channels. For Physical Layer Key Generation (PLKG), existing work has only demonstrated basic feasibility. Most current schemes adopt a radar-centric design, which limits compatibility with communication protocols and restricts key generation rates. This paper proposes a PLKG method tailored for ISAC systems. It aims to maximize the Sum Key Generation Rate (SKGR) under sensing accuracy constraints through a Twin Delayed Deep Deterministic policy gradient (TD3)-based joint communication and sensing beamforming algorithm, thereby improving the security performance of ISAC systems.  Methods  A MIMO ISAC system is considered, where a base station (Alice) equipped with multiple antennas communicates with single-antenna users (Bobs) and senses a malicious target (Eve). The system operates under a TDD protocol to leverage channel reciprocity. A PLKG protocol designed for ISAC systems is developed, including channel estimation, joint communication and sensing beamforming, and key generation. The SKGR is derived in closed form, and sensing accuracy is evaluated using the Cramér-Rao Bound (CRB). 
To maximize the SKGR under CRB constraints, a non-convex optimization problem for the joint design of communication and sensing beamforming matrices is formulated. Given its NP-hardness, an algorithm based on TD3 is proposed. TD3 employs dual critic networks to reduce overestimation, delayed policy updates to enhance stability, and target policy smoothing to improve robustness. The state includes channel state information, the actions correspond to beamforming matrices, and the reward function combines SKGR, CRB, and power constraints.  Results and Discussions  Simulation results confirm the effectiveness of the proposed design. The TD3-based algorithm achieves a stable SKGR of 18.5 bits/channel use after training (Fig. 4), outperforming benchmark schemes such as Deep Deterministic Policy Gradient (DDPG), greedy search, and random algorithms. The SKGR increases monotonically with transmit power because of reduced noise interference (Fig. 5). Increasing the number of antennas also improves SKGR, although the gain diminishes as power per antenna decreases. The scheme maintains stable SKGR across different distances to the eavesdropper (Fig. 6), demonstrating the robustness of PLKG against eavesdropping attacks. The proposed algorithm manages the complex optimization problem effectively and adapts to dynamic system conditions, offering a practical approach for secure ISAC systems.  Conclusions  This paper presents a PLKG method for ISAC systems. The proposed protocol generates consistent keys between the base station and communication users. The SKGR maximization problem with sensing constraints is solved using a TD3-based algorithm that jointly optimizes communication and sensing beamforming matrices. Simulation results show that the method outperforms benchmark schemes, with significant gains in SKGR and adaptability to system conditions. The study establishes a basis for integrating PLKG into ISAC to strengthen security without reducing sensing performance. 
Future work will examine real-time implementation and scalability in large networks.
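As a hedged illustration of how a sum key generation rate can be evaluated (this is not the paper's closed-form SKGR; the jointly Gaussian estimate model and the per-user estimation SNRs g_a, g_b are assumptions), the per-user rate can be taken as the mutual information between Alice's and Bob's noisy reciprocal channel estimates:

```python
import math

def sum_key_generation_rate(est_snrs_alice, est_snrs_bob):
    """Sum key rate in bits/channel use under a jointly Gaussian model.

    With channel-estimation SNRs g_a and g_b, the estimate correlation obeys
    rho^2 = (g_a / (1 + g_a)) * (g_b / (1 + g_b)), and each user contributes
    I = -0.5 * log2(1 - rho^2).
    """
    total = 0.0
    for ga, gb in zip(est_snrs_alice, est_snrs_bob):
        rho2 = (ga / (1 + ga)) * (gb / (1 + gb))
        total += -0.5 * math.log2(1 - rho2)
    return total

# Higher estimation SNR at both ends tightens the correlation and the key rate,
# mirroring the monotone SKGR-versus-power trend reported in the abstract.
rate = sum_key_generation_rate([10.0, 10.0], [10.0, 10.0])
```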
Modulation Recognition Method for High-Speed Mobile Communication Based on Attention Dynamic Fusion and Hybrid Pruning Transformer
ZHENG Qinghe, CHEN Bin, YU Lisu, HUANG Chongwen, JIANG Weiwei, SHU Feng, ZHAO Yizhe
Available online  , doi: 10.11999/JEIT251211
Abstract:
  Objective  Automatic modulation recognition is a critical preprocessing step in dynamic spectrum access and anti-jamming communication systems, directly impacting the robustness and spectrum efficiency of non-cooperative communication. In high-speed mobile communication scenarios such as satellite, high-speed rail, and drone swarm communications, signal modulation features suffer severe distortion due to Doppler shifts, time-varying channels, and non-stationary interference. These issues pose significant challenges to traditional modulation recognition methods based on static assumptions, leading to feature mismatch and increased misjudgment rates. To address the insufficient robustness and real-time performance of existing deep learning-based modulation recognition models in high-speed mobile environments, this paper proposes a lightweight dynamic fusion Transformer-based approach.  Methods  The proposed method consists of three main components: a signal representation fusion block, the Transformer model design, and model pruning for lightweight inference. First, a RollingQ mechanism is introduced to dynamically adjust the direction of the attention query matrix based on the quality of each signal representation, breaking the cycle of attention fixation and achieving balanced utilization of all signal representations. Then, the Multi-head Attention Frequency Enhancement Transformer (MAFE-Transformer) is designed, which integrates local and global spatiotemporal features through lightweight convolutional enhancement, multi-attention feature extraction, and frequency learning and selection modules. Finally, an attention-based dynamic hybrid pruning strategy is applied to reduce structural redundancy and accelerate inference, enabling real-time modulation recognition.  Results and Discussions  Extensive experiments are conducted on two public datasets, RadioML 2016.10a and RML22, to validate the effectiveness of the proposed method. 
The MAFE-Transformer achieves average classification accuracies of 65.14% and 78.40% on the two datasets, respectively. Under low-SNR conditions from −20 dB to 0 dB, the model demonstrates strong robustness, particularly on the RML22 dataset with the dynamic channel model ETU70 (Fig. 5). The confusion matrix shows that the errors of the MAFE-Transformer are distributed relatively uniformly among the modulation schemes, reflecting well-balanced classification performance (Fig. 6). Ablation studies confirm that the RollingQ-based dynamic fusion mechanism improves accuracy by 7.2% on RadioML 2016.10a and 9.5% on RML22 compared with a single signal representation (Fig. 7). The hybrid pruning strategy reduces inference latency to 2.2 ms per signal while maintaining high accuracy (Fig. 8). Comparative experiments show that the proposed model outperforms several state-of-the-art deep learning models (e.g., Ms-RaT, MobileViT, MobileRaT, and KA-CNN) by 4%–10% in recognition accuracy, demonstrating superior performance in high-speed mobile communication scenarios (Fig. 9).  Conclusions  This paper proposes a lightweight dynamic fusion Transformer-based automatic modulation recognition method to address the challenges of robustness and real-time performance in high-speed mobile communication environments. By introducing the RollingQ mechanism and the MAFE-Transformer structure combined with dynamic hybrid pruning, the proposed method achieves a better trade-off between recognition accuracy and inference efficiency. Experimental results on public datasets confirm its effectiveness and robustness under complex channel conditions with Doppler shifts and time-varying interference. However, the proposed method has not been systematically evaluated under more complex interference such as impulsive noise or frequency-selective fading. Future work will focus on improving adaptability to non-stationary noise, cross-device generalization, and optimization for edge deployment.
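The balanced-utilization idea behind RollingQ can be hinted at with a much simpler sketch: softmax quality weighting over the signal representations, so that no single modality's features dominate the fused vector. This is only a conceptual stand-in, not the RollingQ query-rotation mechanism itself, and the quality scores and temperature parameter are assumptions.

```python
import math

def fuse_representations(features, qualities, temperature=1.0):
    """Fuse per-representation feature vectors (e.g. I/Q, amplitude-phase,
    spectrum) with softmax weights derived from per-representation quality."""
    exps = [math.exp(q / temperature) for q in qualities]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights

# Equal quality scores give each representation an equal share of the fusion.
fused, w = fuse_representations([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                                [0.5, 0.5, 0.5])
```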
Design and Verification of Robust Modulation Recognition Framework Under Blind Adversarial Attacks
ZHENG Qinghe, ZHOU Fuhui, YU Lisu, HUANG Chongwen, JIANG Weiwei, SHU Feng, ZHAO Yizhe
Available online  , doi: 10.11999/JEIT260019
Abstract:
  Objective  Deep learning-based Automatic Modulation Recognition (AMR) models have demonstrated superior performance in non-cooperative communication systems such as cognitive radio and spectrum monitoring. However, the inherent vulnerability of deep learning models to adversarial attacks, where imperceptible perturbations can cause catastrophic misclassification, poses a severe security threat. Existing defense methods, including adversarial training, often rely on prior knowledge of specific attacks, incur significant computational overhead, and face a trade-off between robustness and accuracy on clean samples. To address these limitations, this paper aims to design and validate a robust modulation recognition framework that can operate effectively under blind adversarial attack scenarios, without prior knowledge of the attack type or strategy, thereby ensuring the reliable deployment of intelligent communication systems in adversarial environments.  Methods  The proposed framework integrates a novel feature-purifying autoencoder module with standard modulation classifiers (CNN and Transformer). The core innovation lies in the autoencoder’s bottleneck layer, which incorporates a dynamic purification mechanism. This mechanism first calculates an adaptive threshold from the statistical properties of the encoded latent features to identify anomalies. A Top-K sparsification operation then selectively preserves only the most significant feature activations, effectively suppressing noise and adversarial perturbations while retaining essential signal characteristics. The autoencoder is trained via a three-stage curriculum learning strategy that sequentially optimizes reconstruction fidelity, feature sparsity, and semantic consistency between the purified and original clean signals, ensuring the output aligns with the true modulation manifold. This model-agnostic module can be seamlessly prepended to any trained classifier without retraining.  
Results and Discussions  Comprehensive experiments are conducted on a simulated dataset encompassing 12 digital modulation types under multipath fading channels. The framework demonstrates substantial performance improvements. For the CNN and Transformer, the recognition accuracies under challenging targeted white-box attacks increase to 82.1% and 83.2%, and under non-targeted black-box attacks reach 87.7% and 89.4%, respectively (Table 1). The Attack Success Rate (ASR) and Attack Effectiveness Index (AEI) remain at low levels, confirming strong defensive capability. Figure 4 shows that defense efficacy improves with higher SNR. Crucially, the ablation study in Figure 5 highlights the indispensable role of the autoencoder, whose removal causes accuracy to drop by 4.02% and 2.36% on the CNN and Transformer under strong attacks. Further analysis (Figure 6) indicates that the framework maintains robustness across a wide range of perturbation bounds ($ \epsilon \leq 0.1 $). Moreover, parameter sensitivity studies (Figures 7 and 8) show stable performance for the threshold coefficient $ \xi $ in [1.5, 1.9] and sparsity rate k around 0.7, confirming its practical deployability.  Conclusions  This paper presents a robust, blind defense framework for AMR based on the feature-purifying autoencoder. The key advantages are threefold: 1) It provides effective defense against diverse white-box and black-box attacks without requiring any prior knowledge of the attack methods, achieving true blind defense; 2) As a preprocessing module, it eliminates the need for computationally expensive retraining of the primary classifier and is compatible with various backbone networks; 3) The multi-stage training strategy successfully balances robustness against attacks with the preservation of high accuracy on clean samples. Finally, experimental results on the comprehensive dataset validate the framework’s superiority. 
Future work will focus on lightweight architectural designs to reduce inference latency and further investigate performance boundaries under extreme low-SNR conditions combined with complex nonlinear channel impairments.
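The bottleneck purification mechanism (adaptive threshold, then Top-K sparsification) admits a minimal sketch. This is an assumed simplification: the clipping rule, the mean-plus-ξ·std threshold, and the defaults ξ = 1.7 and k = 0.7 merely echo the sensitivity ranges reported above and are not the paper's exact operations.

```python
def purify(latent, xi=1.7, k_rate=0.7):
    """Suppress anomalous spikes via an adaptive statistical threshold,
    then keep only the top-k fraction of activation magnitudes."""
    n = len(latent)
    mean = sum(abs(v) for v in latent) / n
    std = (sum((abs(v) - mean) ** 2 for v in latent) / n) ** 0.5
    thresh = mean + xi * std
    # Clip activations flagged as anomalous by the adaptive threshold.
    gated = [max(min(v, thresh), -thresh) for v in latent]
    # Top-K sparsification: zero everything below the k-th largest magnitude.
    k = max(1, int(round(k_rate * n)))
    cutoff = sorted((abs(v) for v in gated), reverse=True)[k - 1]
    return [v if abs(v) >= cutoff else 0.0 for v in gated]

# A single adversarial spike (5.0) is clipped before sparsification.
out = purify([0.1, 0.2, 5.0, 0.3, 0.05, 0.15, 0.25, 0.35, 0.4, 0.45])
```

Because the module only transforms latent features, it can sit in front of any trained classifier, which is the model-agnostic property the abstract emphasizes.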
UWF-YOLO: A Lightweight Framework for Underwater Object Detection via Redundant Information Optimization
HOU Guojia, MA Jiaqi, WANG Yuechuan, HUANG Baoxiang, LI Kunqian
Available online  , doi: 10.11999/JEIT251129
Abstract:
  Objective  The rapid development of underwater imaging technology has significantly elevated the importance of underwater object detection for resource exploration and environmental monitoring applications. Complex underwater environments typically yield various image-quality degradations such as color casts, haze-like effects, and non-uniform illumination. Unfortunately, existing vision-based object detection algorithms often suffer from unsatisfactory performance, especially for small objects, resulting in missed detections and false positives. Moreover, existing deep learning-based underwater detection models face substantial challenges in striking an optimal balance between accuracy and lightweight design under limited equipment resources. It is therefore important to design efficient underwater object detection methods for water-related vision tasks, which play a crucial role in marine resource exploration, ecological monitoring, underwater robotics, and intelligent perception systems for autonomous underwater vehicles.  Methods  In this paper, we propose UWF-YOLO, a novel lightweight underwater object detection network based on redundant information optimization. First, the C2f module is reconstructed with the FasterNet Block to optimize both the backbone and neck networks, and a feature channel selection mechanism is incorporated to reduce redundant features. In addition, because the redundant features produced by traditional convolutions in the YOLO neck adapt poorly to the underwater environment, Ghost Convolution is introduced to generate Ghost feature maps and enhance the multi-scale feature fusion capability of the neck network. 
Next, our method achieves parameter sharing by replacing the original detection head with a redundant optimization group detection head (RRG-Head) based on group convolution, thereby reducing computational costs. Finally, a structured channel pruning technique is applied, which identifies inter-layer dependencies in the computation graph and binds pruning units; combined with LAMP weight-magnitude score normalization for evaluating channel importance, low-contributing groups are pruned and the network is fine-tuned to compress its size. In addition, since the scenes in existing underwater detection datasets are typically monotonous and the objects they contain are usually small and clustered, we also construct an underwater object detection dataset with complex scenes, named CSUOD, by collecting real-world underwater images from different websites and platforms to ensure diversity and authenticity, followed by manual annotation and resolution normalization. CSUOD is specifically designed for challenging underwater environments characterized by color casts, haze-like effects, and non-uniform illumination; it comprises 1135 manually selected images covering 6 different types of objects.  Results and Discussions  Extensive experiments are conducted on three public underwater object detection datasets (i.e., DUO, RUOD, and TrashCan) against several popular and widely used detectors, including YOLOv5s, YOLOv7-tiny, YOLOv8s, YOLOv9-tiny, and Deformable DETR. In terms of computational complexity, the proposed method reduces FLOPs, model size, and parameters by 60.4%, 77.3%, and 78.4%, respectively, compared with the baseline. 
In addition, our method outperforms YOLOv9-tiny, which has comparable parameters, by 0.3%, 2.3%, and 3.4% in mAP across the three datasets. Comparative results on our CSUOD dataset further indicate that the proposed model remains accurate and stable even in complex underwater environments. Qualitative visualization results further illustrate the model’s robustness and detection stability under various underwater degradations, such as haze-like effects and non-uniform illumination.  Conclusions  Quantitative and qualitative experiments on different datasets validate the effectiveness and robustness of the proposed method. UWF-YOLO achieves superior detection performance in complex underwater environments, effectively mitigating missed detections and false positives caused by background interference. Extensive experimental results show that UWF-YOLO achieves significant model compression while maintaining detection accuracy comparable to the benchmark model. This balance between detection accuracy and low computational cost makes it particularly suitable for underwater devices with limited resources. The proposed method also has great potential in practical scenarios such as marine ecological monitoring, underwater resource exploration, and autonomous underwater vehicle perception, providing a reliable and efficient technical foundation for real-time applications, with strong adaptability to different underwater conditions, efficient integration into embedded platforms, and support for real-time perception and decision-making. The constructed CSUOD dataset helps address the limitations of existing underwater object detection datasets and promotes the development of underwater object detection. In the future, this work can be extended to multi-modal perception systems and larger-scale datasets. 
These efforts will enable adaptive models for more dynamic underwater scenarios and support broader applications in intelligent ocean observation and autonomous navigation.
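The LAMP normalization mentioned in the pruning step can be sketched directly: LAMP scores each weight by its squared magnitude divided by the total squared magnitude of all weights at least as large within its layer, making per-layer scores comparable for a global pruning cut. The two-layer toy tensors and plain-list representation are illustrative; real structured pruning also binds grouped channels and fine-tunes, which this sketch omits.

```python
def lamp_scores(weights):
    """LAMP score per weight in one layer: w_i^2 over the sum of squared
    weights whose magnitude is >= |w_i| (the sorted 'tail' including w_i)."""
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    scores = [0.0] * len(weights)
    tail = 0.0
    for i in reversed(order):      # accumulate from the largest magnitude down
        tail += weights[i] ** 2
        scores[i] = weights[i] ** 2 / tail
    return scores

def prune_by_lamp(layers, sparsity):
    """Zero the globally lowest-scoring fraction of weights across layers."""
    scored = [(lamp_scores(w), w) for w in layers]
    all_scores = sorted(s for sc, _ in scored for s in sc)
    n_prune = int(sparsity * len(all_scores))
    cut = all_scores[n_prune] if n_prune < len(all_scores) else float("inf")
    return [[wi if si >= cut else 0.0 for si, wi in zip(sc, w)]
            for sc, w in scored]

pruned = prune_by_lamp([[3.0, 1.0, 2.0], [0.5, 0.4]], sparsity=0.4)
```

Note how the small second layer keeps both weights: its largest weight always scores 1.0, so layers are not starved the way a raw global magnitude cut would starve them.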
Performance Analysis and Rapid Prediction of Long-range Underwater Acoustic Communications in Uncertain Deep-sea Environments
CHEN Xiangmei, TAI Yupeng, WANG Haibin, HU Chenghao, WANG Jun, WANG Diya
Available online  , doi: 10.11999/JEIT251244
Abstract:
  Objective  In complex and dynamically changing deep-sea environments, the performance of underwater acoustic communications shows substantial variability. Feedback-based channel estimation and parameter adaptation are impractical in long-range scenarios because platform constraints prevent reliable feedback channels and the slow propagation of sound introduces significant delay. In typical long-range systems, environmental dynamics are often ignored and communication parameters are selected heuristically, which frequently leads to mismatches with actual channel conditions and causes communication failures or reduced efficiency. Predictive methods able to assess performance in advance and support feed-forward parameter adjustment are therefore required. This study proposes a deep-learning-based framework for performance analysis and rapid prediction of long-range underwater acoustic communications under uncertain environmental conditions to enable efficient and reliable parameter–channel matching without feedback.  Methods  A feed-forward method for underwater acoustic communication performance analysis and rapid prediction is developed using deep-learning-based sound-field uncertainty estimation. A neural network is first used to estimate probability distributions of Transmission Loss (TL PDFs) at the receiver under dynamic environments. TL PDFs are then mapped to probability distributions of the Signal-to-Noise Ratio (SNR PDFs), enabling communication performance evaluation without real-time feedback. Statistical channel capacity and outage capacity are analyzed to characterize the theoretical upper limits of achievable rates in dynamic conditions. Finally, by integrating the SNR distribution with the bit-error-rate characteristics of a representative deep-sea single-carrier communication system under the corresponding channel, a rate–reliability prediction model is constructed. 
This model estimates the probability of reliable communication at different data rates and serves as a practical tool for forecasting link performance in highly dynamic and feedback-limited underwater acoustic environments.  Results and Discussions  The method is validated using simulation data and sea trial data. The TL PDFs predicted by the deep learning model show strong consistency with the traditional Monte Carlo (MC) method across multiple receiver locations (Fig. 6). Under identical computational settings, deep-learning-based TL PDF prediction reduces computation time by 2 to 3 orders of magnitude compared with the MC method. The chained mapping from TL PDFs to SNR PDFs and then to channel capacity metrics accurately represents the probabilistic features of communication performance under uncertain conditions (Fig. 7 and Fig. 8). The rate–reliability curves derived from the deep-learning-based TL PDFs are highly consistent with MC-based results. In the high sound-intensity region, prediction errors for reliable communication probabilities across data rates range from 0.1% to 3%, and in the low sound-intensity region errors are approximately 0.3% to 5% (Fig. 12). Sea trial results further indicate that predicted rate–reliability performance agrees well with measured data. In the convergence zone, deviations between predicted and measured reliability probabilities at each rate range from 0.9% to 4%, and in the shadow zone from 1% to 9% (Fig. 18). Under a 90% reliability requirement, the maximum achievable rates predicted by the method match the measurements in both the convergence and shadow zones, demonstrating accuracy and practical applicability in complex channel environments.  Conclusions  A deep-learning-based framework for performance analysis and rapid prediction of long-range underwater acoustic communications in uncertain deep-sea environments is developed and validated. 
The framework builds a chained mapping from environmental parameters to TL PDFs, SNR PDFs, and communication performance metrics, enabling quantitative capacity assessment under dynamic ocean conditions. Predictive “rate–reliability” profiles are obtained by integrating probabilistic propagation characteristics with the performance of a representative deep-sea single-carrier system under the corresponding channel, providing guidance for parameter selection without feedback. Sea trial results confirm strong agreement between predicted and measured performance. The proposed approach offers a technical pathway for feed-forward performance analysis and dynamic adaptation in long-range deep-sea communication systems, and can be extended to other communication scenarios in dynamic ocean environments.
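The chained mapping from TL PDFs to SNR PDFs and then to capacity metrics can be sketched numerically. This is a minimal illustration assuming the passive sonar equation SNR = SL − TL − NL; the source level, noise level, bandwidth, and the Gaussian TL distribution are placeholder values, not those of the paper.

```python
import numpy as np

def snr_samples_db(tl_samples_db, source_level_db=185.0, noise_level_db=60.0):
    """Map Transmission-Loss samples (dB) to SNR samples (dB) via the sonar
    equation SNR = SL - TL - NL. SL and NL are illustrative placeholders."""
    return source_level_db - tl_samples_db - noise_level_db

def outage_capacity(snr_db, bandwidth_hz, outage_prob=0.1):
    """Outage capacity: the rate supported with probability 1 - outage_prob,
    estimated from the empirical SNR distribution."""
    snr_lin = 10.0 ** (np.asarray(snr_db) / 10.0)
    rates = bandwidth_hz * np.log2(1.0 + snr_lin)   # per-sample Shannon rate
    return np.quantile(rates, outage_prob)          # exceeded with prob 1 - p_out

# Illustrative TL samples standing in for a predicted TL PDF
rng = np.random.default_rng(0)
tl = rng.normal(95.0, 3.0, size=10_000)
c_out = outage_capacity(snr_samples_db(tl), bandwidth_hz=1_000.0)
```

Replacing the Gaussian TL samples with samples drawn from the network-predicted TL PDF gives the feed-forward evaluation described above.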
Towards Privacy-Preserving and Lightweight Modulation Recognition for Short-Wave Signals under Channel Shifts
YAO Yizhou, DENG Wen, LI Baoguo
Available online  , doi: 10.11999/JEIT251017
Abstract:
  Objective  Existing short-wave signal modulation recognition methods based on the supervised learning paradigm typically assume that training data (source domain) and test data (target domain) follow identical distributions. However, short-wave channels are susceptible to ionospheric variations, leading to significant distribution discrepancies across domains, which consequently causes model performance degradation. Furthermore, deployment on the edge side of unmanned platforms is constrained by limited device resources, scarce labeled samples, and data privacy requirements. To address these challenges, a lightweight recognition method based on source-model transfer is proposed in this paper, enabling privacy-preserving model adaptation without the need to access source domain data.  Methods  A multi-modal source-model transfer framework (M-SMOT) is developed, which utilizes information maximization loss and self-supervised pseudo-labeling techniques to facilitate model adaptation without revisiting source domain data. This approach achieves effective cross-channel recognition of short-wave modulation signals while reducing computational resource consumption and preserving data privacy. Additionally, multi-modal information—comprising in-phase/quadrature (I/Q) components, amplitude-phase (AP) characteristics, and spectral features—is fused to leverage complementary feature representations, thereby enhancing the robustness of the recognition network against complex channel variations.  Results and Discussions  Experimental results demonstrate that the recognition performance of the proposed method consistently surpasses that of the Source-Only baseline across six cross-channel scenarios, with improvements ranging from 0.31% to 10.81% (Table 1). In terms of few-shot adaptation, average recognition accuracies are maintained at 98.3% and 96% relative to the full-sample baseline, even when target domain training samples are reduced to 10% and 1%, respectively (Fig. 
12). Ablation studies verify the necessity and effectiveness of the self-supervised pseudo-labeling module (Fig. 16) and the multi-modal fusion strategy (Fig. 17), confirming that both components contribute to the overall performance. Furthermore, the lightweight advantages are quantified: the method requires zero storage for source data, exhibits a peak memory consumption of only 6.00 MB, and achieves convergence within a single fine-tuning epoch (Table 2). These findings validate the capability of the proposed mechanism to mitigate domain discrepancies and protect privacy under resource-constrained conditions.  Conclusions  The M-SMOT method successfully integrates data privacy protection, source model adaptation, few-shot generalization, and low resource consumption. Consequently, it provides a practical solution for cross-channel modulation recognition in short-wave communications, demonstrating significant potential for deployment on resource-limited edge devices.
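The source-free adaptation objectives named above can be sketched in NumPy. This follows the generic SHOT-style formulation (per-sample entropy minus batch diversity, and centroid-based pseudo-labels); it is an assumed reconstruction for illustration, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(logits, eps=1e-8):
    """IM loss = mean per-sample entropy (confidence term) minus entropy of
    the mean prediction (diversity term). Lower = confident AND balanced."""
    p = softmax(logits)
    ent = -(p * np.log(p + eps)).sum(axis=1).mean()   # encourage confident outputs
    p_mean = p.mean(axis=0)
    div = -(p_mean * np.log(p_mean + eps)).sum()      # encourage class diversity
    return ent - div

def pseudo_labels(features, logits):
    """Self-supervised pseudo-labels: soft-prediction-weighted class centroids,
    then nearest-centroid assignment by cosine similarity (SHOT-style sketch)."""
    p = softmax(logits)
    centroids = (p.T @ features) / (p.sum(axis=0)[:, None] + 1e-8)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    c = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + 1e-8)
    return (f @ c.T).argmax(axis=1)
```

Minimizing the IM loss on unlabeled target-domain batches, with pseudo-labels as an auxiliary supervised signal, adapts the source model without ever touching source data.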
Indoor Visible Light Positioning Based on CNN–MLP Multi-Feature Fusion under Random Receiver Tilt Conditions
JIA Kejun, WANG Jian, MAO Lifei, YOU Wei, HUANG Ziyang, PENG Duo
Available online  , doi: 10.11999/JEIT251021
Abstract:
  Objective  Traditional visible light positioning (VLP) methods based on received signal strength (RSS) suffer from instability when the receiver experiences orientation perturbations, which disrupt the correspondence between optical power and spatial position, making reliable three-dimensional (3D) positioning difficult to achieve. Existing approaches typically rely on inertial measurement units (IMUs) to obtain orientation information; however, sensor fusion increases system complexity and hardware cost and introduces cumulative errors. To address these issues, this paper proposes a positioning method that fuses cosine-of-incidence-angle estimation based on a photodiode (PD) array with RSS information, enabling high-accuracy 3D indoor positioning under receiver orientation perturbations.  Methods  In the proposed fusion-based positioning method, a multi-PD array structure is first adopted, and a local coordinate system (LCS) is established at the array center. Constraint equations are then constructed based on the differences in received optical power among PDs in the array. A Gauss–Newton iterative algorithm is employed to estimate the incident light direction vector. By exploiting the orthogonal rotation invariance between the LCS and the global coordinate system (GCS), the cosine of the incident angle is estimated without the need for orientation sensors. Subsequently, a serial CNN–MLP fusion network is constructed, in which the estimated incident-angle cosine is introduced as an additional positioning feature on top of RSS-based localization. The network jointly models the RSS and incident-angle cosine information received by the PD array and maps them to 3D spatial coordinates. Finally, training samples are generated using Latin hypercube sampling (LHS) to uniformly sample spatial positions and orientation dimensions, thereby improving the representativeness of the training dataset.  
Results and Discussions  Simulation experiments are conducted in a 4 m × 4 m × 2.5 m indoor environment. First, the effects of different numbers of PDs and tilt angles on the accuracy of incident-angle cosine estimation and spatial coverage are evaluated (Fig. 6), and the cumulative distribution functions (CDFs) of positioning errors under different array configurations are compared (Fig. 7). The results show that a 3-PD array with a tilt angle of 40° achieves the best balance among cost, coverage, and positioning accuracy. Next, positioning performance under different receiver tilt angles is analyzed. When the tilt angle is small, more than 70% of positioning errors are below 5 cm; even when the receiver is tilted up to 55°, the average error remains within 11.7 cm (Fig. 8). Error component comparisons indicate that the error along the Z-axis is significantly smaller than those along the X and Y axes (Fig. 9). Further tests are conducted at a height of 0.0 m covered by the training data and at an unseen height of 0.6 m not included in the training set (Fig. 10). The results demonstrate that the proposed model does not exhibit strong dependence on a specific height plane and maintains stable 3D positioning performance at unseen heights. Finally, the proposed method is compared with related positioning schemes. It outperforms existing methods in terms of CDF convergence speed, RMSE, and standard deviation (Fig. 11), achieving an average error reduction of approximately 2.5 cm and an RMSE reduction of 31.58% compared with Ref. [12].  Conclusions  This paper estimates the cosine of the incident angle at the receiver by exploiting differences in the optical power received by different PDs in an array and introduces this cosine value as a joint positioning feature into conventional RSS-based localization, thereby alleviating the instability of position mapping caused by relying solely on RSS under random receiver perturbations. 
By further combining the spatial feature extraction capability of CNNs with the nonlinear modeling strength of MLPs, the proposed method effectively maps positioning features to 3D spatial coordinates. The approach reduces reliance on orientation sensors such as IMUs while overcoming the susceptibility of traditional geometric positioning methods to noise and high-dimensional nonlinear features. Under varying heights and receiver orientations, the proposed algorithm demonstrates significant advantages in both positioning accuracy and stability.
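The incident-angle estimation step can be illustrated with a toy PD array. Under an assumed Lambertian response P_i = c (n_i · s), the power constraints are linear in the scaled direction v = c s, so a single least-squares solve (the Gauss–Newton fixed point for this linear model) recovers the incidence cosine; the array geometry and constants below are hypothetical.

```python
import numpy as np

def pd_normals(n_pd=3, tilt_deg=40.0):
    """Unit normals of n_pd photodiodes tilted by tilt_deg from the array
    z-axis, evenly spaced in azimuth (assumed geometry, not from the paper)."""
    t = np.deg2rad(tilt_deg)
    phi = 2 * np.pi * np.arange(n_pd) / n_pd
    return np.stack([np.sin(t) * np.cos(phi),
                     np.sin(t) * np.sin(phi),
                     np.full(n_pd, np.cos(t))], axis=1)

def estimate_incidence_cosine(powers, normals):
    """Lambertian model P_i = c * (n_i . s): linear in v = c*s, so one
    least-squares solve yields the direction s and the incidence cosine s_z."""
    v, *_ = np.linalg.lstsq(normals, powers, rcond=None)
    s = v / np.linalg.norm(v)
    return s[2], s   # cos(theta) w.r.t. the array z-axis, and the direction

# Synthetic check: light arriving 20 degrees off the array axis
normals = pd_normals()
s_true = np.array([np.sin(np.deg2rad(20)), 0.0, np.cos(np.deg2rad(20))])
powers = 2.3 * normals @ s_true          # unknown gain c = 2.3 cancels out
cos_hat, _ = estimate_incidence_cosine(powers, normals)
```

The recovered cosine is then fed, together with the RSS values, into the CNN–MLP regressor as an extra positioning feature.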
Small Object Detection Algorithm for UAV Aerial Images in Complex Environments
LIU Jie, LIU Shuhao, TIAN Ming, CUI Zhigang
Available online  , doi: 10.11999/JEIT251126
Abstract:
  Objective  Small object detection plays a critical role in practical applications such as UAV (Unmanned Aerial Vehicle) inspection and intelligent transportation systems, where precise perception of diminutive targets is essential for operational reliability and safety. It enables the automated identification and tracking of challenging targets. However, the limited pixel size of small objects, coupled with their tendency to be obscured or integrated with complex backgrounds, results in strong background noise, leading to poor performance and elevated false-negative rates in existing detection models. To address this issue and achieve high-performance, high-precision detection of small objects in complex environments, this study proposes HAR-DETR, an enhancement over the RT-DETR baseline model, aimed at improving the detection accuracy for small objects.  Methods  HAR-DETR is proposed for small object detection in aerial images, incorporating three key improvements: Aggregated Attention, RFF-FPN (Recalibrated Feature Fusion Network-FPN), and a high-resolution detection branch. In the backbone network, Aggregated Attention enhances the model's ability to focus on relevant features of small objects. By expanding the receptive field, the model captures more detailed edge and texture information, thereby enabling more effective extraction of multi-scale features of the targets. During the feature fusion phase, RFF-FPN selectively integrates high-level and low-level features, allowing the network to retain critical spatial information and context. This facilitates the refinement of the edges and contours of small objects, improving the accuracy of localization and recognition, especially when object details may be obscured by background clutter or varying lighting conditions. 
The high-resolution detection head places greater emphasis on the edge features of small objects, providing enhanced small object perception capabilities, and further improving the model's robustness and precision.  Results and Discussions  A comparative analysis is conducted with several widely used object detection models, including YOLOv5, YOLOv8, and YOLOv10, to evaluate the performance of the model in small object detection using precision, recall, and mAP metrics. Experimental results show that the HAR-DETR model outperforms other comparative models in terms of precision, recall, and mAP on the VisDrone2019 dataset (Table 1). The mAP50 and mAP50-95 are improved by 3.8% and 3.2%, respectively, compared to the baseline model (Table 2). This demonstrates that the HAR-DETR model offers superior performance in detecting small objects in aerial images under complex environments. Heatmaps generated using GradCAM are utilized for comparative analysis of the proposed improvements, showing that each improvement yields better detection results than the baseline model (Fig. 6). In the generalization performance experiment, the VisDrone2019 validation set and RSOD dataset are used under identical training conditions. The experimental results indicate that HAR-DETR exhibits strong generalization ability across heterogeneous tasks (Tables 3 and 4).  Conclusions  This paper addresses the issues of false positives and false negatives in small object detection within aerial images captured in complex environments by utilizing the HAR-DETR model. Aggregated Attention is introduced in the backbone feature extraction phase to expand the receptive field and enhance global feature extraction capabilities. In the feature fusion phase, the RFF-FPN structure is proposed to enrich the feature representations. Additionally, a high-resolution detection head is introduced to make the model more sensitive to the edge textures of small objects. 
The model is evaluated using the VisDrone2019 and RSOD datasets, and the results demonstrate the following: (1) The proposed method improves the small object detection metrics, mAP50 and mAP50-95, by 3.8% and 3.2%, respectively, compared to the baseline model, achieving 51.2% and 32.1%, and mitigating the issues of false negatives and false positives; (2) In comparison with other mainstream object detection models, HAR-DETR exhibits the best performance in small object detection, thereby fully validating the effectiveness of the model; (3) The HAR-DETR model achieves high accuracy in cross-dataset training, demonstrating its excellent generalization performance. These results indicate that HAR-DETR possesses stronger semantic expression and spatial awareness capabilities, making it adaptable to various aerial perspectives and target distribution patterns, thus providing a more versatile solution for UAV visual perception systems in complex environments.
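The idea of selectively fusing high-level and low-level features can be sketched as a channel-wise gated sum driven by pooled global context. This generic recalibration sketch is an assumption for illustration only; the paper's RFF-FPN block is more elaborate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recalibrated_fusion(low_feat, high_feat, w):
    """Gated fusion of a low-level map (spatial detail) and a high-level map
    (semantics), both shaped (C, H, W). A per-channel gate computed from
    pooled global context decides the mix; a generic 'selective fusion'
    sketch, not the exact RFF-FPN layer."""
    # global average pooling over the spatial dimensions -> context vector (2C,)
    ctx = np.concatenate([low_feat.mean(axis=(1, 2)), high_feat.mean(axis=(1, 2))])
    gate = sigmoid(w @ ctx)                       # one weight in (0, 1) per channel
    return gate[:, None, None] * low_feat + (1.0 - gate)[:, None, None] * high_feat

# Toy maps: 2 channels on a 4x4 grid; w maps the 4-dim context to 2 gates
low = np.ones((2, 4, 4))
high = np.zeros((2, 4, 4))
fused = recalibrated_fusion(low, high, w=np.zeros((2, 4)))
```

With zero weights the gate is 0.5 everywhere, so the output is the plain average; training the gate weights is what makes the fusion selective.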
Radio Map Enabled Path Planning for Multiple Cellular-Connected Unmanned Aerial Vehicles
ZHOU Decheng, WANG Wei, SHAO Xiang, CHEN Mei, XIAO Jianghao
Available online  , doi: 10.11999/JEIT250821
Abstract:
  Objective  In collaborative operation scenarios of cellular-connected Unmanned Aerial Vehicles (UAVs), conflict avoidance strategies often cause unbalanced service quality. Traditional schemes focus on reducing total task completion time but do not ensure service fairness. To address this issue, a radio map-assisted cooperative path planning scheme is proposed. The objective is to minimize the maximum weighted sum of task completion time and communication disconnection time across all UAVs to improve service fairness in multi-UAV scenarios.  Methods  A Signal-to-Interference-plus-Noise Ratio (SINR) map is constructed to assess communication quality. The two-dimensional airspace is discretized into grids, and link gain maps are generated through ray tracing and Axis-Aligned Bounding Box detection to determine Line-of-Sight (LoS) or Non-Line-of-Sight (NLoS) conditions. The SINR map is produced by selecting, for each grid, the base station with the highest expected SINR. To solve the optimization problem, an Improved Conflict-Based Search (ICBS) algorithm with a hierarchical structure is developed. At the high-level stage, proximity conflicts are managed to maintain safety distances, and the cost function is reformulated to emphasize fairness by minimizing the maximum weighted time. The low-level stage applies a bidirectional A* algorithm for single-UAV path planning, using parallel search to improve efficiency while meeting the constraints set by the high-level stage.  Results and Discussions  The proposed scheme is evaluated through simulations across different scenarios. Building heights and positions are shown, where base station locations are marked by red stars and building heights are represented with color gradients from light to dark to indicate increasing height (Fig. 2). The wireless propagation characteristics between UAVs and ground base stations are demonstrated by the SINR map at an altitude of 60 m (Fig. 
3), which shows significant SINR degradation in areas affected by building blockage and co-channel interference, resulting in communication blind zones. Trajectory planning results for four UAVs at an altitude of 60 m with a SINR threshold of 2 dB show that all UAVs avoid signal blind zones and complete tasks without collision risks under the proposed scheme (Fig. 4). The trade-off between task completion time and disconnection time is controlled by the weight coefficient (Fig. 5). The maximum weighted time increases monotonically as the weight coefficient increases, whereas the maximum disconnection time decreases. The bidirectional A* algorithm achieves higher computational efficiency than Dijkstra’s and traditional A* algorithms while maintaining optimal solution quality (Table 1). All three algorithms yield identical weighted times, confirming the optimality of the bidirectional A* approach, and its runtime is reduced significantly due to parallel search. Compared with three benchmark schemes, the proposed scheme achieves the lowest maximum weighted time for different SINR thresholds (Fig. 6). Performance analysis at different UAV altitudes shows that the proposed scheme maintains stable maximum weighted time below 75 m, while sharp increases appear above 75 m due to intensified interference from non-serving base stations (Fig. 7). The scalability analysis further shows clear improvements over benchmark schemes, especially when conflicts occur more frequently (Fig. 8).  Conclusions  To address fairness in cellular-connected multi-UAV systems, a radio map-assisted path planning scheme is proposed to minimize the maximum weighted time. Based on a discretized SINR map, an ICBS algorithm is developed. At the high-level stage, proximity conflicts and a reformulated cost function ensure safety and fairness, and at the low-level stage, a bidirectional A* algorithm increases search efficiency. 
Simulation results show that the proposed scheme lowers the maximum weighted time compared with benchmark schemes and improves fairness and overall multi-UAV collaboration performance.
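The low-level single-UAV search can be sketched as an A* over the discretized SINR map, where entering a cell below the SINR threshold adds a weighted disconnection penalty on top of the unit step time. This mirrors the weighted time-plus-disconnection objective; it is a unidirectional sketch (the paper uses a bidirectional, parallelized variant) with illustrative weights, and the high-level conflict resolution is omitted.

```python
import heapq

def plan_path(sinr_map, start, goal, sinr_threshold=2.0, w=5.0):
    """A* on a grid: each step costs 1 time unit plus penalty w if the entered
    cell is disconnected (SINR below threshold). Manhattan heuristic is
    admissible because every step costs at least 1."""
    rows, cols = len(sinr_map), len(sinr_map[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0.0, start, [start])]
    best = {}
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return g, path
        if best.get(cur, float("inf")) <= g:
            continue
        best[cur] = g
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + (w if sinr_map[nr][nc] < sinr_threshold else 0.0)
                ng = g + step
                heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# Toy 3x3 SINR map (dB) with a blind zone in the centre cell
sinr = [[10, 10, 10], [10, 0, 10], [10, 10, 10]]
cost, path = plan_path(sinr, (0, 0), (2, 2))
```

With the penalty active, the planner detours around the blind cell even though the direct route has the same number of steps.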
Inverse Design of a Silicon-Based Compact Polarization Splitter-Rotator
HUI Zhanqiang, ZHANG Xinglong, HAN Dongdong, LI Tiantian, GONG Jiamin
Available online  , doi: 10.11999/JEIT250858
Abstract:
  Objective  The Polarization Splitter-Rotator (PSR) is a key device used to control the polarization state of light in Photonic Integrated Circuits (PICs). Device size has become a major constraint on integration density in PICs. Traditional design methods are time-consuming and tend to yield larger device footprints. Inverse design, by contrast, determines structural parameters through optimization algorithms according to target performance and enables compact devices to be obtained while maintaining functionality. This strategy is now applied to wavelength and mode division multiplexers, all-optical logic gates, power splitters, and other integrated photonic components. The objective of this work is to use inverse design to address size limitations in silicon-based PSRs by combining the Momentum Optimization algorithm with the Adjoint Method. This combined approach improves the integration level of PICs and provides a feasible pathway for the miniaturization of other photonic devices.  Methods  The design region is defined on a 220 nm Silicon-on-Insulator (SOI) wafer and is discretized into 25×50 cylindrical elements. Each element has a 50 nm radius, a 150 nm height, and an initial relative permittivity of 6.55. The adjoint method is used to obtain gradient information across the design region, and this gradient is processed with the Momentum Optimization algorithm. The relative permittivity of each element is then updated according to the processed gradient. During optimization, the momentum factor is dynamically adjusted with the iteration number to accelerate convergence, and a linear bias is applied to guide the permittivity toward the values of silicon and air as the iterations progress. After optimization, the elements are binarized based on their final permittivity: values below 6.55 are assigned to air, whereas values above 6.55 are assigned to silicon. This results in a structure containing irregularly distributed air holes. 
To compensate for performance loss introduced during binarization, the etching depth of air holes with pre-binarization permittivity between 3 and 6.55 is optimized. Adjacent air holes are merged to reduce fabrication errors. The final device consists of air holes with five radii, among which three larger-radius types are selected for further refinement. Their etching radii and depths are optimized to recover remaining performance loss. Device performance is evaluated through numerical analysis. Calculated parameters include Insertion Loss (IL), Crosstalk (CT), Polarization Extinction Ratio (PER), and bandwidth. Tolerance analysis is also conducted to assess robustness under fabrication variations.  Results and Discussions   A compact PSR is designed on a 220 nm SOI wafer with dimensions of 5 μm in length and 2.5 μm in width. During optimization, the momentum factor in the Momentum Optimization algorithm is dynamically adjusted. A larger momentum factor is applied in the early stage to accelerate escape from local maxima or plateau regions, whereas a smaller momentum factor is used in later iterations to increase the weight of the current gradient. Compared with other optimization strategies, this algorithm requires only 20%~33% of the iteration count needed by alternative methods to reach a Figure of Merit (FOM) of 1.7, which improves optimization efficiency. Numerical analysis shows that the device achieves stable performance across the 1 520~1 575 nm wavelength range. The IL remains low (TM0 < 1 dB, TE0 < 0.68 dB), and the CT is effectively suppressed (TM0 < –23 dB, TE0 < –25.2 dB). The PER is high (TM0 > 17 dB, TE0 > 28.5 dB). Tolerance analysis indicates strong robustness to fabrication variations. Within the 1 520~1 540 nm range, performance remains stable under etching depth offsets of ±9 nm and etching radius offsets of ±5 nm, demonstrating reliable manufacturability.  
Conclusions   Numerical analysis demonstrates that combining the adjoint method with the Momentum Optimization algorithm is a feasible strategy for designing an integrated PSR. The design principle relies on controlling light propagation through adjustments to the relative permittivity, which determine the distribution and placement of air holes to achieve polarization splitting and rotation. Compared with traditional design approaches, inverse design uses the design region more efficiently and enables a more compact device structure. The proposed PSR is markedly smaller and shows enhanced fabrication tolerance. It is suitable for future large-scale PICs and provides useful guidance for the miniaturization of other photonic devices.
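The permittivity update can be sketched as momentum gradient ascent with a dynamically decaying momentum factor and a linear bias toward the air/silicon extremes, followed by thresholding at 6.55. The gradient array below stands in for the adjoint-method sensitivities, and the step size, decay schedule, and permittivity of silicon (≈12.11 near 1 550 nm) are illustrative assumptions.

```python
import numpy as np

def momentum_step(eps, grad, velocity, it, n_iter,
                  eps_air=1.0, eps_si=12.11, lr=0.05):
    """One design update: the momentum factor decays with iteration (large
    early to escape plateaus, small late to weight the current gradient), and
    a growing linear bias pushes each element toward air or silicon."""
    beta = 0.9 * (1.0 - it / n_iter)                 # dynamic momentum factor
    velocity = beta * velocity + (1.0 - beta) * grad
    mid = 0.5 * (eps_air + eps_si)                   # ~6.55, the threshold
    bias = (it / n_iter) * np.sign(eps - mid)        # push toward the extremes
    eps = np.clip(eps + lr * (velocity + bias), eps_air, eps_si)
    return eps, velocity

def binarize(eps, threshold=6.55, eps_air=1.0, eps_si=12.11):
    """Final thresholding: below-threshold elements become air holes,
    the rest become silicon."""
    return np.where(eps < threshold, eps_air, eps_si)

# Toy run on the paper's 25x50 design grid with a random stand-in gradient
rng = np.random.default_rng(0)
eps = np.full((25, 50), 6.55)
vel = np.zeros_like(eps)
for it in range(40):
    grad = rng.normal(size=eps.shape)   # placeholder for adjoint sensitivities
    eps, vel = momentum_step(eps, grad, vel, it, 40)
design = binarize(eps)
```

After binarization, the etching-depth and radius refinements described above recover the performance lost at this thresholding step.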
Research on UAV Swarm Radiation Source Localization Method Based on Dynamic Formation Optimization
WU Sujie, WU Binbin, YANG Ning, WANG Heng, GUO Daoxing, GU Chuan
Available online  , doi: 10.11999/JEIT251023
Abstract:
In dense and structurally complex urban environments, Unmanned Aerial Vehicle (UAV) swarm radiation source localization is affected by signal attenuation, multipath propagation, and building obstructions. To address these limitations, a dynamic formation-optimization method for UAV swarms is proposed. By improving the geometric configuration of the swarm, the method reduces path loss and interference, which strengthens localization accuracy. Received signal strength is used to evaluate signal quality in real time and supports adaptive formation adjustments that improve propagation conditions. Geometric dilution of precision and root mean square error metrics are integrated to refine swarm geometry and improve distance-estimation reliability. Simulation results show that the proposed method converges faster and improves localization accuracy in complex urban environments, reducing errors by more than 80%. The method adapts to environmental variation and demonstrates strong robustness and practical value.  Objective  UAV swarm localization and formation control in urban environments are affected by obstacles, signal attenuation, and rapid variation in the surroundings that reduce the reliability of conventional methods. This study proposes a radiation source localization approach that integrates the Received Signal Strength Indicator (RSSI) with dynamic formation adjustment to improve localization accuracy and strengthen system robustness in complex urban scenarios.  Methods  The method uses RSSI measurements to estimate the distance to the radiation source and adjusts UAV swarm formation in real time to reduce localization errors. These adjustments are based on feedback that reflects relative positions, signal strength, and environmental variation. Localization accuracy is strengthened through a multi-sensor fusion strategy that integrates GPS, IMU, and depth-camera data. 
A data-quality assessment mechanism evaluates signal reliability and triggers formation adaptation when the signal drops below a predefined threshold. This optimization process reduces positioning errors and improves system robustness.  Results and Discussions  Simulation experiments in a ROS-based environment were conducted to evaluate the UAV swarm localization method under urban obstacles and multipath conditions. The swarm began in a hexagonal formation and adjusted its geometry according to environmental variation and localization confidence (Figs. 3 and 4). As shown in Fig. 5, localization errors fluctuated during initialization but converged to below 1 m after 150 s. Formation comparisons (Fig. 6) showed that symmetric structures such as hexagonal and triangular formations maintained errors below 0.5 m, whereas asymmetric formations (T and Y shape) produced deviations up to 4.9 m. Further comparisons (Fig. 7) showed that traditional RSSI saturated near 15 m, direction of arrival fluctuated between 5 and 14 m, and time difference of arrival failed due to synchronization problems. The proposed method achieved sub-meter accuracy within 60 s and remained robust throughout the mission. These findings indicate that combining RSSI-based distance estimation with dynamic formation adjustment improves localization accuracy, convergence speed, and adaptability under complex environmental conditions.  Conclusions  This study addresses UAV swarm localization in complex urban environments by integrating RSSI-based distance estimation, dynamic formation adjustment, and multi-sensor fusion. 
ROS-based simulations show that: (1) localization errors converge rapidly to sub-meter levels, reaching below 1 m within 150 s under non-line-of-sight conditions; (2) symmetric formations such as hexagonal and triangular configurations outperform asymmetric ones and reduce errors by up to 67% compared with fixed Y-shaped formations; and (3) relative to traditional RSSI, direction of arrival, and time difference of arrival approaches, the proposed method shows faster convergence, higher stability, and stronger robustness.
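The RSSI-based distance estimation and position solve can be sketched as log-distance path-loss inversion followed by linearized least-squares multilateration over the UAV anchor positions. The path-loss constants are illustrative urban values, not measured parameters.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
    """Log-distance path-loss inversion: d = d0 * 10^((P0 - RSSI) / (10 n)).
    P0 (RSSI at reference distance d0) and exponent n are assumed values."""
    return d0 * 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, dists):
    """Linearized least-squares source position: subtracting the last anchor's
    range equation from the others removes the quadratic term, leaving a
    linear system 2(a_i - a_ref) x = d_ref^2 - d_i^2 + |a_i|^2 - |a_ref|^2."""
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    ref, dref = a[-1], d[-1]
    A = 2.0 * (a[:-1] - ref)
    b = (dref**2 - d[:-1]**2) + (a[:-1]**2).sum(axis=1) - (ref**2).sum()
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy swarm geometry and a noiseless source at (1.5, 2.5)
anchors = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
src = np.array([1.5, 2.5])
dists = [np.linalg.norm(np.array(a) - src) for a in anchors]
pos = trilaterate(anchors, dists)
```

The formation-optimization step then moves the anchors to reduce geometric dilution of precision, which shrinks the error amplification of this least-squares solve under noisy RSSI.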
Conditional Generative Adversarial Networks-based Channel Estimation for ISAC-RIS System
LIU Yu, ZHENG Zelin, LIU Gang
Available online  , doi: 10.11999/JEIT251168
Abstract:
  Objective  In RIS-assisted ISAC systems, accurate channel estimation is crucial to ensure reliable operation. Although traditional deep learning methods can partially address the channel estimation problem, their generalization ability and estimation accuracy remain limited in complex multi-user channel environments. To tackle these challenges, this paper proposes a two-stage channel estimation method based on a Conditional Generative Adversarial Network (CGAN) for RIS-assisted multi-user ISAC systems, aiming to enhance both the accuracy and stability of channel estimation.  Methods  The proposed two-stage CGAN-based method estimates the Sensing And Communication (SAC) channels in RIS-assisted multi-user ISAC systems. By adjusting the switching states of the RIS, the overall estimation problem is decomposed into subproblems, enabling sequential estimation of the direct and reflected channels. Within the proposed CGAN framework, the adversarial training between the generator and discriminator allows the model not only to learn the mapping relationship between the observed signals and the true channels but also to optimize the output according to the discriminator’s feedback, thereby effectively improving both training efficiency and estimation accuracy.  Results and Discussions  Extensive simulation experiments were conducted to verify the effectiveness of the proposed method. First, the estimation performance of the SAC channel under different SNR conditions was compared. The results demonstrate that the proposed CGAN-based method achieves significantly better NMSE performance than the LS benchmark and traditional models such as FNN and ELM (Fig. 4). Then, the impact of increasing the number of antennas and RIS elements on SAC channel estimation performance was investigated. Compared with the LS benchmark, the proposed CGAN method consistently maintains superior performance under various SNR conditions (Figs. 5 and 6).  
Conclusions  This paper investigates the channel estimation problem in RIS-assisted multi-user ISAC systems and proposes a two-stage channel estimation method based on CGAN. By adjusting the switching states of the RIS and employing adversarial training between the generator and discriminator networks, the proposed method achieves accurate estimation of the SAC channel. Simulation results demonstrate that, under various SNR conditions and channel dimensions, the CGAN-based estimation method exhibits strong generalization capability and significantly outperforms the benchmark schemes in estimation accuracy. Therefore, it shows great potential as an effective solution for enhancing system stability and efficiency.
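The two-stage decomposition via RIS switching can be illustrated with the LS benchmark the paper compares against: with the RIS off, only the direct channel is observed; with it on, the estimated direct contribution is subtracted before estimating the reflected (cascaded) channel. A noiseless single-user toy with assumed dimensions follows; in the proposed method the CGAN replaces the LS estimator at each stage.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 4, 8, 16   # BS antennas, RIS elements, pilot length (toy sizes)

def ls_estimate(Y, X):
    """Least-squares channel estimate H_hat = Y X^H (X X^H)^{-1}."""
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# Stage 1: RIS off -> only the direct channel contributes to the received pilots
H_d = rng.normal(size=(M, 1)) + 1j * rng.normal(size=(M, 1))
X = (rng.normal(size=(1, T)) + 1j * rng.normal(size=(1, T))) / np.sqrt(2)
Y1 = H_d @ X
H_d_hat = ls_estimate(Y1, X)

# Stage 2: RIS on -> subtract the estimated direct part, estimate the cascade
G = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))    # RIS -> BS
h_r = rng.normal(size=(K, 1)) + 1j * rng.normal(size=(K, 1))  # user -> RIS
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, K))             # RIS phase shifts
H_casc = G @ np.diag(theta) @ h_r                             # effective reflected channel
Y2 = (H_d + H_casc) @ X
H_casc_hat = ls_estimate(Y2 - H_d_hat @ X, X)
```

In the noiseless toy both stages recover the channels exactly; under noise and multiple users, this sequential structure is where the learned CGAN estimator improves on LS.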
Cross-modal Retrieval Enhanced Energy-efficient Multimodal Federated Learning in Wireless Networks
LIU Jingyuan, MA Ke, XU Runchen, CHANG Zheng
Available online  , doi: 10.11999/JEIT251221
Abstract:
  Objective  Multimodal Federated Learning (MFL) uses complementary information from multiple modalities, yet in wireless edge networks it is restricted by limited energy and frequent missing modalities because many clients store only images or only reports. This study presents Cross-modal Retrieval Enhanced Energy-efficient Multimodal Federated Learning (CREEMFL), which applies selective completion and joint communication–computation optimization to reduce training energy under latency and wireless constraints.  Methods  CREEMFL completes part of the incomplete samples by querying a public multimodal subset, and processes the remaining samples through zero padding. Each selected user downloads the global model, performs image-to-text or text-to-image retrieval, conducts local multimodal training, and uploads model updates for aggregation. An energy–delay model couples local computation and wireless communication and treats the required number of global rounds as a function of retrieval ratios. Based on this model, an energy minimization problem is formulated and solved using a two-layer algorithm with an outer search over retrieval ratios and an inner optimization of transmission time, Central Processing Unit (CPU) frequency, and transmit power.  Results and Discussions  Simulations on a single-cell wireless MFL system show that increasing the ratio of completing text from images improves test accuracy and reduces total energy. In contrast, a large ratio of completing images from text provides limited accuracy gain but increases energy consumption (Fig. 3, Fig. 4). Compared with four representative baselines, CREEMFL achieves shorter completion time and lower total energy across a wide range of maximum average transmit powers (Fig. 5, Fig. 6). For CREEMFL, increased system bandwidth further reduces completion time and energy consumption (Fig. 7, Fig. 8). 
Under different user modality compositions, CREEMFL also attains higher test accuracy than local training, zero padding, and cross-modal retrieval without energy optimization (Fig. 9).  Conclusions  CREEMFL integrates selective cross-modal retrieval and joint communication–computation optimization for energy-efficient MFL. By treating retrieval ratios as variables and modeling their effect on global convergence rounds, it captures the coupling between per-round costs and global training progress. Simulations verify that CREEMFL reduces training completion time and total energy while preserving classification accuracy in resource-constrained wireless edge networks.
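The two-layer structure, an outer search over the retrieval ratio with an inner optimization of CPU frequency and transmit power under a per-round energy model, can be sketched as follows. The convergence-round model (rounds shrinking with the completion ratio), all constants, and the grid search are illustrative assumptions standing in for the paper's formulation.

```python
import numpy as np

def round_energy(f_cpu, p_tx, cycles, bits, bandwidth, gain, noise):
    """Per-round energy: computation kappa * C * f^2 plus transmission
    energy p * t, with t derived from the Shannon rate. All constants
    are illustrative placeholders."""
    kappa = 1e-27
    e_comp = kappa * cycles * f_cpu**2
    rate = bandwidth * np.log2(1.0 + p_tx * gain / noise)
    return e_comp + p_tx * bits / rate

def total_energy(ratio, f_grid, p_grid, base_rounds=200.0):
    """Outer variable: the retrieval ratio (fraction of incomplete samples
    completed). Assumed model: more completion means fewer global rounds but
    more local computation per round. Inner step: grid search over CPU
    frequency and transmit power."""
    rounds = base_rounds * (1.0 - 0.4 * ratio)     # assumed convergence model
    cycles = 1e9 * (1.0 + 0.5 * ratio)             # retrieval adds computation
    per_round = min(round_energy(f, p, cycles, 1e6, 1e6, 1e-6, 1e-9)
                    for f in f_grid for p in p_grid)
    return rounds * per_round

f_grid = np.linspace(0.5e9, 2e9, 8)    # CPU frequencies (Hz)
p_grid = np.linspace(0.05, 0.5, 8)     # transmit powers (W)
energies = {r: total_energy(r, f_grid, p_grid) for r in (0.0, 0.5, 1.0)}
```

In the paper the inner problem is solved analytically rather than by grid search; the sketch only shows how the retrieval ratio couples per-round cost and round count.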
Modeling, Detection, and Defense Theories and Methods for Cyber-Physical Fusion Attacks in Smart Grid
WANG Wenting, TIAN Boyan, WU Fazong, HE Yunpeng, WANG Xin, YANG Ming, FENG Dongqin
Available online  , doi: 10.11999/JEIT250659
Abstract:
  Significance   Smart Grid (SG), the core of modern power systems, enables efficient energy management and dynamic regulation through cyber–physical integration. However, its high interconnectivity makes it a prime target for cyberattacks, including False Data Injection Attacks (FDIAs) and Denial-of-Service (DoS) attacks. These threats jeopardize the stability of power grids and may trigger severe consequences such as large-scale blackouts. Therefore, advancing research on the modeling, detection, and defense of cyber–physical attacks is essential to ensure the safe and reliable operation of SGs.  Progress   Significant progress has been achieved in cyber–physical security research for SGs. In attack modeling, discrete linear time-invariant system models effectively capture diverse attack patterns. Detection technologies are advancing rapidly, with physical-based methods (e.g., physical watermarking and moving target defense) complementing intelligent algorithms (e.g., deep learning and reinforcement learning). Defense systems are also being strengthened: lightweight encryption and blockchain technologies are applied to prevention, security-optimized Phasor Measurement Unit (PMU) deployment enhances equipment protection, and response mechanisms are being continuously refined.  Conclusions  Current research still requires improvement in attack modeling accuracy and real-time detection algorithms. Future work should focus on developing collaborative protection mechanisms between the cyber and physical layers, designing solutions that balance security with cost-effectiveness, and validating defense effectiveness through high-fidelity simulation platforms. This study establishes a systematic theoretical framework and technical roadmap for SG security, providing essential insights for safeguarding critical infrastructure.  
Prospects   Future research should advance in several directions: (1) deepening synergistic defense mechanisms between the information and physical layers; (2) prioritizing the development of cost-effective security solutions; (3) constructing high-fidelity information–physical simulation platforms to support research; and (4) exploring the application of emerging technologies such as digital twins and interpretable Artificial Intelligence (AI).
A Review of Research on Voiceprint Fault Diagnosis of Transformers
GONG Wenjie, LIN Guosong, WEI Xiaoguang
Available online  , doi: 10.11999/JEIT251076
Abstract:
  Significance   Voiceprint fault diagnosis of transformers has become an active research area for ensuring the safe and reliable operation of power systems. Traditional monitoring methods, such as dissolved gas analysis, infrared temperature measurement, and online partial discharge monitoring, exhibit limited real-time capability and rely heavily on expert experience. These limitations hinder effective detection of early-stage faults. Voiceprint fault diagnosis captures operational voiceprint signals from transformers and enables non-contact monitoring for early anomaly warning. This approach offers advantages in real-time performance, sensitivity, and fault coverage. This review systematically traces the technological evolution from traditional signal analysis to deep learning and compares the advantages, limitations, and application scenarios of different models across multiple dimensions. Key challenges are identified, including limited robustness to noise and imbalanced datasets. Potential research directions are proposed, including integration of physical mechanisms with data-driven methods and improvement of diagnostic transparency and interpretability. These analyses provide theoretical support and practical guidance for promoting the transition of voiceprint fault diagnosis from laboratory research to engineering applications.  Progress   Research on voiceprint fault diagnosis of transformers has progressed from traditional signal analysis to an intelligent recognition paradigm based on deep learning, reflecting a clear technological evolution. A bibliometric analysis of 188 papers from the CNKI and Web of Science databases shows that annual publications remained at 1–10 papers between 1997 and 2020, corresponding to an exploratory stage. Studies during this period focused mainly on fundamental voiceprint signal processing methods, including acoustic wave detection, wavelet transform, and Empirical Mode Decomposition (EMD). 
After 2020, Variational Modal Decomposition (VMD), Mel spectrum, and Mel Frequency Cepstral Coefficient (MFCC) were gradually applied to voiceprint feature extraction. Since 2021, publication output has increased rapidly and reached a historical peak in 2023. This growth was driven by advances in image and speech processing technologies. Early studies emphasized time-domain and frequency-domain analysis of voiceprint signals. Recent research increasingly converts voiceprint signals into two-dimensional time–frequency spectrogram representations. Model architectures have evolved from single-channel feature inputs with single-model outputs to complex frameworks with multi-channel feature extraction and multi-model fusion. Classical machine learning models, including Gaussian Mixture Model (GMM), Support Vector Machine (SVM), Random Forest (RF), and Back Propagation Neural Network (BPNN), form the foundation of voiceprint fault diagnosis but are limited in handling high-dimensional features. Deep learning models, such as Convolutional Neural Network (CNN), Residual Neural Network (ResNet), Recurrent Neural Network (RNN), and Transformer, demonstrate advantages in automatic feature extraction and complex pattern recognition, although they require substantial computational resources.  Conclusions  This review summarizes the technological development of transformer voiceprint fault diagnosis from machine learning to deep learning. Although deep learning methods achieve high recognition accuracy for complex voiceprint signals, five major challenges remain. These challenges include limited robustness to noise in non-stationary environments, severe data imbalance caused by scarce fault samples, the black-box nature of deep learning models, fragmented evaluation systems resulting from inconsistent data acquisition standards, and insufficient cross-modal fusion of multi-source data. 
Sensitivity to environmental noise limits diagnostic performance under varying operating conditions. Data imbalance reduces recognition accuracy for rare fault types. Limited interpretability restricts fault mechanism analysis and diagnostic credibility. Inconsistent sensor placement and sampling parameters lead to poor comparability across datasets. Single-modal voiceprint analysis restricts effective utilization of complementary information from other data sources. Addressing these challenges is essential for advancing voiceprint fault diagnosis from laboratory validation to field deployment.  Prospects   Future research should focus on five directions. First, noise-robust voiceprint feature extraction methods based on physical mechanisms should be developed to address non-stationary interference in complex operating environments. Second, the lack of real-world fault data should be alleviated by constructing electromagnetic field–structural mechanics–acoustic coupling models of transformers to generate high-fidelity voiceprint fault samples, while unsupervised clustering methods should be applied to improve annotation efficiency and quality. Third, explainable deep learning architectures for voiceprint fault diagnosis that incorporate physical mechanisms should be designed. Attention mechanisms combined with SHapley Additive exPlanations, Grad-CAM, and physical equations can support process-level and post hoc interpretation of diagnostic results. Fourth, industry-wide collaboration is required to establish standardized voiceprint data acquisition protocols, benchmark datasets, and unified evaluation systems. Fifth, cross-modal fusion models based on multi-channel and multi-feature analysis should be developed to enable integrated transformer fault diagnosis through comprehensive utilization of multi-source information.
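As background for the feature-extraction pipeline surveyed above (time-domain signal to two-dimensional time-frequency representation), a minimal pure-Python sketch is shown: Hann-windowed framing, a naive DFT, and a spectral-centroid feature. The frame sizes and test signals are illustrative; practical systems would use FFT-based Mel/MFCC implementations instead.

```python
import math

def frame_signal(x, frame_len=64, hop=32):
    """Split a 1-D signal into overlapping, Hann-windowed frames."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
           for n in range(frame_len)]
    return [[x[start + n] * win[n] for n in range(frame_len)]
            for start in range(0, len(x) - frame_len + 1, hop)]

def dft_mag(frame):
    """Naive O(N^2) DFT magnitude spectrum (first half of the bins)."""
    n_len = len(frame)
    mags = []
    for k in range(n_len // 2):
        re = sum(frame[n] * math.cos(-2 * math.pi * k * n / n_len) for n in range(n_len))
        im = sum(frame[n] * math.sin(-2 * math.pi * k * n / n_len) for n in range(n_len))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(x):
    """Stack per-frame magnitude spectra into a 2-D time-frequency map."""
    return [dft_mag(f) for f in frame_signal(x)]

def spectral_centroid(mags):
    """Center of mass of a spectrum, in bin units."""
    total = sum(mags)
    return sum(k * m for k, m in enumerate(mags)) / total if total else 0.0

# 100 Hz vs 400 Hz tones sampled at 2 kHz: the higher tone should
# yield a higher spectral centroid.
fs = 2000
low = [math.sin(2 * math.pi * 100 * n / fs) for n in range(512)]
high = [math.sin(2 * math.pi * 400 * n / fs) for n in range(512)]
c_low = spectral_centroid(spectrogram(low)[0])
c_high = spectral_centroid(spectrogram(high)[0])
```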
Finite-time Adaptive Sliding Mode Control of Servo Motors Considering Frictional Nonlinearity and Unknown Loads
ZHANG Tianyu, GUO Qinxia, YANG Tingkai, GUO Xiangji, MING Ming
Available online  , doi: 10.11999/JEIT250521
Abstract:
  Objective  Ultra-fast laser processing with an infinite field of view requires servo motor systems with superior tracking accuracy and robustness. However, such systems are highly nonlinear and affected by coupled unknown load disturbances and complex friction, which constrain the performance of conventional controllers. Although Sliding Mode Control (SMC) exhibits inherent robustness, traditional SMC and observer designs cannot achieve accurate finite-time disturbance compensation under strong nonlinearities, thus limiting high-speed and high-precision trajectory tracking. To address this limitation, a novel finite-time adaptive SMC approach is proposed to ensure rapid and precise angular position tracking within a finite time, satisfying the stringent synchronization requirements of advanced laser processing systems.  Methods  A novel control strategy is developed by integrating an adaptive disturbance observer fused with a Radial Basis Function Neural Network (RBFNN) and finite-time Sliding Mode Control (SMC). First, the unknown load disturbance and complex frictional nonlinear dynamics are combined into a unified "lumped disturbance" term, improving model generality and the ability to represent real operating conditions. Second, a finite-time adaptive disturbance observer is constructed to estimate this lumped disturbance. The observer utilizes the universal approximation capability of the RBFNN to learn and approximate the dynamic characteristics of unknown disturbances online. Simultaneously, a finite-time adaptive law based on the error norm is introduced to update the neural network weights in real time, ensuring rapid and accurate finite-time estimation of the lumped disturbance while reducing dependence on precise model parameters. Based on this design, a finite-time SMC is developed. 
The controller uses the observer’s disturbance estimation as a feedforward compensation term, incorporates a carefully formulated finite-time sliding surface and equivalent control law, and introduces a saturation function to suppress control input chattering. A suitable Lyapunov function is then constructed, and the finite-time stability theory is rigorously applied to prove the practical finite-time convergence of both the adaptive observer and the closed-loop control system, guaranteeing that the system tracking error converges to a bounded neighborhood near the origin within finite time.  Results and Discussions  To verify the effectiveness and superiority of the proposed control strategy, a typical Permanent Magnet Synchronous Motor (PMSM) servo system model is constructed in the MATLAB environment, and a simulation scenario with desired trajectories of varying frequencies is established. The proposed method is comprehensively compared with the widely used Proportional–Integral (PI) control and the advanced method reported in reference [7]. Simulation results demonstrate the following: 1. Tracking performance: Under various reference trajectories, the proposed controller enables the system to accurately follow the target trajectory with a tracking error substantially smaller than that of the PI controller. Compared with the method in reference [7], it achieves smoother responses and smaller residual errors, effectively eliminating the chattering observed in some operating conditions of the latter. 2. Disturbance rejection and robustness: The adaptive disturbance observer based on the RBFNN rapidly and effectively learns and compensates for the lumped disturbance composed of unknown load variations and frictional nonlinearities. Even in the presence of these disturbances, the proposed controller maintains high-precision trajectory tracking, demonstrating strong disturbance rejection and robustness to system parameter variations. 3.
Control input characteristics: Compared with the reference methods, the control signal of the proposed approach quickly stabilizes after the initial transient phase, effectively suppressing chattering caused by high-frequency switching. The amplitude range of the control input remains reasonable, facilitating practical actuator implementation. 4. Comprehensive evaluation: Based on multiple error performance indices, including Integral Squared Error (ISE), Integral Absolute Error (IAE), Time-weighted Integral Absolute Error (ITAE), and Time-weighted Integral Squared Error (ITSE), the proposed controller consistently outperforms both PI control and the method in reference [7]. It demonstrates comprehensive advantages in suppressing transient errors rapidly and reducing overall error accumulation. The method also improves steady-state accuracy and achieves a balanced response speed with effective noise attenuation. 5. Observer performance: The RBFNN weight norm estimation converges rapidly and stabilizes at a low level after initial adaptation, confirming the effectiveness of the proposed adaptive law and the learning efficiency of the observer.  Conclusions  A finite-time sliding mode control strategy with an adaptive disturbance observer is proposed for servo systems used in ultra-fast laser processing. The method models unknown load disturbances and frictional nonlinearities as a lumped disturbance term. An adaptive observer, integrating an RBF neural network with a finite-time mechanism, accurately estimates this disturbance for real-time compensation. Based on the observer, a finite-time SMC law is formulated, and the practical finite-time stability of the closed-loop system is theoretically proven. Simulations conducted on a permanent magnet synchronous motor platform confirm that the proposed approach achieves superior tracking accuracy, robustness, and control smoothness compared with conventional PI and existing advanced methods. 
This work offers an effective solution for achieving high-precision control in nonlinear systems subject to strong disturbances.
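The sliding-mode-plus-adaptive-observer pattern described above can be caricatured on a toy single-axis servo. Everything here (plant constants, gains, RBF centers, the form of the adaptive law, and the reference trajectory) is invented for illustration and is not the paper's controller; the sketch only shows how the pieces fit together.

```python
import math

# Toy 1-DOF servo: J * theta_ddot = u - d(t), where d lumps an unknown
# load torque and a friction term (all constants here are invented).
J = 1.0
def disturbance(t, omega):
    return 2.0 * math.sin(2.0 * t) + 0.5 * math.copysign(1.0, omega)

# RBF network approximating d: d_hat = sum_i w_i * phi_i(omega)
centers = [-2.0 + i for i in range(5)]
weights = [0.0] * 5
def rbf(omega):
    return [math.exp(-(omega - c) ** 2) for c in centers]

def sat(s, eps=0.05):
    """Saturation replaces the sign function to suppress chattering."""
    return max(-1.0, min(1.0, s / eps))

c_gain, k_gain, eta, dt = 5.0, 4.0, 20.0, 1e-3
theta = omega = 0.0
for step in range(8000):
    t = step * dt
    ref, ref_d, ref_dd = math.sin(t), math.cos(t), -math.sin(t)
    e, e_d = ref - theta, ref_d - omega
    s = e_d + c_gain * e                          # sliding variable
    phi = rbf(omega)
    d_hat = sum(w * p for w, p in zip(weights, phi))
    # SMC law: feedforward + observer compensation + saturated switching
    u = J * (ref_dd + c_gain * e_d) + d_hat + k_gain * sat(s)
    # Adaptive law: network weights driven by the sliding variable
    weights = [w + eta * p * s * dt for w, p in zip(weights, phi)]
    # Integrate the plant (explicit Euler)
    acc = (u - disturbance(t, omega)) / J
    omega += acc * dt
    theta += omega * dt
final_error = abs(math.sin(8000 * dt) - theta)
```

Because the switching gain `k_gain` exceeds the disturbance bound, the sliding variable is driven into the boundary layer even before the observer has learned anything; the observer then shrinks the residual error further.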
A Learning-Based Security Control Method for Cyber-Physical Systems Based on False Data Detection
MIAO Jinzhao, LIU Jinliang, SUN Le, ZHA Lijuan, TIAN Engang
Available online  , doi: 10.11999/JEIT250537
Abstract:
  Objective  Cyber-Physical Systems (CPS) constitute the backbone of critical infrastructures and industrial applications, but the tight coupling of cyber and physical components renders them highly susceptible to cyberattacks. False data injection attacks are particularly dangerous because they compromise sensor integrity, mislead controllers, and can trigger severe system failures. Existing control strategies often assume reliable sensor data and lack resilience under adversarial conditions. Furthermore, most conventional approaches decouple attack detection from control adaptation, leading to delayed or ineffective responses to dynamic threats. To overcome these limitations, this study develops a unified secure learning control framework that integrates real-time attack detection with adaptive control policy learning. By enabling the dynamic identification and mitigation of false data injection attacks, the proposed method enhances both stability and performance of CPS under uncertain and adversarial environments.  Methods  To address false data injection attacks in CPS, this study proposes an integrated secure control framework that combines attack detection, state estimation, and adaptive control strategy learning. A sensor grouping-based security assessment index is first developed to detect anomalous sensor data in real time without requiring prior knowledge of attacks. Next, a multi-source sensor fusion estimation method is introduced to reconstruct the system’s true state, thereby improving accuracy and robustness under adversarial disturbances. Finally, an adaptive learning control algorithm is designed, in which dynamic weight updating via gradient descent approximates the optimal control policy online. This unified framework enhances both steady-state performance and resilience of CPS against sophisticated attack scenarios. 
Its effectiveness and security performance are validated through simulation studies under diverse false data injection attack settings.  Results and Discussions  Simulation results confirm the effectiveness of the proposed secure adaptive learning control framework under multiple false data injection attacks in CPS. As shown in Fig. 1, system states rapidly converge to steady values and maintain stability despite sensor attacks. Fig. 2 demonstrates that the fused state estimator tracks the true system state with greater accuracy than individual local estimators. In Fig. 3, the compensated observation outputs align closely with the original, uncorrupted measurements, indicating precise attack estimation. Fig. 4 shows that detection indicators for sensor groups 2–5 increase sharply during attack intervals, while unaffected sensors remain near zero, verifying timely and accurate detection. Fig. 5 further confirms that the estimated attack signals closely match the true injected values. Finally, Fig. 6 compares different control strategies, showing that the proposed method achieves faster stabilization and smaller state deviations. Together, these results demonstrate robust control, accurate state estimation, and real-time detection under unknown attack conditions.  Conclusions  This study addresses secure perception and control in CPS under false data injection attacks by developing an integrated adaptive learning control framework that unifies detection, estimation, and control. A sensor-level anomaly detection mechanism is introduced to identify and localize malicious data, substantially enhancing attack detection capability. The fusion-based state estimation method further improves reconstruction accuracy of true system states, even when observations are compromised. At the control level, an adaptive learning controller with online weight adjustment enables real-time approximation of the optimal control policy without requiring prior knowledge of the attack model. 
Future research will extend the proposed framework to broader application scenarios and evaluate its resilience under diverse attack environments.
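A minimal sketch of the detect-then-fuse idea follows, with a median-deviation score standing in for the paper's sensor-grouping security index; the score, threshold, and plain averaging fusion rule are placeholders, not the proposed method.

```python
from statistics import median

def detect_and_fuse(readings, threshold=3.0):
    """Flag sensor groups whose reading deviates from the cross-group
    median by more than `threshold` (a toy anomaly index), then fuse
    the surviving readings by simple averaging."""
    med = median(readings)
    scores = [abs(r - med) for r in readings]
    flagged = [i for i, sc in enumerate(scores) if sc > threshold]
    trusted = [r for i, r in enumerate(readings) if i not in flagged]
    fused = sum(trusted) / len(trusted)
    return flagged, fused

# Five sensor groups observe the same scalar state (true value 10.0);
# a false data injection biases group 2 by +15.
readings = [10.1, 9.8, 25.0, 10.2, 9.9]
flagged, fused = detect_and_fuse(readings)
```

The point the sketch preserves is that detection and estimation are coupled: the fused state is computed only from sensors that pass the anomaly test, so a successful detection directly improves the reconstruction fed to the controller.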
A Two-Stage Framework for CAN Bus Attack Detection by Fusing Temporal and Deep Features
TAN Mingming, ZHANG Heng, WANG Xin, LI Ming, ZHANG Jian, YANG Ming
Available online  , doi: 10.11999/JEIT250651
Abstract:
  Objective  The Controller Area Network (CAN), the de facto standard for in-vehicle communication, is inherently vulnerable to cyberattacks. Existing Intrusion Detection Systems (IDSs) face a fundamental trade-off: achieving fine-grained classification of diverse attack types often requires computationally intensive models that exceed the resource limitations of on-board Electronic Control Units (ECUs). To address this problem, this study proposes a two-stage attack detection framework for the CAN bus that fuses temporal and deep features. The framework is designed to achieve both high classification accuracy and computational efficiency, thereby reconciling the tension between detection performance and practical deployability.  Methods  The proposed framework adopts a “detect-then-classify” strategy and incorporates two key innovations. (1) Stage 1: Temporal Feature-Aware Anomaly Detection. Two custom features are designed to quantify anomalies: Payload Data Entropy (PDE), which measures content randomness, and ID Frequency Mean Deviation (IFMD), which captures behavioral deviations. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network that exploits contextual temporal information to achieve high-recall anomaly detection. (2) Stage 2: Deep Feature-Based Fine-Grained Classification. Triggered only for samples flagged as anomalous, this stage employs a lightweight one-dimensional ParC1D-Net. The core ParC1D Block (Fig. 4) integrates depthwise separable one-dimensional convolution, Squeeze-and-Excitation (SE) attention, and a Feed-Forward Network (FFN), enabling efficient feature extraction with minimal parameters. Stage 1 is optimized using BCEWithLogitsLoss, whereas Stage 2 is trained with Cross-Entropy Loss.  Results and Discussions  The efficacy of the proposed framework is evaluated on public datasets. (1) State-of-the-art performance. 
On the Car-Hacking dataset (Table 5), an accuracy and F1-score of 99.99% are achieved, exceeding advanced baselines. On the more challenging Challenge dataset (Table 6), superior accuracy (99.90%) and a competitive F1-score (99.70%) are also obtained. (2) Feature contribution analysis. Ablation studies (Tables 7 and 8) confirm the critical role of the proposed features. Removal of the IFMD feature results in the largest performance reduction, highlighting the importance of behavioral modeling. A synergistic effect is observed when PDE and IFMD are applied together. (3) Spatiotemporal efficiency. The complete model remains lightweight at only 0.39 MB. Latency tests (Table 9) demonstrate real-time capability, with average detection times of 0.62 ms on a GPU and 0.93 ms on a simulated CPU (batch size = 1). A system-level analysis (Section 3.5.4) further shows that the two-stage framework is approximately 1.65 times more efficient than a single-stage model in a realistic sparse-attack scenario.  Conclusions  This study establishes the two-stage framework as an effective and practical solution for CAN bus intrusion detection. By decoupling detection from classification, the framework resolves the trade-off between accuracy and on-board deployability. Its strong performance, combined with a minimal computational footprint, indicates its potential for securing real-world vehicular systems. Future research could extend the framework and explore hardware-specific optimizations.
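The two Stage-1 features can be sketched from their names alone; the exact definitions used in the paper may differ, so treat the following as an illustration of the idea (content randomness for PDE, behavioral deviation for IFMD).

```python
import math
from collections import Counter

def payload_entropy(payload: bytes) -> float:
    """Sketch of the PDE feature: Shannon entropy (bits/byte) of a CAN
    frame payload. Random/fuzzed payloads score high; constant ones score 0."""
    counts = Counter(payload)
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def id_freq_mean_deviation(window_ids, history_mean):
    """Sketch of the IFMD feature: per-ID deviation of the message count
    in the current window from its historical mean count."""
    counts = Counter(window_ids)
    ids = set(counts) | set(history_mean)
    return {i: counts.get(i, 0) - history_mean.get(i, 0.0) for i in ids}

normal = payload_entropy(bytes([0x11] * 8))   # constant payload -> 0 bits
fuzzed = payload_entropy(bytes(range(8)))     # 8 distinct bytes -> 3 bits
# A flooding attack makes ID 0x130 appear far more often than usual.
dev = id_freq_mean_deviation([0x130] * 50 + [0x2A0] * 5,
                             {0x130: 10.0, 0x2A0: 5.0})
```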
Entropy Quantum Collaborative Planning Method for Emergency Path of Unmanned Aerial Vehicles Driven by Survival Probability
WANG Enliang, ZHANG Zhen, SUN Zhixin
Available online  , doi: 10.11999/JEIT250694
Abstract:
  Objective  Natural disaster emergency rescue places stringent requirements on the timeliness and safety of Unmanned Aerial Vehicle (UAV) path planning. Conventional optimization objectives, such as minimizing total distance, often fail to reflect the critical time-sensitive priority of maximizing the survival probability of trapped victims. Moreover, existing algorithms struggle with the complex constraints of disaster environments, including no-fly zones, caution zones, and dynamic obstacles. To address these challenges, this paper proposes an Entropy-Enhanced Quantum Ripple Synergy Algorithm (E2QRSA). The primary goals are to establish a survival probability maximization model that incorporates time decay characteristics and to design a robust optimization algorithm capable of efficiently handling complex spatiotemporal constraints in dynamic disaster scenarios.  Methods  E2QRSA enhances the Quantum Ripple Optimization framework through four key innovations: (1) information entropy–based quantum state initialization, which guides population generation toward high-entropy regions; (2) multi-ripple collaborative interference, which promotes beneficial feature propagation through constructive superposition; (3) entropy-driven parameter control, which dynamically adjusts ripple propagation according to search entropy rates; and (4) quantum entanglement, which enables information sharing among elite individuals. The model employs a survival probability objective function that accounts for time-sensitive decay, base conditions, and mission success probability, subject to constraints including no-fly zones, warning zones, and dynamic obstacles.  Results and Discussions  Simulation experiments are conducted in medium- and large-scale typhoon disaster scenarios. The proposed E2QRSA achieves the highest survival probabilities of 0.847 and 0.762, respectively (Table 1), exceeding comparison algorithms such as SEWOA and PSO by 4.2–16.0%. 
Although the paths generated by E2QRSA are not the shortest, they are the most effective in maximizing survival chances. The ablation study (Table 3) confirms the contribution of each component, with the removal of multi-ripple interference causing the largest performance decrease (9.97%). The dynamic coupling between search entropy and ripple parameters (Fig. 2) is validated, demonstrating the effectiveness of the adaptive control mechanism. The entanglement effect (Fig. 4) is shown to maintain population diversity. In terms of constraint satisfaction, E2QRSA-planned paths consume only 85.2% of the total available energy (Table 5), ensuring a safe return, and all static and dynamic obstacles are successfully avoided, as visually verified in the 3D path plots (Figs. 6 and 7).  Conclusions  E2QRSA effectively addresses the challenge of UAV path planning for disaster relief by integrating adaptive entropy control with quantum-inspired mechanisms. The survival probability objective captures the essential requirements of disaster scenarios more accurately than conventional distance minimization. Experimental validation demonstrates that E2QRSA achieves superior solution quality and faster convergence, providing a robust technical basis for strengthening emergency response capabilities.
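A toy version of a time-decaying survival objective shows why it can prefer a path that is not the shortest, the behavior noted above. All decay rates, arrival times, and the mission-success factor below are invented for illustration.

```python
import math

def survival_objective(plan, p_mission=0.95):
    """Toy survival-probability objective: victim i's survival chance
    decays as exp(-lam_i * t_i) with UAV arrival time t_i (minutes),
    scaled by a mission-success probability. Numbers are illustrative."""
    return sum(p_mission * math.exp(-lam * t) for t, lam in plan)

# Same three victims, two visiting orders. The severe victim's survival
# decays fast (lam = 0.02/min); the two mild victims decay slowly.
nearest_first  = [(5.0, 0.001), (10.0, 0.001), (120.0, 0.02)]
critical_first = [(60.0, 0.02), (70.0, 0.001), (75.0, 0.001)]
s_near = survival_objective(nearest_first)
s_crit = survival_objective(critical_first)
```

Under a pure distance objective the two orders can look similar, but the survival objective rewards reaching the fast-decaying victim early even at the cost of extra flight time, which is the core modeling choice of the paper.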
Breakthrough in Solving NP-Complete Problems Using Electronic Probe Computers
XU Jin, YU Le, YANG Huihui, JI Siyuan, ZHANG Yu, YANG Anqi, LI Quanyou, LI Haisheng, ZHU Enqiang, SHI Xiaolong, WU Pu, SHAO Zehui, LENG Huang, LIU Xiaoqing
Available online  , doi: 10.11999/JEIT250352
Abstract:
This study presents a breakthrough in addressing NP-complete problems using a newly developed Electronic Probe Computer (EPC60). The system employs a hybrid serial–parallel computational model and performs large-scale parallel operations through seven probe operators. In benchmark tests on 3-coloring problems in graphs with 2,000 vertices, EPC60 achieves 100% accuracy, outperforming the mainstream solver Gurobi, which succeeds in only 6% of cases. Computation time is reduced from 15 days to 54 seconds. The system demonstrates high scalability and offers a general-purpose solution for complex optimization problems in areas such as supply chain management, finance, and telecommunications.  Objective   NP-complete problems pose a fundamental challenge in computer science. As problem size increases, the required computational effort grows exponentially, making it infeasible for traditional electronic computers to provide timely solutions. Alternative computational models have been proposed, with biological approaches, particularly DNA computing, demonstrating notable theoretical advances. However, DNA computing systems continue to face major limitations in practical implementation.  Methods  Computational Model: EPC is based on a non-Turing computational model in which data are multidimensional and processed in parallel. Its database comprises four types of graphs, and the probe library includes seven operators, each designed for specific graph operations. By executing parallel probe operations, EPC efficiently addresses NP-complete problems. Structural Features: EPC consists of four subsystems: a conversion system, input system, computation system, and output system.
The conversion system transforms the target problem into a graph coloring problem; the input system allocates tasks to the computation system; the computation system performs parallel operations via probe computation cards; and the output system maps the solution back to the original problem format. EPC60 features a three-tier hierarchical hardware architecture comprising a control layer, optical routing layer, and probe computation layer. The control layer manages data conversion, format transformation, and task scheduling. The optical routing layer supports high-throughput data transmission, while the probe computation layer conducts large-scale parallel operations using probe computation cards.  Results and Discussions  EPC60 successfully solved 100 instances of the 3-coloring problem for graphs with 2,000 vertices, achieving a 100% success rate. In comparison, the mainstream solver Gurobi succeeded in only 6% of cases. Additionally, EPC60 rapidly solved two 3-coloring problems for graphs with 1,500 and 2,000 vertices, which Gurobi failed to resolve after 15 days of continuous computation on a high-performance workstation. Using an open-source dataset, we identified 1,000 3-colorable graphs with 1,000 vertices and 100 3-colorable graphs with 2,000 vertices. These correspond to theoretical complexities of O(1.3289^n) for both cases. The test results are summarized in Table 1. Currently, EPC60 can directly solve 3-coloring problems for graphs with up to n vertices, with theoretical complexity of at least O(1.3289^n). On April 15, 2023, a scientific and technological achievement appraisal meeting organized by the Chinese Institute of Electronics was held at Beijing Technology and Business University. A panel of ten senior experts conducted a comprehensive technical evaluation and Q&A session. The committee reached the following unanimous conclusions: 1. The probe computer represents an original breakthrough in computational models. 2.
The system architecture design demonstrates significant innovation. 3. The technical complexity reaches internationally leading levels. 4. It provides a novel approach to solving NP-complete problems. Experts at the appraisal meeting stated, “This is a major breakthrough in computational science achieved by our country, with not only theoretical value but also broad application prospects.” In cybersecurity, EPC60 has also demonstrated remarkable potential. Supported by the National Key R&D Program of China (2019YFA0706400), Professor Xu Jin’s team developed an automated binary vulnerability mining system based on a function call graph model. Evaluation of the system using the Modbus Slave software showed over 95% vulnerability coverage, far exceeding the 75 vulnerabilities detected by conventional depth-first search algorithms. The system also discovered a previously unknown flaw, the “Unauthorized Access Vulnerability in Changyuan Shenrui PRS-7910 Data Gateway” (CNVD-2020-31406), highlighting EPC60’s efficacy in cybersecurity applications. The high efficiency of EPC60 derives from its unique computational model and hardware architecture. Given that all NP-complete problems can be polynomially reduced to one another, EPC60 provides a general-purpose solution framework. It is therefore expected to be applicable in a wide range of domains, including supply chain management, financial services, telecommunications, energy, and manufacturing.  Conclusions   The successful development of EPC offers a novel approach to solving NP-complete problems. As technological capabilities continue to evolve, EPC is expected to demonstrate strong computational performance across a broader range of application domains. Its distinctive computational model and hardware architecture also provide important insights for the design of next-generation computing systems.
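For context on the benchmark task, this is what graph 3-coloring looks like on a conventional computer: a plain backtracking solver whose worst-case running time grows exponentially with the vertex count, which is exactly the scaling that motivates non-Turing approaches on 2,000-vertex instances. The solver below is a generic textbook sketch, unrelated to the EPC60 internals.

```python
def three_color(n, edges):
    """Backtracking 3-coloring: return a list of colors (0..2) for
    vertices 0..n-1 with no monochromatic edge, or None if the graph
    is not 3-colorable. Exponential time in the worst case."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [None] * n

    def assign(v):
        if v == n:
            return True
        for c in range(3):
            # A color is feasible if no already-colored neighbor uses it.
            if all(colors[w] != c for w in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
        colors[v] = None
        return False

    return colors if assign(0) else None

# C5 (odd cycle) is 3-colorable; K4 (complete graph on 4 vertices) is not.
c5 = three_color(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
k4 = three_color(4, [(u, v) for u in range(4) for v in range(u + 1, 4)])
```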
Personalized Federated Learning Method Based on Coalition Game and Knowledge Distillation
SUN Yanhua, SHI Yahui, LI Meng, YANG Ruizhe, SI Pengbo
Available online  , doi: 10.11999/JEIT221203
Abstract:
To overcome the limitations of Federated Learning (FL) when both the data and the model of each client are heterogeneous, and to improve accuracy, a personalized Federated learning algorithm with Coalition game and Knowledge distillation (pFedCK) is proposed. First, each client uploads its soft predictions on a public dataset and downloads the k most correlated soft predictions. Then, the Shapley value from coalition game theory is applied to measure the multi-wise influences among clients and to quantify their marginal contributions to the personalized learning performance of others. Finally, each client identifies its optimal coalition, distills the knowledge into its local model, and trains on its private dataset. The results show that, compared with state-of-the-art algorithms, this approach achieves superior personalized accuracy, with an improvement of about 10%.
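The Shapley-value step can be computed exactly on a toy coalition of clients; the characteristic function below (the "accuracy gain" one client obtains from distilling a coalition's soft predictions) is entirely invented for illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    v(S + {p}) - v(S) over all orders in which players can join."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: val / len(perms) for p, val in values.items()}

# Toy characteristic function (numbers invented): A and B are useful but
# partly redundant with each other; C contributes little.
gains = {frozenset(): 0.0,
         frozenset({"A"}): 0.10, frozenset({"B"}): 0.10,
         frozenset({"C"}): 0.02,
         frozenset({"A", "B"}): 0.16,
         frozenset({"A", "C"}): 0.12, frozenset({"B", "C"}): 0.12,
         frozenset({"A", "B", "C"}): 0.18}
phi = shapley_values(["A", "B", "C"], lambda s: gains[frozenset(s)])
```

Two classic properties make this a sensible contribution measure for coalition selection: the values sum to the grand coalition's worth (efficiency), and interchangeable clients receive equal credit (symmetry). Exact computation is factorial in the number of players, so practical systems sample permutations instead.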
The Range-angle Estimation of Target Based on Time-invariant and Spot Beam Optimization
Wei CHU, Yunqing LIU, Wenyug LIU, Xiaolong LI
Available online  , doi: 10.11999/JEIT210265
Abstract:
The application of Frequency Diverse Array and Multiple-Input Multiple-Output (FDA-MIMO) radar to range-angle estimation of targets has attracted increasing attention. The FDA can simultaneously obtain degrees of freedom of the transmit beampattern in both angle and range. However, its performance is degraded by the periodicity and time-varying nature of the beampattern. Therefore, an improved Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm is proposed to estimate the target parameters, based on a new waveform synthesis model of the Time Modulation and Range Compensation FDA-MIMO (TMRC-FDA-MIMO) radar. Finally, the proposed method is compared with an identical-frequency-increment FDA-MIMO radar system, a logarithmically increasing frequency-offset FDA-MIMO radar system, and the MUltiple SIgnal Classification (MUSIC) algorithm in terms of the Cramér-Rao lower bound and the root mean square errors of range and angle estimation, and the excellent performance of the proposed method is verified.
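For context on the MUSIC baseline mentioned above, a minimal 1-D MUSIC pseudospectrum for a uniform linear array with half-wavelength spacing might look as follows. This is a generic textbook sketch, not the paper's TMRC-FDA-MIMO signal model; the array geometry and noiseless-source setup are assumptions for illustration.

```python
import numpy as np

def music_spectrum(X, n_sources, grid_deg):
    """1-D MUSIC pseudospectrum for a half-wavelength ULA.
    X: (n_antennas, n_snapshots) received samples;
    grid_deg: candidate angles in degrees."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # ascending eigenvalues
    En = vecs[:, : n - n_sources]              # noise subspace
    k = np.arange(n)[:, None]
    A = np.exp(1j * np.pi * k * np.sin(np.deg2rad(grid_deg))[None, :])
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
```

Peaks of the returned spectrum over the angle grid give the direction estimates; the grid search is what makes MUSIC costlier than rotational-invariance methods such as ESPRIT.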
Wireless Communication and Internet of Things
Low-Complexity Joint Estimation Algorithm for Carrier Frequency Offset and Sampling Frequency Offset in 5G-NTN Low Earth Orbit Satellite Communications
GONG Xianfeng, LI Ying, LIU Mingyang, ZHAI Shenghua
Available online  , doi: 10.11999/JEIT251086
Abstract:
  Objective   The Doppler effect is a major impairment in Low Earth Orbit (LEO) satellite communications within 5G Non-Terrestrial Networks (5G-NTN). It introduces Carrier Frequency Offset (CFO), Sampling Frequency Offset (SFO), and Inter-Subcarrier Frequency Offset (ISFO) across subcarriers. Although existing estimation algorithms focus mainly on CFO and SFO, the effect of ISFO is insufficiently addressed. ISFO becomes highly detrimental to receiver performance when Orthogonal Frequency-Division Multiplexing (OFDM) systems use a large number of subcarriers and high-order modulation. Moreover, under joint CFO and SFO conditions, conventional Maximum Likelihood Estimation (MLE) methods often require one- or two-dimensional grid searches. This results in high computational cost. To reduce this cost, two joint estimation algorithms for CFO and SFO are proposed.  Methods   The influence of non-ideal factors at the transmitter, receiver, and channel, such as local oscillator offset, SFO in Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs), and the Doppler effect, is analyzed. A mathematical model for the received OFDM signal is developed, and the mechanism through which SFO and ISFO distort the phase of frequency-domain subcarriers is derived. Leveraging the pilot structure of 5G-NTN, two joint CFO and SFO estimation algorithms are proposed. (1) Algorithm 1 uses the sequence correlation between the received frequency-domain Demodulation Reference Signal (DMRS) vectors. After phase pre-compensation is applied, the normalized cross-correlation vector is computed. An objective function is constructed from this vector, and its unimodal behavior in the main lobe is used to estimate the parameters through a bisection search. (2) Algorithm 2 treats the estimation parameter as analogous to a CFO in single-carrier systems and adopts an L&R-based autocorrelation method to derive approximate closed-form expressions.  
Results and Discussions   A computational complexity analysis compares the proposed algorithms with one-dimensional (1D-ML) and two-dimensional (2D-ML) grid-search MLE methods. Numerical results show that Algorithm 1 reduces complexity substantially. The number of complex multiplications, which represents the main computational cost, is 4% of that of the 2D-ML method, 8% of that of Algorithm 2, and 44% of that of the 1D-ML method. Although Algorithm 2 is more computationally demanding, it yields a closed-form estimation expression. The performance of each algorithm is evaluated through the Mean Square Error (MSE) of the estimated parameters. Simulations show that for a subcarrier number of 3072, the 1D-ML algorithm performs slightly better than the others at Signal-to-Noise Ratios (SNRs) below 5 dB. However, because the robust modulation schemes such as BPSK and QPSK that are typically used at low SNRs tolerate larger offsets, the medium-to-high SNR range is of greater practical relevance. In this range, all four algorithms demonstrate comparable estimation performance.  Conclusions  This study addresses the effect of Doppler in 5G-NTN LEO satellite communications by analyzing the mechanism and influence of ISFO and by proposing two joint estimation algorithms for CFO and SFO. First, a mathematical model of the received signal is established considering non-ideal factors such as CFO, SFO, and ISFO. The combined effect of SFO and ISFO on OFDM signals is derived to be equivalent to their linear superposition, which expands the range of the equivalent SFO. Second, the objective function is defined using the cross-correlation vector of two DMRS sequences. By using its unimodal behavior within the main lobe, a bisection search enables fast convergence. Subsequently, the parameter determined by SFO and ISFO is treated as analogous to the CFO in single-carrier systems, allowing an approximate closed-form estimation solution to be obtained through the L&R method. 
Finally, complexity analysis and performance simulations show that the proposed algorithms provide significant computational savings and strong estimation performance. These results can support the development of 5G-NTN LEO satellite payloads and terminal products.
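The L&R (Luise and Reggiannini) autocorrelation estimator invoked by Algorithm 2 admits a compact sketch. Assuming modulation-free (or data-wiped) samples r[k] ≈ exp(j2πfkT), the frequency estimate is obtained in closed form from the angle of the summed sample autocorrelations, valid while the accumulated phase stays within (-pi, pi). The parameter values below are illustrative, not those of the paper.

```python
import numpy as np

def lr_freq_estimate(r, T, N):
    """Luise & Reggiannini closed-form frequency estimator.
    r: complex baseband samples with residual rotation exp(j*2*pi*f*k*T);
    N: number of autocorrelation lags used (N < len(r))."""
    R = np.array([np.mean(r[m:] * np.conj(r[:-m])) for m in range(1, N + 1)])
    # arg(sum R(m)) == pi * f * T * (N + 1) for a clean complex exponential
    return np.angle(np.sum(R)) / (np.pi * T * (N + 1))
```

Because the estimate is a single angle computation rather than a grid search, the cost scales only with the number of lags, which is the trade-off against grid-search MLE discussed above.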
Secrecy Rate Maximization Algorithm for IRS Assisted UAV-RSMA Systems
WANG Zhengqiang, KONG Weidong, WAN Xiaoyu, FAN Zifu, DUO Bin
Available online  , doi: 10.11999/JEIT250452
Abstract:
  Objective  Under the stringent requirements of Sixth-Generation (6G) mobile communication networks for spectral efficiency, energy efficiency, low latency, and wide coverage, Unmanned Aerial Vehicle (UAV) communication has emerged as a key solution for 6G and beyond, leveraging its Line-of-Sight propagation advantages and flexible deployment capabilities. Functioning as aerial base stations, UAVs significantly enhance network performance by improving spectral efficiency and connection reliability, demonstrating irreplaceable value in critical scenarios such as emergency communications, remote area coverage, and maritime operations. However, UAV communication systems face dual challenges in high-mobility environments: severe multi-user interference in dense access scenarios that substantially degrades system performance, alongside critical physical-layer security threats resulting from the broadcast nature and spatial openness of wireless channels that enable malicious interception of transmitted signals. Rate-Splitting Multiple Access (RSMA) mitigates these challenges by decomposing user messages into common and private streams, thereby providing a flexible interference management mechanism that balances decoding complexity with spectral efficiency. This makes RSMA especially suitable for high-density user access scenarios. In parallel, Intelligent Reflecting Surfaces (IRS) have emerged as a promising technology to dynamically reconfigure wireless propagation through programmable electromagnetic unit arrays. IRS improves the quality of legitimate links while reducing the capacity of eavesdropping links, thereby enhancing physical-layer security in UAV communications. It is noteworthy that while existing research has predominantly centered on conventional multiple access schemes, the application potential of RSMA technology in IRS-assisted UAV communication systems remains relatively unexplored. 
Against this background, this paper investigates secure transmission strategies in IRS-assisted UAV-RSMA systems.  Methods  This paper investigates the effect of eavesdroppers on the security performance of UAV communication systems and proposes an IRS-assisted RSMA-based UAV communication model. The system comprises a multi-antenna UAV base station, an IRS mounted on a building, multiple single-antenna legitimate users, and multiple single-antenna eavesdroppers. The optimization problem is formulated to maximize the system secrecy rate by jointly optimizing precoding vectors, common secrecy rate allocation, IRS phase shifts, and UAV positioning. The problem is highly non-convex due to the strong coupling among these variables, rendering direct solutions intractable. To overcome this challenge, a two-layer optimization framework is developed. In the inner layer, with UAV position fixed, an alternating optimization strategy divides the problem into two subproblems: (1) joint optimization of precoding vectors and common secrecy rate allocation and (2) optimization of IRS phase shifts. Non-convex constraints are transformed into convex forms using techniques such as Successive Convex Approximation (SCA), relaxation variables, first-order Taylor expansion, and Semidefinite Relaxation (SDR). In the outer layer, the Particle Swarm Optimization (PSO) algorithm determines the UAV deployment position based on the optimized inner-layer variables.  Results and Discussions  Simulation results show that the proposed algorithm outperforms RSMA without IRS, NOMA with IRS, and NOMA without IRS in terms of secrecy rate. (Fig. 2) illustrates that the secrecy rate increases with the number of iterations and converges under different UAV maximum transmit power levels and antenna configurations. (Fig. 3) demonstrates that increasing UAV transmit power significantly enhances the secrecy rate for both the proposed and benchmark schemes. 
This improvement arises because higher transmit power strengthens the signal received by legitimate users, increasing their achievable rates and enhancing system secrecy performance. (Fig. 4) indicates that the secrecy rate grows with the number of UAV antennas. This improvement is due to expanded signal coverage and greater spatial degrees of freedom, which amplify effective signal strength in legitimate user channels. (Fig. 5) shows that both the proposed scheme and NOMA with IRS achieve higher secrecy rate as the number of IRS reflecting elements increases. The additional elements provide greater spatial degrees of freedom, improving channel gains for legitimate users and strengthening resistance to eavesdropping. In contrast, benchmark schemes operating without IRS assistance exhibit no performance improvement and maintain constant secrecy rate. This result highlights the critical role of the IRS in enabling secure communications. Finally, (Fig. 6) demonstrates the optimal UAV position when \begin{document}${P_{\max }} = 30{\text{ dBm}}$\end{document}. Deploying the UAV near the center of legitimate users and adjacent to the IRS minimizes the average distance to users, thereby reducing path loss and fully exploiting IRS passive beamforming. This placement strengthens legitimate signals while suppressing the eavesdropping link, leading to enhanced secrecy performance.  Conclusions  This study addresses secure communication scenarios with multiple eavesdroppers by proposing an IRS-assisted secure resource allocation algorithm for UAV-enabled RSMA systems. An optimization problem is formulated to maximize the system secrecy rate under multiple constraints, including UAV transmit power, by jointly optimizing precoding vectors, common rate allocation, IRS configurations, and UAV positioning. Due to the non-convex nature of the problem, a hierarchical optimization framework is developed to decompose it into two subproblems. 
These are effectively solved using techniques such as SCA, SDR, Gaussian randomization, and PSO. Simulation results confirm that the proposed algorithm achieves substantial secrecy rate gains over three benchmark schemes, thereby validating its effectiveness.
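The outer-layer PSO search over UAV positions can be sketched generically. Here `f` stands in for the inner-layer routine that returns the optimized secrecy rate at a candidate position, and the swarm parameters are conventional defaults rather than values from the paper.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm. Each particle's position is a candidate
    UAV location; f(position) returns the inner-layer objective value."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmax()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep inside deployment area
        val = np.array([f(p) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()
```

Since each particle evaluation invokes the full inner-layer alternating optimization, the swarm size and iteration count directly trade deployment quality against runtime.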
Special Topic on Security and Privacy Protection in Cyber-Physical Systems
Resilient Average Consensus for Second-Order Multi-Agent Systems: Algorithms and Application
FANG Chongrong, HUAN Yuehui, ZHENG Wenzhe, BAO Xianchen, LI Zheng
Available online  , doi: 10.11999/JEIT251155
Abstract:
  Objective  Multi-Agent Systems (MASs) are central to collaborative tasks in dynamic environments, and consensus algorithms are essential for applications such as formation control. However, MASs are vulnerable to misbehaviors (e.g., malicious attacks or accidental faults) that disrupt consensus and degrade system performance. Existing resilient consensus methods for first-order systems are insufficient for second-order MASs, where both position and velocity states must be considered. This study develops a resilient average consensus framework for second-order MASs that maintains accurate collaboration under misbehaviors. The main challenges are distributed error detection and compensation for two-dimensional state errors (position and velocity) using one-dimensional acceleration inputs.  Methods  The study derives sufficient conditions for second-order average consensus under misbehaviors using graph theory and Lyapunov stability analysis. The system is modeled as an undirected graph \begin{document}$ \mathcal{G}=(\mathcal{V},\mathcal{E}) $\end{document}, and agents follow double-integrator dynamics. Two algorithms are proposed. Finite Input-Errors Detection-Compensation (FIDC): For finite control input errors, Detection Strategies 1 and 2 use two-hop communication to detect discrepancies in neighbors’ states or control inputs. Compensation Scheme 1 generates input sequences that satisfy the consensus conditions in Corollary 1. Infinite Attack Detection-Compensation (IADC): For infinite errors in control inputs, velocities, and positions, the detection strategies are extended to identify falsified data. Compensation Schemes 2 and 3 reduce the effect of these errors, and an exponentially decaying error bound isolates persistent attackers. The algorithms are fully distributed and require no global information.  Results and Discussions  Simulations on a 10-agent network demonstrate the effectiveness of the algorithms. 
Under FIDC, agents reach exact average consensus despite finite input errors caused by malicious or faulty agents (Fig. 3). IADC ensures consensus among normal agents after isolating malicious agents that exceed the error bound (Fig. 4). Experiments on a multi-robot platform confirm resilience to real-world faults (e.g., actuator failures) and attacks (e.g., false data injection). In fault scenarios, FIDC reduces the deviation of the formation center from 180 mm to 34 mm (Fig. 6). Under attacks, IADC isolates malicious robots, allowing normal agents to converge correctly (Fig. 7). Analyses of relaxed Assumption 1 (non-adjacent misbehaving agents) show that Detection Strategy 3 and majority voting address certain connected malicious topologies (Fig. 2), although complex cases need further study.   Conclusions  This work presents a resilient average consensus framework for second-order MASs. Theoretically, the study provides sufficient conditions for consensus under misbehaviors. The FIDC and IADC algorithms enable distributed detection, compensation, and isolation of errors. Simulations and physical experiments verify that the methods achieve accurate average consensus under both finite and infinite errors. Future research will explore extensions to directed networks, time-varying topologies, and higher-dimensional systems.
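For context on the nominal, attack-free dynamics the detection and compensation schemes build on, a minimal simulation of second-order consensus over an undirected graph is sketched below. The control law u_i = Σ_j a_ij[(x_j - x_i) + γ(v_j - v_i)] is the standard double-integrator protocol, not the paper's FIDC/IADC algorithms; velocities converge to the average initial velocity and positions track a common trajectory.

```python
import numpy as np

def simulate_consensus(A, x0, v0, gamma=1.0, dt=0.01, steps=3000):
    """Second-order consensus: x_dot = v, v_dot = u, with
    u = -L x - gamma * L v, where L is the graph Laplacian of
    the undirected adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    x, v = x0.astype(float).copy(), v0.astype(float).copy()
    for _ in range(steps):
        u = -L @ x - gamma * (L @ v)         # scalar acceleration per agent
        x, v = x + dt * v, v + dt * u        # forward-Euler step
    return x, v
```

Any injected input error shifts this common trajectory away from the true average, which is why the paper's compensation schemes must restore it rather than merely re-run the nominal protocol.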
AutoPenGPT: Drift-Resistant Penetration Testing Driven by Search-Space Convergence and Dependency Modeling
HUANG Weigang, FU Lirong, LIU Peiyu, DU Linkang, YE Tong, XIA Yifan, WANG Wenhai
Available online  , doi: 10.11999/JEIT250873
Abstract:
  Objective  Industrial Control Systems (ICS) are widely deployed in critical sectors and often contain long-standing vulnerabilities due to strict availability requirements and limited patching opportunities. The increasing exposure of external management and access infrastructure has expanded the attack surface and allows adversaries to pivot from boundary components into fragile production networks. Continuous penetration testing of these components is essential but remains costly and difficult to scale when carried out manually. Recent work examines Large Language Models (LLMs) for automated penetration testing; however, existing systems often experience strategy drift and intention drift, which produce incoherent testing behaviors and ineffective exploitation chains.  Methods  This study proposes AutoPenGPT, a multi-agent framework for automated Web security testing. AutoPenGPT uses an adaptive exploration-space convergence mechanism that predicts likely vulnerability types from target semantics and constrains LLM-driven testing through a dynamically updated payload knowledge base. To reduce intention drift in multi-step exploitation, a dependency-driven strategy module rewrites historical feedback, models step dependencies, and generates coherent, executable strategies in a closed-loop workflow. A semi-structured prompt embedding scheme is also developed to support heterogeneous penetration testing tasks while preserving semantic integrity.  Results and Discussions  AutoPenGPT is evaluated on Capture-the-Flag (CTF) benchmarks and real-world ICS and Web platforms. On CTF datasets, it achieves 97.62% vulnerability-type detection accuracy and an 80.95% requirement completion rate, exceeding state-of-the-art tools by a wide margin. In real-world deployments, it reaches approximately 70% requirement completion and identifies six previously undisclosed vulnerabilities, demonstrating practical effectiveness.  Conclusions   The contributions are threefold. 
(1) Strategy drift and intention drift in LLM-driven penetration testing are examined and addressed through adaptive exploration and dependency-aware strategy mechanisms that stabilize long-horizon testing behaviors. (2) AutoPenGPT is designed and implemented as a multi-agent penetration testing system that integrates semantic vulnerability prediction, closed-loop strategy generation, and semi-structured prompt embedding. (3) Extensive evaluation on CTF and real-world ICS and Web platforms confirms the effectiveness and practicality of the system, including the discovery of previously unknown vulnerabilities.
Image and Intelligent Information Processing
Research on Proximal Policy Optimization for Autonomous Long-Distance Rapid Rendezvous of Spacecraft
LIN Zheng, HU Haiying, DI Peng, ZHU Yongsheng, ZHOU Meijiang
Available online  , doi: 10.11999/JEIT250844
Abstract:
  Objective   With increasing demands from deep-space exploration, on-orbit servicing, and space debris removal missions, autonomous long-distance rapid rendezvous capabilities are required for future space operations. Traditional trajectory planning approaches based on analytical methods or heuristic optimization show limitations when complex dynamics, strong disturbances, and uncertainties are present, which makes it difficult to balance efficiency and robustness. Deep Reinforcement Learning (DRL) combines the approximation capability of deep neural networks with reinforcement learning-based decision-making, which supports adaptive learning and real-time decisions in high-dimensional continuous state and action spaces. In particular, Proximal Policy Optimization (PPO) is a representative policy gradient method because of its training stability, sample efficiency, and ease of implementation. Integration of DRL with PPO for spacecraft long-distance rapid rendezvous is therefore expected to overcome the limits of conventional methods and provide an intelligent, efficient, and robust solution for autonomous guidance in complex orbital environments.   Methods   A spacecraft orbital dynamics model is established by incorporating J2 perturbation, together with uncertainties arising from position and velocity measurement errors and actuator deviations during on-orbit operations. The long-distance rapid rendezvous problem is formulated as a Markov Decision Process, in which the state space includes position, velocity, and relative distance, and the action space is defined by impulse duration and direction. Fuel consumption and terminal position and velocity constraints are integrated into the model. On this basis, a DRL framework based on PPO is constructed. The policy network outputs maneuver command distributions, whereas the value network estimates state values to improve training stability. 
To address convergence difficulties caused by sparse rewards, an enhanced dense reward function is designed by combining a position potential function with a velocity guidance function. This design guides the agent toward the target while enabling gradual deceleration and improved fuel efficiency. The optimal maneuver strategy is obtained through simulation-based training, and robustness is evaluated under different uncertainty conditions.   Results and Discussions   Based on the proposed DRL framework, comprehensive simulations are conducted to assess effectiveness and robustness. In Case 1, three reward structures are examined: sparse reward, traditional dense reward, and an improved dense reward that integrates a relative position potential function with a velocity guidance term. The results show that reward design strongly affects convergence behavior and policy stability. Under sparse rewards, insufficient process feedback limits exploration of feasible actions. Traditional dense rewards provide continuous feedback and enable gradual convergence, but terminal velocity deviations are not fully corrected at later stages, which leads to suboptimal convergence and incomplete satisfaction of terminal constraints. In contrast, the improved dense reward guides the agent toward favorable behaviors from early training stages while penalizing undesirable actions at each step, which accelerates convergence and improves robustness. The velocity guidance term allows anticipatory adjustments during mid-to-late approach phases rather than delaying corrections to the terminal stage, resulting in improved fuel efficiency. Simulation results show that the maneuvering spacecraft performs 10 impulsive maneuvers, achieving a terminal relative distance of 21.326 km, a relative velocity of 0.005 0 km/s, and a total fuel consumption of 111.212 3 kg. To evaluate robustness under realistic uncertainties, 1 000 Monte Carlo simulations are performed. 
As summarized in Table 6, the mission success rate reaches 63.40%, and fuel consumption in all trials remains within acceptable bounds. In Case 2, PPO performance is compared with that of Deep Deterministic Policy Gradient (DDPG) for a multi-impulse fast-approach rendezvous mission. PPO results show five impulsive maneuvers, a terminal separation of 2.281 8 km, a relative velocity of 0.003 8 km/s, and a total fuel consumption of 4.148 6 kg. DDPG results show a fuel consumption of 4.322 5 kg, a final separation of 4.273 1 km, and a relative velocity of 0.002 0 km/s. Both methods satisfy mission requirements with comparable fuel use. However, DDPG requires a training time of 9 h 23 min, whereas PPO converges within 6 h 4 min, indicating lower computational cost. Overall, the improved PPO framework provides better learning efficiency, policy stability, and robustness.  Conclusions   The problem of autonomous long-distance rapid rendezvous under J2 perturbation and uncertainties is investigated, and a PPO-based trajectory optimization method is proposed. The results demonstrate that feasible maneuver trajectories satisfying terminal constraints can be generated under limited fuel and transfer time, with improved convergence speed, fuel efficiency, and robustness. The main contributions include: (1) development of an orbital dynamics framework that incorporates J2 perturbation and uncertainty modeling, with formulation of the rendezvous problem as a Markov Decision Process; (2) design of an enhanced dense reward function that combines position potential and velocity guidance, which improves training stability and convergence efficiency; and (3) simulation-based validation of PPO robustness in complex orbital environments. Future work will address sensor noise, environmental disturbances, and multi-spacecraft cooperative rendezvous in more complex mission scenarios to further improve practical applicability and generalization.
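A dense reward combining a position potential with velocity guidance, of the kind described above, can be illustrated schematically. The weights and the exact terms below are assumptions for illustration, not the paper's reward function: the potential term pays for reducing the range, and the guidance term rewards aligning the relative velocity with the direction to the target, which encourages anticipatory deceleration.

```python
import numpy as np

def dense_reward(rel_pos, rel_vel, prev_dist, k_p=1.0, k_v=0.5):
    """Illustrative shaped reward for a rendezvous step.
    rel_pos, rel_vel: relative position/velocity vectors (chaser - target);
    prev_dist: relative distance at the previous step."""
    dist = np.linalg.norm(rel_pos)
    potential = k_p * (prev_dist - dist)      # reward progress toward target
    # cosine between velocity and the target direction (-rel_pos): +1 when
    # heading straight at the target, -1 when flying away
    closing = -rel_pos @ rel_vel / (dist * np.linalg.norm(rel_vel) + 1e-9)
    return potential + k_v * closing, dist
```

Returning the new distance lets the training loop feed it back as `prev_dist` on the next step, so the potential term telescopes into total range reduction over an episode.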
Satellite Navigation
Research on GRI Combination Design of eLORAN System
LIU Shiyao, ZHANG Shougang, HUA Yu
Available online  , doi: 10.11999/JEIT201066
Abstract:
To solve the problem of Group Repetition Interval (GRI) selection in the construction of supplementary transmitting stations for the enhanced LORAN (eLORAN) system, a screening algorithm based on the cross interference rate is proposed, mainly from a mathematical point of view. First, the method takes into account the requirement of transmitting second-of-time information, and on this basis conducts a first screening by comparing the mutual Cross Rate Interference (CRI) with the adjacent Loran-C stations of neighboring countries. Second, a second screening is conducted through permutation and pairwise comparison. The optimal GRI combination scheme is then given by considering the data-rate requirements and system specifications. Finally, in view of the high-precision timing requirements of the new eLORAN system, an optimized selection is made among the multiple optimal combinations. The analysis results show that the average interference rate of the optimal combination scheme obtained by this algorithm is comparable to that between the current navigation chains while meeting the timing requirements, which can provide referential suggestions and a theoretical basis for the construction of a high-precision ground-based timing system.
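The cross interference rate underlying the screening can be illustrated with a simple proxy: two chains with integer GRIs repeat their relative alignment every least common multiple of the two values, and interference events occur when pulse groups of the two chains land within some window of each other. The window model below is a simplification for illustration, not the paper's exact CRI definition.

```python
from math import gcd

def cross_interference_rate(gri_a, gri_b, window):
    """Illustrative proxy: fraction of chain-A pulse groups, over one common
    period lcm(gri_a, gri_b), that land within `window` of a chain-B group.
    GRIs and window share the same time unit (e.g., tens of microseconds)."""
    period = gri_a * gri_b // gcd(gri_a, gri_b)   # alignment repeats here
    n_a = period // gri_a
    hits = 0
    for m in range(n_a):
        offset = (m * gri_a) % gri_b              # distance to nearest B group
        if min(offset, gri_b - offset) < window:
            hits += 1
    return hits / n_a
```

Under such a proxy, GRI pairs whose ratio is close to a small rational produce frequent near-coincidences (high rate), which is the kind of pairing a screening procedure would reject.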