2024 Vol. 46, No. 1
Columns:
- Cover
- Special Topic on Underwater Acoustic Sensing Technology and Application
- Wireless Communication and Internet of Things
- Radar, Sonar and Array Signal Processing
- Image and Intelligent Information Processing
- Cryptography and Network Information Security
- Circuit and System Design
- Information of National Natural Science Foundation
2024, 46(1): 1-21.
doi: 10.11999/JEIT230424
Abstract:
Underwater Acoustic Communication (UAC) and networking technology play an important role in marine environment monitoring and in commercial and military fields; caring about, understanding, and managing the ocean are inseparable from their development. This paper reviews UAC technology and Underwater Acoustic Communication Networks (UACNs). First, the development of UAC and UACN technology is surveyed, and the characteristics of the underwater acoustic channel are summarized. Then, incoherent modulation, coherent modulation, and new application-oriented communication techniques are described, and the data-link-layer media access control protocols, network-layer routing protocols, and cross-layer designs of UACNs are classified and discussed. Finally, the shortcomings of current UAC and network technology are summarized, and future development directions are discussed.
2024, 46(1): 22-30.
doi: 10.11999/JEIT221304
Abstract:
Barrier coverage has become a research hotspot of Underwater Wireless Sensor Networks (UWSNs) in recent years. However, the barrier coverage of Underwater Directional Sensor Networks (UDSNs) has not received enough attention. Barrier coverage of static UDSNs under random deployment is so difficult that there are few relevant research results on the problem. In this paper, a barrier coverage strategy for UDSNs based on a hierarchy graph is proposed to offset that deficiency. In this strategy, the conditions for strong (weak) connection between two adjacent nodes under multiple location relationships are studied for the first time; the coverage graph is then built and graded based on these conditions. On the foundation of the hierarchy graph, appropriate nodes can be selected from the randomly deployed static UDSN. Experimental results show that, with this algorithm, fewer sensor nodes are needed to construct barrier coverage while ensuring a high success rate. Moreover, the algorithm achieves a higher network detection probability and a longer network lifetime.
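The hierarchy-graph idea can be sketched in miniature: grade nodes into levels by breadth-first search from the deployment region's left boundary, then trace one connected chain to the right boundary as a barrier candidate. This is a simplified illustration, not the paper's method: a plain distance test stands in for the strong/weak connection conditions of directional sensors, and all names and parameters below are assumptions.

```python
from collections import deque

def build_coverage_graph(nodes, reach):
    # Edge i -> j when j lies within i's reach; a simplified stand-in
    # for the paper's strong/weak connection test between adjacent nodes.
    g = {i: [] for i in range(len(nodes))}
    for i, (xi, yi) in enumerate(nodes):
        for j, (xj, yj) in enumerate(nodes):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= reach ** 2:
                g[i].append(j)
    return g

def barrier_by_hierarchy(nodes, reach, width):
    # Grade nodes into hierarchy levels by BFS from the left boundary,
    # then trace one path to the right boundary: a barrier candidate.
    g = build_coverage_graph(nodes, reach)
    sources = [i for i, (x, _) in enumerate(nodes) if x <= reach]
    parent = {i: None for i in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        if nodes[u][0] >= width - reach:   # chain reaches right boundary
            path = [u]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for v in g[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None                            # no barrier can be formed
```

Selecting only the nodes on the returned chain (rather than activating all deployed nodes) is what reduces the number of sensors needed for the barrier.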
2024, 46(1): 31-40.
doi: 10.11999/JEIT230344
Abstract:
The traditional ultra-short baseline positioning algorithm for stereo arrays exhibits high computational complexity, and its positioning error is difficult to represent accurately with an explicit formula. To address these challenges, a stereo array positioning algorithm based on vector projection is proposed. Observation equations relating the baseline vectors of the stereo array to the target bearing are constructed from the vector projection theorem, simplifying the positioning model of the traditional algorithm. In the proposed algorithm, the target bearing is obtained by solving a system of linear equations, which results in lower time complexity. Additionally, an accurate analytical representation of the positioning error applicable to the stereo array is derived from the concise observation equations. Simulation results show that the computation time of the proposed algorithm is significantly lower than that of the traditional algorithm, and the variation pattern of the positioning error is consistent with the theoretical analysis. Experimental results further indicate that the proposed algorithm achieves almost the same positioning accuracy as the traditional algorithm with higher computational efficiency.
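The core step, recovering the bearing from a linear system, can be illustrated in two dimensions: if each measurement is the projection of the unit bearing vector u onto a known baseline vector b (i.e. p = b·u), two independent baselines give a 2×2 system B u = p. This toy sketch uses Cramer's rule and is only an assumed simplification of the paper's stereo (3-D) formulation.

```python
def bearing_from_projections(baselines, projections):
    # Solve B u = p for the 2-D bearing u by Cramer's rule, where each
    # row of B is a baseline vector and p holds the measured projections
    # of the target direction onto those baselines.
    (a, b), (c, d) = baselines
    p, q = projections
    det = a * d - b * c                      # assumes independent baselines
    ux = (p * d - q * b) / det
    uy = (a * q - c * p) / det
    n = (ux ** 2 + uy ** 2) ** 0.5           # renormalise to a unit bearing
    return ux / n, uy / n
```

Because the solve is a fixed-size linear system, its cost is constant per fix, which is the source of the claimed lower time complexity compared with iterative stereo-array solvers.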
2024, 46(1): 41-48.
doi: 10.11999/JEIT230380
Abstract:
Considering the hysteresis of model switching and the slow conversion rate of existing adaptive interacting multiple model algorithms, an improved adaptive Interacting Multiple Model algorithm with an Unscented Kalman Filter based on monotone transformation of model probability (mIMM-UKF) is proposed. The algorithm exploits the monotonicity of the model probability in the posterior information to make a secondary modification to the Markov probability transition matrix and the model estimation probability, thereby accelerating the switching speed and conversion rate of the matching model. Simulation results show that, compared with existing algorithms, the proposed algorithm significantly improves target tracking accuracy by enabling swift switching of matching models.
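One way to picture the "secondary modification" is: when a model's probability is monotonically increasing, amplify the transition probabilities into that model and renormalise each row. The gain constant and the exact amplification rule below are illustrative assumptions, not the paper's formulas.

```python
def modify_transition(P, mu_prev, mu_cur, gain=2.0):
    # Secondary modification of the Markov transition matrix: columns of
    # models whose probability rose since the last step are amplified by
    # 'gain' (an illustrative constant), then each row is renormalised
    # so it remains a valid probability distribution.
    m = len(mu_cur)
    Q = [row[:] for row in P]
    for j in range(m):
        if mu_cur[j] > mu_prev[j]:           # monotone increase detected
            for i in range(m):
                Q[i][j] *= gain
    for i in range(m):
        s = sum(Q[i])
        Q[i] = [x / s for x in Q[i]]
    return Q
```

Biasing the transition matrix toward the rising model is what shortens the lag (hysteresis) before the filter commits to the matching motion model.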
2024, 46(1): 49-57.
doi: 10.11999/JEIT230026
Abstract:
Considering routing voids in underwater acoustic sensor networks and low energy efficiency in data transmission, an Opportunistic Routing protocol fusing Depth Adjustment and Adaptive Forwarding (OR-DAAF) is developed. To handle routing voids, instead of adopting the traditional detour strategy, OR-DAAF proposes a topology-control-based void recovery mode algorithm, which grades void nodes by residual energy and successively adjusts them to new depths to overcome routing voids and restore network connectivity. To improve energy efficiency in data transmission, OR-DAAF proposes a forwarding area division mechanism that selects the forwarding area so as to suppress redundant packets, and puts forward a multi-hop, multi-objective routing decision index that weights advance distance, residual energy, and link quality to achieve an energy efficiency balance. Experimental results show that, compared with Doppler VHF omnidirectional range, OR-DAAF improves the packet delivery rate by 10% and the network lifetime by 48.7%, and reduces delay by 22%.
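The multi-objective routing decision index can be sketched as a weighted score over normalised advance distance, residual energy, and link quality, with the best-scoring candidate in the forwarding area chosen as next hop. The weights and function names here are hypothetical; the paper's exact index is not reproduced.

```python
def forwarding_score(advance, energy, link_quality, w=(0.4, 0.3, 0.3)):
    # Multi-objective routing index: weighted sum of normalised advance
    # distance, residual energy, and link quality (all assumed in [0, 1];
    # the weights are illustrative, not the paper's calibrated values).
    wa, we, wl = w
    return wa * advance + we * energy + wl * link_quality

def best_forwarder(candidates):
    # candidates: {node_id: (advance, energy, link_quality)} restricted
    # to the forwarding area; pick the highest-scoring next hop.
    return max(candidates, key=lambda n: forwarding_score(*candidates[n]))
```

Folding energy into the score is what spreads load across nodes and extends network lifetime, at the cost of sometimes not taking the geometrically shortest hop.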
2024, 46(1): 58-66.
doi: 10.11999/JEIT230253
Abstract:
Underwater acoustic signal detection plays a crucial role in ocean defense systems and has broad applications in civilian domains. However, contemporary underwater acoustic signal detection methods lose effectiveness when prior information about the target is unavailable. This paper proposes a new algorithm, a similarity network, to address the challenge of underwater target detection against complex oceanic backgrounds. In this method, information geometry and complex network theory are combined: the problem of measuring node similarity is converted into a geometric problem on a matrix manifold, the similarity between data at different time scales is determined, and a network representation of the time-series data is obtained. Concurrently, graph signal processing theory is introduced to extract the hidden dynamic characteristics of the target signal, thereby achieving underwater acoustic signal detection without prior target information. The effectiveness of the method is demonstrated through verification on both simulated and real data. The results show that the similarity network method is superior to existing network construction and passive target detection methods and detects underwater acoustic signals more effectively without any prior target information.
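The time-series-to-network step can be illustrated as follows: split the series into windows, treat each window as a node, and link two nodes when their similarity exceeds a threshold. In this toy sketch plain correlation stands in for the paper's information-geometric distance between matrices on a manifold; window size and threshold are assumed.

```python
import math

def correlation(x, y):
    # Pearson correlation between two equal-length windows.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def similarity_network(series, win, thresh):
    # Map a time series onto a graph: each non-overlapping window is a
    # node; two nodes are linked when their similarity exceeds 'thresh'.
    # Correlation here is an assumed stand-in for the geodesic distance
    # between covariance matrices used in the paper.
    windows = [series[i:i + win] for i in range(0, len(series) - win + 1, win)]
    n = len(windows)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(correlation(windows[i], windows[j])) >= thresh]
    return n, edges
```

A periodic target signal produces many strong links (a dense, structured graph), while background noise yields few, which is the structural contrast a detector can exploit without prior target information.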
2024, 46(1): 67-73.
doi: 10.11999/JEIT221563
Abstract:
Considering the effects of an asynchronous clock and acoustic stratification, the localization problem of an underwater target node is studied when the measurement process is disrupted by unknown noise and the anchor positions are uncertain. A time-of-flight model between underwater nodes is constructed, an interactive asynchronous communication protocol is designed, and an optimization objective function that minimizes the localization error is established. An underwater target localization algorithm based on deep reinforcement learning is proposed, and layer normalization is used to improve the generalization ability of the model. Finally, simulation and experimental results validate the effectiveness of the proposed method.
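The role of an interactive (two-way) protocol under asynchronous clocks can be shown with the classic two-way ranging identity: a constant clock offset between the two nodes cancels in the round-trip difference. This is a textbook sketch of the general principle, not the paper's protocol; the constant sound speed is an assumption (the paper models a stratified profile).

```python
SOUND_SPEED = 1500.0  # m/s, nominal; the true underwater profile is stratified

def two_way_range(t1, t2, t3, t4, c=SOUND_SPEED):
    # Interactive ranging: anchor sends at t1 and receives the reply at
    # t4 (anchor clock); target receives at t2 and replies at t3 (target
    # clock). Any constant offset between the two clocks appears in both
    # (t3 - t2) and the target-side part of (t4 - t1), so it cancels.
    tof = ((t4 - t1) - (t3 - t2)) / 2.0
    return c * tof
```

In the test below the target clock runs 5 s ahead, yet the recovered range is exact, which is why two-way exchanges remove the asynchronous-clock bias before the learning stage has to deal with it.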
2024, 46(1): 74-82.
doi: 10.11999/JEIT230149
Abstract:
Current research on the classification of ship-radiated noise using deep neural networks focuses primarily on classification performance and disregards model interpretation. To address this issue, guided backpropagation and input-space optimization are used to develop a Convolutional Neural Network (CNN) for ship-radiated noise classification, which takes a logarithmic-scale spectrum as input and is trained on the DeepShip dataset, thus presenting a visualization method for ship-radiated noise classification. Results reveal that a multiframe feature alignment algorithm enhances the visualization effect and that the deep convolutional kernels detect two types of features: line spectrum and background. Notably, the line spectrum is identified as a reliable feature for ship classification. A convolutional kernel pruning method is therefore proposed, which not only improves classification performance but also stabilizes the training process. Guided backpropagation visualizations show that the pruned CNN attends increasingly to line spectrum information.
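Kernel pruning of the kind described reduces to: score each convolutional kernel, keep the top fraction, drop the rest. The scoring below uses the L1 norm of the weights, a common proxy; the paper instead ranks kernels by their feature type (line spectrum vs. background), so treat this as an assumed simplification.

```python
def prune_kernels(kernels, keep_ratio=0.5):
    # Rank kernels (given as flat weight lists) by an importance score,
    # here the L1 norm of the weights, and keep the top 'keep_ratio'
    # fraction. Returns the surviving kernels and their original indices.
    scored = sorted(range(len(kernels)),
                    key=lambda i: -sum(abs(w) for w in kernels[i]))
    keep = max(1, int(len(kernels) * keep_ratio))
    kept = sorted(scored[:keep])
    return [kernels[i] for i in kept], kept
```

After pruning, the network is typically fine-tuned; removing low-scoring (here low-magnitude, in the paper background-responding) kernels is what concentrates the model's attention on line-spectrum features.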
2024, 46(1): 83-91.
doi: 10.11999/JEIT230183
Abstract:
In Multiple Input Multiple Output Orthogonal Time Frequency Space (MIMO-OTFS) underwater acoustic communication systems, MIMO-OTFS communication based on the Message Passing (MP) algorithm suffers from high computational complexity, which may increase equipment costs in practical applications. To solve this problem, a MIMO-OTFS equalization algorithm based on a two-dimensional Virtual Time Reversal Mirror (VTRM) is proposed, which uses the time-frequency-space focusing characteristics of the VTRM to effectively improve equalization performance. Channel estimation is performed with an Improved two-dimensional Proportionate Normalized Least Mean Square (IPNLMS) algorithm, which exploits the sparsity of the delay-Doppler-domain channel to improve convergence speed at lower computational complexity. Finally, residual inter-symbol interference is eliminated and system performance is further improved through a two-dimensional adaptive decision feedback equalization algorithm. Simulation results demonstrate the feasibility of the proposed equalization algorithm and show that it has lower complexity than the MP algorithm while achieving the same performance.
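The proportionate idea behind IPNLMS is that each tap's step size mixes a uniform NLMS part with a part proportional to the tap's current magnitude, so large (active) taps of a sparse channel adapt faster. Below is a standard one-dimensional IPNLMS update for illustration; the paper's two-dimensional delay-Doppler version and its parameter choices are not reproduced.

```python
def ipnlms_step(w, x, d, mu=0.5, alpha=0.0, eps=1e-6):
    # One IPNLMS update: per-tap gains k mix a uniform NLMS term with a
    # term proportional to |w_i| / ||w||_1, accelerating convergence on
    # sparse channels. alpha in [-1, 1] balances the two terms.
    L = len(w)
    y = sum(wi * xi for wi, xi in zip(w, x))     # filter output
    e = d - y                                    # a-priori error
    l1 = sum(abs(wi) for wi in w) + eps
    k = [(1 - alpha) / (2 * L) + (1 + alpha) * abs(wi) / (2 * l1)
         for wi in w]
    norm = sum(ki * xi * xi for ki, xi in zip(k, x)) + eps
    return [wi + mu * ki * xi * e / norm
            for wi, ki, xi in zip(w, k, x)], e
```

With alpha = -1 the update degenerates to plain NLMS; alpha near 1 is strongly proportionate, which suits the highly sparse delay-Doppler channels the abstract describes.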
Line Spectrum Enhancement of Underwater Acoustic Targets Based on a Time-Frequency Attention Network
2024, 46(1): 92-100.
doi: 10.11999/JEIT230217
Abstract:
Deep learning-based line spectrum enhancement methods have received increasing attention for improving the detection performance of passive sonar against underwater low-noise targets. Among them, Long Short-Term Memory (LSTM)-based line spectrum enhancement networks are highly flexible owing to their nonlinear processing capabilities in the time and frequency domains, but their performance requires further improvement. Therefore, a Time-Frequency Attention Network (TFA-Net) is proposed herein. The line spectrum enhancement of the LOw-Frequency Analysis Record (LOFAR) spectrum is improved by incorporating time-domain and frequency-domain attention mechanisms into LSTM networks. In TFA-Net, the time-domain attention mechanism utilizes the correlation between the hidden states of the LSTM to increase the model's attention in the time domain, while the frequency-domain attention mechanism increases attention in the frequency domain by redesigning the fully connected layer of the shrinkage sub-network in deep residual shrinkage networks as a one-dimensional convolutional layer. Compared with LSTM, TFA-Net achieves a higher system signal-to-noise ratio gain: at input signal-to-noise ratios of –3 dB and –11 dB, the gain increases from 2.17 to 12.56 dB and from 0.71 to 10.6 dB, respectively. Experimental results on simulated and real data show that TFA-Net effectively improves line spectrum enhancement of the LOFAR spectrum and addresses the problem of detecting underwater low-noise targets.
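The shrinkage sub-network mentioned above applies soft-thresholding: frequency bins whose magnitude falls below a (learned) threshold are treated as noise and zeroed, which is how background is suppressed while line-spectrum peaks survive. The sketch below fixes the threshold for illustration; in the network it is produced per channel by the attention branch.

```python
def soft_threshold(x, tau):
    # Soft-thresholding, the core shrinkage operation of deep residual
    # shrinkage networks: shrink every value toward zero by tau, zeroing
    # anything whose magnitude is below tau (assumed noise floor).
    return [max(abs(v) - tau, 0.0) * (1 if v >= 0 else -1) for v in x]
```

Applied across a LOFAR spectrum row, a strong line-spectrum bin (3.0 below) is kept, while low-level background bins are removed, raising the output signal-to-noise ratio.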
2024, 46(1): 101-108.
doi: 10.11999/JEIT230337
Abstract:
Image registration is the cornerstone of high-precision sonar interferometry. This study presents a method for registering sonar interference images that combines a Fourth-order Partial Differential Equation (FPDE) with the scale-invariant feature transform, tailored to underwater sonar targets and the specific challenges of sonar image registration. First, the scale space is established using the FPDE; this filters noise while preserving image details, improving the accuracy of feature extraction. The method then uses phase congruency information to counter false feature-point detections caused by residual noise, screening and simplifying the set of feature points. Finally, the feature-point matching strategy is optimized: an enhanced fast sample consensus strategy is proposed to rectify mismatches, increasing the number of matching point pairs and their precision and ultimately achieving precise registration of sonar interference images. Rigorous tests under both controlled conditions and in lake environments demonstrate the algorithm's superior applicability to sonar images compared with existing approaches. The root mean square error and mean square error computed post-registration with leave-one-out analysis are both under one pixel, attesting to sub-pixel registration accuracy.
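The noise-filtering role of a fourth-order PDE can be illustrated in one dimension: diffuse the Laplacian of the signal through a Perona-Malik-style conductance, which smooths high-frequency noise while attenuating less where the Laplacian is large (edges). This explicit 1-D iteration is only a conceptual sketch under assumed step size and conductance, not the paper's 2-D scale-space construction.

```python
def fpde_step(u, dt=0.1, k=1.0):
    # One explicit iteration of a 1-D fourth-order diffusion sketch:
    #   u <- u - dt * Lap( c(Lap u) * Lap u ),
    # with conductance c(l) = 1 / (1 + (l/k)^2). Periodic boundaries are
    # assumed for simplicity; dt and k are illustrative constants.
    n = len(u)
    lap = [u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]
    g = [l / (1.0 + (l / k) ** 2) for l in lap]      # c(|Lap u|) * Lap u
    lap2 = [g[(i - 1) % n] - 2 * g[i] + g[(i + 1) % n] for i in range(n)]
    return [ui - dt * li for ui, li in zip(u, lap2)]
```

A flat signal passes through unchanged, while an alternating (high-frequency) signal is damped: that selectivity is why PDE-based scale spaces preserve detail better than plain Gaussian blurring before feature extraction.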
2024, 46(1): 109-117.
doi: 10.11999/JEIT230375
Abstract:
In underwater acoustic bearings-only passive localization, current research usually represents the tracking state of the measured target by the optimal point estimate, but a point estimate cannot express directional position error information and therefore cannot provide good decision support for the actual battlefield. To address this problem, a bearings-only underwater target tracking scheme based on an Area Of Uncertainty (AOU) containing spatial error information is proposed. First, a localization algorithm based on variable weighting analysis is introduced to obtain accurate target position information; the estimated position then serves as prior knowledge for the AOU construction algorithm. Subsequently, algorithms for constructing uncertainty regions with and without filtering are employed to output the target's position uncertainty area. Statistical analysis of the AOU evaluation metrics under different simulation scenarios demonstrates that the AOU-based tracking scheme estimates the target position reliably and accurately and can effectively fulfill the target tracking task. The advantage of this approach is that the estimation results include directional position errors and confidence intervals for interval estimation, providing clear fault-tolerance and judgment regions for subsequent decision-making, and thus greater reference and practical value.
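A common way to turn a position estimate plus covariance into an uncertainty area is the confidence ellipse: eigenvalues of the 2×2 position covariance give the semi-axes and the eigenvector direction gives the orientation. This standard construction is offered as an illustration of what an AOU can contain; the paper's own construction algorithms may differ.

```python
import math

def error_ellipse(cov, conf_scale=2.4477):
    # Semi-axes and orientation of a confidence ellipse from a 2x2
    # position covariance [[a, b], [b, c]]. conf_scale = sqrt(chi2 with
    # 2 dof at 95%) ~ 2.4477, giving a 95% region. Returns
    # (major semi-axis, minor semi-axis, major-axis angle in radians).
    (a, b), (_, c) = cov
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc        # eigenvalues, l1 >= l2
    angle = 0.5 * math.atan2(2 * b, a - c)       # major-axis heading
    return conf_scale * math.sqrt(l1), conf_scale * math.sqrt(l2), angle
```

The major-axis direction is exactly the "directional position error" a point estimate cannot convey: along a bearing line the ellipse is typically elongated in range, telling the decision-maker where the estimate is least trustworthy.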
2024, 46(1): 118-128.
doi: 10.11999/JEIT230495
Abstract:
The absorption and scattering of light under water cause problems such as color cast, blur, and occlusion in underwater imaging, which affect underwater vision tasks. Traditional image enhancement methods use histogram equalization, gamma correction, and white balance to enhance underwater images, but the complementarity and correlation of the three methods when fused have received little study. Therefore, an underwater image enhancement network based on a multi-channel hybrid attention mechanism is proposed. First, a multi-channel feature extraction module extracts the contrast, brightness, and color features of the image through histogram equalization, gamma correction, and white balance branches. The three branch features are then fused to enhance their complementarity. Finally, a hybrid attention learning module deeply mines the correlation matrix of the three branches in contrast, brightness, and color, and skip connections are introduced to enhance the output image. Experimental results on multiple datasets show that the proposed method effectively corrects color cast and blur occlusion and improves the brightness of underwater images.
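Two of the three traditional branches are easy to show directly: gray-world white balance (scale each channel so its mean matches the overall mean, correcting the blue-green cast) followed by gamma correction (brightening mid-tones). This pixel-level sketch, with assumed gamma and [0, 1] values, illustrates the classical operations the network's branches build on; histogram equalization is omitted for brevity.

```python
def gray_world_gains(pixels):
    # Gray-world white balance: per-channel gains that pull each channel
    # mean to the overall gray mean, correcting underwater color cast.
    means = [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m if m else 1.0 for m in means]

def enhance(pixels, gamma=0.8):
    # White balance then gamma correction on RGB pixels in [0, 1].
    # gamma < 1 brightens mid-tones, typical for dim underwater scenes.
    g = gray_world_gains(pixels)
    return [[min(ch * gi, 1.0) ** gamma for ch, gi in zip(p, g)]
            for p in pixels]
```

On a uniformly blue-green-cast input the balanced channels become equal, which is the neutral-gray result the test below checks; the paper's contribution is learning how to fuse such branches rather than applying any one of them alone.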
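The three classical operations that feed the network's branches can be sketched in a few lines each; this is a simplified illustration (plain nested lists instead of image tensors, and the network's learned fusion is not shown):

```python
def hist_equalize(img, levels=256):
    """Contrast branch: remap integer gray levels through the CDF."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = min(c for c in cdf if c > 0)
    n = len(flat)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    return [[round((cdf[p] - cdf_min) * scale) for p in row] for row in img]

def gamma_correct(img, gamma=0.7):
    """Brightness branch: power-law mapping on values normalized to [0, 1]."""
    return [[p ** gamma for p in row] for row in img]

def gray_world_balance(channels):
    """Color branch: gray-world white balance -- scale each channel so its
    mean matches the mean over all channels."""
    means = [sum(ch) / len(ch) for ch in channels]
    target = sum(means) / len(means)
    return [[p * target / m for p in ch] for ch, m in zip(channels, means)]
```

Each branch corrects a different degradation (contrast, brightness, color cast), which is exactly why fusing them is attractive: their outputs are complementary rather than redundant.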
2024, 46(1): 129-137.
doi: 10.11999/JEIT221509
Abstract:
To examine the influence of channel errors and the fairness of energy collected by users in the 6G Internet of Things, the problem of maximizing fairness energy for Intelligent Reflecting Surface (IRS)-aided Simultaneous Wireless Information and Power Transfer (SWIPT) is studied under a signal-to-interference-plus-noise ratio limit for users, a transmission power constraint and a unit-modulus reflection phase constraint. To solve this nonconvex problem, the Schur complement and the S-procedure are used to convert the infinite-dimensional constraint into a linear matrix inequality of finite dimension; the originally intractable problem is then transformed into a standard convex optimization problem using a penalty function and successive convex approximation, and an iterative robust fairness energy harvesting algorithm is proposed. Numerical results indicate that the proposed robust optimization algorithm significantly improves the fairness of the network's harvested energy compared with previous algorithms.
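The Schur-complement step used in such robust formulations has a simple logic that can be illustrated in the smallest (scalar-block) case: a block matrix is positive semidefinite exactly when the lower-right block is positive and the Schur complement of it is nonnegative. This is a textbook sketch, not the paper's LMI:

```python
def schur_psd(a, b, c):
    """Schur-complement test for the block matrix [[a, b], [b, c]] with
    scalar blocks: it is positive semidefinite iff c > 0 and the Schur
    complement a - b**2 / c is nonnegative (degenerate case: c == 0
    forces b == 0 and a >= 0)."""
    if c > 0:
        return a - b * b / c >= 0
    return c == 0 and b == 0 and a >= 0
```

In the paper's setting the same equivalence turns a quadratic (hence nonconvex) robustness condition into a linear matrix inequality that convex solvers accept.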
Beam Configuration for Millimeter Wave Communication Systems Based on Distributed Federated Learning
2024, 46(1): 138-145.
doi: 10.11999/JEIT221536
Abstract:
Considering the complex beam configuration problem of ultra-dense millimeter wave communication systems, a Beam Management method based on Distributed Federated Learning (BMDFL) is proposed to maximize beam coverage with the limited beam resources. To address the user data security problem of traditional centralized learning, the system model is built on DFL, which reduces the leakage of users' private information. To realize intelligent beam configuration, a Double Deep Q-Network (DDQN) is introduced to train the system model, and the long-term dynamic optimization problem is transformed into the corresponding mathematical model through a Markov decision process. Simulation results demonstrate the effectiveness and robustness of the proposed method in terms of network throughput and user coverage.
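The defining detail of DDQN training is how the bootstrap target is formed: the online network selects the greedy action, while the target network evaluates it. A minimal sketch with dictionary-backed Q-tables (the state/action names are illustrative, not from the paper):

```python
def ddqn_target(reward, next_state, q_online, q_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the greedy action,
    the target network evaluates it, decoupling selection from evaluation
    to reduce the overestimation bias of vanilla DQN."""
    if done:
        return reward
    best = max(q_online[next_state], key=q_online[next_state].get)
    return reward + gamma * q_target[next_state][best]
```

In a beam-configuration MDP, states would encode user/beam status and actions would be candidate beam assignments; the target above is what the temporal-difference loss regresses each critic update toward.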
2024, 46(1): 146-154.
doi: 10.11999/JEIT221560
Abstract:
Considering the insufficient measurements of the reflection and transmission characteristics of millimeter-wave channels and the inaccuracy of existing methods for calculating the propagation coefficients of multilayer building materials, an extensive investigation of the reflection and transmission characteristics of the 40–50 GHz millimeter-wave channel is conducted for 6G Integrated Sensing And Communications (ISAC). Firstly, a method for calculating the propagation coefficients of multilayer building materials is proposed based on Fresnel theory and the shooting-and-bouncing-ray principle. Furthermore, extensive measurement campaigns at 40–50 GHz are carried out with a VNA-based millimeter-wave channel sounder to obtain the reflection and transmission coefficients of multilayer wood and glass. The results show that the measured values agree well with the theoretical values, with a propagation coefficient error of less than 0.1, which verifies the accuracy and effectiveness of the proposed method. It is also found that the resonance period and effective Brewster angle of the reflection coefficient depend on polarization, incidence angle and material thickness.
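The Fresnel building block underlying such multilayer propagation models is the single-interface reflection coefficient; the Brewster-angle behavior mentioned above falls out of it directly. A self-contained sketch (single interface only; the paper's multilayer recursion is not reproduced):

```python
import cmath, math

def fresnel(n1, n2, theta_i):
    """Single-interface Fresnel reflection coefficients for perpendicular
    (s) and parallel (p) polarization; n1, n2 may be complex (lossy
    materials), theta_i is the incidence angle in radians."""
    cos_i = cmath.cos(theta_i)
    sin_t = n1 * cmath.sin(theta_i) / n2   # Snell's law
    cos_t = cmath.sqrt(1 - sin_t ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s, r_p

# At the Brewster angle atan(n2 / n1), the p-polarized reflection vanishes.
r_s, r_p = fresnel(1.0, 1.5, math.atan(1.5))
```

For a lossy multilayer wall, the same coefficients are chained across interfaces with phase terms per layer thickness, which is where the thickness-dependent resonance period reported above originates.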
2024, 46(1): 155-164.
doi: 10.11999/JEIT221599
Abstract:
In vehicular networks, the Point Of Interest (POI) query is widely used in Location-Based Services (LBS) for vehicle applications. However, since attackers can easily access location, query content and other information, it is difficult to protect the LBS privacy of vehicle users using location privacy protection or query privacy protection alone. Therefore, a joint location-and-query privacy protection scheme based on dummy sequences is proposed. Based on the limitations of the POI query, the correlations between location privacy and query privacy are modelled to obtain a correlation judgment model characterized by Euclidean distance and an association rule algorithm. Moreover, based on dummy sequences, the joint protection is transformed into dummy sequence selection according to the factors that affect user privacy and the correlation value of the real query. A constrained multi-objective optimization model is then established to obtain a query sequence with a high level of anonymity and a large cloaking region. Experimental results demonstrate that the scheme resists joint attacks on location and query privacy and protects users' LBS privacy more efficiently than existing schemes.
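One of the optimization objectives above, the size of the cloaking region, is easy to make concrete: a common proxy is the bounding-box area spanned by the real location plus the dummies, and the dummy set is chosen to enlarge it. This is a toy stand-in for the paper's constrained multi-objective selection (names and the single-objective choice are illustrative):

```python
def cloaking_area(points):
    """Axis-aligned bounding-box area covered by the real location
    plus the dummy locations."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def pick_dummy_set(real, candidate_sets):
    """Select the candidate dummy set that maximizes the cloaking region
    around the real location -- one of several objectives (anonymity,
    query correlation) the full scheme balances jointly."""
    return max(candidate_sets, key=lambda s: cloaking_area([real] + list(s)))
```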
2024, 46(1): 165-174.
doi: 10.11999/JEIT221554
Abstract:
To address the low security and poor transmission quality caused by eavesdroppers, obstacles and channel uncertainties in cellular communication systems, a robust secure resource allocation algorithm for Intelligent Reflecting Surface (IRS)-assisted multi-antenna communication systems is proposed. Firstly, a robust resource allocation problem with bounded channel uncertainties is formulated by jointly optimizing the active beamforming of the base station and the passive beamforming of the IRS, subject to the secrecy rate constraint of legitimate users, the maximum transmit power constraint and the phase shift constraint of the IRS. Then, the original non-convex problem with parameter perturbations is transformed using the S-procedure, successive convex approximation, alternating optimization and a penalty function into a deterministic convex optimization problem that can be solved directly. Finally, an iteration-based robust energy efficiency maximization algorithm is proposed. Simulation results show that the proposed algorithm achieves good energy efficiency and strong robustness.
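The secrecy rate constrained above is a standard quantity: the legitimate user's rate minus the eavesdropper's rate, floored at zero. A one-function sketch (SINR values are illustrative inputs, not outputs of the paper's beamforming):

```python
import math

def secrecy_rate(sinr_user, sinr_eve):
    """Achievable secrecy rate in bit/(s*Hz): legitimate-user rate minus
    eavesdropper rate, floored at zero (no secure communication is
    possible when the eavesdropper's channel is better)."""
    return max(0.0, math.log2(1 + sinr_user) - math.log2(1 + sinr_eve))
```

The joint active/passive beamforming in the paper is precisely what drives `sinr_user` up and `sinr_eve` down so this difference stays above the required threshold.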
2024, 46(1): 175-183.
doi: 10.11999/JEIT230051
Abstract:
Vehicular Edge Computing (VEC) has become a promising paradigm for computation-intensive and delay-sensitive tasks. However, edge servers have limited capability to integrate renewable energy. Therefore, to improve the energy efficiency of edge servers, a green-computing-oriented vehicle collaborative task offloading framework is proposed, in which vehicles equipped with Energy Harvesting (EH) devices cooperate to perform tasks by sharing green energy and computing resources with each other. To enhance the vehicles' enthusiasm for participation, dynamic pricing is adopted as an incentive, and mobility and task priority are also considered comprehensively. To adapt offloading decisions to the dynamic environment, a Twin Delayed Deep Deterministic policy gradient (TD3) based task offloading method is proposed to maximize the average task completion utility of all vehicles while reducing the use of grid power. Finally, simulation results verify the effectiveness of the proposed method, whose performance improves by 7.34% and 37.47% over the Deep Deterministic Policy Gradient (DDPG) based method and the Greedy Principle Execution (GPE) method, respectively.
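TD3's advantage over DDPG (the baseline it beats above) comes from two small mechanisms: clipped double-Q targets and target policy smoothing. Both fit in a few lines; this is a generic textbook sketch, not the paper's full offloading agent:

```python
import random

def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """TD3 critic target: clipped double-Q learning takes the minimum of
    the two target critics to curb value overestimation."""
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)

def smoothed_target_action(action, low=-1.0, high=1.0,
                           noise_std=0.2, noise_clip=0.5):
    """Target policy smoothing: add clipped Gaussian noise to the target
    policy's action, then clip back into the valid action range."""
    eps = max(-noise_clip, min(noise_clip, random.gauss(0.0, noise_std)))
    return max(low, min(high, action + eps))
```

In the offloading setting, the continuous action would encode decisions such as the offloading ratio or purchased green-energy amount, and the reward would be the task-completion utility net of grid-power cost.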
2024, 46(1): 184-194.
doi: 10.11999/JEIT230015
Abstract:
To solve the problem that Multicast Request flows (MRs) must sequentially traverse a Service Function Tree (SFT) consisting of Virtual Network Functions (VNFs) while meeting the stringent delay and jitter constraints of the SFT in Network Function Virtualization (NFV)-enabled Software-Defined Networks (SDNs), a routing algorithm that constructs a multicast SFT via depth-first search with an optimal link selection function is proposed. Firstly, relative cost functions of network resources are proposed to guarantee automatic load balancing of the network. Secondly, an Integer Linear Programming (ILP) model for dynamic SFT embedding is constructed by jointly considering network resources, dynamic VNF placement and the delay and jitter constraints of a multicast flow. Finally, for this NP-hard problem, an auxiliary edge-weight graph and an optimal link selection function are designed for routing path selection, and a delay- and jitter-aware SFT Embedding Algorithm (SFT-EA) is proposed with the objective of minimizing the resource consumption cost. Simulation results demonstrate that SFT-EA performs better in terms of throughput, traffic acceptance rate and network load balance.
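The core search pattern, depth-first exploration of an edge-weighted graph while a link cost function steers the choice, can be sketched as follows. The scalar cost here is a stand-in for the paper's relative-cost/optimal-link-selection function, and delay/jitter constraints are omitted for brevity:

```python
def best_path(graph, src, dst):
    """Depth-first search over a cost-weighted graph, keeping the path
    with minimum total link cost. graph: {u: {v: cost}}."""
    best = {"cost": float("inf"), "path": None}

    def dfs(node, visited, cost, path):
        if cost >= best["cost"]:
            return  # prune: this branch cannot improve on the incumbent
        if node == dst:
            best["cost"], best["path"] = cost, path[:]
            return
        for nxt, c in graph.get(node, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                dfs(nxt, visited, cost + c, path + [nxt])
                visited.remove(nxt)

    dfs(src, {src}, 0, [src])
    return best["path"], best["cost"]
```

An SFT is then assembled from such per-destination paths, with VNF placement and the delay/jitter budget constraining which links the selection function may pick.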
2024, 46(1): 195-203.
doi: 10.11999/JEIT221517
Abstract:
Because of the information explosion caused by the surge of data, traditional centralized cloud computing is overwhelmed, and the Edge Computing Network (ECN) has been proposed to alleviate the burden on cloud servers. Furthermore, by permitting Federated Learning (FL) in the ECN, data can be processed locally, successfully addressing the data security problem of Edge Nodes (ENs) in collaborative learning. However, traditional FL exposes the central server to single-point attacks, resulting in system performance degradation or even task failure. In this paper, Asynchronous Federated Learning based on blockchain technology (AFLChain) is proposed for the ECN; it dynamically assigns learning tasks to ENs based on their computing capabilities to boost learning efficiency. In addition, an entropy weight reputation mechanism based on the computing capability, model training progress and historical reputation of ENs is implemented to assess and rank their enthusiasm, eliminating low-quality ENs to further improve the performance of AFLChain. Finally, a Subgradient-based Optimal Resource Allocation (SORA) algorithm is proposed to reduce network latency by jointly optimizing transmission power and computing resource allocation. Simulation results demonstrate the model training efficiency of AFLChain, the convergence of the SORA algorithm, and the efficacy of the proposed algorithms.
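The entropy weight method behind the reputation mechanism is a standard objective-weighting technique: indicators whose values are more dispersed across nodes (lower normalized entropy) receive larger weights. A self-contained sketch (the indicator names are examples; the paper's exact normalization may differ):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows are edge nodes, columns are indicators
    (e.g. computing capability, training progress, historical reputation).
    Indicators with more dispersion (lower entropy) get larger weights."""
    m = len(matrix)
    n = len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # Entropy normalized by log(m) so it lies in [0, 1].
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        divergences.append(1.0 - entropy)
    s = sum(divergences)
    return [d / s for d in divergences]
```

A node's reputation score is then the weighted sum of its (normalized) indicator values, and the lowest-scoring ENs are the ones eliminated.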
Robust Resource Allocation Algorithm in MU-MISO Backscatter Communication Systems with Eavesdroppers
2024, 46(1): 204-212.
doi: 10.11999/JEIT221508
Abstract:
Focusing on the problems of inaccurate channel estimation and easy eavesdropping of information in backscatter communication systems, a robust resource allocation algorithm for Multi-User Multi-Input Single-Output (MU-MISO) backscatter communication systems in the presence of eavesdropping users is proposed to improve the transmission robustness and information security of such systems. Firstly, considering constraints on the maximum base station power, time allocation, channel uncertainties, energy harvesting and secrecy rate, a robust resource allocation problem for MU-MISO backscatter communication systems is formulated. Secondly, based on a nonlinear energy harvesting model and a bounded spherical uncertainty model, the original NP-hard problem is transformed into a deterministic one using variable relaxation and the S-procedure, and then into a convex optimization problem using successive convex approximation, semidefinite relaxation and block coordinate descent. Simulation results show that the proposed algorithm achieves higher system capacity and lower outage probability than the traditional non-robust algorithm.
2024, 46(1): 213-221.
doi: 10.11999/JEIT221577
Abstract:
To improve monopulse angle measurement performance for a rectangular array when both mainlobe and sidelobe jamming exist, a Two-Dimensional Hierarchical Joint Adaptive Digital BeamForming (TDHJ-ADBF) method is proposed in this paper. The beamforming of the TDHJ-ADBF method is divided into two stages. In the first stage, the mainlobe jamming in the angle-measurement dimension is quickly estimated by a compressed multiple signal classification algorithm and pre-eliminated by a blocking matrix; the adaptive weights and beams are then calculated under the joint constraint of beam direction and the linearity of the monopulse response curve. In the second stage, the residual mainlobe jamming is further suppressed in the orthogonal dimension. Thereby, the mainlobe and sidelobe jamming are jointly suppressed while the linearity of the monopulse response curve is maintained. Simulation results demonstrate that the TDHJ-ADBF method exhibits excellent jamming suppression capability and high angle-measurement precision.
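The blocking-matrix step used for pre-elimination has a classical form: projecting the data onto the subspace orthogonal to the estimated jammer steering vector. A minimal sketch with plain nested lists (the paper's estimation of the steering vector and the second-stage processing are not shown):

```python
def blocking_matrix(a):
    """Blocking matrix B = I - a a^H / (a^H a) for a jammer steering
    vector a: B projects data onto the subspace orthogonal to a, so
    B @ a = 0 and the estimated mainlobe jamming is pre-eliminated."""
    n = len(a)
    norm2 = sum(abs(x) ** 2 for x in a)
    return [[(1.0 if i == j else 0.0) - a[i] * a[j].conjugate() / norm2
             for j in range(n)] for i in range(n)]

def matvec(mat, vec):
    """Matrix-vector product for plain nested lists of complex numbers."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]
```

After blocking, the adaptive weights are computed on jammer-free data, which is what preserves the linearity of the monopulse response curve near boresight.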
2024, 46(1): 222-228.
doi: 10.11999/JEIT221537
Abstract:
Dual-Functional Radar-Communication (DFRC) is one of the most promising technologies for relieving future network spectrum congestion. In this paper, Reconfigurable Intelligent Surface (RIS) technology is introduced to improve the weighted sum rate of users and the detection performance of the system. First, under the radar power constraint, the constant-modulus constraint of the RIS and the overall communication power budget, an optimization model is built to maximize the weighted sum rate of communication users and the detection performance of the system. By jointly optimizing the active beamforming of the base station and the passive beamforming of the RIS, an effective alternating optimization algorithm based on weighted minimum mean square error, fractional programming and manifold optimization is designed; the non-convex optimization problem is decomposed into two subproblems and solved iteratively. Simulation results show that the proposed scheme solves the problem effectively, the weighted sum rate converges within a small number of iterations, the upper limit of the users' weighted sum rate increases by 0.86 bit/(s·Hz), and the system detection becomes more directional.
2024, 46(1): 229-239.
doi: 10.11999/JEIT230039
Abstract:
A multi-target parameter estimation method based on overlapping-element MIMO arrays is presented to address Doppler-angle coupling and velocity ambiguity. Based on the virtual aperture principle, overlapping elements are introduced into the traditional MIMO antenna array to construct an overlapping-element MIMO antenna array. Angle parameters are estimated by introducing a cyclic iteration into the angular Fast Fourier Transform (FFT) algorithm, and the Doppler frequency is estimated from the phase difference of the overlapping-element echo signals. A spectral shift method is introduced to convert the velocity interval, achieving multi-target range and velocity estimation. In Monte Carlo simulations at a 15 dB signal-to-noise ratio, the ambiguity resolution accuracy is 100%, the velocity error is 0.1 m/s, and the angle error is 0.1°. Tests on a self-collected urban road dataset show that the method accurately estimates the velocity and angle of vehicle targets and meets the real-time and accuracy requirements of traffic radar for vehicle information monitoring.
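The phase-difference velocity estimate exploited above rests on a simple relation: the two-way echo phase is 4πR/λ, so two co-located (overlapping) virtual elements sampled dt apart see a phase difference of 4πv·dt/λ. A hedged sketch (the function and parameter values are illustrative, not the paper's implementation):

```python
import math

def doppler_velocity(phi1, phi2, dt, wavelength):
    """Radial velocity from the echo phase difference of two overlapping
    (co-located virtual) elements whose samples are dt seconds apart:
    v = wavelength * dphi / (4 * pi * dt).
    The estimate is unambiguous only while |dphi| < pi."""
    dphi = math.remainder(phi2 - phi1, 2 * math.pi)  # wrap to [-pi, pi]
    return wavelength * dphi / (4 * math.pi * dt)
```

When the true phase difference exceeds π it wraps, which is exactly the velocity ambiguity the paper's spectral shift method is designed to resolve.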
2024, 46(1): 240-248.
doi: 10.11999/JEIT221539
Abstract:
Focusing on the problems of array calibration and beamforming for Coprime Location Arrays (CLAs), a new beamforming algorithm, termed CLA-SILAC-INCM, is proposed for partly calibrated CLAs by exploiting the Simultaneous Interference Localization and Array Calibration (SILAC) technique. Theoretical analysis shows that when the CLA contains no fewer than three fully calibrated antenna elements, highly accurate and unambiguous estimates of the interference direction and the array gain-phase error vector can be obtained using SILAC. Afterward, the Interference-plus-Noise Covariance Matrix (INCM) is reconstructed and the optimal beamforming weight vector is computed. Simulation results show that the proposed CLA-SILAC-INCM algorithm outperforms existing algorithms, especially when the signal-to-noise ratio is close to the interference-to-noise ratio.
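Once the INCM is reconstructed, the optimal weights take the familiar MVDR form w = R⁻¹a / (aᴴR⁻¹a). A minimal two-element sketch using an explicit 2x2 inverse (plain complex lists; the reconstruction step itself is not shown):

```python
def mvdr_weights_2x2(R, a):
    """MVDR weights w = R^-1 a / (a^H R^-1 a) for a two-element array,
    with R the (reconstructed) interference-plus-noise covariance matrix
    and a the desired-signal steering vector. The distortionless
    constraint w^H a = 1 holds by construction."""
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    inv = [[r22 / det, -r12 / det], [-r21 / det, r11 / det]]
    ra = [inv[0][0] * a[0] + inv[0][1] * a[1],
          inv[1][0] * a[0] + inv[1][1] * a[1]]
    denom = a[0].conjugate() * ra[0] + a[1].conjugate() * ra[1]
    return [ra[0] / denom, ra[1] / denom]
```

Reconstructing R from the localized interference (rather than using the sample covariance, which contains the desired signal) is what preserves performance when SNR approaches INR.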
2024, 46(1): 249-257.
doi: 10.11999/JEIT221507
Abstract:
Cylindrical millimeter-wave Synthetic Aperture Radar (CSAR) is one of the important technologies in the field of close-range non-contact imaging. High-resolution imaging algorithms based on Fourier transform theory require Two-Dimensional (2D) interpolation to eliminate the non-uniformity of the wavenumber-domain data in both the azimuth and range dimensions. However, these two dimensions are strongly coupled, producing a concentric-circle-shaped filling of the wavenumber domain. As a result, the traditional interpolation method based on a 2D point-by-point traversal has high time complexity, making the imaging algorithm inefficient. Therefore, an interpolation decomposition method on a concentric square mesh is proposed by deriving the CSAR imaging algorithm from its analytical solution. Through zero padding, radial 1D interpolation, and partitioning, the strong coupling between the azimuth and range dimensions in the wavenumber domain is eliminated. Uniform resampling of the 2D non-uniform wavenumber domain is achieved by two independent 1D interpolations over two non-overlapping partitions, yielding the expected concentric-square-belt filling of the wavenumber domain. Experimental results demonstrate that the proposed method effectively reduces the time complexity of straightforward 2D interpolation and increases the efficiency of the imaging algorithm: its interpolation speed is seven times that of the traditional algorithm, consistent with the theoretical complexity analysis.
2024, 46(1): 258-266.
doi: 10.11999/JEIT221506
Abstract:
Existing fault diagnosis methods developed for the Air Handling Unit (AHU) of Heating, Ventilation and Air Conditioning (HVAC) systems tend to be centralized. The few distributed methods usually require solving a large number of time-consuming optimization problems, making timely fault diagnosis impossible. In response to these challenges, a distributed fault diagnosis method based on a novel voting mechanism is proposed. In this method, a Boltzmann machine is established to describe the sensor network, the edge weights of the Boltzmann machine are determined through mutual voting among sensors, and the state of the Boltzmann machine, which is also the state of the sensors, is iterated based on the edge weights to locate sensor faults. Moreover, a novel voting strategy based on Euclidean distance is designed to determine the voting values. Additionally, a method is developed to reset the Boltzmann machine's weight matrix by adding a node to the Boltzmann machine, which maintains the original voting relationship among the sensors while symmetrizing the Boltzmann machine to ensure convergence of the state iteration. This method does not require solving a large number of optimization problems, so its computational requirements are lower than those of existing distributed methods. The proposed method is validated using actual data provided by ASHRAE Project RP-1312. The experimental results show that the proposed method can accurately and efficiently diagnose bias and drift faults in AHU sensors.
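The voting-and-iteration idea can be sketched in a few lines. The distance threshold and the ±1 vote rule below are hypothetical simplifications of the paper's Euclidean-distance voting strategy: each pair of sensors votes on each other from the distance between their reading sequences, the symmetric vote matrix serves as the machine's edge weights, and iterating the states flags faulty sensors.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def vote_weights(readings, threshold):
    """Each sensor pair votes +1 (agree) or -1 (disagree) based on the
    Euclidean distance between their reading sequences. The matrix is
    symmetric with a zero diagonal, mirroring the symmetrization that
    guarantees convergence of the state iteration."""
    n = len(readings)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            v = 1.0 if euclidean(readings[i], readings[j]) < threshold else -1.0
            w[i][j] = w[j][i] = v
    return w

def locate_faults(readings, threshold, iters=20):
    """Iterate sensor states under the voting weights; sensors whose
    state settles at -1 are flagged as faulty."""
    n = len(readings)
    w = vote_weights(readings, threshold)
    s = [1] * n
    for _ in range(iters):
        s = [1 if sum(w[i][j] * s[j] for j in range(n) if j != i) >= 0 else -1
             for i in range(n)]
    return [i for i in range(n) if s[i] == -1]

# Three healthy sensors tracking the same signal, one with a bias fault.
healthy = [1.0, 1.1, 0.9, 1.0]
readings = [healthy, healthy, healthy, [x + 5.0 for x in healthy]]
print(locate_faults(readings, threshold=1.0))  # → [3]
```

Because the consistent majority reinforces itself through positive votes while the biased sensor receives only negative votes, its state is driven to -1 within a couple of iterations.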
2024, 46(1): 267-276.
doi: 10.11999/JEIT221562
Abstract:
The technology for detecting infrared dim and small targets against a sky background is relatively mature. However, detecting these targets in complex near-ground backgrounds poses challenges such as low accuracy, high false alarm rates, and poor real-time performance. To address these problems, a novel algorithm for detecting infrared dim and small targets based on an improved top-hat transform, referred to as OTHOLCM, is proposed in this study. The algorithm uses an image preprocessing method, OTH, based on an improved top-hat transform to enhance the target and suppress the background, employing different strategies for images with different gray values. Additionally, the algorithm uses an infrared dim and small target detection technique, OLCM, based on improved multi-scale local contrast. OLCM uses target size characteristics to expand the target detection range while ensuring real-time performance. Experimental results show that the OTHOLCM algorithm guarantees good real-time performance, improves target detection accuracy, and reduces false alarms. Compared with advanced algorithms such as the three-layer template local difference measurement algorithm and the edge- and corner-awareness-based spatial-temporal tensor, the OTHOLCM algorithm increases the true positive rate by almost 79% and 61%, respectively, and reduces the false positive rate by nearly 77% and 73%, respectively. Moreover, the target detection speed reaches 25 frames per second.
2024, 46(1): 277-286.
doi: 10.11999/JEIT221502
Abstract:
Image super-resolution reconstruction has important applications in public safety detection, satellite imaging, medicine, and photo restoration. In this paper, super-resolution reconstruction methods based on generative adversarial networks are investigated. Starting from the Real-ESRGAN (Real-world blind Enhanced Super-Resolution Generative Adversarial Network trained on pure synthetic data) method, a double-UNet3+-discriminator Real-ESRGAN (DU3-Real-ESRGAN) method is proposed. Firstly, a UNet3+ structure is introduced in the discriminator to capture fine-grained details and coarse-grained semantics at full scale. Secondly, a dual-discriminator structure is adopted, with one discriminator learning image texture details and the other focusing on image edges, so that the two provide complementary image information. Compared with several generative-adversarial-network-based methods on the Set5, Set14, BSD100, and Urban100 datasets, the Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM), and Natural Image Quality Evaluator (NIQE) values of the DU3-Real-ESRGAN method are superior to those of the other methods on all datasets except Set5, producing more intuitive and realistic high-resolution images.
2024, 46(1): 287-298.
doi: 10.11999/JEIT221582
Abstract:
Multiple Unmanned Ground Vehicle (multi-UGV) dispersion is commonly used in military combat missions. Existing conventional dispersion methods are complex, time-consuming, and of limited applicability. To address these problems, a multi-UGV dispersion strategy based on the AUction Multi-Agent Deep Deterministic Policy Gradient (AU-MADDPG) algorithm is proposed. Building on the single-unmanned-vehicle model, a multi-UGV dispersion model is established based on deep reinforcement learning. The MADDPG structure is then optimized: the auction algorithm assigns each unmanned vehicle the dispersion point that minimizes the total path length, reducing the randomness of dispersion-point allocation, and paths are planned with the MADDPG algorithm to improve training and running efficiency. The reward function is optimized by accounting for constraints both during and at the end of the training process, converting the multi-constraint problem into a reward-function design problem so that the reward function can be maximized. The simulation results show that, compared with the traditional MADDPG algorithm, the proposed algorithm reduces training time by 3.96% and total path length by 14.5%, solves dispersion problems more effectively, and can serve as a general solution for dispersion problems.
2024, 46(1): 299-307.
doi: 10.11999/JEIT221580
Abstract:
To enhance the denoising performance of the unsupervised Deep Image Prior (DIP) model, an improved approach known as the Improved Deep Image Prior (IDIP) is proposed, which comprises sample generation and sample fusion modules and leverages hybrid internal and external image priors along with image fusion techniques. In the sample generation module, two representative denoising models, capturing internal and external priors respectively, process the noisy image to produce two initial denoised images. Subsequently, a spatially random mixer is applied to these initial denoised images to generate a sufficient number of mixed images. These mixed images, along with the noisy image, form dual-target images with a 50% mixing ratio. Furthermore, executing the standard DIP denoising process multiple times with different random inputs and dual-target images generates a set of diverse sample images with complementary characteristics. In the sample fusion module, to enhance randomness and stability, 50% of the sample images are randomly discarded using dropout. Next, an unsupervised fusion network performs adaptive fusion on the remaining sample images. The resulting fused image exhibits better image quality than the individual sample images and serves as the final denoised output. The experimental results on artificially generated noisy images reveal that the IDIP model is effective, with an improvement of approximately 2 dB in Peak Signal-to-Noise Ratio (PSNR) over the original DIP model. Moreover, the IDIP model outperforms other unsupervised denoising models by a significant margin and approaches the performance level of supervised denoising models. When evaluated on real-world noisy images, the IDIP model exhibits superior denoising performance to the compared methods, verifying its robustness.
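The spatially random mixer that builds the dual-target images can be sketched as follows. Pure-Python nested lists stand in for image arrays here; the 50% ratio matches the abstract, while the seeded RNG is an illustrative detail:

```python
import random

def random_mix(img_a, img_b, ratio=0.5, seed=0):
    """Spatially random mixer: each pixel is drawn from img_a with
    probability `ratio` and from img_b otherwise, producing one mixed
    image per random mask."""
    rng = random.Random(seed)
    return [[a if rng.random() < ratio else b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two toy "denoised images": every mixed pixel comes from one of them.
img_a = [[1, 1], [1, 1]]
img_b = [[0, 0], [0, 0]]
mixed = random_mix(img_a, img_b, seed=42)
print(all(p in (0, 1) for row in mixed for p in row))  # → True
```

Calling `random_mix` with different seeds yields the set of mixed images that, paired with the noisy image, serve as dual targets for the repeated DIP runs.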
2024, 46(1): 308-316.
doi: 10.11999/JEIT230023
Abstract:
Age of Information (AoI) is an emerging time-related indicator in the industry. It is often used to evaluate the freshness of received data. Considering a multi-cluster live streaming system with mixed video data and environmental data, a scheduling policy is formulated to jointly optimize the system data value and AoI. To overcome the problem that the effective solution to the optimization problem is difficult to achieve due to the action space being too large, the scheduling policy of the optimization problem is decomposed into two interrelated internal layer and external layer policies. The external layer policy utilizes deep reinforcement learning for channel allocation between clusters. The internal layer policy implements the link selection in the cluster on the basis of the constructed virtual queue. The two-layer policy embeds the internal layer policy of each cluster into the external layer policy for training. Simulation results show that compared with the existing scheduling policy, the proposed scheduling policy can increase the time-averaged data value of received data and reduce the time-averaged AoI.
2024, 46(1): 317-326.
doi: 10.11999/JEIT221441
Abstract:
A widely recognized standard for evaluating the performance of neural encoding models has yet to be established. Most current evaluation methods measure the similarity between the encoded responses produced by neural encoding models and real physiological responses. A method to validate the performance of neural encoding models through neural decoding is proposed. Using this method, a visual encoding validation framework including both traditional metrics and the proposed method is constructed and experimentally validated on a physiological dataset of Retinal Ganglion Cell (RGC) spike signals collected from salamanders under dynamic visual stimuli. Three neural encoding models capable of encoding the spike responses to dynamic visual stimuli are evaluated, and a neural decoding model with state-of-the-art performance is selected as the standard decoding model. The experiments comprehensively measure the neural encoding performance of the three models under different neural encoding methods and from different perspectives. In addition, the experimental results show that the two neural encoding methods, rate coding and spike-count coding, have non-negligible effects on neural encoding performance.
2024, 46(1): 327-334.
doi: 10.11999/JEIT221593
Abstract:
RAIN is a lightweight block cipher with an SPN structure that combines strong security with efficient software and hardware implementation. Meet-in-the-middle attacks are widely used in the security analysis of block cipher algorithms. In this paper, meet-in-the-middle attacks on RAIN are studied. By examining the structural characteristics and truncated-differential properties of RAIN-128, 4-round and 6-round meet-in-the-middle distinguishers are first constructed using the differential enumeration technique, and meet-in-the-middle attacks on 8-round and 10-round RAIN-128 are presented, respectively. For the 8-round attack, the precomputation takes 2^68 8-round encryptions with a memory complexity of 2^75 bits, and the online phase takes 2^109 8-round encryptions with a data complexity of 2^72 chosen plaintexts. For the 10-round attack, the precomputation takes 2^214 10-round encryptions with a memory complexity of 2^219 bits, and the online phase takes 2^109 10-round encryptions with a data complexity of 2^72 chosen plaintexts. The results show that RAIN-128 resists meet-in-the-middle attacks with a large security margin.
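The time-memory tradeoff behind those complexity figures is the classic meet-in-the-middle strategy, which can be sketched on a toy 8-bit double-encryption cipher. The `enc`/`dec` round below is an invertible toy map, unrelated to RAIN:

```python
def enc(k, x):            # toy round: add key, multiply by 3 mod 256
    return ((x + k) * 3) % 256

def dec(k, y):            # inverse: multiply by 171 (= 3^-1 mod 256), subtract key
    return (y * 171 - k) % 256

def mitm(p, c):
    """Meet-in-the-middle on double encryption c = enc(k2, enc(k1, p)):
    tabulate all forward half-encryptions of p, then match backward
    half-decryptions of c — about 2*2^8 cipher calls plus 2^8 memory,
    instead of 2^16 brute force over (k1, k2)."""
    table = {}
    for k1 in range(256):
        table.setdefault(enc(k1, p), []).append(k1)
    return [(k1, k2) for k2 in range(256)
            for k1 in table.get(dec(k2, c), [])]

p = 0x42
c = enc(200, enc(77, p))
print((77, 200) in mitm(p, c))  # → True
```

Because the toy composition is affine, many equivalent key pairs survive a single plaintext/ciphertext match; a second pair would filter them, and in the real attack the distinguisher plays that filtering role.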
2024, 46(1): 335-343.
doi: 10.11999/JEIT230001
Abstract:
A negabent function is a Boolean function with optimal autocorrelation and high nonlinearity, which has been widely used in cryptography, coding theory, and combinatorial design. In this paper, by combining the trace function on a finite field with permutation polynomials, two methods for constructing negabent functions are proposed. Both kinds of constructed negabent functions take the form Tr_1^k(λx^{2^k+1}) + Tr_1^n(ux)Tr_1^n(vx) + Tr_1^n(mx)Tr_1^n(dx). In the first construction method, negabent functions can be obtained by adjusting three of the parameters λ, u, v, m; in particular, when λ ≠ 1, (2^{n-1} - 2)(2^n - 1)(2^n - 4) negabent functions can be obtained. In the second construction method, negabent functions can be obtained by adjusting four of the parameters λ, u, v, m, d; in particular, when λ ≠ 1, at least 2^{n-1}[(2^{n-1} - 2)(2^{n-1} - 3) + 2^{n-1} - 4] negabent functions can be obtained.
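Negabentness can be checked directly from the definition: f is negabent iff its nega-Hadamard spectrum is flat, i.e. |N_f(u)| = 1 for all u. The brute-force checker below uses the standard transform and verifies the known fact that affine functions are negabent; it does not implement the paper's trace-based constructions:

```python
import itertools

def nega_hadamard(f, n):
    """Nega-Hadamard transform:
    N_f(u) = 2^(-n/2) * sum_x (-1)^(f(x) + u.x) * i^wt(x),
    where wt(x) is the Hamming weight of x."""
    pts = list(itertools.product([0, 1], repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ui * xi for ui, xi in zip(u, x)) % 2))
                * 1j ** sum(x)
                for x in pts) / 2 ** (n / 2)
            for u in pts]

def is_negabent(f, n):
    """f is negabent iff its nega-Hadamard spectrum is flat."""
    return all(abs(abs(v) - 1.0) < 1e-9 for v in nega_hadamard(f, n))

# Every affine function is negabent; the quadratic x1*x2 on n = 2 is not.
print(is_negabent(lambda x: x[0] ^ x[1], 2))   # → True
print(is_negabent(lambda x: x[0] & x[1], 2))   # → False
```

For the small n where exhaustion is feasible, such a checker is a convenient way to confirm candidate constructions before attempting a general proof.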
2024, 46(1): 344-352.
doi: 10.11999/JEIT221446
Abstract:
To solve the problem that existing elliptic curve cryptography scalar multipliers struggle to balance flexibility and area efficiency, a high-area-efficiency scalar multiplier based on bit-reorganization fast modular reduction is designed. Firstly, according to the operational characteristics of elliptic curve scalar multiplication, a hardware-multiplexed operation unit that realizes both multiplication and modular inversion is designed to improve hardware resource utilization, and the Karatsuba-Ofman algorithm is used to improve calculation performance. Secondly, a fast modular reduction algorithm based on bit reorganization is designed, and a hardware architecture supporting fast modular reduction for secp256k1, secp256r1, and SCA-256 (the curve recommended by the SM2 standard) is implemented. Finally, the scheduling of modular operations for point addition and point doubling is optimized to improve the utilization of the multiplication and fast modular reduction units and reduce the number of cycles required for scalar multiplication. The designed scalar multiplier requires 275 k equivalent gates in 55 nm CMOS technology, performs 48,309 scalar multiplications per second, and achieves an area-time product of 5.7.
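Two of the building blocks are easy to sketch in software: Karatsuba-Ofman multiplication, which trades one half-size multiplication for a few extra additions, and fast modular reduction by folding the high bits, shown here for secp256k1 using 2^256 ≡ 2^32 + 977 (mod p). The folding shown is the generic idea, not the paper's bit-reorganization variant:

```python
P = 2**256 - 2**32 - 977   # the secp256k1 prime

def karatsuba(x, y):
    """Karatsuba-Ofman: one n-bit product from three ~n/2-bit products,
    z = z2*2^(2h) + z1*2^h + z0 with z1 = (xh+xl)(yh+yl) - z2 - z0."""
    if x < 2**32 or y < 2**32:               # small base case
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    z2, z0 = karatsuba(xh, yh), karatsuba(xl, yl)
    z1 = karatsuba(xh + xl, yh + yl) - z2 - z0
    return (z2 << (2 * half)) + (z1 << half) + z0

def fast_reduce(c):
    """Reduce c < P^2 modulo P by folding the high 256-bit word down,
    since 2^256 ≡ 2^32 + 977 (mod P); a couple of folds plus final
    subtractions replace a full division."""
    while c >> 256:
        c = (c >> 256) * (2**32 + 977) + (c & (2**256 - 1))
    while c >= P:
        c -= P
    return c

a, b = P - 12345, P - 67890
print(fast_reduce(karatsuba(a, b)) == (a * b) % P)  # → True
```

In hardware these recursions are unrolled into fixed-width multiplier and adder trees; the per-curve reduction then differs only in which shifted copies of the high word are added back, which is what lets one datapath serve all three curves.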
2024, 46(1): 353-361.
doi: 10.11999/JEIT221479
Abstract:
The quasi-superdirective reradiation based on the magnetic dipole resonance occurring within a passive dielectric resonator is presented to augment the backscattering cross-section of thin conducting plates at edge-on incidence. It is demonstrated that a hybrid electromagnetic resonance mode reradiating like a magnetic dipole can be induced within a properly dimensioned cuboid dielectric under the illumination of a plane electromagnetic wave. Using this cuboid dielectric as a basic unit cell, a supercell consisting of two identical dielectrics closely cascaded along the propagation direction of the impinging wave is formed. It is observed that both the magnetic and electric fields induced within the two dielectrics of the supercell exhibit opposite senses along with almost equal magnitudes. Because of the internal field distribution with opposite phases and almost equal magnitudes, the supercell acts as a two-element quasi-superdirective magnetic dipole array and the resultant quasi-superdirective reradiation effectively contributes to the backscattering cross-section enhancement. Further, the supercells with halved profile are loaded onto the surfaces of thin conducting plates according to the image theory. The results indicate that the magnetic-dipole-based quasi-superdirective reradiation assisted by dielectric resonators with a profile of only 0.078λ0 noticeably modifies the edge-on scattering characteristic of thin conducting plates. Effective augmentation of backscattering cross-section for edge-on incidence is therefore achieved within relatively wide band and angular ranges.
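The quasi-superdirective behavior described above is consistent with the textbook array factor of two closely spaced anti-phase elements. As an illustrative relation (not taken from the paper), two elements of equal magnitude and opposite phase separated by a small distance d along the incidence direction give

```latex
\left|AF(\theta)\right| = \left|1 - e^{\,jkd\cos\theta}\right|
                        = 2\left|\sin\!\left(\tfrac{kd}{2}\cos\theta\right)\right|
                        \approx kd\left|\cos\theta\right| \quad (kd \ll 1),
```

which vanishes broadside (θ = 90°) and peaks along the array axis (θ = 0°), i.e., end-fire radiation back toward the source, which is the sense in which the supercell reinforces backscattering.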
2024, 46(1): 362-372.
doi: 10.11999/JEIT230012
Abstract:
As the feature size of CMOS devices enters the nanoscale, circuit failures caused by high-energy particle radiation are becoming increasingly serious, posing severe challenges to circuit reliability. There is thus an urgent need to evaluate integrated-circuit reliability accurately and to reinforce circuit fault tolerance, so as to improve the reliability of circuit systems. However, the large number of fan-out reconvergence structures in logic circuits introduces signal correlations that complicate reliability evaluation and critical-gate location. This paper proposes a critical-gate location algorithm for logic circuits based on correlation separation. First, the circuit is divided into multiple Independent Circuit Structures (ICS); second, fault propagation and signal correlation are analyzed with the ICS as the basic unit; then, the correlation-separated circuit modules and a reverse search algorithm are used to accurately locate the critical gates; finally, critical-gate location and targeted fault-tolerant reinforcement over the input vector space are considered jointly. Experimental results show that the proposed algorithm locates the critical gates of logic circuits accurately and efficiently, and is suitable for reliability evaluation and efficient fault-tolerant design of large-scale and very-large-scale circuits.
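The fan-out reconvergence structures that motivate the correlation-separation step can be found mechanically. The following sketch (an assumed graph data model, not the paper's algorithm) detects reconvergent fan-out stems in a combinational circuit given as a DAG mapping each gate to its successors; such stems are exactly where fault-propagation paths become correlated.

```python
# Illustrative sketch: find gates whose fan-out branches reconverge
# downstream, i.e., the sources of signal correlation in a logic DAG.
from itertools import combinations

def reachable(circuit, start):
    """Set of gates reachable from `start` (excluding start itself)."""
    seen, stack = set(), [start]
    while stack:
        for nxt in circuit.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def reconvergent_stems(circuit):
    """Gates with fan-out > 1 whose distinct branches meet again."""
    stems = set()
    for gate, outs in circuit.items():
        if len(outs) < 2:
            continue
        for a, b in combinations(outs, 2):
            cone_a = reachable(circuit, a) | {a}
            cone_b = reachable(circuit, b) | {b}
            if cone_a & cone_b:      # branch cones intersect downstream
                stems.add(gate)
                break
    return stems
```

For example, a stem `s` feeding `g1` and `g2` that both drive `g3` is flagged, while a pure chain has no stems; partitioning the circuit so that each reconvergent region stays inside one block is one way to obtain independent structures for separate analysis.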
2024, 46(1): 373-382.
doi: 10.11999/JEIT240000
Abstract:
The electronics and technology area in Division I of the Information Science Department of the National Natural Science Foundation of China covers the research fields of electrical circuits and systems, electromagnetic fields and waves, and electronics and its applications. This report presents application and funding statistics for several types of projects classified into the talent and exploratory funding categories. The analysis covers various aspects, including application codes, age of applicants, host institutions, and the trend over the past five years. It aims to provide general guidance for researchers on the hot topics and future development directions in this area.