2023 Vol. 45, No. 8
Columns
- Cover
- Special Topic on Brain-inspired Visual Spatiotemporal Information Perception
- Special Topic on Effectiveness Enhancement of Human-machine Collaboration Based on Brain Behavioral Consistency
- Overviews
- Wireless Communication and Internet of Things
- Radar, Sonar and Array Signal Processing
- Image and Intelligent Information Processing
- Circuit and System Design
2023, 45(8): 2675-2688.
doi: 10.11999/JEIT221459
Abstract:
Spiking Neural Networks (SNNs) are gaining popularity in the fields of computational simulation and artificial intelligence owing to their biological plausibility and computational efficiency. Herein, the historical development of SNNs is analyzed, leading to the conclusion that these two fields are intersecting and merging rapidly. Following the successful application of Dynamic Vision Sensors (DVS) and Dynamic Audio Sensors (DAS), SNNs have found suitable paradigms, such as continuous visual signal tracking, automatic speech recognition, and reinforcement learning for continuous control, that have extensively supported their key features, including spiking encoding, neuronal heterogeneity, specific functional circuits, and multiscale plasticity. Compared with these real-world paradigms, the brain contains a spike-based version of the biology-world paradigm, which exhibits a similar level of complexity and is usually considered a mirror of the real world. Considering the projected rapid development of invasive, parallel Brain-Computer Interfaces (BCIs), as well as the new BCI-based paradigm that includes online pattern recognition and stimulus control of biological spike trains, it is natural for SNNs to exhibit their key advantages of energy efficiency, robustness, and flexibility. The biological brain has inspired the present study of SNNs and effective SNN machine-learning algorithms, which in turn can help advance neuroscience discoveries when applied to the new BCI paradigm. Such two-way interaction with positive feedback can accelerate both brain science research and brain-inspired intelligence technology.
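The computational primitive underlying the SNNs surveyed above is the spiking neuron, most commonly the Leaky Integrate-and-Fire (LIF) model. The following is a minimal illustrative simulation of a single LIF neuron (the time constant, threshold, and input values are arbitrary choices for the sketch, not parameters from any surveyed work):

```python
import numpy as np

def lif_simulate(input_current, v_th=1.0, tau=20.0, v_reset=0.0, dt=1.0):
    """Simulate a single Leaky Integrate-and-Fire neuron.

    The membrane potential leaks toward rest while integrating input;
    when it crosses the threshold v_th, a spike is emitted and the
    potential is reset. All constants are illustrative.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leaky integration: exponential decay plus input drive.
        v = v + dt * (-(v - v_reset) / tau + i_t)
        if v >= v_th:
            spikes.append(1)
            v = v_reset  # hard reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold current produces a regular spike train.
spike_train = lif_simulate(np.full(100, 0.1))
```

Information is carried by the timing of the resulting binary spike train rather than by continuous activations, which is the basis of the energy-efficiency argument made in the abstract.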
2023, 45(8): 2689-2698.
doi: 10.11999/JEIT221368
Abstract:
The visual system encodes rich, dense, and dynamic visual stimuli into time-varying neural responses through neurons. Exploring the functional relationship between visual stimuli and neural responses is a common approach to understanding neural encoding mechanisms. Neural encoding models of the visual system are presented throughout this paper, grouped into two categories: biophysical encoding models and artificial neural network encoding models. Parameter estimation methods for the various models are then introduced. By comparing the characteristics of the models, their respective advantages, application scenarios, and open problems are summarized. Finally, the current state and future challenges of visual encoding research are summarized and forecast.
2023, 45(8): 2699-2709.
doi: 10.11999/JEIT221456
Abstract:
Event cameras are bio-inspired sensors that output a stream of events whenever the brightness change at a pixel exceeds a threshold. This type of visual sensor asynchronously outputs events that encode the time, location, and sign of the brightness changes. Hence, event cameras offer attractive properties such as high temporal resolution, very high dynamic range, low latency, low power consumption, and high pixel bandwidth. They can capture information in high-speed-motion and high-dynamic scenes, which can be used to reconstruct high-dynamic-range and high-speed-motion scenes. Brightness images obtained by image reconstruction can be interpreted as a representation and used for recognition, segmentation, tracking, and optical flow estimation, making reconstruction one of the important research directions in the field of vision. This survey first briefly introduces event cameras in terms of their working principle, developmental history, advantages, and challenges. Then, the working principles of various types of event cameras and several event-camera-based image reconstruction algorithms are introduced. Finally, the challenges and future trends faced by event cameras are described, and the article is concluded.
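The working principle described above can be captured by an idealized generation model: each pixel fires an event when its log intensity has changed by more than a contrast threshold since the last event it fired. A minimal frame-based sketch of this model (the threshold value and the function/variable names are illustrative):

```python
import numpy as np

def events_from_frames(frames, contrast_threshold=0.2):
    """Idealized event-camera model: emit an event (t, x, y, polarity)
    whenever the log-intensity change at a pixel exceeds the threshold
    since that pixel last fired."""
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()  # last log intensity that fired, per pixel
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        ys, xs = np.nonzero(np.abs(diff) >= contrast_threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            reference[y, x] = log_frames[t, y, x]  # re-arm the pixel
    return events
```

Because the threshold is applied to log intensity, the same relative contrast change triggers an event at any brightness level, which is where the very high dynamic range of these sensors comes from. (A real sensor is asynchronous and per-pixel; this frame-based loop only illustrates the thresholding logic.)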
2023, 45(8): 2710-2721.
doi: 10.11999/JEIT221418
Abstract:
Visual optical flow calculation is an important technique for computer vision to move from processing 2D images to processing 3D videos, and is the main way of describing visual motion information. Optical flow calculation techniques have a long history of development. With the rapid development of related technologies, especially deep learning in recent years, the performance of optical flow calculation has been greatly improved, yet many limitations remain unsolved. Accurate, fast, and robust optical flow calculation is still a challenging research field and a hot topic in industry. As a low-level visual information processing technology, advances in optical flow calculation will also benefit the implementation of related high-level visual tasks. This paper introduces the development path of optical flow calculation in computer vision. It summarizes the important theories, methods, and models produced along the two mainstream technology paths of classical algorithms and deep learning algorithms, introduces the core ideas of the various methods and models, explains the various datasets and performance metrics, briefly introduces the main application scenarios of optical flow calculation, and discusses prospective future technical directions.
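The classical algorithms mentioned above build on the brightness-constancy equation Ix·u + Iy·v + It = 0, which Lucas-Kanade solves in the least-squares sense over a neighborhood. A toy sketch that estimates one global flow vector for a whole frame pair (a deliberately simplified illustration of the idea, not any specific surveyed method):

```python
import numpy as np

def lucas_kanade(I0, I1):
    """Estimate a single global optical-flow vector (u, v) from two frames
    by solving Ix*u + Iy*v + It = 0 in the least-squares sense over all
    pixels (the core Lucas-Kanade idea, applied to one big window)."""
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)
    Iy, Ix = np.gradient(I0)  # spatial derivatives (rows, then columns)
    It = I1 - I0              # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)

# A horizontal intensity ramp whose values all drop by 1 is
# indistinguishable from the ramp shifting right by one pixel.
I0 = np.tile(np.arange(16, dtype=np.float64), (16, 1))
I1 = I0 - 1.0
u, v = lucas_kanade(I0, I1)
```

Real implementations solve this per pixel over small windows with image pyramids for large motions; deep-learning approaches instead learn the matching cost and regularization end to end.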
2023, 45(8): 2722-2730.
doi: 10.11999/JEIT221367
Abstract:
Compared with traditional Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs) have the advantages of biological plausibility and high computational efficiency. However, for the object detection task, SNNs suffer from problems such as high training difficulty and low accuracy. To address these problems, an object detection method using an SNN based on the Dynamic Threshold Leaky Integrate-and-Fire (DT-LIF) neuron and the Single Shot multibox Detector (SSD) is proposed. First, a DT-LIF neuron is designed, which can dynamically adjust the neuron's threshold according to the cumulative membrane potential to drive spike activity in the deep network and improve the inference speed. Meanwhile, using the DT-LIF neuron as a primitive, a hybrid SNN based on SSD is constructed. The network uses the Spiking Visual Geometry Group (Spiking VGG) network and the Spiking Densely Connected Convolutional Network (Spiking DenseNet) as the backbone, combined with an SSD prediction head and three additional layers composed of a Batch Normalization (BN) layer, a Spiking Convolution (SC) layer, and DT-LIF neurons. Experimental results show that, compared with an LIF neuron network, the object detection accuracy of the DT-LIF neuron network on the Prophesee GEN1 dataset is improved by 25.2%. Compared with the AsyNet algorithm, the object detection accuracy of the proposed method is improved by 17.9%.
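To make the dynamic-threshold idea concrete, here is one plausible reading of it in code: an LIF neuron whose firing threshold is lowered as membrane potential accumulates, so strongly driven neurons fire sooner. The specific update rule and the constant `alpha` are assumptions for illustration only, not the exact DT-LIF formulation of the paper:

```python
import numpy as np

def dt_lif(input_current, v_th0=1.0, alpha=0.05, tau=20.0, dt=1.0):
    """Sketch of a dynamic-threshold LIF neuron: the effective threshold
    decreases as the accumulated membrane potential grows (hypothetical
    rule), encouraging earlier spiking in deep layers."""
    v, cumulative = 0.0, 0.0
    spikes = []
    for i_t in input_current:
        v = v + dt * (-v / tau + i_t)   # standard leaky integration
        cumulative += v                  # accumulated membrane potential
        v_th = v_th0 / (1.0 + alpha * cumulative)  # dynamic threshold
        if v >= v_th:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

out = dt_lif(np.full(100, 0.08))
```

The intended effect is that sub-threshold activity which would never fire a fixed-threshold LIF neuron can still trigger spikes deep in the network, addressing the vanishing-spike problem that slows inference.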
2023, 45(8): 2731-2738.
doi: 10.11999/JEIT221478
Abstract:
Considering the low recognition accuracy and poor real-time performance of existing Spiking Neural Networks (SNNs) on dynamic visual event streams, an SNN recognition method based on dynamic visual motion features is proposed in this paper. First, the dynamic motion features in the event stream are extracted using an event-based motion history representation and gradient direction calculation. Then, a spatiotemporal pooling operation is introduced to eliminate event redundancy in the temporal and spatial domains while retaining the salient motion features. Finally, the feature event streams are fed into the SNN for learning and recognition. Experiments conducted on benchmark dynamic visual datasets show that dynamic visual motion features can significantly improve the recognition accuracy and computational speed of SNNs on event streams.
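A motion history representation for events can be sketched as follows: each pixel stores a value that decays with the time since its most recent event, so recent motion is bright and old motion fades. This is a generic illustration of the concept (the decay form and constants are assumptions, not the paper's exact representation):

```python
import numpy as np

def motion_history(events, shape, t_end, tau=50.0):
    """Event-based motion history surface: per-pixel exponential decay of
    the time elapsed since the pixel's most recent event."""
    last_t = np.full(shape, -np.inf)
    for t, x, y, _pol in events:
        last_t[y, x] = max(last_t[y, x], t)  # keep the latest timestamp
    mhi = np.exp(-(t_end - last_t) / tau)    # recent events -> values near 1
    mhi[np.isinf(last_t)] = 0.0              # pixels that never fired
    return mhi

mhi = motion_history([(10, 1, 1, 1)], shape=(3, 3), t_end=10)
```

Spatial gradients of such a surface point along the direction of motion, which is what the gradient direction calculation in the abstract exploits; spatiotemporal pooling then downsamples this surface to discard redundant events.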
2023, 45(8): 2739-2748.
doi: 10.11999/JEIT221346
Abstract:
A nighttime image enhancement model inspired by biological vision mechanisms is proposed in this paper and implemented on a Field Programmable Gate Array (FPGA) for real-time enhancement of low-light videos and images. Inspired by the midget and parasol cells of the early visual system, the proposed method processes structure and detail information through two independent pathways, achieving good enhancement quality with high efficiency. To achieve real-time enhancement of high-resolution videos, the method is implemented on an FPGA, where high data throughput is ensured through hardware design techniques such as sliding-window parallel processing, adjacent-frame information sharing, and multi-channel parallelization. Implemented on an XC7Z100 FPGA, the proposed design processes 1024 × 768 RGB images at 60 frames per second. Compared with existing designs in this field, the proposed design has higher data throughput and is suitable for high-resolution real-time image enhancement applications.
2023, 45(8): 2749-2758.
doi: 10.11999/JEIT221361
Abstract:
Most existing infrared and visible image fusion methods neglect the disparities between daytime and nighttime scenarios and treat them as similar, leading to low accuracy. In contrast, the adaptive properties of the biological vision system allow it to capture helpful information from source images and process visual information adaptively. This concept provides a new direction for improving the accuracy of deep-learning-based infrared and visible image fusion methods. Inspired by the visual multi-pathway mechanism, this study proposes a multi-scenario-aware infrared and visible image fusion framework that incorporates two distinct visual pathways for perceiving daytime and nighttime scenarios. Specifically, daytime- and nighttime-scenario-aware fusion networks process the source images to generate two intermediate fusion results, and a learnable weighting network produces the final result. Additionally, the proposed framework employs a novel center-surround convolution module that simulates the center-surround receptive fields widely found in biological vision. Qualitative and quantitative experiments demonstrate that the proposed framework significantly improves the quality of the fused image and outperforms existing methods on objective evaluation metrics.
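The center-surround receptive field mentioned above is classically modeled as a Difference of Gaussians: a narrow excitatory center minus a broader inhibitory surround. A small sketch of such a kernel (the sizes and sigmas are illustrative, not the parameters of the paper's convolution module):

```python
import numpy as np

def center_surround_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians approximation of a center-surround
    receptive field: excitatory center minus broader inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

k = center_surround_kernel()
# The kernel is positive at its center and negative toward the corners.
```

Convolving an image with such a kernel responds strongly to local contrast and weakly to uniform regions, which is why center-surround structure is useful for fusing complementary infrared and visible content.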
2023, 45(8): 2759-2769.
doi: 10.11999/JEIT221388
Abstract:
Although non-contact liquid level detection methods based on deep learning perform well, their high demand for computational resources makes them unsuitable for embedded devices with limited resources. To solve this problem, a non-contact liquid level detection method based on a multilayer spiking neural network is first proposed. Furthermore, spiking encoding methods based on single frames and frame differences are proposed to encode the temporal dynamics of the video stream into reconfigurable spike patterns. Finally, the model is tested in real-world scenes. The experimental results show that the proposed method has high application value.
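Frame-difference spike encoding can be illustrated very simply: a pixel emits a spike at frame t when its intensity has changed by more than a threshold since frame t-1. This is a generic sketch of the idea (the threshold and binary output format are assumptions, not the paper's exact encoder):

```python
import numpy as np

def frame_difference_encode(frames, threshold=10):
    """Encode a video (T, H, W) as binary spike patterns (T-1, H, W):
    a spike marks a pixel whose intensity changed by more than
    `threshold` between consecutive frames."""
    frames = frames.astype(np.int32)            # avoid uint8 wrap-around
    diffs = np.abs(np.diff(frames, axis=0))     # per-pixel temporal change
    return (diffs > threshold).astype(np.uint8)

video = np.zeros((3, 2, 2))
video[1, 0, 0] = 50  # one pixel flashes in the middle frame
spikes = frame_difference_encode(video)
```

Static background produces no spikes at all, so downstream spiking layers only spend computation on moving content such as a changing liquid surface, which is the efficiency argument for embedded deployment.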
2023, 45(8): 2770-2779.
doi: 10.11999/JEIT221122
Abstract:
The existing evaluation indices of patients' active engagement are complicated to model, and the training intensity often does not match the participants' exercise ability and engagement. To address these problems, a challenge force controller based on Bayesian optimization is proposed to adaptively enhance engagement in rehabilitation training. First, muscle activation based on the surface ElectroMyoGram (sEMG) signal is used to evaluate the participant's engagement. Second, a resistance training mode based on trajectory error amplification is used to train the upper limb, and a comprehensive objective function combining normalized intensity and muscle activation is constructed. Then, a Bayesian optimization method is used to update the resistance coefficient and dead-zone width of the challenge force field in each training session, continuously optimizing the objective function to improve the smoothness of the motion trajectory while maintaining the participants' engagement. Finally, 16 healthy subjects are randomly divided into an experimental group and a control group and trained with their non-dominant hand to verify the effectiveness of the proposed method. The experimental results show that the muscle activation of the experimental group is 2.51% higher than that of the control group. After training, the improvement in exercise ability in the experimental group is significantly better than that in the control group (59.8% vs 40.7%), which verifies that the proposed adaptive rehabilitation training engagement strategy has more advantages than a fixed-parameter strategy.
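The two quantities that the Bayesian optimizer tunes, the resistance coefficient and the dead-zone width, can be pictured with a minimal force-field sketch: no force inside the dead zone, and a force proportional to the excess trajectory error outside it. The exact force law and parameter names here are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def challenge_force(trajectory_error, k_r, dead_zone):
    """Sketch of an error-amplifying challenge force field: zero force
    inside the dead zone, force proportional to the excess error (and in
    its direction) outside it. k_r and dead_zone are the two parameters
    a Bayesian optimizer would tune between training sessions."""
    e = np.asarray(trajectory_error, dtype=np.float64)
    excess = np.maximum(np.abs(e) - dead_zone, 0.0)
    return k_r * excess * np.sign(e)

forces = challenge_force([0.005, 0.03, -0.04], k_r=100.0, dead_zone=0.01)
```

A wider dead zone tolerates small errors (easier training), while a larger resistance coefficient amplifies errors more aggressively (harder training); searching this two-dimensional space per session is a natural fit for sample-efficient Bayesian optimization.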
2023, 45(8): 2780-2787.
doi: 10.11999/JEIT221260
Abstract:
It has been shown that sustained high mental workload leads to poor self-regulation behaviors, but the effect of self-regulation behavior on mental workload when facing tasks of different difficulty is not clear. An arithmetic paradigm based on self-regulating behavior over tasks of varying difficulty is proposed, in which subjects choose the questions according to their own decisions before the start of each round. The paradigm allows observation of the effect of tasks of different difficulty on the subjects' mental workload under self-regulation. The analysis is performed using Event-Related Potentials (ERP), Power Spectral Density (PSD), and microstates. The results show that, across the different tasks, self-regulation behaviors cause greater mental workload. Self-regulation behavior is mainly related to the frontal region, which shows stronger P300 amplitudes, stronger theta- and alpha-band power, and smaller P600 amplitudes. On the moderately difficult task, the mental workload induced by self-regulation is smaller and prompts the subjects to exhibit better performance. This paradigm can effectively identify the task difficulty suitable for each subject. In actual task design, the task difficulty suitable for the subjects should be considered, so as to reduce poor self-regulation behaviors and improve the subjects' performance.
2023, 45(8): 2788-2795.
doi: 10.11999/JEIT221496
Abstract:
Brain-computer interfaces based on the Steady-State Visual Evoked Potential (SSVEP) have recently garnered considerable interest in human-computer cooperation. Nevertheless, SSVEP signals with short time windows suffer from a low signal-to-noise ratio and insufficient feature extraction. This study examines and extracts SSVEP signal characteristics from three perspectives: the frequency domain, the time domain, and the spatial domain. The proposed method extracts amplitude and phase feature information from a three-dimensional recalibrated feature matrix built by incorporating the real-part and imaginary-part information in the frequency domain. Subsequently, the model's representation ability is enhanced by training on samples across multiple stimulus time-window scales in the time domain. Finally, multiscale feature information in the channel space and frequency domain is extracted in parallel using one-dimensional convolution kernels of different scales. Experiments are conducted on two open datasets characterized by different visual stimulus frequencies and frequency intervals. The average accuracy and average information transfer rate at a time window of 1 s surpass those of existing methods.
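The real/imaginary frequency-domain representation mentioned above can be sketched in a few lines: stacking the real and imaginary parts of the FFT preserves both amplitude and phase of each frequency bin, unlike a magnitude-only spectrum. This is a generic sketch of complex-spectrum input features, not the paper's exact recalibrated matrix:

```python
import numpy as np

def complex_spectrum_features(eeg, fs):
    """Build frequency-domain features from multichannel EEG (channels x
    samples) by stacking the real and imaginary parts of the FFT, which
    jointly carry amplitude and phase information per frequency bin."""
    spectrum = np.fft.rfft(eeg, axis=-1)                  # (channels, bins)
    features = np.stack([spectrum.real, spectrum.imag], axis=0)
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)    # bin frequencies
    return features, freqs

fs = 250
t = np.arange(fs) / fs                                    # 1 s of data
eeg = np.cos(2 * np.pi * 10 * t)[None, :]                 # one channel, 10 Hz tone
feats, freqs = complex_spectrum_features(eeg, fs)
```

Amplitude and phase can be recovered as `np.hypot(feats[0], feats[1])` and `np.arctan2(feats[1], feats[0])`, so a convolutional network fed this representation can exploit the stable phase relationships that characterize SSVEP responses.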
2023, 45(8): 2796-2805.
doi: 10.11999/JEIT221491
Abstract:
ElectroEncephaloGraphy (EEG)-based Cognitive Workload Recognition (CWR) is valuable for human-robot interaction systems and passive brain-computer interfaces. However, the non-stationarity of EEG and the differences between subjects hinder the practical application of cross-operator CWR, a realistic scenario. To deal with this problem, a jointly shared feature optimization method based on a Convolutional Neural Network (CNN) and Domain Generalization (DG), denoted CNN_DG, is proposed. The data of existing operators (source domains) are used to improve the CWR performance on unknown operators (the target domain). The method includes three modules: an EEG feature extractor, a label classifier, and a domain generalizer. The EEG feature extractor learns a transferable shared knowledge representation across source domains. The label classifier further learns the deep representation and predicts the workload levels. Through adversarial training with the feature extractor, the domain generalizer reduces the differences between the source domain distributions and further ensures that the learned features are shared. Two three-class cross-operator CWR experiments are conducted on the Multi-Attribute Task Battery (MATB II) simulated flight competition datasets 1 and 2, and model performance is verified using leave-one-subject-out cross-validation. Experimental results show that CNN_DG performs significantly better than the comparison methods, indicating its effectiveness and generalization ability for cross-operator CWR.
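The leave-one-subject-out protocol used for validation above is simple to state precisely: each fold holds out all trials of one subject for testing and trains on everyone else, so the test subject is always unseen, mirroring the cross-operator deployment scenario. A minimal sketch:

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) splits for leave-one-subject-out
    cross-validation: each fold holds out all trials of one subject."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.nonzero(subject_ids == s)[0]
        train = np.nonzero(subject_ids != s)[0]
        yield train, test

# Five trials from three subjects -> three folds, one per subject.
splits = list(leave_one_subject_out([1, 1, 2, 2, 3]))
```

Under this protocol, any gain over the comparison methods reflects genuine cross-subject generalization rather than memorization of a subject's individual EEG characteristics.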
2023, 45(8): 2806-2817.
doi: 10.11999/JEIT220923
Abstract:
To satisfy the new requirements brought by the intelligent development of high-speed railways, future railway mobile networks based on the Fifth Generation (5G) wireless technologies will apply broadband millimeter wave bands to enhance transmission capability. Therefore, considering the transmission requirements and scenario characteristics of high-speed railways, this paper analyzes the problems of millimeter wave communications in network coverage robustness, mobility support capability, and link stability and management. Then, to guarantee network coverage while improving transmission capacity, a future high-speed railway wireless network architecture based on the integration of conventional sub-6 GHz and millimeter wave bands is discussed, where the omni-directional sub-6 GHz bands provide robust coverage and the directional millimeter wave communications improve the transmission rate. Finally, under this network architecture, this paper investigates how to employ deep learning algorithms to predict service characteristics and propagation environments, and to make decisions on radio resource allocation, beam alignment, and handover optimization across the sub-6 GHz and millimeter wave bands, so as to ultimately realize high reliability, low latency, and large capacity for future high-speed railway mobile systems.
2023, 45(8): 2818-2827.
doi: 10.11999/JEIT221171
Abstract:
In this paper, code designs for channel coding and joint source-channel coding systems are surveyed for the transmission environment of the Wireless Body Area Network (WBAN). To meet the physical-layer requirements of low power consumption and high reliability, optimal design based on the Low-Density Parity-Check (LDPC) code is investigated from four aspects: the classification of channel models, the construction of the transmission system, the technical challenges and their solutions, and coding design with channel adaptability. Finally, future directions for optimally designing LDPC codes in the WBAN environment are outlined, providing references for constructing next-generation communication technology.
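As background, checking whether a received word is a valid LDPC codeword reduces to a parity check Hc^T = 0 over GF(2). A minimal sketch with a toy parity-check matrix (illustrative only, not one of the surveyed WBAN-optimized designs):

```python
def syndrome(H, codeword):
    """Compute the GF(2) syndrome s = H * c^T; an all-zero syndrome
    means the word satisfies every parity check."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

# Toy 3x6 parity-check matrix (each row is one parity constraint).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
valid = [1, 1, 0, 0, 1, 1]   # satisfies all three checks
```

Real LDPC design for WBAN then concerns how to choose H (sparsity, degree distribution, girth) so that iterative decoding performs well on the body-channel models classified in the survey.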
2023, 45(8): 2828-2838.
doi: 10.11999/JEIT220815
Abstract:
The integrated chip architecture based on SRAM combines sensing, storage, and computing functions. By giving the memory units computing capability, it avoids data movement during computation and thus alleviates the "memory wall" problem faced by the von Neumann architecture. Combined with the sensing part, this structure achieves ultra-high-speed, ultra-low-power computation. Compared with other memories, SRAM has clear speed advantages: the architecture can achieve a high energy-efficiency ratio while maintaining high accuracy once precision is enhanced, making it suitable for high-computing-power designs under low-power, high-performance requirements. This paper introduces the structure of the SRAM-based sensing-memory-computing chip, surveys research progress on SRAM-based sensing-memory-computing integration in the voltage, charge, and digital domains, and discusses future development directions of this field.
2023, 45(8): 2839-2846.
doi: 10.11999/JEIT220880
Abstract:
With the continuous enrichment of Unmanned Aerial Vehicle (UAV) application scenarios, the use of UAV formations to accomplish air-ground coordination tasks has increased in recent years. Building on current formation systems and control methods, an air-ground collaborative UAV formation control algorithm based on behavior strategies is designed: by introducing the idea of air-ground collaboration, it provides relay communication services for moving ground users and expands the communication coverage of UAV formations. Four formation shapes are designed for a formation of seven UAVs, the corresponding unit center position standards are derived, and the air-ground collaborative UAV formation control algorithm is simulated with the Unity software. The turning performance of a UAV formation using the algorithm is tested in an ideal environment, and the obstacle avoidance, relay communication, and formation change capabilities are tested in a simulated actual environment. Based on the proposed air-ground cooperative algorithm, mission schemes are designed for regional search coverage, search and rescue of lost-contact users, and crash-site search. Theoretical analysis and experimental simulation show that the algorithm and the three schemes are feasible.
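Behavior-strategy formation control typically blends several weighted steering behaviors into one velocity command per UAV. A minimal 2D sketch of that blending idea; the behavior set, weights, and repulsion law here are illustrative assumptions, not the paper's exact rules:

```python
def blend_behaviors(pos, slot, obstacle, w_form=1.0, w_avoid=1.5):
    """Weighted sum of two steering behaviors: seek the assigned
    formation slot, and repel from the nearest obstacle with an
    inverse-distance-squared term."""
    seek = (slot[0] - pos[0], slot[1] - pos[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d2 = dx * dx + dy * dy or 1e-9        # guard against zero distance
    avoid = (dx / d2, dy / d2)
    return (w_form * seek[0] + w_avoid * avoid[0],
            w_form * seek[1] + w_avoid * avoid[1])

# UAV at the origin, slot to its right, obstacle above it.
v = blend_behaviors(pos=(0.0, 0.0), slot=(1.0, 0.0), obstacle=(0.0, 2.0))
```

The resulting command pulls the UAV toward its slot while pushing it away from the obstacle; tuning the weights trades formation tightness against avoidance aggressiveness.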
Resource-Efficient Hierarchical Collaborative Federated Learning in Heterogeneous Internet of Things
2023, 45(8): 2847-2855.
doi: 10.11999/JEIT220914
Abstract:
The high heterogeneity of Internet of Things (IoT) devices and resources severely affects the training efficiency and accuracy of Federated Learning (FL). However, this heterogeneity of IoT devices and resources has not been fully investigated by existing research, and designs of collaborative training acceleration mechanisms among heterogeneous IoT devices are rare, resulting in limited training efficiency and low resource utilization of IoT devices. To this end, a resource-efficient Hierarchical Collaborative Federated Learning (HCFL) approach is proposed, and a device-edge-cloud hierarchical hybrid aggregation mechanism is devised, including an adaptive asynchronous weighted aggregation method that improves model parameter aggregation efficiency by exploiting the differentiated parameter aggregation frequencies of edge servers. A resource-rebalancing client selection algorithm is proposed to dynamically select clients considering model accuracy and data distribution characteristics, mitigating the impact of resource heterogeneity on FL performance. A self-organized collaborative training algorithm is designed to leverage idle IoT devices and resources to accelerate the FL training process. Simulation results show that, under different degrees of heterogeneity, the average training time of FL models is reduced by 15%, the average accuracy of FL models is improved by 6%, and the average resource utilization of IoT devices is improved by 52%.
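The hierarchical aggregation builds on federated averaging, where client updates are weighted by local sample counts. A minimal sketch of that base primitive over flat parameter vectors; the paper's asynchronous, frequency-differentiated edge weighting is not reproduced here:

```python
def fed_avg(client_weights, client_sizes):
    """Sample-count-weighted average of client parameter vectors,
    the aggregation primitive underlying hierarchical FL."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Client 2 holds 3x more data, so its parameters dominate the average.
agg = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
```

In a device-edge-cloud hierarchy, this step runs first at each edge server over its clients and again at the cloud over the edge aggregates.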
2023, 45(8): 2856-2866.
doi: 10.11999/JEIT220929
Abstract:
The stable operation of a wireless power transmission system is inseparable from data transmission technology. In this paper, a new method based on Orthogonal Frequency Division Multiplexing (OFDM) technology is proposed to realize reverse simultaneous transmission of data and power, addressing the problems of coupling interference and low spectrum utilization in shared-channel transmission of data and power. In this method, the power carrier is treated as a data carrier loaded with all-1 information; using OFDM technology, data are synchronously decoupled and transmitted reliably at high speed, and the crosstalk from the power transmission process to the data transmission process is reduced. To stabilize the output voltage when the load varies within a certain range, the Series LCC (S/LCC) compensation topology is adopted for the power transmission channel. As a shared channel for data and power transmission, the loosely coupled transformer can simultaneously and reversely transmit carriers of two different frequencies for data and power. This paper first introduces the structure of the system and the basic principle of OFDM; secondly, a mathematical model of the system is established to analyze its transmission characteristics; then the design methods of data modulation and demodulation are given; finally, an experimental platform with 20 W power and 85 kbit/s data transmission is built to verify the validity of the proposed method.
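OFDM places data on orthogonal subcarriers via an inverse DFT at the transmitter and recovers them with a forward DFT at the receiver. A small pure-Python round trip showing only that core transform pair (no power carrier, crosstalk, or S/LCC modeling):

```python
import cmath

def idft(symbols):
    """Inverse DFT: map N frequency-domain symbols to N time samples
    (the OFDM transmitter side)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n for t in range(n)]

def dft(samples):
    """Forward DFT: recover the subcarrier symbols at the receiver."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples)) for k in range(n)]

tx = [1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j]   # BPSK symbols on 4 subcarriers
rx = dft(idft(tx))                        # ideal channel: rx matches tx
```

Because the subcarriers are orthogonal, each symbol is recovered independently; this is what lets the scheme stack a power carrier alongside the data subcarriers with limited mutual interference.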
2023, 45(8): 2867-2875.
doi: 10.11999/JEIT220894
Abstract:
To solve the problems of information transmission security caused by the openness of wireless channels and of transmission performance deterioration caused by channel estimation errors, a Charnes-Cooper-based robust beamforming algorithm is proposed for Reconfigurable Intelligent Surface (RIS)-assisted Multiple-Input Single-Output (MISO) systems with eavesdroppers. A bounded channel uncertainty model is established for the eavesdropper, and the base station beamforming and RIS phase shifts are jointly optimized to maximize the user secrecy rate under constraints on the maximum transmit power and the RIS phase shifts. To solve the non-convex problem, it is first converted into a convex optimization problem by variable substitution, the Charnes-Cooper method, and the S-procedure, and then the coupled variables are indirectly and alternately optimized to obtain the robust beamforming matrix and RIS phase shifts. Simulation results show that the proposed RIS-based joint optimization algorithm achieves better secrecy performance and robustness.
2023, 45(8): 2876-2884.
doi: 10.11999/JEIT220915
Abstract:
Terahertz communication, as one of the key technologies of 6G research, will coexist with links in other frequency bands in the next generation of Low Earth Orbit (LEO) mega-constellation networks. In such a LEO mega-constellation network with incrementally deployed terahertz links, the path suboptimality problem that arises when inter-satellite links are distorted becomes more obvious, and existing routing algorithms based only on the shortest-delay path cannot solve it. A space-time graph model for the incremental deployment of terahertz links is proposed, and a routing algorithm for Adaptive Transmission Link Selection (ATLS) combining bent-pipe and inter-satellite links is designed. Simulation results for the proposed ATLS routing algorithm in the Hypatia network simulator show that, compared with existing methods, ATLS reduces task completion time and end-to-end latency by 17.14% and 16.67%, respectively.
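Adaptive link selection can be viewed as shortest-delay routing over a graph in which each hop offers the cheaper of an inter-satellite link or a bent-pipe relay. A generic Dijkstra sketch of that view, with toy delay values rather than Hypatia's constellation model:

```python
import heapq

def shortest_delay(adj, src, dst):
    """Dijkstra over edges labeled {link_type: delay}; each hop
    adaptively takes whichever link type currently has lower delay."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, links in adj.get(u, {}).items():
            nd = d + min(links.values())   # pick the best available link
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy topology: the ISL is faster A->B, the bent-pipe is faster B->C.
adj = {
    "A": {"B": {"isl": 5.0, "bent_pipe": 9.0}},
    "B": {"C": {"isl": 8.0, "bent_pipe": 4.0}},
}
delay = shortest_delay(adj, "A", "C")
```

In the paper's setting the per-link delays vary over time slots of the space-time graph, so the selection is re-evaluated as terahertz links degrade or recover.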
2023, 45(8): 2885-2892.
doi: 10.11999/JEIT220907
Abstract:
Recent research has demonstrated that the temperature variation of a smartphone caused by high-data-rate transmission can affect its transmission performance. Considering the performance degradation caused by neglecting the power-consumption outage, which is related to smartphone temperature, a deep reinforcement learning based resource management scheme that accounts for the power-consumption outage is proposed for the Unmanned Aerial Vehicle (UAV) communication scenario. Firstly, the network model of UAV communication and the heat transfer model of the smartphone are established and analyzed. Then, the influence of the power-consumption outage is integrated into the optimization problem of the UAV scenario as a constraint, and the system throughput is optimized via joint consideration of bandwidth allocation, power allocation, and trajectory design. Finally, the problem is formulated as a Markov decision process, and the optimization target is achieved by a deep reinforcement learning algorithm, the Normalized Advantage Function (NAF). Simulation results show that the proposed scheme can effectively enhance the system throughput and produce an appropriate UAV trajectory.
2023, 45(8): 2893-2901.
doi: 10.11999/JEIT220803
Abstract:
Considering the Service Function Chain (SFC) deployment optimization problem caused by dynamically changing service requests under the Network Function Virtualization (NFV) architecture, an SFC deployment optimization algorithm based on Multi-Agent Soft Actor-Critic (MASAC) learning is proposed. Firstly, a model minimizing the resource load penalty, SFC deployment cost, and delay cost is established, constrained by the SFC end-to-end delay and network resource reservation thresholds. Secondly, the stochastic optimization is transformed into a Markov Decision Process (MDP) to realize dynamic SFC deployment and balanced resource scheduling, and an arrangement scheme that divides services among multiple decision makers is further proposed. Finally, the Soft Actor-Critic (SAC) algorithm is adopted in a distributed multi-agent system to enhance exploration, and a central attention mechanism and advantage function are further introduced to dynamically and selectively focus on information and obtain greater deployment returns. Simulation results show that the proposed algorithm can optimize the load penalty, delay, and deployment cost, and scales better as service requests increase.
2023, 45(8): 2902-2910.
doi: 10.11999/JEIT220846
Abstract:
Considering that current distributed authentication protocols in the Internet of Vehicles (IoV) depend directly on semi-trusted Road Side Units (RSU), a new distributed authentication model is proposed. The RSUs in this model automatically establish an edge authentication area through a three-stage broadcast, and the RSUs in the area synchronously store the vehicle authentication records. An RSU can prevent abnormal authentication behavior of malicious RSUs by verifying the authentication records synchronously stored by the nodes. Then, a distributed anonymous authentication protocol for the IoV is designed using the Chebyshev chaotic map; to avoid the storage burden caused by pseudonym mechanisms, vehicles send messages without directly carrying identity information. Finally, the security of the protocol is proved under the random oracle model. Simulation results show that the proposed scheme has lower authentication delay and communication cost.
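Chebyshev chaotic maps support key agreement through the semigroup property T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)). A numerical illustration over the real interval [-1, 1]; practical protocols work over a finite field for security, so this only demonstrates the algebraic property, not a secure construction:

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) on [-1, 1]."""
    return math.cos(n * math.acos(x))

x = 0.3        # public base point
a, b = 7, 11   # illustrative private keys of the two parties
key_ab = chebyshev(a, chebyshev(b, x))   # party A's shared key
key_ba = chebyshev(b, chebyshev(a, x))   # party B's shared key
```

Both parties compute T_ab(x) = T_77(x) without ever exchanging their private exponents, which is the Diffie-Hellman-style core of Chebyshev-map authentication schemes.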
2023, 45(8): 2911-2918.
doi: 10.11999/JEIT220875
Abstract:
In this paper, a low-complexity channel tracking scheme based on the Newton algorithm is proposed for millimeter wave communication systems assisted by Reconfigurable Intelligent Surfaces (RIS). The proposed algorithm tracks the slow variation of the angle between the user and the RIS. In the proposed scheme, some RIS elements are connected to Radio Frequency (RF) chains. The two-Dimensional Fast Fourier Transform (2D-FFT) algorithm is used to initialize the angle estimate, the Newton algorithm is then used to track the angle parameters in each time slot, and the channel gain of each slot is estimated by the maximum likelihood algorithm. Abrupt channel changes are caused by sudden environmental changes and slow movement of the user terminal. If an abrupt change is detected, the angle parameters are re-initialized; otherwise the Newton algorithm continues to track them. Simulation results show that the proposed channel tracking scheme achieves the lowest complexity while ensuring excellent performance, striking a good tradeoff between computational complexity and channel estimation performance.
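Newton-based tracking refines an angle estimate by one derivative-ratio step per slot: when maximizing an objective f, the update is θ ← θ - f'(θ)/f''(θ). A one-dimensional sketch on a toy gain surface f(θ) = cos(θ - θ*); this is illustrative, not the paper's RIS array model:

```python
import math

def newton_track(theta, theta_true, iters=5):
    """Maximize f(t) = cos(t - theta_true) with Newton's method.
    With d = t - theta_true, f'(t) = -sin(d) and f''(t) = -cos(d),
    so the Newton update reduces to t <- t - tan(d)."""
    for _ in range(iters):
        d = theta - theta_true
        theta -= math.tan(d)
    return theta

# Start 0.2 rad off the true angle; a few slots close the gap.
est = newton_track(theta=1.0, theta_true=1.2)
```

Because slot-to-slot angle drift is small, the iterate stays inside the basin of attraction and a handful of Newton steps per slot suffices, which is the source of the scheme's low complexity.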
2023, 45(8): 2919-2926.
doi: 10.11999/JEIT221533
Abstract:
Considering the requirements of one-point to multi-point simultaneous laser communication for miniaturization, light weight, and networking of optical transceivers, the multiple gyroscopes on the optical transceiver are consolidated, and a scheme is proposed to achieve simultaneous stabilization of multiple optical lines of sight using a single gyroscope. To calculate the attitudes of the multiple optical lines of sight, the coordinate system of each pointing mirror is redefined according to the Euler theorem, and a mathematical model of the multiple line-of-sight attitudes based on rotation quaternions is established. To compute the parameters of the mathematical model, the fourth-order Runge-Kutta algorithm is given and the three-sample algorithm is optimized. Finally, the numerical solutions are compared with the true values for three typical coning motions, and the solution error curves for the line-of-sight attitudes of different pointing mirrors are obtained. The results show that the solution error of the fourth-order Runge-Kutta method remains below 10^-4 μrad over a 60 s simulation, verifying the effectiveness of the model. After optimization of the three-sample algorithm, the accuracy of the joint solution for the three typical coning motions is improved by 3 orders of magnitude, 3 orders of magnitude, and 1 order of magnitude, respectively, achieving the goal of accuracy optimization. This method provides a theoretical basis for applying strapdown stabilization technology to laser communication networking.
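The attitude propagation integrates the quaternion kinematic equation q̇ = ½ q ⊗ (0, ω) with a fourth-order Runge-Kutta step. A minimal sketch for a constant body rate, with unit-norm renormalization after each step; the coning-compensation terms of the three-sample algorithm are omitted:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qdot(q, omega):
    """Quaternion kinematics: dq/dt = 0.5 * q (x) (0, omega)."""
    return tuple(0.5 * c for c in quat_mul(q, (0.0,) + omega))

def rk4_step(q, omega, h):
    """One classical RK4 step, followed by renormalization."""
    k1 = qdot(q, omega)
    k2 = qdot(tuple(qi + 0.5 * h * ki for qi, ki in zip(q, k1)), omega)
    k3 = qdot(tuple(qi + 0.5 * h * ki for qi, ki in zip(q, k2)), omega)
    k4 = qdot(tuple(qi + h * ki for qi, ki in zip(q, k3)), omega)
    q = tuple(qi + h / 6.0 * (a + 2*b + 2*c + d)
              for qi, a, b, c, d in zip(q, k1, k2, k3, k4))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Rotate about z at 1 rad/s for pi seconds in 1000 RK4 steps.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = rk4_step(q, (0.0, 0.0, 1.0), math.pi / 1000)
```

After a π rotation about z the quaternion should reach (0, 0, 0, 1), and the RK4 error at this step size is far below the paper's 10^-4 μrad scale.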
2023, 45(8): 2927-2935.
doi: 10.11999/JEIT220838
Abstract:
Modeling of radar target scattering centers is a key step in radar target characteristic analysis and radar target recognition. The wide use of smooth, streamlined structures in radar targets poses great challenges to traditional scattering-center modeling. This paper focuses on modeling the sliding scattering center. Firstly, position expressions for the sliding scattering center are derived for the two cases of surface-edge scattering and surface scattering. Secondly, a sliding-scattering-center estimation method based on an extended OTSM map is proposed, in which the location of the sliding scattering center is derived from the projection geometry of adjacent viewing angles. Then, combined with the fixed scattering centers obtained by the RANSAC algorithm, a complete scattering-center model of the target is obtained. Experimental results on anechoic-chamber data demonstrate the effectiveness of the proposed method.
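The RANSAC step above separates fixed scattering centers (whose estimated positions stay constant across viewing angles) from sliding ones (whose positions drift). A minimal one-dimensional consensus sketch of that idea follows; it is an illustrative simplification under assumed data, not the paper's extended-OTSM procedure.

```python
import numpy as np

def ransac_constant(positions, n_iter=200, tol=0.05, rng=None):
    # RANSAC for a constant-position model: repeatedly sample one
    # observation as the candidate position, count observations within
    # `tol` of it (the inliers), and keep the candidate with the largest
    # consensus set. Returns the refined position and the inlier mask.
    rng = np.random.default_rng(rng)
    best_val, best_mask = None, np.zeros(len(positions), dtype=bool)
    for _ in range(n_iter):
        cand = positions[rng.integers(len(positions))]
        mask = np.abs(positions - cand) < tol
        if mask.sum() > best_mask.sum():
            best_val, best_mask = positions[mask].mean(), mask
    return best_val, best_mask
```

Observations flagged as inliers correspond to a fixed center; the outliers (the drifting positions) are then handled by the sliding-center model.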
2023, 45(8): 2936-2944.
doi: 10.11999/JEIT220873
Abstract:
Frequency-shift jamming is a kind of deception jamming against Linear Frequency Modulation (LFM) pulse radar based on digital radio-frequency memory. To counter it, the pulse-compression process of frequency-shifted jamming signals is studied, leading to the conclusion that the output energy of two signals after time-domain convolution is negatively correlated with their frequency difference. Based on this characteristic, a frequency-shift jamming identification method using coherent accumulation of positive- and negative-carrier-frequency pulse-compression outputs is proposed, which allows a conventional LFM pulse-Doppler radar to counter frequency-shift jamming effectively under self-defense jamming conditions. After pulse compression and coherent accumulation with matched filters of positive and negative carrier frequencies, the processed outputs of the real target and the jamming signal exhibit a peak difference, from which the frequency-shift jamming is identified. Simulation results demonstrate the effectiveness of the proposed method.
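The property exploited above, that the pulse-compression output peak falls as the frequency offset between the received signal and the matched filter grows, can be demonstrated numerically. The pulse parameters below are arbitrary normalized values for illustration, not the paper's radar settings.

```python
import numpy as np

def lfm_pulse(n=256, sweep=0.4):
    # Baseband LFM pulse: instantaneous frequency sweeps linearly
    # from 0 to `sweep` cycles/sample over n samples.
    t = np.arange(n)
    return np.exp(1j * np.pi * sweep * t * t / n)

def pc_peak(rx, ref):
    # Pulse compression = correlation with the matched filter
    # (np.correlate conjugates its second argument); return the
    # peak magnitude of the compressed output.
    return np.abs(np.correlate(rx, ref, mode="full")).max()
```

Applying a growing frequency shift to the pulse before compression yields a monotonically smaller peak, which is the cue the identification method measures between the positive- and negative-carrier channels.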
2023, 45(8): 2945-2954.
doi: 10.11999/JEIT220830
Abstract:
In this paper, a simulation method for the precipitation-particle echoes of airborne dual-polarization weather radar is proposed, based on the T-matrix method and the Weather Research and Forecasting (WRF) model. Firstly, the WRF model is used to simulate a weather scenario. Secondly, treating the precipitation particles as spherical, the T-matrix method is combined with their microphysical properties to calculate the reflectivity factors of six types of precipitation particles. Finally, the echo signals of the six precipitation-particle types are obtained using the radar meteorological equation, realizing the echo simulation for airborne polarimetric weather radar. The simulation results accurately reflect the meteorological characteristics, and comparison with measured data further confirms the effectiveness and reliability of the proposed method.
2023, 45(8): 2955-2964.
doi: 10.11999/JEIT220811
Abstract:
Low transmit power and low signal-to-noise ratio make target detection challenging for compact High-Frequency Surface Wave Radar (HFSWR), and track fragmentation often occurs due to missed detections during tracking. An adaptive weak-target detection method using joint detection and tracking is proposed to enhance detection performance. When the tracker finds that a target track cannot be associated with any new plot, it feeds the current target prediction state back to the detector. The detector establishes a local detection gate on the Range-Doppler (R-D) spectrum, and the detection background is sensed using a binary hypothesis test. According to the sensed background, an appropriate threshold-adjustment method lowers the Constant False Alarm Rate (CFAR) detection threshold and determines whether a weak target can be detected. If a target is detected, the newly generated plot is obtained after azimuth estimation and passed to the tracker for further processing. Experimental results on field data reveal that the track length obtained by the proposed method is 29.76% longer than that of detect-before-track methods, and the tracking time increases by 19.25 minutes on average.
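The CFAR stage whose threshold the tracker feedback lowers can be sketched as a basic one-dimensional cell-averaging CFAR; the scale factor `alpha` is the quantity the proposed method adapts inside the local detection gate. This is a generic CA-CFAR sketch, not the paper's specific detector.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, alpha=8.0):
    # Cell-averaging CFAR on a 1-D power profile. For each cell under
    # test, the noise level is estimated from `train` cells on each side
    # (skipping `guard` cells next to it), and a detection is declared
    # when the cell exceeds alpha times the noise estimate. Lowering
    # `alpha` inside a local gate admits weaker targets at the cost of
    # a higher false-alarm rate.
    n = len(power)
    det = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()
        det[i] = power[i] > alpha * noise
    return det
```

A strong target well above the local noise floor is detected, while the exponential clutter cells mostly stay below threshold.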
2023, 45(8): 2965-2974.
doi: 10.11999/JEIT220992
Abstract:
In a contested electromagnetic environment, airborne Synthetic Aperture Radar (SAR) is vulnerable to electronic jamming, which renders some echo pulses unavailable, causing partial loss of the SAR echo data and limiting imaging performance. Thus, a Feature-Reconstruction SAR (FR-SAR) imaging algorithm based on low-rank matrix completion is proposed. Exploiting the low-rank property of the echo data, the number of nonzero rows or columns is obtained through matrix factorization and convexly relaxed via Factor Group-Sparse Regularization (FGSR) to capture the correlation between SAR echoes and achieve data completion; this rank surrogate is more accurate than the conventional nuclear norm. Meanwhile, a sparse prior is introduced into the regularization model to improve noise suppression and super-resolution performance. The Alternating Direction Method of Multipliers (ADMM) is used to realize a collaborative solution of matrix completion and sparse feature enhancement. The FR-SAR algorithm is also more efficient because it avoids Singular Value Decomposition (SVD). Simulated and measured data verify the effectiveness of the FR-SAR algorithm. The recovery abilities of the proposed and traditional algorithms are quantitatively compared using a Phase Transition Diagram (PTD), establishing the superiority of FR-SAR.
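The SVD-free, factorization-based completion idea can be illustrated with a simplified sketch that replaces FGSR and the ADMM solver with plain Frobenius-norm factor regularization and alternating least squares. It shows only the general principle of filling missing entries of a low-rank matrix without ever computing an SVD; all parameters are illustrative.

```python
import numpy as np

def complete_lowrank(M, mask, r=2, lam=1e-6, n_iter=50, rng=0):
    # SVD-free low-rank completion: factor X = A @ B (A: m x r, B: r x n)
    # and alternate ridge-regression updates of A and B using only the
    # observed entries (mask == True). A simplified surrogate for the
    # factor regularization used in FGSR, not the paper's exact solver.
    m, n = M.shape
    g = np.random.default_rng(rng)
    A = g.standard_normal((m, r))
    B = g.standard_normal((r, n))
    for _ in range(n_iter):
        for i in range(m):                   # update each row of A
            cols = mask[i]
            Bi = B[:, cols]
            A[i] = np.linalg.solve(Bi @ Bi.T + lam*np.eye(r), Bi @ M[i, cols])
        for j in range(n):                   # update each column of B
            rows = mask[:, j]
            Aj = A[rows]
            B[:, j] = np.linalg.solve(Aj.T @ Aj + lam*np.eye(r), Aj.T @ M[rows, j])
    return A @ B
```

On a synthetic rank-1 matrix with 30% of entries missing, the factorization recovers the unobserved entries accurately, which is the behavior the PTD comparison in the paper quantifies at scale.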
2023, 45(8): 2975-2985.
doi: 10.11999/JEIT220867
Abstract:
As an important part of Synthetic Aperture Radar (SAR) image interpretation, Polarimetric SAR (PolSAR) terrain classification has attracted increasing attention. Unlike natural images, PolSAR data have unique attributes and only a small number of labeled samples, so making full use of the data characteristics and labeled samples is a key consideration. To address these problems, a new UNet-based network for PolSAR terrain classification, the Multiscale Separable Residual Unet (MSR-Unet), is proposed in this paper. To extract the spatial and channel features of the input data separately while reducing feature redundancy, ordinary 2D convolution is replaced by depthwise separable convolution in MSR-Unet. Then, an improved multi-scale residual structure is proposed: it obtains features of different scales by setting convolution kernels of different sizes and reuses features through dense connections. This structure not only deepens the network to a certain extent and yields better features, but also enables the network to make full use of labeled samples and improves the transmission of feature information, thereby improving PolSAR terrain classification accuracy. Experimental results on three standard datasets show that, compared with traditional classification methods and mainstream deep-learning models such as UNet, MSR-Unet improves average accuracy, overall accuracy, and the Kappa coefficient to different degrees and has better robustness.
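The parameter saving from replacing ordinary 2D convolution with depthwise separable convolution is easy to quantify. The sketch below counts weights for both (biases omitted; the layer sizes are arbitrary examples, not MSR-Unet's actual configuration).

```python
def conv_params(c_in, c_out, k):
    # Weights of a standard k x k convolution: every output channel
    # filters all input channels.
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    # Depthwise separable convolution: one k x k filter per input
    # channel (depthwise) plus a 1 x 1 pointwise convolution that
    # mixes channels.
    return c_in * k * k + c_in * c_out
```

For example, with 64 input channels, 128 output channels, and 3x3 kernels, the standard convolution uses 73,728 weights versus 8,768 for the separable version, roughly an 8.4x reduction, which is what makes the replacement attractive for a small-sample dataset.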
2023, 45(8): 2986-2990.
doi: 10.11999/JEIT220918
Abstract:
A robust adaptive beamforming method based on covariance matrix reconstruction is proposed for a coprime array with gain/phase uncertainties. The main idea is to reconstruct the covariance matrix of the signals; however, the reconstruction accuracy may be degraded by the gain/phase uncertainties. To eliminate their influence and accurately reconstruct the signal covariance matrix, a Total Least Squares (TLS) based method is proposed. First, the basic model of covariance matrix reconstruction with gain/phase uncertainties is established. Then, the problem is converted into an Errors-In-Variables (EIV) model, and calibration of the gain/phase uncertainties becomes the estimation of an error matrix related to them. An alternating-descent algorithm is developed to solve this problem. Simulation results show that the proposed method improves the accuracy of covariance matrix reconstruction and is effective for adaptive beamforming.
2023, 45(8): 2991-3001.
doi: 10.11999/JEIT220895
Abstract:
To solve the crossing-target tracking problem in passive sonar, a target association and tracking approach based on Historical kinematic characteristics and SVM (His-SVM) spectrum classification is presented, combining an improved kinematic-feature association method with a revised signal-feature association method. The historical bearing rate is first extracted from historical track points as the main feature for associating and tracking overlapping targets. Furthermore, an SVM model trained on the spectra of track points is used to classify close trace points, and each trace point is assigned to a different target according to the classification results. Finally, the crossing-target tracking algorithm is constructed by integrating the historical kinematic characteristics with the SVM spectrum classification. Simulation results verify the effectiveness of the proposed approach for close-target association and crossing-target tracking, and indicate that its tracking performance is better than that of the traditional kinematic-feature association method.
2023, 45(8): 3002-3011.
doi: 10.11999/JEIT220919
Abstract:
To improve infrared small-target detection, an end-to-end detection model integrating multi-scale fractal attention is designed, combining the prior knowledge of traditional methods with the feature-learning ability of deep learning. Firstly, based on an analysis of the multi-scale fractal feature, which is well suited to detecting dim, small targets in infrared images, a procedure for accelerating its computation with deep-learning operators is proposed. Secondly, a Convolutional Neural Network (CNN) is designed to obtain the target saliency map, and a multi-scale fractal-feature attention module is proposed by combining feature pyramid attention with a pyramid-pooling downsampling module. When embedding this module into the infrared-target semantic segmentation model, asymmetric context modulation is adopted to improve the fusion of shallow and deep features, and an asymmetric pyramid non-local block provides global attention to improve detection performance. Finally, experiments on the Single-frame InfRared Small Target (SIRST) dataset verify the proposed algorithm: Intersection over Union (IoU) and normalized IoU (nIoU) reach 77.4% and 76.1%, respectively, better than currently known methods, and transfer experiments further verify the model's effectiveness. Owing to the effective integration of the advantages of traditional and deep-learning methods, the proposed model is suitable for infrared small-target detection in complex environments.
2023, 45(8): 3012-3021.
doi: 10.11999/JEIT220819
Abstract:
An end-to-end dual-fusion-path Generative Adversarial Network (GAN) is proposed to preserve more information from the source images. Firstly, in the generator, a dual-path densely connected network with identical structure but independent parameters is used to build an infrared difference path and a visible difference path to improve the contrast of the fused image, and a channel attention mechanism is introduced so that the network focuses more on typical infrared targets and visible texture details. Secondly, the two source images are fed directly into each layer of the network to extract more source-image feature information. Finally, exploiting the complementarity of the loss functions, a difference-intensity loss, a difference-gradient loss, and a structural-similarity loss are added to obtain a fused image with higher contrast. Experiments show that, compared with a Generative Adversarial Network with Multi-classification Constraints (GANMcC), the Residual Fusion network for infrared and visible images (RFnest), and other related fusion algorithms, the fused image obtained by this method not only achieves the best results on multiple evaluation metrics but also has a better visual effect, more in line with human visual perception.
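The composite-loss idea above, combining intensity and gradient terms so the fused image inherits the stronger source feature at each pixel, can be sketched as follows. The weights and the pixel-wise maximum targets are illustrative assumptions, and the structural-similarity term is omitted; this is not the paper's exact loss.

```python
import numpy as np

def fusion_loss(fused, ir, vis, w_int=1.0, w_grad=10.0):
    # Illustrative composite fusion loss. The intensity term pulls the
    # fused image toward the brighter source at each pixel; the gradient
    # term pulls it toward the stronger source gradient. Weights are
    # placeholders, not tuned values from the paper.
    l_int = np.mean((fused - np.maximum(ir, vis)) ** 2)
    gf, gi, gv = np.gradient(fused), np.gradient(ir), np.gradient(vis)
    gmax = [np.where(np.abs(a) > np.abs(b), a, b) for a, b in zip(gi, gv)]
    l_grad = np.mean([(f - m) ** 2 for f, m in zip(gf, gmax)])
    return w_int * l_int + w_grad * l_grad
```

A fused image that matches the pixel-wise maximum of the sources scores lower than a noisy one, which is the gradient the generator would descend during training.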
2023, 45(8): 3022-3031.
doi: 10.11999/JEIT220749
Abstract:
To address the problem that skeleton-based action recognition cannot fully exploit spatio-temporal features, a model based on a Spatio-Temporal Feature Enhanced Graph Convolutional Network (STFE-GCN) is proposed in this paper. Firstly, the adjacency matrix representing the topology of the human body and the structure of a two-stream adaptive graph convolutional network are introduced. Secondly, a graph attention network in the spatial domain assigns different weights according to the importance of neighboring nodes to generate an attention-coefficient matrix, fully extracting the spatial structural features of the human body; a new spatial adaptive adjacency matrix, combined with the global adjacency matrix generated by a non-local network, further enhances the extraction of these features. Then, a mixed pooling model in the temporal domain extracts key action features and global contextual features, which are combined with the features generated by temporal convolution to enhance the extraction of temporal features from the behavioral information. Furthermore, an Efficient Channel Attention Network (ECA-Net) is introduced for channel attention to better extract the spatio-temporal features of the samples. Combining spatial feature enhancement, temporal feature enhancement, and channel attention, the STFE-GCN model is constructed, and end-to-end training is realized with a multi-stream network to fully mine spatio-temporal features. Finally, experiments on the NTU-RGB+D and NTU-RGB+D 120 datasets show that the model has superior classification accuracy and generalization ability, further verifying its effectiveness in fully mining spatio-temporal features.
2023, 45(8): 3032-3039.
doi: 10.11999/JEIT220802
Abstract:
In the medical field, entity recognition can extract valuable information from large-scale electronic medical record text. Chinese Named Entity Recognition (NER) is made more difficult by the lack of features for locating entity boundaries and by incomplete extraction of semantic information. In this paper, a model combining Multi-Feature Embedding and Multi-Network Fusion (MFE-MNF) is proposed. The model embeds multi-granularity features, i.e., characters, words, radicals, and external knowledge, extending the feature representation of characters and delimiting entity boundaries. The feature vectors are fed into two paths, a Bi-directional Long Short-Term Memory (BiLSTM) network and an adaptive graph convolutional network, to capture contextual and global semantic information comprehensively and deeply, alleviating the problem of incomplete semantic extraction. Experimental results on the CCKS2019 and CCKS2020 datasets show that, compared with traditional entity recognition models, the proposed model extracts entities accurately and effectively.
2023, 45(8): 3040-3046.
doi: 10.11999/JEIT220882
Abstract:
To reduce the sensitivity drift caused by changes in working temperature and structural parameters, a self-compensating electric field sensor based on Micro-Electro-Mechanical System (MEMS) technology is proposed. The sensor structure includes sensing electrodes for measuring the external electric field and reference electrodes for monitoring the vibration of the movable structure. Using the reference electrode output, the sensor tracks its resonant frequency automatically and corrects the sensing output in real time. Experimental results show that the sensor achieves a linearity of 0.21% and an accuracy of 1.34% over the electric field range of –18 to +18 kV/m, and a sensitivity drift within 3.0% over the temperature range of –40 to 70 °C, demonstrating good self-compensation performance.
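The self-compensation principle can be illustrated with a toy model: if the sensing and reference channels scale with the same vibration amplitude, then any temperature-induced drift appears identically in both, and dividing by the reference output cancels it. This is a sketch of the idea only; all numbers and the drift model are invented, not taken from the paper.

```python
E_FIELD = 10.0          # assumed true field, kV/m
S0 = 2.0                # assumed nominal sensitivity, mV per (kV/m)
REF0 = 1.0              # reference-channel amplitude at calibration

def raw_outputs(drift):
    """Both channels scale with the same vibration amplitude,
    which drifts with temperature by the factor (1 + drift)."""
    sense = S0 * (1.0 + drift) * E_FIELD   # sensing output, drifts
    ref = REF0 * (1.0 + drift)             # reference, drifts identically
    return sense, ref

def compensated_field(sense, ref):
    """Real-time correction: normalise by the reference channel,
    then convert back to field units with the nominal sensitivity."""
    return sense * (REF0 / ref) / S0

for drift in (-0.05, 0.0, 0.08):           # e.g. cold / nominal / hot
    s, r = raw_outputs(drift)
    print(round(compensated_field(s, r), 6))  # 10.0 in every case
```

In the actual sensor the reference electrodes additionally serve resonant-frequency tracking; this sketch only shows the amplitude-normalisation half of the correction.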
2023, 45(8): 3047-3056.
doi: 10.11999/JEIT220718
Abstract:
The memristor, or Resistive Random Access Memory (ReRAM), is a novel Non-Volatile Memory (NVM) that combines storage and computing functions and is the basic device of the Processing In Memory (PIM) non-Von Neumann computer architecture. To address the mismatch between the computing and storage speeds of a reconfigurable array processor, the Voltage ThrEshold Adaptive Memristor (VTEAM) model is adopted, and a complete set of Boolean logic operations is realized through simulation in LTSPICE (Linear Technology Simulation Program with Integrated Circuit Emphasis). On this basis, a 1T1M memristor crossbar array is designed, featuring a simple structure, reconfigurability, and high parallelism. Tolerance analysis with the Monte Carlo (MC) method shows a calculation accuracy of 0.998. Compared with existing advanced arrays, the proposed array effectively improves performance, reducing processing delay and energy consumption, and can be combined with a reconfigurable array processor to address the “memory wall” problem.
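Monte Carlo tolerance analysis of this kind asks how often device variation flips a logic result. The following sketch illustrates the method, not the paper's circuit: a 1T1M read is modeled as a simple voltage divider between the memristor and a fixed load resistor, and all resistance values, the 10% variation, and the threshold rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

RON, ROFF = 1e3, 1e5        # assumed low/high resistance states, ohm
RLOAD = 1e4                 # assumed load resistance, ohm
VREAD = 1.0                 # read voltage, V
SIGMA = 0.10                # assumed 10 % device variation
N = 100_000                 # Monte Carlo trials per state

def read_voltage(r_mem):
    """Voltage across the load in a series memristor/load divider."""
    return VREAD * RLOAD / (r_mem + RLOAD)

# Sample both stored states with multiplicative Gaussian spread.
r1 = RON * (1 + SIGMA * rng.standard_normal(N))   # stored '1' (low R)
r0 = ROFF * (1 + SIGMA * rng.standard_normal(N))  # stored '0' (high R)

# Decide with a threshold halfway between the nominal read voltages.
v_th = 0.5 * (read_voltage(RON) + read_voltage(ROFF))

# Fraction of trials read back correctly, averaged over both states.
accuracy = (np.mean(read_voltage(r1) > v_th)
            + np.mean(read_voltage(r0) < v_th)) / 2
print(accuracy)
```

With this wide read margin the estimated accuracy sits near 1.0; the paper's 0.998 figure comes from its own circuit and variation model.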