2021 Vol. 43, No. 12
2021, 43(12): 3393-3406.
doi: 10.11999/JEIT211135
Abstract:
With the increasing informatization of all industries in the national economy and the deepening cross-integration between industries, the Cyber-Physical System (CPS) is becoming the key technology supporting this development and is regarded as the core system leading a new round of industrial technology reform worldwide. By accurately mapping the entities, behaviors, and interaction environment of the objective physical world into the information space, processing them in real time, and feeding the results back to the physical space, CPS can address the analysis and modeling, decision optimization, and uncertainty handling of complex systems from a systems perspective and at different levels. This paper analyzes the key technologies and bottlenecks of CPS in terms of its architecture and its design and development process, discusses the supporting relationships between CPS and cutting-edge technologies such as edge-cloud collaborative computing, digital twins, artificial intelligence, and blockchain, and summarizes the state of CPS research in four application fields: industrial production, energy and electric power, transportation, and medical health. Finally, future technical developments of CPS are discussed. This survey is intended to serve as a reference for experts and scholars in CPS-related fields and to provide technical support for China's industrial technological revolution and intelligent transformation.
Reinforcement Learning Control Strategy of Quadrotor Unmanned Aerial Vehicles Based on Linear Filter
2021, 43(12): 3407-3417.
doi: 10.11999/JEIT210251
Abstract:
In this paper, a deep Reinforcement Learning (RL) strategy based on linear filtering is proposed, yielding a novel intelligent control method for quadrotor Unmanned Aerial Vehicles (UAVs) that effectively improves robustness against disturbances and unmodeled dynamics. First, based on linear reduced-order filtering, filter variables with fewer dimensions are designed as the input of the deep network, which reduces the exploration space of the policy and improves exploration efficiency. On this basis, to enhance the policy's perception of steady-state errors, the filter variables and integral terms are combined into a lumped error that serves as the new network input, improving the positioning accuracy of quadrotor UAVs. The novelty of this paper lies in that it is the first intelligent approach based on linear filtering to successfully eliminate the influence of unknown disturbances and unmodeled dynamics of quadrotor UAVs, thereby improving positioning accuracy. Comparative experiments demonstrate the effectiveness of the proposed method in improving positioning accuracy and enhancing robustness.
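A minimal sketch of the kind of filtered observation the abstract describes: a reduced-order linear filter variable is combined with an integral term to form a low-dimensional "lumped error" fed to the RL policy network. The gains (lam, ki), the state layout, and the exact combination are illustrative assumptions, not the authors' design.

```python
import numpy as np

class LumpedErrorObservation:
    """Builds a low-dimensional lumped-error input for an RL policy (sketch)."""
    def __init__(self, lam=2.0, ki=0.5, dt=0.01):
        self.lam, self.ki, self.dt = lam, ki, dt
        self.integral = np.zeros(3)

    def step(self, pos_err, vel_err):
        # Reduced-order filter variable: s = vel_err + lam * pos_err
        s = vel_err + self.lam * pos_err
        # Accumulate the integral of the position error (steady-state cue)
        self.integral += pos_err * self.dt
        # Lumped error: filter variable plus weighted integral term
        return s + self.ki * self.integral
```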
2021, 43(12): 3418-3426.
doi: 10.11999/JEIT210509
Abstract:
The resource scheduling method of 5G Ultra-Reliable and Low Latency Communication (URLLC) is studied in this paper to assure the Quality of Service (QoS) of various power-grid services; the method utilizes the limited spectrum and power of a low-band cellular communication system to efficiently meet power terminals' requirements on transmission rate, scheduling delay, and fairness. First, based on the high-reliability and low-latency characteristics of URLLC, a multi-cell downlink system model is built. Then, a resource allocation problem for maximizing downlink throughput is formulated and solved step by step: a power allocation algorithm based on a pricing mechanism and a non-cooperative game is derived, and a Delay-based Proportional Fair (DPF) algorithm is designed to schedule channel resources dynamically. Simulation results show that the proposed resource scheduling method reduces the scheduling delay of power terminals under transmission reliability and fairness constraints while meeting differing QoS requirements, and that it outperforms several known algorithms.
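A hedged sketch of a delay-weighted proportional-fair scheduling metric of the general kind a DPF algorithm uses. The exact weighting in the paper is not given in the abstract; here the classic PF ratio (instantaneous rate over average throughput) is scaled by how close the head-of-line packet is to its delay budget, which is one common way to fold delay into PF scheduling.

```python
def dpf_priority(inst_rate, avg_throughput, hol_delay, delay_budget):
    pf = inst_rate / max(avg_throughput, 1e-9)   # proportional-fair ratio
    urgency = hol_delay / delay_budget           # grows as the deadline nears
    return pf * (1.0 + urgency)

def schedule(users):
    # Assign the resource block to the user with the highest DPF priority;
    # each user is a dict with keys "rate", "avg_tp", "delay", "budget".
    return max(users, key=lambda u: dpf_priority(
        u["rate"], u["avg_tp"], u["delay"], u["budget"]))
```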
2021, 43(12): 3427-3433.
doi: 10.11999/JEIT210493
Abstract:
The edge caching mechanism for heterogeneous networks is a reliable technology for relieving the excessive link load of the traditional backhaul mechanism, but existing caching policies often cannot match the popularity of the requested data. To solve this problem, a Popularity Matching Caching Policy (PMCP) is proposed in this paper, which matches each file's caching probability to its popularity parameter so as to maximize communication reliability and reduce backhaul bandwidth pressure. The planar positions of the base stations are modeled with stochastic geometry. Monte Carlo simulation results show that the proposed caching policy effectively reduces backhaul bandwidth pressure and achieves better reliability than the comparison policies.
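An illustrative sketch of popularity-matched caching under assumptions that are not the paper's exact policy: content popularity follows a Zipf law, and each file's caching probability is scaled to its popularity, then clipped so the expected number of cached files fits the cache size.

```python
import numpy as np

def zipf_popularity(n_files, alpha=0.8):
    ranks = np.arange(1, n_files + 1)
    p = ranks ** (-alpha)
    return p / p.sum()

def matched_cache_probabilities(popularity, cache_size):
    # Scale so the probabilities sum to cache_size, with each q_i <= 1
    q = popularity * cache_size / popularity.sum()
    return np.clip(q, 0.0, 1.0)

pop = zipf_popularity(1000)
q = matched_cache_probabilities(pop, cache_size=50)
```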
2021, 43(12): 3434-3441.
doi: 10.11999/JEIT210492
Abstract:
As an important confidentiality attribute, state opacity characterizes the ability of intruders to steal private system information. For Cyber-Physical Systems (CPSs) with unobservable events, an algebraic state-space method based on the Semi-Tensor Product (STP) of matrices is proposed to analyze and verify the state opacity of CPSs. First, the state evolution of CPSs is modeled with STP theory so that the system dynamics are obtained as an algebraic expression; then the properties of the STP operation are used to give a necessary and sufficient algebraic condition for verifying current-state opacity. Finally, the validity of the method is demonstrated by a numerical simulation. The STP-based method proposed in this paper provides a new idea and framework for the privacy analysis and security control of CPSs.
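For reference, a minimal implementation of the semi-tensor product itself, the algebraic tool the abstract builds on: for A of size m x n and B of size p x q, with t = lcm(n, p), the STP is A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}). The example comment about a structure matrix L is an assumption for illustration.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product of two matrices via Kronecker lifting."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    left = np.kron(A, np.eye(t // n))
    right = np.kron(B, np.eye(t // p))
    return left @ right

# Example use: a logical state x in vector form evolves as x' = L ⋉ u ⋉ x,
# where L is the system's structure matrix (hypothetical here).
```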
2021, 43(12): 3442-3450.
doi: 10.11999/JEIT200905
Abstract:
In order to solve the problem of exact matching between scholars and articles, a new author name disambiguation method based on semi-supervised learning with a graph convolutional network is proposed. In this method, the SciBERT pre-trained language model is used to compute a semantic embedding vector for each paper from its title and keywords. The authors and organizations of the papers are used to build the adjacency matrices of the co-author and co-organization networks. Pseudo labels collected from the co-author network provide positive and negative samples. The semantic embedding vectors, adjacency matrices, and positive and negative samples are fed into a Graph Convolutional Network (GCN). Through semi-supervised learning, the paper embeddings are learned and clustered to realize author name disambiguation. Experimental results show that, compared with other disambiguation methods, this method achieves better results on the experimental dataset.
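A sketch of the standard GCN propagation rule (Kipf and Welling) that a method like this could apply to the co-author or co-organization adjacency matrix; layer sizes and how the two graphs are combined are assumptions, not the paper's architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU activation
```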
2021, 43(12): 3451-3458.
doi: 10.11999/JEIT200932
Abstract:
Recommendation systems based on reviews generally use convolutional neural networks to extract semantics. However, because of the "invariance" of convolutional neural networks, they attend only to the presence of features while ignoring feature details, and the pooling operation loses further important information. In addition, using all reviews as auxiliary information does not improve the quality of the extracted semantics; instead, low-quality reviews degrade it, leading to inaccurate recommendations. To solve these two problems, this paper proposes a Self-Attention Capsule network Rate prediction (SACR) model. SACR uses a self-attention capsule network that retains feature details to mine reviews, uses user and item IDs to mark low-quality reviews, and merges the two representations to predict the rating. This paper also improves the squash (squeeze) function of the capsules, which yields more accurate high-level capsules. Experiments show that SACR achieves a significant improvement in prediction accuracy over several classic and state-of-the-art models.
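For context, the standard capsule "squash" nonlinearity (Sabour et al., 2017) that the paper reports improving; the improved variant itself is not specified in the abstract, so only the baseline is sketched.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Standard capsule squash: shrinks short vectors, preserves direction."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = norm_sq / (1.0 + norm_sq)            # length in (0, 1)
    return scale * s / np.sqrt(norm_sq + eps)    # unit direction, scaled
```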
2021, 43(12): 3459-3466.
doi: 10.11999/JEIT200561
Abstract:
The intelligent Particle Filter (PF) based on the genetic algorithm can reduce particle degradation. An adaptive processing strategy for low-weight particles is proposed for such an Intelligent Particle Filter (IPF). After the particles undergo selection and crossover, the genetic operators are optimized to handle the low-weight particles adaptively: each low-weight particle is classified as a bottom particle or not according to its weight; bottom particles mutate directly, and the remaining low-weight particles mutate randomly according to the mutation probability. Simulation results show that the Improved Intelligent Particle Filter (IIPF) outperforms the intelligent particle filter, general particle filter algorithms, and the extended Kalman filter. In the one-dimensional simulation experiment, the error of the IIPF is reduced by 10.5% and 8.5% compared with general particle filters and the intelligent particle filter, respectively, and the IIPF converges better. In the multi-dimensional simulation experiment, the IIPF reduces the root-mean-square error and average error of the altitude by 8.5% and 7.5%, and those of the speed by 11.5% and 7.6%, respectively. Moreover, under multiplicative noise and non-Gaussian random noise, the IIPF retains a performance advantage of more than 10%.
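A hedged sketch of the adaptive low-weight-particle handling the abstract describes. The two thresholds (w_low, w_bottom), the mutation probability, and the Gaussian perturbation model are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def adaptive_mutation(particles, weights, w_low, w_bottom,
                      p_mut=0.3, sigma=0.1, rng=None):
    """particles: (N, d) array; weights: length-N array of particle weights."""
    if rng is None:
        rng = np.random.default_rng()
    out = particles.copy()
    for i, w in enumerate(weights):
        if w >= w_low:
            continue                     # high-weight particles are kept
        if w < w_bottom:
            # Bottom particles mutate directly
            out[i] += rng.normal(0.0, sigma, particles.shape[1])
        elif rng.random() < p_mut:
            # Remaining low-weight particles mutate with probability p_mut
            out[i] += rng.normal(0.0, sigma, particles.shape[1])
    return out
```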
2021, 43(12): 3467-3475.
doi: 10.11999/JEIT200543
Abstract:
Freezing of Gait (FoG) is a common symptom among patients with Parkinson's Disease (PD). In this paper, a vision-based method is proposed to automatically recognize the shuffling-step symptom from Timed Up-and-Go (TUG) videos. In this method, a feature extraction block extracts features from the image sequence, the features are fused along the temporal dimension, and the fused features are fed into a classification layer. A dataset with 364 normal gait examples and 362 shuffling-step examples is collected, and experiments on it show that the best variant of the method achieves an average accuracy of 91.3%. With this method, the shuffling-step symptom can be recognized automatically and efficiently from TUG videos, showing the possibility of remotely monitoring the movement condition of PD patients.
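An illustrative PyTorch sketch of the pipeline the abstract outlines: per-frame feature extraction, temporal fusion (here, mean pooling), and a classification layer. The ResNet-18 backbone and the fusion choice are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GaitClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep 512-d per-frame features
        self.backbone = backbone
        self.head = nn.Linear(512, n_classes)

    def forward(self, clips):                # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))   # (B*T, 512)
        feats = feats.view(b, t, -1).mean(dim=1)     # temporal fusion
        return self.head(feats)                      # class logits
```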
2021, 43(12): 3476-3485.
doi: 10.11999/JEIT200440
Abstract:
Establishing correspondence between the target model and the input image is an important step in pose estimation of non-cooperative space targets. Current methods typically rely on complex image features and candidate generation, which can be costly and time consuming. To solve these problems, this paper proposes a pose estimation method that first performs an initial estimation with a deep neural network and then performs accurate estimation using correspondences between the known target model and the input image. The deep neural network provides a stable initial value, which narrows the candidate correspondences between the target model and the image. In addition, a more efficient feature extraction and matching method is adopted in place of complex multi-dimensional features. Simulation results show that the proposed method performs well in both efficiency and accuracy.
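A hedged sketch of model-to-image pose refinement: given 3D model points and their matched 2D image points (the matching step is not shown), OpenCV's solvePnP refines the rotation and translation from the network's initial estimate. Using solvePnP is an illustrative choice; the paper's own refinement procedure may differ.

```python
import numpy as np
import cv2

def refine_pose(model_pts_3d, image_pts_2d, K, rvec0, tvec0):
    # rvec0 / tvec0: initial pose from the deep network (Rodrigues vector)
    ok, rvec, tvec = cv2.solvePnP(
        model_pts_3d.astype(np.float64),
        image_pts_2d.astype(np.float64),
        K, distCoeffs=None,
        rvec=rvec0, tvec=tvec0,
        useExtrinsicGuess=True,          # start from the network's estimate
        flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else (rvec0, tvec0)
```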
2021, 43(12): 3486-3495.
doi: 10.11999/JEIT200094
Abstract:
In order to reduce the communication cost of the classic Gossip algorithm for information dissemination, an improved Gossip algorithm, BEBG (Gossip with Binary Exponential Backoff), is proposed, which combines the binary exponential backoff algorithm with the Gossip algorithm. Its dissemination strategy is that the more times a node has received the same information, the lower the probability that it continues to spread it. Theoretical analysis and simulation results show that BEBG effectively reduces the redundancy of information propagation; compared with the classic Gossip algorithm, the network load is reduced by about 61% with 10^4 nodes in the network. To solve BEBG's edge-node problem, two further improved algorithms are proposed: PBEBG, which introduces pull operations, and NBEBG, which introduces pushing information to a neighbor node. Experimental results show that both algorithms eliminate the edge nodes and, with 10^4 nodes in the network, reduce the network load by about 34% and 37%, respectively, compared with the correspondingly improved classic Gossip algorithms that introduce the same pull and push operations.
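A sketch of the binary-exponential-backoff gossip rule the abstract describes, under the assumption that a node which has received the same message k times forwards it with probability 2^(-k); the fan-out and bookkeeping are illustrative.

```python
import random

def bebg_forward(node, neighbors, recv_count, fanout=2):
    """One BEBG gossip round at `node`; recv_count maps node -> receipts."""
    k = recv_count.get(node, 0)           # times this node saw the message
    if random.random() < 2.0 ** (-k):     # backoff: chance halves per receipt
        targets = random.sample(neighbors, min(fanout, len(neighbors)))
        for t in targets:
            recv_count[t] = recv_count.get(t, 0) + 1
            # deliver(message, t) would go here in a full simulator
```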
2021, 43(12): 3496-3504.
doi: 10.11999/JEIT200891
Abstract:
To address the problem that doctors' clinical experience is not fully integrated into algorithm design for PET-CT lung tumor segmentation, a hybrid active contour model named RSF_ML based on the variational level set is proposed, combining a PET Gaussian distribution prior, the Region-Scalable Fitting (RSF) model, and the Maximum Likelihood ratio Classification (MLC) criterion. Furthermore, given the important role of fused images in the manual delineation of lung tumors, a segmentation method for fused PET-CT lung tumor images based on RSF_ML is proposed. Experiments show that the proposed method accurately segments representative Non-Small Cell Lung Cancer (NSCLC) cases, with subjective and objective results better than those of the comparison methods, providing effective computer-aided segmentation results for clinical use.
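For reference, the standard Region-Scalable Fitting energy (Li et al.) that the hybrid model builds on is sketched below in conventional notation; the Gaussian-prior and MLC terms of RSF_ML are not reproduced here, and the symbols ($K_\sigma$ a Gaussian kernel, $f_1,f_2$ local fitting functions, $\phi$ the level set function, $H$ the Heaviside function) are the usual ones, not taken from the paper.

```latex
E^{\mathrm{RSF}}(\phi,f_1,f_2)
 = \sum_{i=1}^{2}\lambda_i \int \Big( \int K_\sigma(\mathbf{x}-\mathbf{y})\,
   \big|I(\mathbf{y})-f_i(\mathbf{x})\big|^{2}\, M_i\big(\phi(\mathbf{y})\big)\,
   \mathrm{d}\mathbf{y} \Big)\,\mathrm{d}\mathbf{x},
 \qquad M_1(\phi)=H(\phi),\quad M_2(\phi)=1-H(\phi),
```

usually supplemented by contour-length and level-set regularization terms.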
2021, 43(12): 3505-3512.
doi: 10.11999/JEIT200862
Abstract:
AI-based thermal imaging temperature monitoring systems are widely used for real-time body temperature measurement in dense crowds. The artificial intelligence methods used in such systems detect the human head region for temperature measurement, but the measurement area may be too small to measure correctly because of occlusion. To tackle this problem, an anchor-free instance segmentation network incorporating an infrared attention enhancement mechanism is proposed for real-time segmentation of the infrared thermal imaging temperature measurement area. The proposed network integrates an Infrared Spatial Attention Module (ISAM) in both the detection stage and the segmentation stage, aiming to accurately segment the bare head area in infrared images. By combining a public thermal imaging facial dataset with a self-collected infrared thermal imaging dataset, a "thermal imaging temperature measurement area segmentation dataset" is produced. Experimental results demonstrate that the method reaches an average detection precision of 88.6%, an average mask precision of 86.5%, and an average processing speed of 33.5 frames per second, outperforming most state-of-the-art instance segmentation methods on objective evaluation metrics.
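An illustrative spatial-attention block (CBAM-style) of the general kind an infrared spatial attention module could take; the paper's actual ISAM design is not specified in the abstract, so this is a generic sketch.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                    # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                      # reweight spatial locations
```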
2021, 43(12): 3513-3521.
doi: 10.11999/JEIT200836
Abstract:
Because of absorption and scattering, color degradation and detail blurring often occur in underwater images, which impairs underwater vision tasks. A multi-scale underwater image enhancement network based on an attention mechanism is designed in an end-to-end manner, trained on a dataset synthesized with an underwater imaging model to better approximate real underwater images. The network introduces pixel and channel attention mechanisms, and a new multi-scale feature extraction module extracts features of different levels at the beginning of the network; the output is obtained via a convolution layer and an attention module with skip connections. Experimental results on multiple datasets show that the proposed method is effective on both synthetic and real underwater images and recovers the color and texture details of images better than existing methods.
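A sketch of the simplified underwater imaging model commonly used to synthesize such training data: I(x) = J(x) t(x) + B (1 - t(x)), where J is the clean image, t the channel-wise transmission, and B the ambient background light. The per-channel attenuation coefficients and background light values below are illustrative assumptions (red attenuates fastest underwater).

```python
import numpy as np

def synthesize_underwater(J, depth, beta=(0.4, 0.1, 0.05), B=(0.1, 0.35, 0.45)):
    """J: clean RGB image in [0,1], float array (H, W, 3); depth: (H, W) map."""
    out = np.empty_like(J)
    for c in range(3):
        t = np.exp(-beta[c] * depth)          # channel-wise transmission
        out[..., c] = J[..., c] * t + B[c] * (1.0 - t)
    return out
```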
2021, 43(12): 3522-3529.
doi: 10.11999/JEIT200735
Abstract:
The knowledge graph, as auxiliary information, can effectively alleviate the cold-start problem of traditional recommendation models, but when extracting structured information, existing models ignore the neighbor relationships between entities in the graph. To solve this problem, a recommendation model based on Knowledge Graph Convolutional Network with Public-Neighbor sorting sampling (KGCN-PN) is proposed. The model first sorts and samples each entity's neighborhood in the knowledge graph according to the number of public (common) neighbors; second, it uses graph convolutional networks to aggregate, layer by layer along the graph's relation paths, each entity's own information with the information of its receptive field; finally, the fused user and entity feature vectors are fed into a prediction function to predict the probability that the user interacts with the entity (item). Experimental results show that the model outperforms other baseline models in data-sparse scenarios.
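A hedged sketch of the public-neighbor sorting-sampling idea: each entity's neighbors are ranked by how many neighbors they share with it, and the top K are kept as a fixed-size receptive field for the graph convolution. K, the adjacency representation, and tie-breaking are assumptions.

```python
def common_neighbor_count(adj, u, v):
    return len(adj[u] & adj[v])

def sample_neighborhood(adj, entity, k=8):
    """adj: dict mapping each entity to a set of neighbor entities."""
    ranked = sorted(adj[entity],
                    key=lambda v: common_neighbor_count(adj, entity, v),
                    reverse=True)
    return ranked[:k]

# Example: adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
# sample_neighborhood(adj, 2, k=2) ranks 0 and 1 above 3.
```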
2021, 43(12): 3530-3537.
doi: 10.11999/JEIT200988
Abstract:
Hashing is widely used for image retrieval tasks. In view of the limitations of existing deep supervised hashing methods, a new Asymmetric Supervised Deep Discrete Hashing (ASDDH) method is proposed to preserve the semantic structure between different categories when generating binary codes. First, a deep network is used to extract image features and to reveal the similarity between each pair of images according to their semantic labels. To enhance the similarity between binary codes and ensure that multi-label semantics are retained, an asymmetric hashing scheme is designed that uses a multi-label binary code mapping to endow the hash codes with multi-label semantic information. In addition, a bit-balance constraint is introduced to balance each bit, encouraging the numbers of -1s and +1s in each bit position to be approximately equal across all training samples. Experimental results on two benchmark datasets show that the proposed method is superior to other methods for image retrieval.
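A sketch of one common form of bit-balance penalty of the kind the abstract mentions: it pushes each bit position toward half +1 and half -1 over the training set, implemented as the squared column means of the code matrix. The exact form used in the paper is not given in the abstract.

```python
import numpy as np

def bit_balance_loss(B):
    """B: (n_samples, n_bits) matrix with entries in {-1, +1}."""
    return float(np.sum(np.mean(B, axis=0) ** 2))  # 0 when perfectly balanced
```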
2021, 43(12): 3538-3545.
doi: 10.11999/JEIT200431
Abstract:
To tackle the problem that existing channel attention mechanisms use global average pooling to generate channel-wise statistics while ignoring local spatial information, two improved channel attention modules are proposed for human action recognition: the Spatial-Temporal (ST) interaction block based on matrix operations and the Depth-wise-Separable (DS) block. The ST block extracts the spatio-temporally weighted information sequence of each channel through convolution and dimension conversion operations, and obtains each channel's attention weight through convolution. The DS block first uses depth-wise separable convolution to obtain the local spatial information of each channel, then compresses the features so that they have a global receptive field; the attention weight of each channel is obtained via a convolution operation, completing feature re-calibration with the channel attention mechanism. The proposed attention blocks are inserted into the base network and evaluated on the popular UCF101 and HMDB51 datasets, and the results show improved accuracy.
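An illustrative depth-wise-separable channel-attention block following the abstract's description: a depth-wise convolution gathers local spatial information per channel, global pooling provides a global receptive field, and a final 1x1 convolution yields the per-channel weights. Kernel size and layer arrangement are assumptions.

```python
import torch
import torch.nn as nn

class DSChannelAttention(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pool = nn.AdaptiveAvgPool2d(1)        # global receptive field
        self.fc = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        w = self.dw(x)                             # local spatial info
        w = torch.sigmoid(self.fc(self.pool(w)))   # per-channel weights
        return x * w                               # feature re-calibration
```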
2021, 43(12): 3546-3553.
doi: 10.11999/JEIT200368
Abstract:
With the rapid development of Location-Based Social Network (LBSN) technology, Point-Of-Interest (POI) recommendation for providing personalized services to mobile users has become a focus of attention. POI recommendation faces the challenges of sparse data, multiple influencing factors, and complex user preferences; traditional POI recommendation usually considers only the influence of check-in frequency, check-in time, and place on users, while ignoring the correlations among users' behaviors before and after each check-in in the sequence. To solve these problems, this paper captures the temporal and spatial influence of the check-in data through sequence representations and establishes a Spatio-Temporal Context POI Recommendation (STCPR) model that provides more accurate and personalized POI recommendations. The model is built on a sequence-to-sequence framework: user information, POI information, categories, and spatio-temporal context information are vectorized and embedded into a GRU network, while a temporal attention mechanism and a global-local spatial attention mechanism are used to capture users' preferences and tendencies before the Top-N POIs are recommended to users. Experiments on two real datasets verify that the proposed method is superior to several existing methods in terms of Recall and Normalized Discounted Cumulative Gain (NDCG).
2021, 43(12): 3554-3562.
doi: 10.11999/JEIT200684
Abstract:
To improve the spectrum efficiency of Unmanned Aerial Vehicle (UAV) relaying communication systems, a UAV alternate relay scheme is proposed, in which two UAV relays alternately forward information from the source to the destination. To coordinate the interference between the two relaying data links, the UAV trajectories and transmit powers are jointly optimized to maximize the end-to-end throughput of the UAV alternate relay system. The optimization problem is subject to the altitude, maneuverability, and collision avoidance constraints of the UAVs and to the average and peak transmit power constraints of the source and the UAV relays; it is non-convex, and the optimal solution is difficult to obtain. Nevertheless, an efficient iterative algorithm based on alternating maximization and successive convex optimization is proposed to obtain a suboptimal solution. Simulation results verify the effectiveness of the proposed algorithm.
2021, 43(12): 3563-3570.
doi: 10.11999/JEIT200898
Abstract:
Combining Mobile Edge Computing (MEC) and Non-Orthogonal Multiple Access (NOMA) technologies while considering fairness, this paper studies the fair energy efficiency of an MEC system with NOMA-based partial offloading. First, the ratio of user rate to power consumption based on a fairness function is defined as the fair energy efficiency function. Then, two energy efficiency scheduling algorithms under the fair scheduling criteria are proposed: the DK-SCA algorithm under the maximum-minimum rate criterion and the DK-SCALE algorithm under the maximum system energy efficiency criterion, which yield the optimal CPU cycle frequency and the optimal transmit power under the two criteria, respectively. Finally, simulations show that, compared with the benchmark schemes, the proposed NOMA-based partial offloading scheme effectively combines local computing with NOMA-based edge offloading and achieves the best fair energy efficiency performance.
2021, 43(12): 3571-3579.
doi: 10.11999/JEIT200776
Abstract:
Multiple-Input Multiple-Output (MIMO)-based Heterogeneous Networks (HetNets) can improve system capacity and support more connections, and have therefore attracted considerable attention from academia and industry, becoming one of the key technologies of next-generation communication systems. However, hardware impairments such as amplifier nonlinearities, phase noise, and I/Q imbalance become bottlenecks for further improving beamforming performance in MIMO-based HetNets. To solve this problem, this paper studies beamforming design in MIMO-based HetNets with hardware impairments taken into account from the outset. First, the resource allocation problem is formulated as the minimization of the total transmit power of the system with hardware impairments, under constraints on the maximum transmit power of each base station and the minimum signal-to-interference-plus-noise ratio of each user. Then, the original non-convex problem is transformed into an equivalent convex optimization problem using equivalent transformation and semidefinite programming relaxation. Simulation results, in comparison with a traditional beamforming algorithm that assumes perfect hardware, verify that the proposed algorithm has a low outage probability and can overcome the impact of hardware impairments.
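A hedged sketch of the semidefinite-relaxation step the abstract refers to, in its textbook form: each beamformer w_k is lifted to W_k = w_k w_k^H, the rank-1 constraint is dropped, and total power is minimized under per-user SINR constraints. The per-base-station power limits and the impairment terms of the paper's formulation are omitted, and the data shapes are placeholders.

```python
import cvxpy as cp

def sdr_beamforming(H, sinr_min, noise=1.0):
    """H: list of K channel covariance matrices h_k h_k^H (n x n, complex)."""
    K, n = len(H), H[0].shape[0]
    W = [cp.Variable((n, n), hermitian=True) for _ in range(K)]
    cons = [w >> 0 for w in W]                 # PSD in place of rank-1
    for k in range(K):
        interf = sum(cp.real(cp.trace(H[k] @ W[j]))
                     for j in range(K) if j != k)
        cons.append(cp.real(cp.trace(H[k] @ W[k]))
                    >= sinr_min * (interf + noise))
    prob = cp.Problem(
        cp.Minimize(sum(cp.real(cp.trace(w)) for w in W)), cons)
    prob.solve()
    return [w.value for w in W]   # rank-1 recovery (if needed) not shown
```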
2021, 43(12): 3580-3587.
doi: 10.11999/JEIT200872
Abstract:
To improve the security of a Non-Orthogonal Multiple Access (NOMA)-based Mobile Edge Computing (MEC) system when computation tasks are partially offloaded, the physical layer security of the MEC network in the presence of eavesdroppers is considered, and the secrecy outage probability is used to measure the security performance of computation offloading. Under the transmit power constraint, the local task computation constraint, and the secrecy outage probability constraint, an energy consumption weight factor is introduced to balance transmission energy against computation energy, yielding the weighted total energy consumption of the system. While satisfying the two users' priorities, a joint task offloading and resource allocation mechanism is proposed to reduce the system overhead: an iterative optimization algorithm based on bisection search attains the optimal solution of the transformed problem, giving the optimal computation task offloading and power allocation. Simulation results show that the proposed algorithm effectively reduces the energy consumption of the system.
2021, 43(12): 3588-3596.
doi: 10.11999/JEIT200029
Abstract:
To address the performance anomalies of multiple service function chains caused by failures of underlying physical nodes in the 5G end-to-end network slicing scenario, a service function chain fault diagnosis algorithm based on a Deep Dynamic Bayesian Network (DDBN) is proposed. The algorithm first builds a dependency graph model of faults and symptoms based on the multi-layer propagation relationship of faults in a network virtualization environment, and collects symptoms by monitoring the performance data of the virtual network functions on the physical nodes. Then, considering the diversity of symptom observations in networks based on the Software Defined Network (SDN) and Network Function Virtualization (NFV) architecture, as well as the spatial correlation between physical nodes and virtual network functions, a deep belief network is introduced to extract features of the observation data, and an adaptive learning rate algorithm with momentum is used to fine-tune the model and accelerate convergence. Finally, a dynamic Bayesian network is introduced to diagnose the root cause of faults in real time by exploiting the temporal correlation of faults. Simulation results show that the algorithm effectively diagnoses fault root causes with good diagnostic accuracy.
2021, 43(12): 3597-3604.
doi: 10.11999/JEIT200766
Abstract:
A three-level name lookup method based on a deep Bloom filter is proposed to improve the efficiency of content name lookup during routing in Named Data Networking (NDN). First, a Long Short-Term Memory (LSTM) network is combined with a standard Bloom filter to optimize the name lookup process. Second, a three-level structure is adopted to optimize exact content name lookup in the Content Store (CS) and the Pending Interest Table (PIT), improving lookup accuracy and reducing memory consumption. Finally, the error rate of the deep-Bloom-filter-based name lookup method is analyzed theoretically, and experimental results show that the proposed three-level lookup structure compresses memory and reduces errors effectively.
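For reference, a minimal standard Bloom filter, the building block the three-level structure extends; the learned LSTM stage and the CS/PIT integration are not sketched here.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, item):
        # False positives possible, false negatives impossible
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._hashes(item))
```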
2021, 43(12): 3605-3611.
doi: 10.11999/JEIT200525
Abstract:
To deal with the random, dynamic communication requests of ground users in a communication system with a UAV (Unmanned Aerial Vehicle)-mounted base station, which cannot be handled by an offline trajectory design scheme, an online trajectory optimization algorithm is proposed for the UAV-mounted base station. In the considered system, a single UAV serves as an aerial base station providing wireless communication service to two ground users, and the problem of minimizing the average communication delay of the ground users by optimizing the UAV's trajectory is considered. First, it is shown that the problem can be cast as a Markov Decision Process (MDP), and the delay of a single communication is introduced into the action-value function. Then the Monte Carlo and Q-Learning algorithms from reinforcement learning are respectively adopted to realize online trajectory optimization. Simulation results show that the proposed algorithm outperforms the "fixed position" and "greedy algorithm" schemes.
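A generic tabular Q-learning update of the kind the abstract applies to online trajectory optimization; the state and action encodings and the reward (for example, negative communication delay) are schematic assumptions.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # maps (state, action) -> action value

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(s, actions, eps=0.1):
    if random.random() < eps:
        return random.choice(actions)      # explore
    return max(actions, key=lambda a: Q[(s, a)])  # exploit
```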
2021, 43(12): 3612-3620.
doi: 10.11999/JEIT200734
Abstract:
To improve the bit error rate performance of LoRa (Long Range) in fading channels, a lightweight Enhanced LoRa (EnLoRa) physical layer is designed. First, Cyclic Code Shift Keying (CCSK) is chosen as the error-correction code, and diagonal matrix interleaving and Chirp Spread Spectrum (CSS) modulation are concatenated to construct a new Bit-Interleaved Coded Modulation (BICM) structure. Then, based on this structure, a soft CSS demodulation and soft decoding algorithm using bit log-likelihood ratio information is proposed. Further, the decoded extrinsic information is fed back to the demodulation module as a priori information for iterative decoding. The simulation results show that, compared with a LoRa system of the same code rate, the coding gain of the EnLoRa system is increased by 0.8 dB over the Gaussian channel and by 7 dB over the Rayleigh channel. On this basis, an additional gain of up to 2.5 dB can be obtained through multiple iterations. The time complexity increases by only 10%, and the increase in space complexity is negligible. This method is expected to further reduce the power consumption of LoRa nodes and has great application value in complex multipath scenarios such as indoor, urban and industrial environments.
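The dechirp-and-FFT principle underlying CSS can be seen in a few lines: a symbol is a cyclically shifted chirp, and hard demodulation multiplies by the conjugate base chirp and locates the resulting tone. This is the textbook hard-decision demodulator that the paper's soft LLR demodulator extends; the spreading factor and noise level are assumptions.
```python
import numpy as np

SF = 7
N = 2 ** SF                                     # chips per symbol
n = np.arange(N)
upchirp = np.exp(1j * np.pi * n * n / N)        # baseband up-chirp

def css_mod(k):
    # Chirp whose starting frequency encodes the symbol k in 0..N-1.
    return np.exp(2j * np.pi * k * n / N) * upchirp

def css_demod_hard(y):
    # Dechirp, then locate the resulting tone with an FFT peak search.
    return int(np.abs(np.fft.fft(y * upchirp.conj())).argmax())

rng = np.random.default_rng(1)
k = 42
y = css_mod(k) + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(css_demod_hard(y))   # -> 42 with high probability at this SNR
```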
2021, 43(12): 3621-3628.
doi: 10.11999/JEIT200937
Abstract:
To relieve the pressure on terminal devices handling big-data, low-latency services, a resource allocation algorithm is studied for mobile edge computing networks with full-duplex relays. Firstly, under the constraints of the maximum task latency, the maximum computing capability of users, and the maximum transmit power of users and relays, the total energy consumption is minimized by jointly optimizing the relay selection and subcarrier allocation factor, the user task offloading coefficient, and the transmission power of users and relays. Secondly, based on the alternating iteration method and the variable-substitution approach, the originally non-convex problem is decomposed into two convex subproblems, which are solved by the interior-point method and Lagrange dual theory, respectively. Simulation results show that the proposed algorithm achieves low energy consumption.
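To make the convex-subproblem idea concrete, the sketch below minimizes a toy offloading energy model over the task-offloading fraction with a ternary search; the energy model and all constants are assumptions, not the paper's formulation.
```python
# Toy model: a fraction b of L bits is offloaded; the rest is computed
# locally at the slowest CPU speed that still meets the latency budget T.
L, C = 1e6, 1000           # task bits, CPU cycles per bit (assumed)
kappa = 1e-27              # chip energy constant (assumed)
P_tx, rate, T = 0.2, 2e6, 0.5   # transmit power (W), uplink rate, latency (s)

def energy(b):
    cycles = C * L * (1 - b)
    f_req = cycles / T                      # minimum CPU frequency meeting T
    local = kappa * f_req ** 2 * cycles     # dynamic energy: kappa*f^2*cycles
    tx = P_tx * (b * L / rate)              # energy = power * airtime
    return local + tx

lo, hi = 0.0, 1.0
for _ in range(100):                        # ternary search (convex in b)
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if energy(m1) < energy(m2):
        hi = m2
    else:
        lo = m1
b_opt = (lo + hi) / 2
print(f"optimal offload fraction = {b_opt:.3f}, energy = {energy(b_opt):.4f} J")
```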
2021, 43(12): 3629-3638.
doi: 10.11999/JEIT200628
Abstract:
The essence of network security is confrontation. To address the fact that existing research seldom analyzes the relationship between network attack-defense behavior and situation evolution from a game-theoretic perspective, a Network Attack and Defense Game architecture Model (NADGM) is proposed. The theory of infectious disease dynamics is used to define the network attack-defense situation in terms of the density of network nodes in different security states, and the security state transition paths of network nodes are analyzed. Taking a network ransomware attack-defense game as an example, the NetLogo multi-agent simulation tool is used to carry out comparative experiments on attack-defense situation evolution trends in different scenarios, and conclusions on enhancing network defense effectiveness are drawn. The experimental results verify the effectiveness and feasibility of the model.
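The infectious-disease analogy can be made concrete with a classical SIR-style compartment model integrated by forward Euler; the compartments and rates below are illustrative assumptions, not the paper's NADGM states.
```python
# Node densities: S(usceptible), I(nfected/compromised), R(ecovered/patched).
beta, gamma = 0.35, 0.1        # infection and recovery rates (assumed)
S, I, R = 0.99, 0.01, 0.0
dt, steps = 0.1, 1000
history = []
for _ in range(steps):
    dS = -beta * S * I         # susceptible nodes becoming compromised
    dI = beta * S * I - gamma * I
    dR = gamma * I             # compromised nodes being patched
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    history.append(I)
print(f"peak compromised density: {max(history):.3f}")
```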
2021, 43(12): 3639-3646.
doi: 10.11999/JEIT200485
Abstract:
Direction Of Arrival (DOA) estimation for monostatic Multiple-Input Multiple-Output (MIMO) radar has been a hot topic in recent years. Conventional Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithms incur high computational cost because of the high-dimensional MIMO radar data, and their performance degrades seriously when the Signal-to-Noise Ratio (SNR) is low and the number of samples is small. To overcome these disadvantages, a reduced-dimensional beamspace real-valued ESPRIT algorithm for monostatic MIMO radar is proposed. To eliminate redundancy, the high-dimensional MIMO radar data are transformed into low-dimensional data through a transformation matrix. To further reduce the computational complexity, the low-dimensional data are transformed into beamspace. Then the real-valued rotational invariance equation is constructed to estimate the targets' DOA. Simulation results show that the proposed algorithm achieves better angle estimation performance with less computational burden than traditional ESPRIT algorithms.
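The baseline the paper improves on is standard complex-valued ESPRIT; a minimal sketch for a half-wavelength uniform linear array is given below (the paper's reduced-dimension, beamspace and real-valued transforms are layered on top of this and are not reproduced). Array sizes and source angles are assumptions.
```python
import numpy as np

def esprit_doa(X, n_src, d=0.5):
    """Standard ESPRIT; X is (n_sensors, n_snapshots), d in wavelengths."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    Es = vecs[:, -n_src:]                        # signal subspace
    # Rotational invariance: Es[1:] ~ Es[:-1] @ Phi
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    w = np.angle(np.linalg.eigvals(Phi))         # rotation phases
    return np.degrees(np.arcsin(w / (2 * np.pi * d)))

rng = np.random.default_rng(0)
M, T, angles = 8, 200, np.array([-20.0, 15.0])
A = np.exp(2j * np.pi * 0.5 *
           np.outer(np.arange(M), np.sin(np.radians(angles))))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = A @ S + 0.1 * (rng.standard_normal((M, T)) +
                   1j * rng.standard_normal((M, T)))
print(np.sort(esprit_doa(X, 2)))   # approximately [-20, 15]
```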
2021, 43(12): 3647-3655.
doi: 10.11999/JEIT200773
Abstract:
When an airborne weather radar detects low-altitude wind shear in a complex terrain environment, the non-uniform characteristics of ground clutter make it difficult to obtain accurate clutter statistics, which degrades clutter suppression and makes the wind speed estimation of wind shear inaccurate. A Colored-Loading Knowledge-Aided Space-Time Adaptive Processing (CL-KA-STAP) wind speed estimation method for low-altitude wind shear is proposed. The method first constructs a reduced-dimension joint space-time transformation matrix and applies dimensionality reduction to the echo signal of the range cell under test. It then integrates the prior knowledge obtained from the Digital Elevation Model (DEM) and the National Land Cover Database (NLCD) into the Combined space-time Main Channel Adaptive Processor (CMCAP), where a colored-loading coefficient optimization function is constructed and solved. Finally, the resulting filter realizes adaptive clutter filtering and accurate wind speed estimation. Simulation results prove the effectiveness of the proposed method.
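The colored-loading idea in its generic form adds a scaled prior covariance (plus a small diagonal) to the rank-poor sample covariance before computing adaptive weights; the sketch below shows only this step, with an assumed exponentially shaped prior standing in for the DEM/NLCD knowledge and assumed loading coefficients rather than the paper's optimized CMCAP values.
```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 16, 24                                # channels, training snapshots
clutter = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
R_hat = clutter @ clutter.conj().T / K       # rank-poor sample covariance

# Prior clutter covariance from knowledge sources (DEM/NLCD in the paper);
# here just an assumed exponentially shaped matrix.
R_prior = np.array([[0.9 ** abs(i - j) for j in range(N)] for i in range(N)])

alpha, delta = 0.5, 1e-2                     # colored / diagonal loading levels
R_loaded = R_hat + alpha * R_prior + delta * np.eye(N)

s = np.ones(N, dtype=complex) / np.sqrt(N)   # target steering vector (assumed)
w = np.linalg.solve(R_loaded, s)
w /= s.conj() @ w                            # MVDR-style normalisation
print(np.abs(w[:4]))
```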
2021, 43(12): 3656-3661.
doi: 10.11999/JEIT200895
Abstract:
In view of the phenomenon that a navigation receiver loses tracking of satellites due to ElectroMagnetic Interference (EMI) in the complex battlefield electromagnetic environment, an effect prediction model for the receiver's tracking loop under combined in-band and out-of-band dual-frequency interference is studied. Through analysis of the blocking mechanism of the receiver's Radio Frequency (RF) front-end, the gain formula of the RF front-end signal is derived by vector analysis and, combined with the receiver's subsequent processing, the effect prediction model under out-of-band and in-band dual-frequency interference is obtained. Then, using the Carrier-to-Noise density ratio (C/N0) threshold as the criterion for loss of lock, a dual-frequency interference effect experiment is carried out. The experimental results show that the model can predict the satellite tracking state inside the receiver with a prediction error of less than ±1 dB, and that it applies equally to narrowband and wideband interference signals.
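The loss-of-lock criterion can be checked against the textbook effective carrier-to-noise-density rule of thumb for a spread-spectrum receiver under jamming, (C/N0)_eff = 1 / (1/(C/N0) + (J/S)/(Q·Rc)); this is a generic formula, not the paper's model, and the quality factor Q and lock threshold below are assumptions.
```python
import math

def effective_cn0(cn0_dbhz, js_db, Rc=1.023e6, Q=1.5):
    """Textbook effective C/N0 under jamming (rule-of-thumb formula)."""
    cn0_lin = 10 ** (cn0_dbhz / 10)       # Hz
    js_lin = 10 ** (js_db / 10)
    eff = 1.0 / (1.0 / cn0_lin + js_lin / (Q * Rc))
    return 10 * math.log10(eff)

LOCK_THRESHOLD_DBHZ = 28.0                # assumed tracking-loop threshold
for js in (30, 40, 50, 60):
    eff = effective_cn0(45.0, js)
    state = "locked" if eff > LOCK_THRESHOLD_DBHZ else "loss of lock"
    print(f"J/S = {js} dB -> C/N0_eff = {eff:5.1f} dB-Hz ({state})")
```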
2021, 43(12): 3662-3670.
doi: 10.11999/JEIT200755
Abstract:
Soil materials can exhibit strongly dispersive properties in the operating frequency range of a physical system, and the uncertain parameters of the dispersive materials introduce uncertainty into the simulation results of propagating waves. It is essential to quantify this uncertainty when the acceptability of the calculation results is assessed. To avoid performing thousands of full-wave simulations, an efficient surrogate model based on an Artificial Neural Network (ANN) is proposed to imitate the Ground Penetrating Radar (GPR) calculation of interest. The process of constructing the surrogate model and the strategy to overcome overfitting are presented in detail. As a surrogate for full-wave GPR simulation, the model predicts the simulation result and thereby yields its statistical information, such as the mean value and standard deviation. Under identical conditions (the same numerical model, the same number of uncertain input parameters, and 10% parameter variation), the statistical properties of the predictions agree well with the results of a thousand full-wave simulations, while the amount of computation is reduced significantly and the calculation time efficiency is improved by 79.82%.
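Once a surrogate is trained, the uncertainty statistics come from cheap Monte Carlo sampling through it; the sketch below shows that loop with a stand-in surrogate function (hypothetical, not the paper's trained ANN), and nominal soil parameters that are assumptions.
```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(params):
    # Stand-in for the trained ANN: parameters -> predicted trace feature.
    # A smooth synthetic response; the real surrogate is an MLP.
    eps_r, sigma = params[..., 0], params[..., 1]
    return np.sqrt(eps_r) * np.exp(-3.0 * sigma)   # assumed response

# Nominal soil parameters with 10% uniform variation (as in the abstract).
nominal = np.array([6.0, 0.01])                    # [rel. permittivity, S/m]
n_mc = 100_000
samples = nominal * (1 + 0.1 * (2 * rng.random((n_mc, 2)) - 1))
pred = surrogate(samples)
print(f"mean = {pred.mean():.4f}, std = {pred.std():.4f}")
```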
2021, 43(12): 3671-3679.
doi: 10.11999/JEIT200594
Abstract:
The Robust Principal Component Analysis (RPCA) based speech enhancement algorithm plays an important role in single-channel speech processing in white Gaussian noise environments, but it handles low-rank speech components poorly and cannot suppress colored noise well. To address this problem, an improved speech enhancement algorithm based on Whitening Spectrum Rearrangement RPCA (WSRRPCA) is proposed. By optimizing the noise whitening model, colored-noise speech enhancement is converted into white-noise speech signal processing, and spectrum rearrangement is used to improve the RPCA speech enhancement algorithm, yielding an overall improvement in speech processing performance in colored noise environments. Simulation experiments show that the algorithm achieves better speech enhancement in colored noise environments, with stronger noise suppression and speech quality improvement than other algorithms.
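The whitening step can be illustrated per frequency bin: estimate the colored-noise power spectral density from noise-only frames and divide each spectral frame by its square root, which flattens the noise before any RPCA-style processing. The synthetic colored noise and frame sizes below are assumptions.
```python
import numpy as np

rng = np.random.default_rng(4)
frame, n_frames = 256, 200

def coloured(n):
    # Assumed colored-noise model: white noise through a moving average.
    w = rng.standard_normal(n + 8)
    return np.convolve(w, np.ones(8) / 8, mode="valid")[:n]

spectra = np.stack([np.fft.rfft(coloured(frame)) for _ in range(n_frames)])

# Noise PSD estimate from (assumed) noise-only frames, then whitening.
psd = np.mean(np.abs(spectra[:20]) ** 2, axis=0)
whitened = spectra / np.sqrt(psd + 1e-12)

before = np.mean(np.abs(spectra) ** 2, axis=0)
after = np.mean(np.abs(whitened) ** 2, axis=0)
print(f"bin-power spread: before {before.std() / before.mean():.2f}, "
      f"after {after.std() / after.mean():.2f}")   # after is much flatter
```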
2021, 43(12): 3680-3686.
doi: 10.11999/JEIT200122
Abstract:
Under certain environmental conditions, when the measurement equation of a system has not been verified or calibrated, using it will often produce unknown systematic errors, resulting in large filtering errors. Similarly, when the noise variance of the system is uncertain, the performance of the filter deteriorates and the filter may even diverge. Introducing an incremental equation can effectively eliminate the unknown measurement error, so that the state estimation of a system under poor observation conditions with unknown measurement error is transformed into the state estimation of the incremental system. In this paper, a robust incremental Kalman filter based on the incremental equation is proposed for linear discrete systems with unknown measurement errors and unknown noise variances. Then, based on the linear minimum variance optimal fusion criterion, a weighted fusion robust incremental Kalman filtering algorithm is proposed. Simulation results show the effectiveness and feasibility of the proposed algorithms.
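A minimal sketch of the incremental idea for a scalar system: differencing consecutive measurements cancels a constant unknown bias, and a Kalman filter runs on the augmented state [x_k, x_{k-1}]. For simplicity the time correlation induced in the differenced noise is ignored, and all constants are assumptions rather than the paper's robust design.
```python
import numpy as np

rng = np.random.default_rng(5)
F, H, q, r, bias = 0.98, 1.0, 0.05, 0.2, 3.0   # bias is unknown to the filter

# Augmented model for X = [x_k, x_{k-1}]; incremental measurement
# y_k = z_k - z_{k-1} = H x_k - H x_{k-1} (the bias cancels).
Fa = np.array([[F, 0.0], [1.0, 0.0]])
Ha = np.array([[H, -H]])
Qa = np.diag([q, 0.0])
Ra = np.array([[2 * r]])                        # var(v_k - v_{k-1}) = 2r

x_true, z_prev = 0.0, None
X, P = np.zeros(2), np.eye(2)
for k in range(300):
    x_true = F * x_true + np.sqrt(q) * rng.standard_normal()
    z = H * x_true + bias + np.sqrt(r) * rng.standard_normal()
    if z_prev is not None:
        X, P = Fa @ X, Fa @ P @ Fa.T + Qa                 # predict
        y = np.array([z - z_prev])                        # incremental measurement
        S = Ha @ P @ Ha.T + Ra
        K = P @ Ha.T @ np.linalg.inv(S)
        X = X + K @ (y - Ha @ X)                          # update
        P = (np.eye(2) - K @ Ha) @ P
    z_prev = z
print(f"final estimation error (bias-immune): {abs(X[0] - x_true):.3f}")
```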
2021, 43(12): 3687-3694.
doi: 10.11999/JEIT210027
Abstract:
The underdetermined Direction Of Arrival (DOA) estimation method based on the coprime array degrades in the presence of non-uniform noise. To address this problem, a robust DOA estimation method based on covariance matrix reconstruction and matrix completion is proposed. Firstly, the covariance matrix of the received data is decomposed to obtain a diagonal matrix containing the non-uniform noise terms. Then, the minimum diagonal element is selected to replace the remaining diagonal elements, yielding the reconstructed data covariance matrix. Finally, based on matrix completion theory, the reconstructed covariance matrix is extended and filled, and a subspace method is used for DOA estimation. Theoretical analysis and simulation results show that, compared with existing methods, the proposed method effectively suppresses the influence of non-uniform noise and achieves better DOA estimation performance.
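The reconstruction step alone can be sketched directly: split the sample covariance into a low-rank signal part plus a noise diagonal, then replace that diagonal by its minimum entry. Array sizes and the source setup are assumptions (a plain ULA rather than a coprime array), and the matrix-completion stage is omitted.
```python
import numpy as np

rng = np.random.default_rng(6)
M, T, K = 10, 500, 2                          # sensors, snapshots, sources
A = np.exp(2j * np.pi * 0.5 *
           np.outer(np.arange(M), np.sin(np.radians([-10.0, 25.0]))))
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
noise_pow = rng.uniform(0.1, 2.0, M)          # non-uniform noise levels
Nmat = np.sqrt(noise_pow)[:, None] * (rng.standard_normal((M, T)) +
                                      1j * rng.standard_normal((M, T)))
Y = A @ S + Nmat
R = Y @ Y.conj().T / T

vals, vecs = np.linalg.eigh(R)
R_sig = (vecs[:, -K:] * vals[-K:]) @ vecs[:, -K:].conj().T   # signal part
noise_diag = np.real(np.diag(R - R_sig))                     # non-uniform terms
R_new = R_sig + noise_diag.min() * np.eye(M)                 # uniformised noise
print(np.round(noise_diag, 2))
```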
2021, 43(12): 3695-3702.
doi: 10.11999/JEIT200793
Abstract:
In this paper, a beam pattern optimization method based on a Radial Basis Function Neural Network (RBFNN) is proposed for controlling the sidelobe level of arrays with arbitrary geometry. Because the relationship between element positions and the array weight vector in the Olen beamforming method is nonlinear, the proposed method exploits the nonlinear input-output mapping of the RBFNN. Many perturbed positions centered on the true element positions are generated; whenever the beam pattern obtained by the Olen method meets the design requirements, the corresponding positions and weight vectors are recorded as training inputs and outputs. The beam patterns of a uniform linear array, a uniform arc array and a random circular array are then designed using the trained networks. The results show that the proposed method is effective.
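A minimal Gaussian RBF network fitted by least squares can stand in for the position-to-weight mapping; the training data below is synthetic (the true targets would come from Olen-method runs), and the architecture choices are assumptions.
```python
import numpy as np

rng = np.random.default_rng(7)

def rbf_design(X, centers, width):
    # Gaussian kernel matrix between inputs and centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Synthetic training set: inputs = perturbed element positions (8 elements),
# targets = a smooth function standing in for the Olen weight vectors.
n_train, n_el = 400, 8
X = np.sort(rng.normal(np.arange(n_el) * 0.5, 0.02, (n_train, n_el)), axis=1)
Y = np.cos(X).sum(axis=1, keepdims=True) * X      # assumed target mapping

centers = X[rng.choice(n_train, 40, replace=False)]
Phi = rbf_design(X, centers, width=0.1)
W = np.linalg.lstsq(Phi, Y, rcond=None)[0]        # output-layer weights

pred = rbf_design(X[:5], centers, width=0.1) @ W
print(np.abs(pred - Y[:5]).max())                 # fit error on seen samples
```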
2021, 43(12): 3703-3709.
doi: 10.11999/JEIT210406
Abstract:
To adapt to various array layouts, a direction-finding method for an arbitrary planar array interferometer based on mixed baselines is proposed. Firstly, based on an analysis of the ambiguity-free mathematical model, the direction-finding method via mixed baselines with an arbitrary planar array interferometer is derived. Secondly, for the phase ambiguity problem, a clustering method based on an improved, normalized direction function is proposed, and a selection method for the baseline pairs is given. Finally, numerical simulations verify the effectiveness of the proposed method on a random arbitrary array, a uniform circular array and a semicircular array. The method does not restrict the lengths or slopes of the two baselines in the selected baseline pair. The simulation results show that, thanks to the flexible selection of baseline pairs and the introduction of the improved normalized direction function, the ambiguity resolution performance of the mixed-baseline algorithm is better than that of the equal-length-baseline and stereo-baseline algorithms.
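The ambiguity problem can be seen with a toy 1-D two-baseline interferometer: each wrapped phase admits several candidate angles, and the pair is resolved by picking the candidates that agree. Baseline lengths and noise are assumptions; the paper's normalized direction-function clustering generalizes this idea to planar arrays.
```python
import numpy as np

lam = 1.0
d1, d2 = 1.7 * lam, 2.3 * lam          # two long (hence ambiguous) baselines
theta_true = np.radians(27.0)
np.random.seed(8)

def wrapped_phase(d, theta, noise=0.05):
    return np.angle(np.exp(1j * (2 * np.pi * d / lam * np.sin(theta)
                                 + noise * np.random.randn())))

def candidates(phi, d):
    # sin(theta) = (phi + 2*pi*k) * lam / (2*pi*d) for feasible integers k.
    ks = np.arange(-int(d / lam) - 1, int(d / lam) + 2)
    s = (phi + 2 * np.pi * ks) * lam / (2 * np.pi * d)
    return s[np.abs(s) <= 1]

c1 = candidates(wrapped_phase(d1, theta_true), d1)
c2 = candidates(wrapped_phase(d2, theta_true), d2)
# Resolve: the physically consistent pair of candidates nearly coincides.
i, j = np.unravel_index(np.abs(c1[:, None] - c2[None, :]).argmin(),
                        (len(c1), len(c2)))
theta_est = np.degrees(np.arcsin((c1[i] + c2[j]) / 2))
print(f"estimated DOA = {theta_est:.1f} deg")      # close to 27 deg
```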
2021, 43(12): 3710-3717.
doi: 10.11999/JEIT200753
Abstract:
Seismic signals are of great significance in detecting geological lithology, reservoirs, fluids and sedimentary facies, as well as in identifying stratigraphic interfaces, reservoir analysis, and seismic data processing and interpretation. To address the low time-frequency resolution and poor energy concentration of traditional time-frequency analysis algorithms applied to seismic signals, a new 2nd-order Synchrosqueezing Wavelet Transform (SWT2) algorithm is proposed based on the Ricker wavelet model. The proposed second-order squeezing algorithm uses an improved mother wavelet matched to the seismic signal and corrects the reference frequency through spectral peak alignment, thereby improving the time-frequency energy concentration and resolution. Simulation results show that the proposed method greatly improves time-frequency concentration, accurately reflects the time delay and dominant frequency of signals, and describes the stratigraphic structure more accurately.
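The Ricker wavelet the method is built around has the closed form r(t) = (1 − 2π²f²t²)·exp(−π²f²t²); the sketch below generates one and convolves it with a sparse reflectivity series to form a synthetic trace. The frequency and spike positions are assumptions, and the SWT2 squeezing itself is not reproduced.
```python
import numpy as np

def ricker(f0, t):
    """Ricker wavelet with dominant frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.001                                   # 1 ms sampling
t = np.arange(-0.1, 0.1, dt)
w = ricker(30.0, t)                          # 30 Hz dominant frequency

reflectivity = np.zeros(1000)                # 1 s synthetic trace
reflectivity[[200, 420, 430, 700]] = [1.0, 0.8, -0.6, 0.5]
trace = np.convolve(reflectivity, w, mode="same")
print(f"trace peak at sample {np.abs(trace).argmax()}")  # near a reflector
```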
2021, 43(12): 3718-3726.
doi: 10.11999/JEIT200885
Abstract:
To improve the effective resolution of the Time-Interleaved Analog-to-Digital Converter (TIADC), the linear and nonlinear mismatch errors between its channels must be estimated and compensated. An adaptive blind correction algorithm is proposed for the nonlinear mismatch error of an M-channel TIADC with memory effect. The nonlinear error signal is reconstructed through a Sub-Channel Reconstruction (SCR) structure, and the nonlinear mismatch error coefficients are estimated with a Filtered-Downsampled Least Mean Square (FDLMS) algorithm. Simulation results show that the method effectively corrects nonlinear mismatch errors with memory effect while greatly reducing implementation difficulty and hardware resource consumption.
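The coefficient-estimation engine is LMS; the sketch below shows plain LMS identifying a short (assumed) mismatch channel from input/output samples. The paper's FDLMS adds filtering and down-sampling to operate blindly across sub-channels, which is not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(9)
h_true = np.array([1.0, 0.15, -0.05])        # assumed mismatch channel taps
n, mu = 20000, 0.01
x = rng.standard_normal(n)
d = np.convolve(x, h_true, mode="full")[:n]  # observed (desired) signal

w = np.zeros(3)                              # adaptive tap estimates
buf = np.zeros(3)                            # most recent input samples
for k in range(n):
    buf = np.roll(buf, 1)
    buf[0] = x[k]
    e = d[k] - w @ buf                       # a-priori error
    w += mu * e * buf                        # LMS update
print(np.round(w, 3))                        # converges to [1.0, 0.15, -0.05]
```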
Low Complexity and Reconfigurable LDPC Encoder for High-speed Satellite-to-ground Data Transmissions
2021, 43(12): 3727-3734.
doi: 10.11999/JEIT200118
Abstract:
A new low-complexity, reconfigurable Low-Density Parity-Check (LDPC) encoder design based on the Consultative Committee for Space Data Systems (CCSDS) standard is proposed to meet the high-throughput, low-latency and high-reliability requirements of high-speed satellite-to-ground data transmission systems in Low Earth Orbit (LEO). The design is made parallel-reconfigurable by inserting zeros into the information bits and splitting the cyclic matrices, and the structural characteristics of encoding at different degrees of parallelism are analyzed. Benefiting from the parallel reconfiguration, throughput is increased while flexibility is preserved. Furthermore, optimized shift-register adder accumulators reduce the hardware resources. The proposed encoder is implemented on a Xilinx FPGA. Experimental results show a maximum encoding speed of 1 Gbps at 125 MHz, a normalized throughput 17.1% higher than a comparable parallel encoder, and register and look-up table usage reduced by 13.7% and 14.8%, respectively, compared with the existing encoder.
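Quasi-cyclic codes of this kind let parity be formed by accumulating cyclic shifts of information blocks, which is exactly what shift-register adder accumulators exploit; the toy circulant below is an assumption, not a CCSDS matrix.
```python
import numpy as np

B = 8                                       # circulant block size (toy)
# Parity block = XOR of cyclic shifts of the info blocks (assumed shifts).
shift_offsets = [1, 3]

def qc_parity(info_blocks):
    """Accumulate cyclically shifted info blocks over GF(2) (SRAA idea)."""
    parity = np.zeros(B, dtype=np.uint8)
    for block, s in zip(info_blocks, shift_offsets):
        parity ^= np.roll(block, s)         # shift register + XOR accumulate
    return parity

rng = np.random.default_rng(10)
info = [rng.integers(0, 2, B, dtype=np.uint8) for _ in shift_offsets]
codeword = np.concatenate(info + [qc_parity(info)])
print(codeword)
```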
2021, 43(12): 3735-3742.
doi: 10.11999/JEIT200599
Abstract:
The power output curve of a photovoltaic array exhibits multiple peaks under partial shading conditions, and traditional control algorithms cannot track the maximum power point continuously and accurately. A method for tracking the global maximum power point based on an Improved Multi-Verse Optimization (IMVO) algorithm is proposed. Spiral updating and an adaptive compression factor are introduced to enhance the algorithm's global search capability, and the travelling distance rate update rule is modified to accelerate convergence, thereby improving the optimization ability of the algorithm. Simulation results show that the improved Multi-Verse Optimization (MVO) algorithm tracks the maximum power point continuously and stably under uniform irradiance, partial shading and variable irradiance, with greatly improved convergence time and accuracy, verifying the feasibility of the algorithm for maximum power point tracking control.
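Under partial shading the P-V curve has several local maxima, so a plain hill climber can stall on the wrong peak; the sketch below models a two-peak curve and finds the global maximum with a simple random-restart search standing in for IMVO (both the curve model and the search are assumptions, not a photovoltaic model or the paper's algorithm).
```python
import numpy as np

def pv_power(v):
    """Assumed two-peak P-V curve under partial shading (illustrative)."""
    return 60 * np.exp(-((v - 14) / 4) ** 2) + 100 * np.exp(-((v - 31) / 3) ** 2)

rng = np.random.default_rng(11)
best_v, best_p = 0.0, -np.inf
for start in rng.uniform(0, 40, 8):         # random restarts escape local peaks
    v = start
    for _ in range(200):                    # crude stochastic hill climbing
        cand = v + rng.normal(0, 0.5)
        if 0 <= cand <= 40 and pv_power(cand) > pv_power(v):
            v = cand
    if pv_power(v) > best_p:
        best_v, best_p = v, pv_power(v)
print(f"GMPP at V = {best_v:.1f} V, P = {best_p:.1f} W")  # global peak near 31 V
```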
2021, 43(12): 3743-3748.
doi: 10.11999/JEIT200855
Abstract:
The high-performance crypto module presented in this paper offers advanced security solutions for big data applications. A module architecture consisting of a high-throughput interface, a Central Manage & Monitor Module (CMMM) and multiple channels driving a group of crypto engines is discussed. The CMMM distributes tasks to the crypto engines and guides the data back to the host after processing by the dedicated algorithm. Since the module's performance is limited by the interface throughput and the number of crypto engines, an array with MMC/eMMC bus connections is built behind a high-speed PCIe interface: the more crypto engines are integrated, the higher the performance the system can reach. To verify this architecture, an ASIC encryption card with a PCIe Gen2×4 interface is fabricated in a 55 nm semiconductor process and tested. The average throughput of the card reaches up to 419.23 MB/s.
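The CMMM-style dispatch pattern, queueing tasks round-robin across parallel engine channels and collecting results, can be sketched with a thread pool; the engine here is a software stand-in (a hash call), not the card's hardware algorithms, and the channel count is an assumption.
```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

N_ENGINES = 4                               # parallel crypto engines (assumed)

def engine(engine_id: int, data: bytes) -> bytes:
    # Stand-in for a hardware crypto engine: here just SHA-256.
    return hashlib.sha256(data).digest()

tasks = [f"block-{i}".encode() for i in range(16)]
with ThreadPoolExecutor(max_workers=N_ENGINES) as pool:
    # Round-robin channel assignment, as a central manager would do.
    futures = [pool.submit(engine, i % N_ENGINES, t)
               for i, t in enumerate(tasks)]
    results = [f.result() for f in futures]  # gathered back in task order
print(len(results), results[0].hex()[:16])
```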
2021, 43(12): 3749-3757.
doi: 10.11999/JEIT200740
Abstract:
In distributed storage systems, when a node fails, a Locally Repairable Code (LRC) can access other nodes to recover the data; however, locality varies from one LRC to another. Quaternary LRCs with short code length and small locality are constructed. For code length at most 20 and minimum distance greater than 2, if the dimension of the generator matrix of a quaternary distance-optimal linear code does not exceed that of its parity-check matrix, an LRC is constructed from the generator matrix; otherwise the parity-check matrix is used. From the generator or parity-check matrices of the constructed LRCs, further LRCs are obtained by deletion and juxtaposition operations. In total, 190 LRCs with code length n ≤ 20 and minimum distance d ≥ 2 are constructed, and all but 12 of them are locality-optimal.
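Locality itself can be demonstrated with a toy binary LRC (binary rather than quaternary, for brevity): the symbols split into groups, each with a local XOR parity, so a single erasure is repaired from only r = 2 surviving group members. The layout below is an assumption for illustration.
```python
import numpy as np

rng = np.random.default_rng(12)
data = rng.integers(0, 2, 4, dtype=np.uint8)      # 4 information symbols
groups = [[0, 1], [2, 3]]                          # locality r = 2
parities = [data[g[0]] ^ data[g[1]] for g in groups]
codeword = list(data) + parities                   # [d0 d1 d2 d3 p0 p1]

erased = 1                                         # lose symbol d1
# Local repair: read only the 2 other symbols of group 0, not the whole code.
repaired = codeword[0] ^ codeword[4]               # d1 = d0 XOR p0
assert repaired == data[1]
print(f"repaired d{erased} = {repaired} from 2 local reads")
```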
2021, 43(12): 3758-3765.
doi: 10.11999/JEIT200689
Abstract:
To study the dynamic behavior of memristor switching circuits, a memristor-based switched chaotic circuit with multiple coexisting attractors is designed. Multiple-attractor bifurcation exists in this circuit system: when boundary collisions occur, different attractors coexist, including a single-period limit cycle with a chaotic attractor, different chaotic attractors, symmetric period-2 limit cycles, and symmetric period-2 limit cycles with period-5 limit cycles. The dynamic behavior of the circuit is analyzed through numerical simulation of phase diagrams and bifurcation diagrams, and the feasibility of the circuit is verified by PSIM circuit simulation. This work is of significance for the study of multiple-attractor bifurcation in switching circuits and for applications of chaos.
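The numerical workflow (integrate the circuit ODEs, discard the transient, inspect the attractor) can be shown on the classical Chua circuit, a standard stand-in for memristive chaotic circuits; the parameters are the usual double-scroll set, not the paper's system.
```python
import numpy as np

alpha, beta, m0, m1 = 15.6, 28.0, -1.143, -0.714   # classic double-scroll set

def chua(v):
    x, y, z = v
    # Piecewise-linear nonlinear element of the Chua circuit.
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return np.array([alpha * (y - x - h), x - y + z, -beta * y])

def rk4_step(v, dt):
    k1 = chua(v)
    k2 = chua(v + dt / 2 * k1)
    k3 = chua(v + dt / 2 * k2)
    k4 = chua(v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

v, dt = np.array([0.1, 0.0, 0.0]), 0.005
traj = []
for i in range(40000):
    v = rk4_step(v, dt)
    if i > 4000:                                   # drop the transient
        traj.append(v.copy())
traj = np.asarray(traj)
print("x range:", traj[:, 0].min(), traj[:, 0].max())  # spans both scrolls
```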
2021, 43(12): 3766-3774.
doi: 10.11999/JEIT200575
Abstract:
Digital image encryption algorithms based on chaos are widely used because of their large key space and high key sensitivity. Sinusoidal feedback is introduced into the classical Logistic map to form a new discrete map, and the chaotic behavior of the map is analyzed. The chaotic map is used to derive a discrete chaotic encryption sequence, which is amplified and rounded to enhance its pseudo-randomness. The pseudo-randomness of the encryption sequence is tested with the NIST (National Institute of Standards and Technology) test suite. The pseudo-random sequence is XORed (Exclusive OR) with the original image to realize image encryption. Numerical simulation results show that the new encryption algorithm has a better encryption effect, and its key exhibits better sensitivity and pseudo-randomness. Finally, hardware encryption for this algorithm is realized on an FPGA (Field Programmable Gate Array) platform.
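The keystream/XOR stage can be sketched end to end; the feedback form below (adding a sine term to the logistic map, modulo 1) is an assumed illustration rather than the paper's exact map, while the amplify-and-round step follows the abstract.
```python
import numpy as np

def keystream(n, x0=0.37, mu=3.99, eps=0.3):
    """Logistic map with (assumed) sinusoidal feedback; amplified and rounded."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = (mu * x * (1 - x) + eps * np.sin(np.pi * x)) % 1.0
        out[i] = int(x * 1e6) % 256          # amplify, round, keep one byte
    return out

rng = np.random.default_rng(13)
image = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # toy "image"
ks = keystream(image.size).reshape(image.shape)
cipher = image ^ ks                                     # encryption
plain = cipher ^ ks                                     # decryption (same XOR)
assert (plain == image).all()
print(cipher[0])
```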