2020 Vol. 42, No. 10
2020, 42(10): 2319-2329.
doi: 10.11999/JEIT200058
Abstract:
With the rapid development of mobile communication technologies and the commercial deployment of 5G, cybersecurity issues are increasingly prominent. To reveal the essential dynamics of 5G cybersecurity, current research on cybersecurity confrontation and game theory is analyzed from the perspective of basic models, including static games, dynamic games, evolutionary games, and graph-based games, as well as typical confrontation issues, including eavesdropping versus anti-eavesdropping and jamming versus anti-jamming. Furthermore, potential research directions for establishing a 5G cybersecurity confrontation theory and its general laws are set forth. Finally, the necessity and challenges of security and game research in 5G networks are discussed, so as to provide new insights for the study of confrontation in 5G cyberspace.
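The static-game model mentioned in this survey can be illustrated with a minimal, hypothetical example: a transmitter picks one of two channels while a jammer picks one channel to jam, forming a 2×2 zero-sum game whose mixed-strategy equilibrium has a simple closed form (assuming no saddle point). The payoff values below are invented for illustration only.

```python
import numpy as np

def mixed_equilibrium_2x2(U):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game.

    U[i, j] is the row player's payoff; assumes no saddle point, so
    both players mix. Returns (p, q, v): the probability that the
    row/column player picks action 0, and the game value.
    """
    a, b = U[0, 0], U[0, 1]
    c, d = U[1, 0], U[1, 1]
    den = a - b - c + d
    p = (d - c) / den          # row player's probability of action 0
    q = (d - b) / den          # column player's probability of action 0
    v = (a * d - b * c) / den  # expected payoff at equilibrium
    return p, q, v

# Transmitter (row) picks a channel, jammer (column) picks a channel to jam;
# throughput is 0.2 if jammed, 1.0 otherwise (hypothetical numbers).
U = np.array([[0.2, 1.0],
              [1.0, 0.2]])
p, q, v = mixed_equilibrium_2x2(U)
print(p, q, v)  # both players randomize 50/50; expected throughput 0.6
```

At equilibrium neither side can gain by deviating, which is the sense in which such models capture the "general law" of attacker-defender confrontation.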
2020, 42(10): 2330-2341.
doi: 10.11999/JEIT200002
Abstract:
Physical layer security technology secures wireless communications based on information-theoretic security, is a key means of integrating security with communication, and has gradually become a research hotspot both at home and abroad. Key generation technology at the physical layer of wireless communication is surveyed, focusing on its theoretical models, mechanisms, and research status. Through comparison and analysis of the two different types of key generation algorithms, namely the source-model and channel-model key generation algorithms, the essence of physical layer key technologies is revealed: they exploit the inherent security attributes of the communication channel to enhance communication security. In particular, a feasible 5G engineering implementation framework for physical layer key generation is presented. Finally, possible future research directions of physical layer key generation technologies are discussed.
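The channel-model key generation idea can be sketched roughly as follows (a toy illustration, not the framework presented in the paper): two legitimate parties probe a reciprocal channel, each observing the common channel gain plus independent noise, and quantize their observations against the median to obtain bit strings; channel reciprocity makes the two strings agree with high probability. All parameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256                                    # number of channel probes
h = rng.standard_normal(n)                 # reciprocal channel gains (shared randomness)
alice = h + 0.05 * rng.standard_normal(n)  # Alice's noisy estimate
bob = h + 0.05 * rng.standard_normal(n)    # Bob's noisy estimate

# Each side quantizes its own measurements against its own median,
# so no key material ever crosses the air interface.
key_a = (alice > np.median(alice)).astype(int)
key_b = (bob > np.median(bob)).astype(int)

agreement = np.mean(key_a == key_b)
print(f"key agreement rate: {agreement:.3f}")
```

In a real system the residual disagreements would be removed by information reconciliation and privacy amplification before the bits are used as a key.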
2020, 42(10): 2342-2349.
doi: 10.11999/JEIT191047
Abstract:
The flexibility, manageability, and programmability brought by Software-Defined Networking (SDN) come, however, at the cost of new attack vectors. Malicious manipulation attacks against key fields in OpenFlow are proposed, and three forwarding-delay-based sniffing techniques are designed to ensure the feasibility of the manipulation attacks. The experimental results show that the field manipulation attacks heavily consume SDN resources, leading to a significant decrease in communication performance between legitimate users.
2020, 42(10): 2350-2356.
doi: 10.11999/JEIT200005
Abstract:
To address the storage, computation, and security shortcomings of existing handover authentication protocols for terminal nodes in fog computing, an efficient handover authentication protocol for terminal nodes is proposed. In this protocol, mutual authentication and session key agreement between fog nodes and terminal nodes are realized by combining the Two-Factor Combined Public Key (TF-CPK) scheme with an authentication ticket. Security and performance analysis shows that the protocol supports untraceability, resists numerous known attacks and security threats, and incurs lower system overhead.
2020, 42(10): 2357-2364.
doi: 10.11999/JEIT200001
Abstract:
In recent years, research on motion vector-based video steganography has attracted considerable attention in the field of information hiding. Many video steganographic methods incorporating motion vector-based additive embedding distortion functions have achieved good performance. However, these additive distortion functions neglect the mutual embedding impact between cover elements. In this paper, a joint distortion function that reflects the mutual embedding impact of motion vectors is designed. By decomposing the joint embedding distortion, modification probability transformation can be achieved and embedding payloads can be dynamically and reasonably allocated between the horizontal and vertical components of motion vectors. On this basis, a video steganography method using non-additive embedding distortion is proposed. Experimental results demonstrate that the proposed method significantly enhances security performance compared with typical methods using additive embedding distortions, while also achieving relatively better video coding quality.
2020, 42(10): 2365-2373.
doi: 10.11999/JEIT191020
Abstract:
To conduct effective resilient recovery of Automatic Dependent Surveillance-Broadcast (ADS-B) attack data and ensure the continuous availability of air traffic situation awareness, a resilient recovery method for ADS-B attack data is proposed. Based on attack detection strategies, the measurement and prediction sequences of ADS-B data are obtained to construct deviation, differential, and neighbor density data sequences, which are designed to build recovery vectors and to mine temporal and spatial correlations, respectively. The selected data sequences are then integrated to complete the recovery method and decide the end point of recovery. The method is applied to eliminating attack effects and recovering the attacked data toward normal data. Experiments on six classical attack patterns show that the proposed method is effective in recovering attack data and eliminating attack impacts.
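The deviation-based ingredient of such a recovery pipeline can be sketched as follows (a simplified toy version with invented numbers, not the paper's full method): where the deviation between measurement and prediction exceeds a threshold, the measured value is treated as attacked and replaced by the prediction.

```python
import numpy as np

# Hypothetical 1-D position track: truth, a predictor that tracks it well,
# and measurements that are offset during an injected attack window.
t = np.arange(20, dtype=float)
true_track = 100.0 + 3.0 * t             # nominal trajectory
predicted = true_track.copy()            # assume an accurate predictor
measured = true_track.copy()
measured[8:13] += 40.0                   # attacker biases five reports

deviation = np.abs(measured - predicted)               # measurement vs. prediction
differential = np.diff(measured, prepend=measured[0])  # jump detector

tau = 10.0                               # deviation threshold (invented)
attacked = deviation > tau
recovered = np.where(attacked, predicted, measured)    # recover toward normal data

print("attacked indices:", np.flatnonzero(attacked))
```

The actual method fuses the deviation, differential, and neighbor-density sequences rather than thresholding a single one.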
2020, 42(10): 2374-2385.
doi: 10.11999/JEIT190890
Abstract:
Cloud computing, as a new computing paradigm, offers dynamically scalable and seemingly unbounded storage and computation resources in a pay-as-you-go manner. To enjoy superior data services and reduce local maintenance costs, more and more resource-constrained users prefer to outsource their data to the cloud server. However, outsourcing data to a remote server raises data security concerns, because the server may try to learn as much information about the outsourced data as possible for commercial purposes. Traditional encryption can protect the confidentiality of users' data, but it destroys the ability to search over the encrypted data. Searchable encryption, as a promising solution, enables the server to perform keyword-based search over encrypted data. Recently, the design of searchable encryption schemes has become increasingly diversified, aiming to improve their practicability. This paper surveys current research on searchable encryption in four aspects: single keyword search, multi-modal search, forward/backward secure search, and verifiable search. It introduces and analyzes representative research results, summarizes the latest progress and key technical difficulties, and finally looks ahead to future research directions.
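The single-keyword idea at the core of this line of work can be sketched with a toy symmetric searchable index (a deliberately simplified illustration, not any scheme from the survey): the client indexes documents under keyed keyword tokens, and the server matches a search trapdoor against tokens without learning the keyword itself.

```python
import hmac, hashlib
from collections import defaultdict

def token(key: bytes, keyword: str) -> bytes:
    # Deterministic keyword trapdoor: HMAC-SHA256 under the client's secret key.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key, docs):
    """Client side: map each keyword token to the ids of matching documents."""
    index = defaultdict(list)
    for doc_id, words in docs.items():
        for w in set(words):
            index[token(key, w)].append(doc_id)
    return dict(index)

def search(index, trapdoor):
    """Server side: match the trapdoor without learning the keyword."""
    return index.get(trapdoor, [])

key = b"client-secret-key"
docs = {1: ["cloud", "security"], 2: ["cloud", "storage"], 3: ["game"]}
index = build_index(key, docs)

print(search(index, token(key, "cloud")))    # [1, 2]
print(search(index, token(key, "missing")))  # []
```

Real schemes go well beyond this toy: they hide search and access patterns, support dynamic updates with forward/backward security, and let clients verify result correctness, which is exactly the design space the survey organizes.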
2020, 42(10): 2386-2393.
doi: 10.11999/JEIT200023
Abstract:
To meet the high-reliability and low-delay service requirements of 5G networks, a Delay and Reliability Optimization of Service Function Chain (SFC) Deployment (DROSD) method is proposed. Without reserving redundant resources, firstly, mutual-exclusion constraints between functions are used to determine whether adjacent Virtual Network Functions (VNFs) in an SFC can be combined; secondly, functional constraints and resource constraints are used to select the set of candidate physical nodes for combined deployment, achieving load balancing and improving the reliability of the SFC; thirdly, the end-to-end delay of the SFC is reduced through hop-count constraints. Finally, each VNF is deployed on the physical node with the highest ranking value, where candidate nodes are sorted in descending order by available resources, node degree, and hop count from the source node, and SFC routing adopts the K-shortest-path algorithm. Simulation results show that the proposed algorithm improves the request acceptance rate and the long-term average revenue-to-cost ratio, enhances the reliability of the SFC, and reduces both the end-to-end delay and the average bandwidth cost.
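The node-ranking step can be sketched as follows (a hypothetical scoring rule with invented weights; the paper's exact ranking may differ): more available resources and higher degree raise a node's rank, more hops from the source node lower it, and the VNF is placed on the top-ranked node.

```python
def rank_nodes(candidates, w_res=0.5, w_deg=0.3, w_hop=0.2):
    """Sort candidate nodes in descending order of a weighted score.

    candidates: dict node -> (available_resources, degree, hops_from_source).
    Each attribute is normalized to [0, 1]; hop count contributes negatively.
    The weights are illustrative, not taken from the paper.
    """
    max_res = max(c[0] for c in candidates.values()) or 1
    max_deg = max(c[1] for c in candidates.values()) or 1
    max_hop = max(c[2] for c in candidates.values()) or 1

    def score(node):
        res, deg, hop = candidates[node]
        return (w_res * res / max_res
                + w_deg * deg / max_deg
                - w_hop * hop / max_hop)

    return sorted(candidates, key=score, reverse=True)

# Hypothetical substrate nodes: (CPU units free, degree, hops from source)
candidates = {"n1": (40, 3, 1), "n2": (90, 5, 2), "n3": (60, 2, 4)}
ranking = rank_nodes(candidates)
print(ranking)  # the VNF is deployed on ranking[0]
```

Ranking nodes before placement is what lets the method balance load and shorten paths without reserving redundant resources.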
2020, 42(10): 2394-2402.
doi: 10.11999/JEIT190731
Abstract:
Most existing link prediction methods for directed networks fail to consider the structural properties of directed networks when calculating node similarity, nor do they differentiate the contributions of different directed neighbors to link formation, which limits their prediction performance. To solve these problems, a novel link prediction method for directed networks based on linear programming is proposed. The contributions of three types of directed neighbors are quantified, and a linear programming problem is established based on the network's topological properties. The similarity index is then derived from the optimal solution of this linear programming problem. Experimental results on nine real-world directed networks show that the proposed method outperforms nine benchmarks in both accuracy and robustness under two evaluation metrics.
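To make the linear-programming step concrete, here is a toy version (the objective coefficients and constraints are invented for illustration and are not the paper's formulation): contribution weights for the three directed-neighbor types are chosen to maximize a topology-derived objective, subject to the weights being ordered, non-negative, and summing to one.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical objective: how strongly each neighbor type correlates with
# observed link formation (these numbers would come from the network).
c = np.array([0.3, 0.5, 0.2])

# Maximize c @ w  s.t.  w1 >= w2 >= w3 >= 0  and  w1 + w2 + w3 = 1.
# linprog minimizes, so the objective is negated.
A_ub = np.array([[-1.0, 1.0, 0.0],   # w2 - w1 <= 0
                 [0.0, -1.0, 1.0]])  # w3 - w2 <= 0
b_ub = np.zeros(2)
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([1.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3, method="highs")
w = res.x
print(w)  # optimal contribution weights, here [0.5, 0.5, 0.0]
```

The optimal weights would then define the similarity index: each candidate link is scored by the weighted count of its three neighbor types.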
2020, 42(10): 2403-2411.
doi: 10.11999/JEIT190739
Abstract:
To solve the problems caused by traditional centralized data analysis in the Internet of Things (IoT), such as excessive bandwidth occupation, high communication latency, and data privacy leakage, a distributed learning algorithm for the IoT is proposed for elastic net regression, a typical linear regression model. The algorithm is based on the Alternating Direction Method of Multipliers (ADMM) framework. It decomposes the objective problem of elastic net regression into several sub-problems that each IoT node can solve independently using its local data. Unlike traditional centralized algorithms, the proposed algorithm does not require IoT nodes to upload their private data to a server for training, but only the locally trained intermediate parameters, which the server aggregates. In this collaborative manner, the server obtains the objective model after several iterations. Experimental results on two typical datasets indicate that the proposed algorithm converges quickly to the optimal solution within dozens of iterations. Compared with the localized algorithm, in which each node trains a model solely on its own local data, the proposed algorithm improves the validity and accuracy of the trained models; compared with the centralized algorithm, it guarantees the accuracy and scalability of model training while protecting individual private data from leakage.
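The ADMM decomposition can be sketched as consensus ADMM for the elastic net (a generic textbook-style sketch under standard assumptions, with invented data and penalty parameters, not the authors' exact formulation): each node solves a small local least-squares problem on its own data, the server averages only the intermediate vectors, and the global update applies soft-thresholding (for the l1 term) plus l2 shrinkage.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_elastic_net(As, bs, lam1=0.1, lam2=0.1, rho=1.0, iters=100):
    """Consensus ADMM: node i holds (A_i, b_i) and never shares raw data."""
    N, d = len(As), As[0].shape[1]
    xs = [np.zeros(d) for _ in range(N)]
    us = [np.zeros(d) for _ in range(N)]
    z = np.zeros(d)
    # Pre-factor each node's local system (A_i^T A_i + rho I)
    Ms = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in As]
    for _ in range(iters):
        # Local updates: run independently on each IoT node.
        for i in range(N):
            xs[i] = Ms[i] @ (As[i].T @ bs[i] + rho * (z - us[i]))
        # Server aggregates only the intermediate parameters.
        xbar = sum(xs) / N
        ubar = sum(us) / N
        z = (N * rho / (lam2 + N * rho)) * soft_threshold(xbar + ubar, lam1 / (N * rho))
        for i in range(N):
            us[i] += xs[i] - z
    return z, xs

rng = np.random.default_rng(1)
d = 8
true_x = np.zeros(d)
true_x[[0, 3]] = [2.0, -1.5]                   # sparse ground truth
As = [rng.standard_normal((40, d)) for _ in range(3)]
bs = [A @ true_x + 0.01 * rng.standard_normal(40) for A in As]

z, xs = admm_elastic_net(As, bs)
consensus_gap = max(np.linalg.norm(x - z) for x in xs)
print("consensus gap:", consensus_gap)
```

Only `xs[i]` and the averaged vectors cross the network, which is exactly the privacy argument made in the abstract: the raw `(A_i, b_i)` never leave the node.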
2020, 42(10): 2412-2419.
doi: 10.11999/JEIT190395
Abstract:
Software-Defined Optical Network (SDON) is the latest-generation architecture for intelligent optical networks. Its control plane carries many core functions, so the survivability of the control plane, control redundancy, and control delay are crucial to overall network performance. In this paper, a Survivability-Constrained software-Defined (SCD) optical network controller deployment algorithm is proposed. Under the premise of meeting users' network survivability requirements, mathematical tools such as the shortest path and the minimum dominating set are used to reduce control delay and to reduce the number of deployed controllers, thereby reducing control redundancy. A joint judgment condition is used to select the control center deployment node that coordinates the work among controllers. Experiments show that, first, the proposed algorithm fully guarantees users' survivability requirements for the network. Second, it reduces the network failure alarm probability by at least 15% compared with the C-MPC algorithm and improves network survivability, while reducing the number of deployed controllers by about 40% relative to a delay-constrained deployment algorithm; it shows particularly good adaptability in scenarios with high survivability requirements. In addition, the control center deployment algorithm can dynamically meet users' varying network survivability requirements in complex large-scale networks.
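The minimum-dominating-set ingredient can be illustrated with the standard greedy heuristic (a generic sketch on an invented topology, not the paper's full joint algorithm): repeatedly place a controller at the node that newly dominates the most nodes until every node is either a controller site or adjacent to one.

```python
def greedy_dominating_set(adj):
    """Greedy approximation of a minimum dominating set.

    adj: dict node -> set of neighboring nodes (undirected graph).
    Returns a set of controller locations such that every node is
    either selected or adjacent to a selected node.
    """
    undominated = set(adj)
    chosen = set()
    while undominated:
        # Pick the node whose closed neighborhood covers most undominated nodes.
        best = max(adj, key=lambda n: len((adj[n] | {n}) & undominated))
        chosen.add(best)
        undominated -= adj[best] | {best}
    return chosen

# Hypothetical 6-node optical topology
adj = {
    1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5},
    4: {2, 5, 6}, 5: {3, 4, 6}, 6: {4, 5},
}
controllers = greedy_dominating_set(adj)
print(controllers)
```

Using a dominating set keeps every switch within one hop of some controller, which is one simple way the deployment count and the control delay can both be bounded.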
2020, 42(10): 2420-2428.
doi: 10.11999/JEIT190759
Abstract:
Emerging technology applications such as mobile cloud computing, Artificial Intelligence (AI), and 5G promote the Elastic Optical Network (EON) to play an important role in backbone transmission networks. Degraded Service (DS) technology provides a new way to reduce traffic congestion and improve spectrum utilization in EONs. Considering that existing DS algorithms allocate resources unfairly and neglect the Quality of Experience (QoE) of low-priority services, a Mixed Integer Linear Programming (MILP) model is first established with the joint objective of minimizing degradation frequency, degradation level, and Transmission Delay Loss (TDL). A Delay-aware Degradation-Recovery Routing and Spectrum Assignment (DRR-RSA) algorithm is then proposed. To improve the QoE of degraded services and the revenue of operators, a degradation recovery strategy is integrated into the optimal DS-window selection phase of the algorithm. Under the premise that the amount of transmitted data remains unchanged, degradable services are restored to the free spectrum domain, so as to increase spectrum efficiency, reduce the TDL of degraded services, and maximize revenue. Finally, simulation results testify that the proposed algorithm has advantages in terms of traffic congestion, revenue, and degraded-service success rate.
2020, 42(10): 2429-2436.
doi: 10.11999/JEIT190406
Abstract:
To estimate the pseudo noise sequence of Non-Periodic Long Code Direct Sequence Code Division Multiple Access (NPLC-DS-CDMA) signals under low signal-to-noise ratio, a multi-antenna method based on tensor decomposition and polynomial library search is proposed. Firstly, the received signals are modeled as a third-order tensor, which is divided into multiple sub-tensors according to the spreading gain. Secondly, the pseudo noise code fragment factor matrices and the receiver gain factor matrices are obtained from the sub-tensors by Canonical Polyadic (CP) decomposition using the Alternating Least Squares Projection (ALSP) algorithm; the pseudo noise sequence of each user is then obtained by selecting the combination of pseudo noise code fragments according to the cross-correlation of the receiver gain factor matrices and sidelobe energy detection. Finally, the polynomial library search method is applied to identify the generator polynomial of the pseudo noise sequence, further improving the accuracy of the sequence estimation. Simulation results show that the proposed method can effectively estimate the pseudo noise sequences of multi-antenna NPLC-DS-CDMA signals.
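The CP-decomposition step can be illustrated with a plain Alternating Least Squares (ALS) loop on a third-order tensor (a generic CP-ALS sketch on synthetic data; the paper's ALSP variant adds a projection step that is not reproduced here).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization (C-order flattening of the remaining modes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product."""
    return np.stack([np.kron(U[:, r], V[:, r]) for r in range(U.shape[1])], axis=1)

def cp_als(T, rank, iters=100, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    facs = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for n in range(3):
            others = [facs[m] for m in range(3) if m != n]  # kept in mode order
            kr = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            facs[n] = unfold(T, n) @ kr @ np.linalg.pinv(gram)
    return facs

def reconstruct(A, B, C):
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# Synthetic exact rank-3 tensor (a stand-in for the stacked received signals)
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.standard_normal((s, 3)) for s in (6, 7, 8))
T = reconstruct(A0, B0, C0)

A, B, C = cp_als(T, rank=3)
rel_err = np.linalg.norm(T - reconstruct(A, B, C)) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Under mild conditions the CP decomposition is unique up to scaling and permutation, which is why the recovered factor matrices can be identified with the pseudo noise code fragments and the receiver gains.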
2020, 42(10): 2437-2444.
doi: 10.11999/JEIT190704
Abstract:
In Adjacent Channel Interference (ACI) suppression, in order to obtain the nonlinear characteristics of the interference signal for reconstruction and cancellation, the receiver needs a high-sampling-rate wideband Analog-to-Digital Converter (ADC) to sample the interference signal, which greatly increases the cost of the receiver. To solve this problem, an ACI suppression method based on deconvolution of the interference signal's out-of-band component is proposed. Using the known out-of-band nonlinear component, the influence between adjacent frames is calculated and eliminated, and a narrowband linear convolution frame is constructed from the partial convolution frame. Finally, the original wideband signal is recovered by the regularized least squares method, thus reducing the required ADC sampling rate. Simulation results show that when the sampling rate is only 1/3 of that of the traditional scheme, the residual interference left by the proposed method does not exceed the noise floor by more than 6 dB.
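The final recovery step, regularized least squares, can be sketched as follows (a generic ridge-regularized deconvolution toy with an invented kernel and signal, not the paper's frame construction): given a known convolution kernel h and a noisy observed convolution y, the signal is recovered by solving (H^T H + lambda*I) x = H^T y.

```python
import numpy as np

def convolution_matrix(h, n):
    """Full linear-convolution matrix H such that H @ x == np.convolve(h, x)."""
    m = len(h)
    H = np.zeros((n + m - 1, n))
    for j in range(n):
        H[j:j + m, j] = h
    return H

def regularized_ls_deconv(h, y, n, lam=1e-3):
    """Recover x from y = conv(h, x) + noise via ridge-regularized least squares."""
    H = convolution_matrix(h, n)
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rng = np.random.default_rng(0)
n = 50
x = np.sin(np.linspace(0, 4 * np.pi, n))   # hypothetical source signal
h = np.array([0.5, 1.0, 0.3])              # known kernel (invented values)
y = np.convolve(h, x) + 0.01 * rng.standard_normal(n + len(h) - 1)

x_hat = regularized_ls_deconv(h, y, n)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {rel_err:.3f}")
```

The regularization term is what keeps the inversion stable when the kernel is poorly conditioned, at the cost of a small bias in the recovered signal.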
2020, 42(10): 2445-2453.
doi: 10.11999/JEIT190778
Abstract:
An Orthogonal MultiUser Short Reference Differential Chaos Shift Keying (OMU-SR-DCSK) communication system is proposed to overcome the dominant drawbacks of DCSK system relating to low transmission rate and energy efficiency. The proposed system shortens the reference signal to 1/P of information bearing signal. Two consecutive information time slots are added after the reference time slot. Due to the excellent features of Walsh codes, the system sends information from N users in one information time slot. Meanwhile, the use of orthogonality of the Walsh code eliminates completely intra-signal interference and enhances the performance of Bit Error Rate (BER) better. The theoretical BER formula of OMU-SR-DCSK over Additive White Gaussian Noise (AWGN) channel and Rayleigh fading channel are derived and simulations are carried out respectively. The coincidence between the simulation results and the theoretical derivations proves the correctness of the theoretical derivation, providing a theoretical basis for the application of OMU-SR-DCSK to multiuser serial transmission system.
2020, 42(10): 2454-2461.
doi: 10.11999/JEIT190748
Abstract:
In multi-cell massive Multiple Input Multiple Output (MIMO) systems, pilot contamination has become the bottleneck restricting the performance of the whole system, so reasonable use of pilot resources can mitigate it. In order to find the pilot allocation that maximizes the total transmission capacity of edge users, a pilot allocation scheme based on the Hysteretic Noise Chaotic Neural Network (HNCNN) is proposed for the first time. The HNCNN is a well-known optimization tool whose optimization ability depends on the designed energy function. This scheme combines the characteristics of pilot resource usage with the computation of the total edge-user transmission capacity to design a new energy function. The simulation results show that the proposed network can converge to a good pilot allocation after a certain number of iterations. Compared with pilot allocation schemes in the literature, the HNCNN-based method can further reduce the influence of pilot contamination and improve system performance.
2020, 42(10): 2462-2470.
doi: 10.11999/JEIT190882
Abstract:
Emotion has always been a research hotspot in many disciplines such as psychology, education, and information science. The ElectroEncephaloGram (EEG) signal has received extensive attention in the field of emotion recognition because it is objective and hard to disguise. Since human emotions arise from the interaction of multiple brain regions, a Support Tensor Machine based on the Synchronous Brain Network (SBN-STM) is proposed for emotion classification. The algorithm uses the Phase Locking Value (PLV) to construct a synchronous brain network, analyzing the synchronization and correlation among multi-channel EEG signals and generating a second-order tensor sequence as the training set. The Support Tensor Machine (STM) model then distinguishes between positive and negative emotions. Based on the DEAP EEG emotion database, this paper analyzes how to select the synchronous brain network tensor sequence; the study of the size and position of the optimal tensor-sequence window resolves the feature redundancy common in traditional emotion classification algorithms and speeds up model training. The results show that the accuracy of the SBN-STM-based emotion classification method is better than that of the support vector machine, C4.5 decision tree, artificial neural network, and K-nearest neighbor methods, which use vectors as input features.
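The PLV used above to build the synchronous brain network has a standard definition: the magnitude of the time-averaged phase difference between two channels, with phases taken from the analytic (Hilbert-transformed) signals. A minimal sketch, with illustrative signals rather than DEAP data:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase Locking Value between two signals:
    PLV = |mean(exp(j*(phi_x - phi_y)))|, phases from the analytic signal.
    PLV near 1 means a stable phase relation; near 0 means none."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * 10 * t)             # 10 Hz oscillation
b = np.sin(2 * np.pi * 10 * t + 0.8)       # same frequency, fixed phase lag
c = rng.standard_normal(t.size)            # unrelated noise channel
print(plv(a, b) > 0.95, plv(a, c) < 0.5)
```

For an N-channel recording, evaluating `plv` over all channel pairs yields the N-by-N synchronization matrix that serves as one slice of the tensor sequence.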
2020, 42(10): 2471-2477.
doi: 10.11999/JEIT190722
Abstract:
For the problem of blind extraction of rolling bearing fault signals under complex working conditions, an adaptive selection method for the non-linear functions in Independent Component Analysis (ICA) is proposed, which solves the problem that Equivariant Adaptive Separation via Independence (EASI) cannot separate bearing fault signals when multiple vibration sources coexist. In addition, to balance the steady-state error and convergence rate of the online blind separation algorithm, an adaptive iteration-step selection method based on fuzzy logic is proposed, which greatly improves the convergence speed of the learning algorithm and reduces the steady-state error. Simulation results on blind extraction of bearing fault data verify the performance of the proposed algorithm.
2020, 42(10): 2478-2484.
doi: 10.11999/JEIT190916
Abstract:
In order to solve the problem of target propeller feature extraction under Alpha-stable distribution noise, a method based on the fractional low-order cyclic spectrum is proposed. Firstly, the low-order cyclic spectrum of ship-radiated noise in impulsive noise is derived, and the relationship between the propeller features and the peaks of the fractional low-order cyclic spectrum is given. Based on this, a propeller feature estimation method using the fractional low-order cyclic spectrum is proposed. Finally, the performance of the method is verified by simulation experiments, and the effectiveness of the algorithm is further confirmed on real data.
2020, 42(10): 2485-2492.
doi: 10.11999/JEIT190831
Abstract:
The presence of acceleration and descent velocity gives the imaging parameters of high-squint SAR mounted on a maneuvering platform obvious two-dimensional spatial variability, which seriously affects the depth of focus of the scene. To solve this problem, a maneuvering SAR imaging method based on the Keystone transform and azimuth perturbation resampling is proposed. First of all, range-azimuth decoupling and azimuth spectrum de-aliasing are achieved by range walk correction and de-acceleration processing. Then the spatially variant range cell migration is corrected by the Keystone transform in the azimuth time domain. In the azimuth compression step, the second- and third-order spatial variability of the Doppler parameters is removed by introducing a high-order perturbation factor in the time domain, and the first-order spatial variability is then removed by azimuth resampling in the azimuth frequency domain. The proposed method effectively corrects the two-dimensional spatial variability of the range cell migration trajectory and the azimuth focusing parameters, enabling large-scene imaging for high-squint maneuvering SAR. Simulation analysis verifies the effectiveness of the proposed method.
2020, 42(10): 2493-2499.
doi: 10.11999/JEIT190747
Abstract:
The traditional Least Squares-Estimating Signal Parameter via Rotational Invariance Techniques (LS-ESPRIT) algorithm is not effective in estimating the parameters of the Geometric Theory of Diffraction (GTD) model at low SNR. To solve this problem, an improved LS-ESPRIT algorithm is proposed in this paper. Firstly, a Hankel matrix is constructed from the echo data of radar targets. Secondly, a low-rank reconstructed Hankel matrix is obtained by solving a nuclear-norm convex optimization problem. Finally, the traditional LS-ESPRIT algorithm is used to process the denoised data and estimate the parameters of the GTD model. Moreover, the reconstructed Radar Cross Section (RCS) can be obtained by both the traditional and the improved LS-ESPRIT algorithms, and the influence of different bandwidths on parameter estimation is also analyzed. Simulation results show that the estimation accuracy and noise resistance of the improved LS-ESPRIT algorithm are better than those of the traditional LS-ESPRIT and TLS-ESPRIT algorithms. Furthermore, the amplitude and phase-angle errors of the RCS reconstructed by the improved algorithm are smaller than those of the traditional algorithm. Bandwidth also affects estimation accuracy: the wider the bandwidth, the more accurately the parameters can be estimated.
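The first two steps (Hankel construction and low-rank recovery) can be illustrated with a simplified sketch. For brevity it uses hard-rank SVD truncation with anti-diagonal averaging (a Cadzow-style step) in place of the paper's nuclear-norm convex program, and all signal parameters are illustrative.

```python
import numpy as np

def hankel_denoise(x, rank, L=None):
    """Low-rank Hankel denoising: build a Hankel matrix from x, truncate its
    SVD to `rank`, then average anti-diagonals back into a 1-D signal."""
    N = len(x)
    L = L or N // 2
    K = N - L + 1
    H = np.array([x[i:i + K] for i in range(L)])    # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank-r approximation
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):                              # anti-diagonal averaging
        out[i:i + K] += Hr[i]
        cnt[i:i + K] += 1
    return out / cnt

rng = np.random.default_rng(2)
n = np.arange(128)
clean = np.cos(2 * np.pi * 0.05 * n) + 0.5 * np.cos(2 * np.pi * 0.12 * n)
noisy = clean + 0.3 * rng.standard_normal(128)
den = hankel_denoise(noisy, rank=4)                 # 2 real sinusoids -> rank 4
err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((den - clean) ** 2)
print(err_den < err_noisy)
```

The denoised sequence would then be handed to LS-ESPRIT; a nuclear-norm solver differs only in how the low-rank matrix is computed, soft-thresholding the singular values instead of truncating them.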
2020, 42(10): 2500-2507.
doi: 10.11999/JEIT190822
Abstract:
To solve the track correlation problem in practical engineering effectively, a new concept, the generalized space-time cross point of a track pair, is defined in this paper. A new algorithm is then proposed that uses space-time cross points as feature points and realizes track correlation through feature point matching. Experiments on measured data show that the proposed algorithm is effective, stable, and robust; it can eliminate redundant tracks and provide a unified situation picture.
2020, 42(10): 2508-2515.
doi: 10.11999/JEIT190734
Abstract:
Existing SMeared SPectrum (SMSP) jamming suppression algorithms take as their processing object a jammed echo whose length equals that of the radar transmitting signal, and do not address the whole echo within the coherent processing interval. For this problem, a jamming suppression algorithm based on joint fast- and slow-time domain processing is proposed for Linear Frequency Modulation (LFM) coherent radar countering SMSP jamming. The time- and frequency-domain characteristics of SMSP are studied, and its effect on coherent radar is analyzed under the self-screening jamming condition. On this basis, four processing steps are designed to suppress SMSP jamming. Firstly, the fast-time location of the jamming is estimated by computing the differential entropy of the slow-time signal. Secondly, the true jamming parameter is found using the maximum correlation coefficient criterion. Then the jamming signals are reconstructed using the Biorthogonal Fourier Transform. Finally, the SMSP jamming is suppressed by cancellation. Simulation results show that the proposed algorithm model is highly consistent with the actual radar processing flow, and its effectiveness is further verified through comparison with other algorithms.
2020, 42(10): 2516-2523.
doi: 10.11999/JEIT190805
Abstract:
A micro-displacement measurement algorithm based on the Orientation Code Matching (OCM) and Edge Enhanced Matching (EEM) algorithms is proposed for monitoring the structural damage of tall buildings after an earthquake. The algorithm first fuses the gradient information of the original image with the pixel intensity to enrich the image information; then the phase correlation method is used to perform the matching operation, giving a matching speed 96.1% higher than that of the normalized cross-correlation method; finally, sub-pixel interpolation brings the measurement to sub-pixel accuracy. Experimental results show that the proposed algorithm avoids the loss of image gradient information caused by the quantization in the OCM and EEM algorithms, greatly improves the template-matching accuracy, and achieves a matching speed 43.3% higher than OCM and 19.6% higher than EEM.
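The phase correlation step works by normalizing the cross-power spectrum to unit magnitude, so that only the shift-dependent phase remains and the inverse FFT peaks at the displacement. A minimal 1-D sketch with synthetic data (the paper operates on 2-D image patches, where the same idea applies along both axes):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer circular shift between two 1-D signals via phase
    correlation: normalize the cross-power spectrum to unit magnitude,
    inverse-FFT, and take the location of the peak."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                 # keep only the phase
    return int(np.argmax(np.fft.ifft(R).real))

rng = np.random.default_rng(3)
x = rng.standard_normal(256)
y = np.roll(x, 17)                         # y is x circularly shifted by 17
shift = phase_correlate(y, x)
print(shift)                               # 17
```

Sub-pixel accuracy, as in the abstract, is then obtained by interpolating around the correlation peak rather than taking the integer argmax directly.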
2020, 42(10): 2524-2532.
doi: 10.11999/JEIT190761
Abstract:
An Ultrawide Field Of View (U-FOV) infrared imaging system has a large monitoring range and is not limited by illumination, but its scenes contain objects at diverse scales, many of them small. To detect them accurately, a multi-scale infrared pedestrian detection method with background awareness is proposed, which improves the detection of small objects while reducing redundant computation. Firstly, a four-scale feature pyramid network is constructed to predict objects independently at each scale and to supply detail features at higher resolution. Secondly, an attention module is integrated into the lateral connections of the feature pyramid to generate salient features, suppressing the feature response of irrelevant areas and enhancing object features. Finally, an anchor-mask generation subnetwork is built on the saliency coefficients to constrain the anchor locations, eliminating flat background and improving processing efficiency. Experimental results show that the saliency generation subnetwork increases processing time by only 5.94%, keeping the method lightweight. The Average Precision is 93.20% on the U-FOV infrared pedestrian dataset, 26.49% higher than that of YOLOv3, and the anchor-box constraint strategy saves 18.05% of processing time. The proposed method is lightweight and accurate, and is suitable for detecting multi-scale infrared objects in U-FOV cameras.
2020, 42(10): 2533-2540.
doi: 10.11999/JEIT190721
Abstract:
Considering that it is difficult to accurately and effectively extract the quality features of mixed-distortion images, an image quality assessment method based on spatial distribution analysis is proposed. Firstly, the luminance coefficients of the image are normalized and the image is divided into blocks. A Convolutional Neural Network (CNN) is used for end-to-end deep learning, with multi-level stacking of convolution kernels to acquire quality-aware features. The features are mapped to the quality score of each image block through a fully connected layer, and a quality pool is obtained by aggregating the block scores. By analyzing the spatial distribution of local quality in the quality pool, features representing that distribution are extracted, and a neural network then establishes the mapping from local quality to overall quality, aggregating the local quality of the image. Finally, the effectiveness of the algorithm is verified by performance tests on the MLIVE, MDID2013, and MDID2016 mixed-distortion image databases.
2020, 42(10): 2541-2548.
doi: 10.11999/JEIT190796
Abstract:
A Dual-channel Denoising Convolutional Neural Network (D-DnCNN) model is proposed for the removal of Random-Valued Impulse Noise (RVIN). To obtain the reference image quickly, several Rank-Ordered Logarithmic absolute Difference (ROLD) statistics and one edge-feature statistic are first extracted from a local window to construct an RVIN-aware feature vector that describes whether the central pixel of the patch is RVIN or not. Next, a noise detector based on a Deep Belief Network (DBN) is trained to map the extracted feature vectors to their noise labels, detecting all noise-like pixels in the observed image. Then, guided by the noise labels, a Delaunay-triangulation-based interpolation algorithm quickly restores all detected noise-like pixels and generates a preliminary restored image used as the reference image. Finally, the reference image and the noisy image are fed together into the D-DnCNN model to output the corresponding residual image, and the final restored image is obtained by subtracting the residual image from the noisy image. Extensive experimental results show that the proposed D-DnCNN model outperforms existing state-of-the-art switching denoisers across a range of noise ratios, and also works better than the ordinary single-channel DnCNN model.
2020, 42(10): 2549-2556.
doi: 10.11999/JEIT190077
Abstract:
The FPGA memory mapping algorithm utilizes the distributed storage resources on the chip, together with auxiliary circuits, to realize the logical storage functions that users design. Previous studies on dual-port memory mapping algorithms are relatively few, and there is still much room for improvement over the mapping results of mature commercial EDA tools. An optimization algorithm for dual-port memory mapping targeting area, delay, and power consumption is proposed, and a specific configuration scheme is given. Experiments show that for simple storage requirements the mapping results are consistent with those of commercial tools, while for complex storage requirements the area-optimized and power-optimized mapping results improve by at least 50% over the commercial tool Vivado.
2020, 42(10): 2557-2565.
doi: 10.11999/JEIT190880
Abstract:
DNA strand displacement technology has the characteristics of spontaneity, parallelism, programmability, and dynamic cascading, and is widely used to solve mathematical problems. In this paper, a two-bit subtracter is designed using Gray code encoding and DNA strand displacement technology to extend DNA-based subtraction. Finally, the Visual DSD software is used to simulate the two-bit subtracter. The circuit, with strong parallelism and extensibility, achieves the expected function and can be combined with other biochemical circuits.
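At the logic level, the subtracter's intended behavior can be modeled in software. The sketch below checks a conventional gate-level two-bit subtractor operating on Gray-coded inputs against modular arithmetic; it models only the Boolean function, not the strand-displacement reactions simulated in Visual DSD, and the gate network is a generic one rather than the paper's specific design.

```python
def gray(n):
    return n ^ (n >> 1)                    # binary -> Gray code

def ungray(g):
    n = 0                                  # Gray code -> binary
    while g:
        n ^= g
        g >>= 1
    return n

def full_sub(a, b, bin_):
    """One-bit full subtractor: computes a - b - borrow_in at gate level."""
    d = a ^ b ^ bin_
    bout = ((~a & 1) & b) | ((~(a ^ b) & 1) & bin_)
    return d, bout

def sub2(x, y):
    """Two-bit subtractor on Gray-coded operands: decode, chain two full
    subtractors, re-encode the difference; returns (Gray diff, borrow-out)."""
    xb, yb = ungray(x), ungray(y)
    d0, b0 = full_sub(xb & 1, yb & 1, 0)
    d1, b1 = full_sub((xb >> 1) & 1, (yb >> 1) & 1, b0)
    return gray(d0 | (d1 << 1)), b1

# Exhaustive check against arithmetic mod 4 over all 16 input pairs.
ok = all(ungray(sub2(gray(x), gray(y))[0]) == (x - y) % 4
         for x in range(4) for y in range(4))
print(ok)
```

An exhaustive truth-table check like this is the software analogue of verifying the expected function in simulation, which is feasible here because a two-bit subtracter has only 16 input combinations.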
2020, 42(10): 2566-2572.
doi: 10.11999/JEIT190719
Abstract:
In the research of a low-power True Random Number Generator (TRNG) with a high-noise source, a new type of low-frequency clock is designed. It amplifies the thermal noise of the resistor more than 100 times, which reduces the required bandwidth and resistance value, and thus the area and power consumption of the circuit, while the jitter of the low-frequency clock reaches 58.2 ns. The circuit is designed in the SMIC 40 nm CMOS process, and tape-out and testing are completed. The output speed of the TRNG ranges from 1.38 to 3.33 Mbit/s, the overall power consumption is 0.11 mW, and the area is 0.00789 mm2. The random number output meets the test requirements of the AIS31 true-random-number entropy source and passes the level-2 security test of the Chinese national cryptographic standard.