2021 Vol. 43, No. 8
2021, 43(8): 2121-2127.
doi: 10.11999/JEIT200769
Abstract:
This paper proposes a universal code-density calibration signal generation method for Time-to-Digital Converters (TDC), based on the theory of coherent sampling. By properly setting the frequency difference between the TDC master clock and the calibration signal, and combining it with an output hold circuit, a random calibration signal is generated that is evenly distributed along the TDC delay path, enabling bin-by-bin calibration of the TDC. A carry-chain plain TDC is implemented on Xilinx's 28 nm Kintex-7 Field Programmable Gate Array (FPGA). The method is used to calibrate the code width (tap delay time) of the plain TDC, and the performance of the TDC in 2-tap mode is studied and calibrated. The time resolution (one Least Significant Bit, LSB) is 24.9 ps, the differential nonlinearity is –0.84 to 3.1 LSB, and the integral nonlinearity is –5.0 to 2.2 LSB. The calibration method is implemented with clock logic resources rather than combinational logic resources, and repeated tests show that the standard deviation of a single delay unit is better than 0.5 ps, demonstrating high-precision automatic calibration of the plain TDC with good repeatability and stability. The method is also suitable for code-density calibration of other types of TDC.
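A minimal numerical sketch of the code-density step the calibration relies on, assuming uniformly distributed random hits: each code's bin width is the fraction of hits falling into it times the master clock period, from which DNL and INL follow. The tap count, clock period, and hit source below are illustrative, not the paper's FPGA setup.

```python
import numpy as np

def code_density_calibration(codes, n_bins, t_clk_ps):
    """Bin-by-bin code-density calibration of a delay-line TDC.

    codes    : raw TDC output codes from random (uniformly distributed) hits
    n_bins   : number of delay taps (possible codes)
    t_clk_ps : master clock period in picoseconds
    """
    hist = np.bincount(codes, minlength=n_bins).astype(float)
    n_hits = hist.sum()
    widths = hist / n_hits * t_clk_ps            # each bin width ~ its hit fraction
    edges = np.concatenate(([0.0], np.cumsum(widths)))
    centers = 0.5 * (edges[:-1] + edges[1:])     # calibrated time for each code
    lsb = t_clk_ps / n_bins                      # ideal (mean) bin width
    dnl = widths / lsb - 1.0                     # differential nonlinearity, in LSB
    inl = np.cumsum(dnl)                         # integral nonlinearity, in LSB
    return centers, dnl, inl

# Example: 1e6 uniform hits over a 10 ns clock period with 400 taps (toy values)
rng = np.random.default_rng(0)
codes = rng.integers(0, 400, size=1_000_000)
centers, dnl, inl = code_density_calibration(codes, 400, 10_000.0)
```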
2021, 43(8): 2128-2139.
doi: 10.11999/JEIT210003
Abstract:
Hardware Trojans are the main security threat in third-party Intellectual Property (IP) cores. Existing pre-silicon hardware Trojan detection methods are difficult to apply to large-scale Trojan detection, and their detection accuracy is hard to improve. To reduce the cost of trustworthiness analysis, a gate-level netlist abstract modeling algorithm is proposed that builds a directed graph of the gate-level netlist and stores the graph in a cross-linked list. Furthermore, the characteristics of hardware Trojans are analyzed from the attacker's point of view, and a 7-dimensional feature vector based on the directed graph is proposed, together with a feature extraction algorithm that computes these features from the gate-level netlist. A Trojan feature expansion algorithm based on the Synthetic Minority Oversampling Technique combined with Edited Nearest Neighbors (SMOTEENN) is introduced to expand the number of Trojan samples, and a Support Vector Machine (SVM) is used to identify hardware Trojans. Fifteen benchmark circuits from Trust-Hub are used to validate the approach, which achieves an accuracy of 97.02%; compared with the existing reference, the True Positive Rate (TPR) is increased by 13.80%, and the True Negative Rate (TNR) and ACCuracy (ACC) are increased by 0.92% and 2.48%, respectively.
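The class-imbalance handling described above maps directly onto standard library calls; the following sketch, with synthetic stand-in features (the real 7-dimensional netlist features and Trust-Hub labels are not reproduced here), shows SMOTEENN resampling feeding an SVM and the TPR/TNR/ACC metrics reported above.

```python
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# X: 7-dimensional feature vectors per net; y: 1 for Trojan nets, 0 for normal
# nets (Trojan class heavily outnumbered). Synthetic data for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 7))
y = np.r_[np.zeros(1950), np.ones(50)].astype(int)
X[y == 1] += 2.0                                   # make the toy classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # oversample + clean
clf = SVC(kernel="rbf", gamma="scale").fit(X_res, y_res)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
tpr, tnr = tp / (tp + fn), tn / (tn + fp)
acc = (tp + tn) / (tp + tn + fp + fn)
print(f"TPR={tpr:.3f}  TNR={tnr:.3f}  ACC={acc:.3f}")
```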
2021, 43(8): 2140-2148.
doi: 10.11999/JEIT200633
Abstract:
Synchronization of Tree Parity Machines (TPM) by mutual learning can be used to build key exchange schemes, whose security depends on the structure parameters of the TPM. To find parameters that make the key exchange more secure while requiring less computation, a TPM-based key exchange optimization scheme is proposed. First, vectorized learning rules are defined to improve the efficiency of TPM synchronization. Second, the cooperating attack algorithm against TPM synchronization is improved to adapt to parameter changes. Finally, the efficiency and security of the scheme are tested by simulation. The results show that vectorization reduces the synchronization time by about 90% without changing the number of synchronization steps or weakening security. Among the parameter sets that can generate a 512-bit fixed-length key, (14, 14, 2) has a 0% probability of being broken by the cooperating attack and a shorter synchronization time. The proposed key exchange optimization scheme is therefore both secure and efficient.
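For concreteness, a vectorized mutual-learning sketch of TPM synchronization in NumPy. The Hebbian update rule and the assumption that the paper's triple (14, 14, 2) is ordered (K, N, L) are ours; the paper's improved cooperating attack is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 14, 14, 2            # assumed ordering of the paper's (14, 14, 2) triple

def tpm_output(W, X):
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1                      # convention for a zero local field
    return sigma, int(np.prod(sigma))

def hebbian_update(W, X, sigma, tau):
    # Vectorized Hebbian rule: only hidden units agreeing with tau are updated.
    mask = (sigma == tau)[:, None]
    return np.clip(W + mask * tau * X, -L, L)

WA = rng.integers(-L, L + 1, size=(K, N))       # party A's weights
WB = rng.integers(-L, L + 1, size=(K, N))       # party B's weights
steps, max_steps = 0, 1_000_000
while not np.array_equal(WA, WB) and steps < max_steps:
    X = rng.choice([-1, 1], size=(K, N))        # common random input
    sA, tauA = tpm_output(WA, X)
    sB, tauB = tpm_output(WB, X)
    if tauA == tauB:                            # learn only on agreeing outputs
        WA = hebbian_update(WA, X, sA, tauA)
        WB = hebbian_update(WB, X, sB, tauB)
    steps += 1
print("synchronized after", steps, "total steps; key bits derived from WA")
```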
2021, 43(8): 2149-2155.
doi: 10.11999/JEIT200676
Abstract:
Self-Shrinking Control (SSC) sequences are an important class of pseudo-random sequences, which are widely used in fields such as communication encryption and coding. In these applications, sequences are usually required to have large periods and high linear complexity. To construct pseudo-random sequences with larger period and higher linear complexity, a new SSC sequence model based on the m-sequence over GF(3) is constructed, and the period and linear complexity of the generated sequence are studied using finite field theory. The model greatly increases the period and linear complexity of the generated sequence and yields a tighter upper bound on its linear complexity, thereby improving the anti-attack capability and security of the generated sequence in communication encryption.
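A small sketch of the ingredients: an m-sequence generated by an LFSR over GF(3) (here degree 2, primitive polynomial x^2 + x + 2, period 3^2 - 1 = 8) and a generic self-shrinking control step. The control rule shown is an illustrative placeholder, not the paper's SSC model.

```python
# Ternary m-sequence from an LFSR over GF(3), plus a generic self-shrinking
# step. The shrinking rule below is an illustrative choice, not the paper's.

def gf3_m_sequence(length, taps=(1, 2), state=(0, 1)):
    """LFSR over GF(3) with recurrence s[t+2] = 2*s[t+1] + 1*s[t] (mod 3),
    i.e. feedback from the primitive polynomial x^2 + x + 2 over GF(3);
    the output is an m-sequence of maximal period 3^2 - 1 = 8."""
    s = list(state)
    out = []
    for _ in range(length):
        out.append(s[0])
        fb = (taps[1] * s[1] + taps[0] * s[0]) % 3
        s = [s[1], fb]
    return out

def self_shrink(seq):
    """Illustrative self-shrinking control: read the sequence in pairs
    (c, d) and output d only when the control symbol c is nonzero."""
    return [d for c, d in zip(seq[0::2], seq[1::2]) if c != 0]

m = gf3_m_sequence(64)
print("m-sequence:   ", m[:16])
print("self-shrunken:", self_shrink(m)[:16])
```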
2021, 43(8): 2156-2164.
doi: 10.11999/JEIT200600
Abstract:
To raise spectrum utilization, a modulation and demodulation method called modulated Direct Sequence Spread Spectrum (m-DSSS) is proposed, in which the period of the m-sequence is set equal to an integer multiple of the data bit duration. First, the feasibility of m-DSSS signal acquisition is verified by numerical simulation of the correlation. Then, a multi-channel modulation and demodulation scheme for m-DSSS is designed according to the permutation of information bits, and a mathematical model reflecting its anti-interference performance over an additive white Gaussian noise channel is established. Finally, simulations compare m-DSSS with Code Shift Keying (CSK) modulation at the same spectrum utilization. The results show that the m-DSSS signal not only has lower correlation side peaks than the CSK signal but can also assist acquisition through polarity decisions when BPSK modulation is adopted. At a bit error rate better than 10^-3, m-DSSS has more than a 2 dB advantage over CSK, which verifies the feasibility of m-DSSS in DSSS systems.
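The spread-and-correlate mechanics behind the acquisition claim can be illustrated as follows; the multi-channel permutation scheme itself is not reproduced, and the code, bit pattern, and delay are toy values.

```python
import numpy as np

def m_sequence(n, taps):
    """+/-1-valued m-sequence from an n-stage Fibonacci LFSR whose feedback
    taps (1-indexed) come from a primitive polynomial; taps=(5, 2) realizes
    x^5 + x^2 + 1 (up to reversal), giving maximal period 2^5 - 1 = 31."""
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array([1 - 2 * b for b in out])

code = m_sequence(5, (5, 2))                   # 31-chip spreading code
bits = np.array([1, -1, 1, 1, -1])
tx = np.concatenate([b * code for b in bits])  # BPSK, one bit per code period
rx = np.roll(tx, 7) + 0.5 * np.random.default_rng(0).normal(size=tx.size)

# Sliding correlation over one code period: the peak location gives the code
# phase, and the peak sign carries the data polarity (the BPSK case above).
corr = np.array([np.dot(np.roll(code, k), rx[:code.size]) for k in range(code.size)])
k_hat = int(np.abs(corr).argmax())
print("estimated code phase:", k_hat, " polarity:", int(np.sign(corr[k_hat])))
```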
2021, 43(8): 2165-2170.
doi: 10.11999/JEIT200532
Abstract:
Sphere Decoding (SD) based detection algorithms for Sparse Code Multiple Access (SCMA) systems are receiving increasing attention due to their excellent performance. However, existing SD-based detection algorithms can only be applied to certain constellation structures, which limits their application. An Improved SD (ISD) detection scheme is proposed in this paper that achieves Maximum Likelihood (ML) performance for any constellation. The improved algorithm splits the user constellations and converts them into a multi-layer tree structure, then searches the tree from the highest layer to the lowest to perform decoding, so that SCMA detection is converted into minimizing the metrics of the tree structure. Meanwhile, the algorithm places no restrictions on the constellation structure and is therefore suitable for any constellation. In addition, owing to the sparsity of the SCMA structure, the partial metric at each layer is independent of the users assigned to each Resource Element (RE), which further reduces the computational complexity.
2021, 43(8): 2171-2180.
doi: 10.11999/JEIT200545
Abstract:
Synchronization and estimation of the Pseudo-Noise (PN) code of a non-cooperative Direct Sequence Spread Spectrum (DSSS) system is the key to recovering the information correctly. Previous works mostly concentrate on Short-Code or Periodic Long-Code DSSS (SC-DSSS, PLC-DSSS) signals. To estimate the out-of-step time of a Non-Periodic Long-Code DSSS (NPLC-DSSS) signal without prior knowledge of the PN code, a method based on modeling the distribution of the correlation matrix elements is proposed. The auto-correlation matrix of information-bit-long segments is constructed, and an accurate estimate of the out-of-step time is obtained from the Frobenius norm viewed as a function of the out-of-step time. On this basis, by introducing a decision-aided idea, a cyclic iterative structure is constructed to achieve blind estimation of the PN sequence of the NPLC-DSSS signal. Finally, the Cramer-Rao Bound (CRB) for the blind PN code estimation problem is derived. Numerical results demonstrate that the proposed method achieves better estimation accuracy under the same signal-to-noise ratio and data volume conditions, with performance close to the theoretical bound.
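A schematic of the search statistic: segment the received samples at each candidate offset into information-bit-long pieces, form their correlation matrix, and scan the Frobenius norm over the offset. For reproducibility the toy below uses a short repeating code, whereas the paper targets non-periodic long codes and additionally models the element distributions.

```python
import numpy as np

rng = np.random.default_rng(2)
Nc = 63                                         # chips per information bit
code = rng.choice([-1.0, 1.0], size=Nc)         # toy code (paper: non-periodic long code)
bits = rng.choice([-1.0, 1.0], size=200)
x = np.concatenate([b * code for b in bits])
true_offset = 17
x = np.r_[rng.choice([-1.0, 1.0], size=true_offset), x]   # unknown out-of-step time
x = x + 0.7 * rng.normal(size=x.size)

def frobenius_statistic(x, Nc, offset, n_seg=150):
    seg = x[offset:offset + n_seg * Nc].reshape(n_seg, Nc)
    R = seg @ seg.T / Nc                        # bit-length segment correlation matrix
    return np.linalg.norm(R, "fro")

stats = [frobenius_statistic(x, Nc, k) for k in range(Nc)]
# Aligned segmentation maximizes inter-segment correlation magnitudes.
print("estimated out-of-step time:", int(np.argmax(stats)), "(true:", true_offset, ")")
```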
2021, 43(8): 2181-2188.
doi: 10.11999/JEIT200601
Abstract:
To enable an Unmanned Aerial Vehicle (UAV) swarm to arrive at a specified place quickly and safely, a partition rendezvous control strategy is proposed. Considering the initial position of each UAV, the rendezvous area, and the formation pattern, a target rendezvous point is assigned to each UAV so that the total range is minimized. The area near the rendezvous point is divided into several zones, and UAVs in different zones fly in straight lines to their target rendezvous points in turn according to certain rules, without excess route energy consumption or mutual interference. The UAVs communicate stably with each other through ultraviolet light and share known information within the swarm. Experimental results show that as the number of zones increases, the rendezvous time decreases in a stepped manner; the step height is approximately linear in the maximum number of UAVs per zone, and the predicted collision probability gradually decreases and finally approaches 0. In addition, a method for selecting the optimal number of zones according to different needs is proposed.
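The assignment step, minimizing total range from initial positions to formation slots, is a linear sum assignment problem; below is a sketch with illustrative positions and a circular formation pattern (the paper's zoning and sequencing rules are not shown).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
starts = rng.uniform(0, 1000, size=(8, 2))        # initial UAV positions (m)
center = np.array([500.0, 500.0])                 # rendezvous area center
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
slots = center + 60.0 * np.c_[np.cos(angles), np.sin(angles)]  # circular formation

# Cost = straight-line distance from each UAV to each formation slot; the
# Hungarian algorithm minimizes the total range over all assignments.
cost = np.linalg.norm(starts[:, None, :] - slots[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
print("total range:", round(float(cost[rows, cols].sum()), 1), "m")
for u, s in zip(rows, cols):
    print(f"UAV {u} -> slot {s}")
```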
2021, 43(8): 2189-2198.
doi: 10.11999/JEIT200587
Abstract:
Due to random channel delays and channel estimation errors, traditional optimal resource allocation algorithms in Device-to-Device (D2D) communication networks have weak robustness. In this paper, a robust resource allocation algorithm for energy-efficiency maximization of D2D users is proposed under parameter uncertainties. Specifically, a multi-user resource allocation model for a D2D network with underlay spectrum sharing is established under constraints on the interference power threshold, the minimum rate requirement, the maximum transmit power, and sub-channel allocation. Based on bounded channel uncertainty models, the original non-convex robust resource allocation problem is converted into a deterministic convex one using the worst-case approach. The analytical solution of the robust resource allocation problem is then obtained via Lagrange dual theory. Simulation results demonstrate that the proposed algorithm has good robustness.
2021, 43(8): 2199-2206.
doi: 10.11999/JEIT210068
Abstract:
With increasing signal frequency, bandwidth, and transmission distance, the signal distortion introduced by a coaxial cable becomes serious and cannot be ignored, and it becomes worse if the cable is accidentally squeezed, stretched, or folded during use. Herein, a modified signal compensation method is proposed based on non-negative Tikhonov regularization with Bayesian inference. The method effectively avoids the ill-conditioned matrix problem in the inverse analysis: the input signal is reconstructed from the impulse response function of the coaxial cable and the measured output signal. Three types of pulse signals transmitted through a 15 m extruded coaxial cable, namely a double-exponential pulse, a modulated square wave, and a bipolar pulse, are compensated. The results show that the method achieves an excellent compensation effect, with the deviation between the compensated and input signals far lower than that of the typical attenuation compensation method. Moreover, the method exhibits strong robustness, maintaining good stability when the signal-to-noise ratio is above 30 dB.
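A minimal sketch of non-negative Tikhonov deconvolution, assuming a toy exponential impulse response and a fixed regularization weight (the paper selects it via Bayesian inference): the problem min ||Hx - y||^2 + lam*||x||^2 with x >= 0 is solved as NNLS on a stacked system.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n = 200
t = np.arange(n)
h = np.exp(-t / 12.0); h /= h.sum()               # toy cable impulse response
x_true = np.exp(-t / 30.0) - np.exp(-t / 6.0)     # double-exponential pulse (>= 0)
H = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])     # causal convolution matrix
y = H @ x_true + 0.002 * rng.normal(size=n)       # distorted, noisy output

# Non-negative Tikhonov: min ||Hx - y||^2 + lam*||x||^2 s.t. x >= 0,
# rewritten as NNLS on the stacked system [H; sqrt(lam)*I] x = [y; 0].
lam = 1e-3
A = np.vstack([H, np.sqrt(lam) * np.eye(n)])
b = np.r_[y, np.zeros(n)]
x_hat, _ = nnls(A, b)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", round(float(err), 4))
```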
2021, 43(8): 2207-2213.
doi: 10.11999/JEIT190779
Abstract:
A RAnk-REduced MUSIC (RARE-MUSIC) algorithm based on a Coprime Linear Array Coprime Shift Sparse Planar Array (CLACS-SPA) is proposed to address the problems of overly complex array structures, high algorithmic complexity, and a large spectral search range in three-dimensional source localization. The proposed CLACS-SPA is a centrosymmetric coprime sparse array, which reduces the number of antennas and the structural complexity compared with a uniform planar array of the same aperture. The direction and distance information in the received signal are separated and estimated via a Taylor expansion, so the three-dimensional spectral peak search is decomposed into a two-dimensional search over azimuth and elevation angles and a one-dimensional search over distance, reducing the computational complexity of the localization algorithm. Simulation results show that, under the same aperture and localization algorithm, the structural complexity of the proposed array is one to two orders of magnitude lower than that of a uniform planar array. Moreover, on the same-aperture CLACS-SPA, the complexity of the proposed RARE-MUSIC algorithm is two to three orders of magnitude lower than that of the classical three-dimensional MUSIC algorithm. Under the same aperture and number of antennas, RARE-MUSIC also greatly reduces the computational complexity and improves the measurement accuracy of azimuth and elevation angles compared with classical three-dimensional MUSIC.
2021, 43(8): 2214-2223.
doi: 10.11999/JEIT200617
Abstract:
With the wide deployment of grid-connected wind power and power electronics, voltage fluctuation and flicker cannot be ignored in the smart grid. Considering the voltage flicker model under rectangular wave modulation, to which the human eye is more sensitive, a method for detecting voltage flicker envelope parameters based on an improved energy operator and windowed-interpolation Fast Fourier Transform (FFT) is proposed. By optimizing the sampling interval of the energy operator, the voltage fluctuation components can be accurately extracted. A Maximum Side-Lobe Decay Self-Convolution Window (MSLD-SCW) with excellent frequency-domain performance is obtained from an improved six-term combined cosine window with maximum side-lobe decay; the spectral-line interpolation correction formula based on the new MSLD-SCW is derived, and the flicker parameters are detected and analyzed accordingly. Simulation results indicate that the algorithm maintains higher detection accuracy than traditional methods under single-frequency rectangular wave modulation, multi-frequency modulation, harmonic and sub-/super-synchronous inter-harmonic interference, grid fundamental frequency deviation, and noise. Finally, the algorithm is applied to voltage flicker envelope parameter detection in a region of Xinjiang, and its effectiveness is verified.
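The windowed-interpolation idea can be shown with the plain Hann window (the paper's MSLD-SCW and its correction formula are not reproduced): for a Hann-windowed spectrum, the fractional bin offset follows from the ratio of the two largest spectral lines as delta = (2*alpha - 1)/(alpha + 1).

```python
import numpy as np

fs, n = 3200.0, 1024
t = np.arange(n) / fs
f0 = 13.37                                        # toy flicker-band component (Hz)
x = np.cos(2 * np.pi * f0 * t + 0.4)

win = np.hanning(n)
X = np.abs(np.fft.rfft(x * win))
k = int(np.argmax(X[1:]) + 1)                     # highest spectral line (skip DC)

# Two-line interpolation for the Hann window: with alpha = |X[k+1]| / |X[k]|
# the fractional bin offset is delta = (2*alpha - 1) / (alpha + 1); using the
# smaller-index neighbour instead flips the sign of delta.
if X[k + 1] >= X[k - 1]:
    alpha = X[k + 1] / X[k]
    delta = (2 * alpha - 1) / (alpha + 1)
else:
    alpha = X[k - 1] / X[k]
    delta = -(2 * alpha - 1) / (alpha + 1)
f_hat = (k + delta) * fs / n
print(f"estimated frequency: {f_hat:.4f} Hz (true {f0} Hz)")
```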
2021, 43(8): 2224-2231.
doi: 10.11999/JEIT200586
Abstract:
To address request conflicts in the all-optical switching nodes of space-division multiplexing elastic optical networks with few-mode fibers, an all-optical Layered Architecture based on a Resource reservation module and a Shared Limited-range spectrum converter (LARSL) is studied in this paper. First, an inter-mode Crosstalk-Avoidance Conflict Resolution Algorithm for LARSL (LARSL-CACRA), which combines space, frequency, and time domain resources, is put forward. Then, a sliding-window method for computing crosstalk-avoidance resource blocks is designed to find more balanced mode-spectrum resource blocks in the space-frequency domain for spectrum-conflicting requests. Moreover, a time-domain resource reservation module is designed to defer still-conflicting requests to the next scheduling time, further reducing the bandwidth blocking probability. Simulation results show that the proposed LARSL-CACRA decreases the node's bandwidth blocking probability and also reduces the cache delay.
2021, 43(8): 2232-2239.
doi: 10.11999/JEIT200707
Abstract:
To enhance the transmission reliability of event-driven wireless sensor networks, a cooperative transmission scheme for event-driven dynamic clustering networks is proposed that exploits collaboration between nodes. When no event occurs, each node transmits data to its cluster head at a very low frequency within a pre-formed static cluster. Once an event occurs, the nodes that perceive it quickly form an event cluster and send data to the cluster head, which fuses the data and reports it to the sink node. If the cluster head fails to transmit, the best relay cooperates with it to forward the data and improve transmission reliability. In selecting the best relay, since events move continuously, the nodes on the forward channel of an event have strong sensing intensity and good cooperation ability; a best-relay selection strategy based on the forward channel is therefore developed. Simulation and experimental results show that the proposed scheme effectively improves transmission reliability.
2021, 43(8): 2240-2248.
doi: 10.11999/JEIT200631
Abstract:
Applying Information-Centric Networking (ICN) to the Internet of Things (IoT) architecture (ICN-IoT) can solve the data distribution problem and improve the efficiency of data transmission. However, existing ICN-IoT cache policies mainly configure caches along a single dimension, such as content popularity or freshness, which cannot adapt to the massive and polymorphic characteristics of IoT data and results in low cache efficiency. To address this, the characteristics of IoT data are first analyzed and the data are divided into periodic data and event-triggered data. Then, an ICN-IoT Caching Scheme for the Data Characteristics of IoT (CS-DCI) with different cache decisions is proposed: cache routers identify the type of arriving data and invoke the corresponding caching decision. Finally, the caching strategies for the two data types are detailed. For periodic data, the most-requested data are cached based on content popularity and time-dependent request probability; for event-triggered data, meaningful data are cached based on content popularity and event trigger frequency. Simulation results show that the scheme increases the diversity of cached content, thereby satisfying the requests of different ICN-IoT applications, achieving a better cache hit ratio and reducing content acquisition hops.
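A toy illustration of type-dependent cache decisions, with made-up scoring formulas standing in for the paper's popularity/request-probability and popularity/trigger-frequency rules:

```python
class IoTCache:
    """Toy CS-DCI-style cache: periodic data are scored by popularity weighted
    by how soon the next request is expected; event-triggered data are scored
    by popularity times event trigger frequency. Illustrative scoring only;
    the paper's exact formulas are not shown."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                        # name -> (score, data)

    def _admit(self, name, score, data):
        if name in self.store or len(self.store) < self.capacity:
            self.store[name] = (score, data)
            return
        worst = min(self.store, key=lambda n: self.store[n][0])
        if self.store[worst][0] < score:       # evict the lowest-scored item
            del self.store[worst]
            self.store[name] = (score, data)

    def on_periodic(self, name, data, popularity, request_prob_now):
        self._admit(name, popularity * request_prob_now, data)

    def on_event(self, name, data, popularity, trigger_freq):
        self._admit(name, popularity * trigger_freq, data)

cache = IoTCache(capacity=2)
cache.on_periodic("temp/room1", 21.5, popularity=0.9, request_prob_now=0.8)
cache.on_event("alarm/door3", "open", popularity=0.6, trigger_freq=0.5)
cache.on_periodic("temp/room2", 19.0, popularity=0.2, request_prob_now=0.1)
print(sorted(cache.store))                     # room2's low score keeps it out
```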
2021, 43(8): 2249-2257.
doi: 10.11999/JEIT200639
Abstract:
Whether for the traditional fixed-step-size Least Mean Square (LMS) algorithm or for newly proposed variants, a priori estimation of the algorithm parameters is required to achieve good results when processing signals with specific mathematical features; in practice, however, estimating these parameters is very difficult. In this paper, the mean square deviation and convergence characteristics of the LMS algorithm are analyzed, and a variable step-size LMS algorithm that takes the relative error as its variable is proposed; it self-estimates the step-control parameters and adapts to signals with different mathematical features. Examples show that the new algorithm has faster convergence and a smaller mean square error.
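A sketch of a variable step-size LMS driven by the relative error; the specific step-control law below is an illustrative stand-in for the one derived in the paper.

```python
import numpy as np

def vss_lms(x, d, order, mu_max=0.5):
    """Variable step-size LMS sketch: the step size is scaled by the
    instantaneous relative error |e| / (|d| + eps), so it stays large while
    the filter is far from converged and shrinks near the optimum.
    Illustrative rule only; the paper derives its own step-control law."""
    w = np.zeros(order)
    e_hist = np.zeros(d.size)
    eps = 1e-8
    for n in range(order - 1, d.size):
        u = x[n - order + 1:n + 1][::-1]             # regressor, most recent first
        e = d[n] - w @ u
        rel = abs(e) / (abs(d[n]) + eps)             # relative error
        mu = mu_max * min(rel, 1.0) / (u @ u + eps)  # normalized, error-driven step
        w += mu * e * u
        e_hist[n] = e
    return w, e_hist

rng = np.random.default_rng(5)
w_true = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.normal(size=5000)
d = np.convolve(x, w_true)[:x.size] + 0.01 * rng.normal(size=x.size)
w_hat, e = vss_lms(x, d, order=4)
print("identified taps:", np.round(w_hat, 3))        # close to w_true
```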
2021, 43(8): 2258-2266.
doi: 10.11999/JEIT200524
Abstract:
To solve the underdetermined blind source separation problem for attenuated and time-delayed mixtures of multiple sources, an approach based on source number estimation is proposed. First, a sparse domain is constructed by computing the energy of the observations in the time-frequency domain. Second, a potential function is used to estimate the number of sources in the energy domain. Third, the frequency points corresponding to the peaks of the energy sum are selected to predict the time-frequency masks, from which the spectra of the estimated sources are obtained. Finally, a padding line is used to resolve the boundary effect of the time-domain separated signals. Experimental results demonstrate that the proposed method effectively recovers simulated sources from time-delayed mixtures in the underdetermined case and outperforms the sparse clustering algorithm and the subspace method at various signal-to-noise ratios. In addition, in a hammering test of an actual cantilever beam, the method successfully estimates the modal order and identifies the natural frequency of each monomodal response.
2021, 43(8): 2267-2275.
doi: 10.11999/JEIT200501
Abstract:
The one-dimensional range profile synthesis performance of Stepped-Frequency (SF) ISAR based on traditional discrete compressed sensing degrades under off-grid conditions. To solve this problem, a high-resolution range profile synthesis method for SF ISAR based on Atomic Norm Minimization (ANM) is proposed. First, a grid-free sparse representation model of the SF ISAR range profile based on the atomic norm is constructed, turning one-dimensional range synthesis into an atomic coefficient and frequency estimation problem. Then, using the semidefinite characterization of the atomic norm, the minimization is recast as a positive semidefinite programming problem and solved efficiently via the Alternating Direction Method of Multipliers (ADMM). Finally, the high-resolution range profile is obtained through Vandermonde decomposition. Because grid discretization is avoided, high-resolution range imaging can be achieved under grid mismatch and with few measurements, while maintaining high range resolution. Theoretical analysis and simulation experiments verify the effectiveness of the proposed method.
2021, 43(8): 2276-2285.
doi: 10.11999/JEIT200785
Abstract:
In the spaceborne azimuth-multichannel SAR squinted mode, the squint angle aliases the 2-D spectrum of the echo signal, and the velocity of a moving target causes multichannel imbalance; both phenomena degrade azimuth multichannel reconstruction for moving targets. To resolve this, an azimuth multichannel reconstruction method for moving targets in the squinted mode is proposed. It eliminates the secondary Doppler aliasing caused by the squint angle through azimuth de-ramp preprocessing, and then resolves the multichannel imbalance caused by the target velocity with an improved multichannel reconstruction matrix. The clutter suppression ability in the case of channel redundancy is analyzed, and the residual phase error caused by velocity estimation error is discussed. Furthermore, an effective moving-target velocity estimation approach is proposed. Finally, point-target simulation results validate the effectiveness of the proposed approach.
2021, 43(8): 2286-2291.
doi: 10.11999/JEIT190894
Abstract:
To solve the problem of inaccurate estimation of low-altitude wind-shear wind speed in non-uniform clutter environments, a wind speed estimation method based on echo power screening and Digital Land Classification Data (DLCD) assistance is proposed. The method first uses the sample echo power to preselect training samples, then uses the DLCD to compute the similarity between samples and selects, from the higher-power training samples, those with higher similarity to estimate the clutter covariance matrix. Finally, the Generalized adjacent Multiple-Beam (GMB) Joint Domain Localized (JDL) method is used to achieve effective wind speed estimation of low-altitude wind shear.
2021, 43(8): 2292-2299.
doi: 10.11999/JEIT200593
Abstract:
Based on fused SMAP satellite L-band cross-polarized brightness temperatures, a multi-iteration clustering algorithm for Radio Frequency Interference (RFI) detection and recognition is established from the spatial distribution of RFI density and intensity, and the spatial and temporal distribution and variation characteristics of the density and cumulative intensity of typical Japanese RFI sources (broadcast satellite TV receivers) are extracted and analyzed. As a typical RFI source, TV receivers are mainly distributed in areas with relatively high urbanization levels and extents (stripe- or plane-shaped), with dotted RFI sources (possibly microwave radiation base stations) in local areas, producing locally high RFI levels. In areas with lower urbanization levels and extents, dot-round RFI sources are also detected, but their interference intensity and range are relatively limited. Beginning in 2018, the overall RFI distribution range and intensity level showed a downward trend. This work is of great significance for establishing RFI detection, identification, and suppression models in China.
2021, 43(8): 2300-2307.
doi: 10.11999/JEIT200506
Abstract:
Considering the complex signal processing and low recognition rate of Low Probability of Intercept (LPI) radar signals at low Signal-to-Noise Ratio (SNR), a signal classification and recognition system based on a Denoising Convolutional Neural Network (DnCNN) and an Inception network is proposed. First, eight kinds of LPI radar signals are transformed by the Choi-Williams Distribution (CWD) to obtain two-dimensional time-frequency images. Then, the DnCNN is used to denoise the time-frequency images. Finally, the images are fed to an Inception-v4 network for feature extraction, and a softmax classifier performs the classification, achieving effective recognition of LPI radar signals. Simulation results show that the recognition rate of this method still exceeds 90% at an SNR of –10 dB.
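The DnCNN front end is a standard published architecture; a minimal PyTorch version is sketched below (depth, width, and input size are the usual defaults, and the Inception-v4 classifier stage is omitted).

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Standard DnCNN: Conv+ReLU, then D-2 blocks of Conv+BN+ReLU, then Conv.
    The network learns the noise residual, which is subtracted from the input
    image (here, a CWD time-frequency image) to denoise it."""
    def __init__(self, depth=17, channels=64, in_ch=1):
        super().__init__()
        layers = [nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, in_ch, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)        # residual learning: output = input - noise

tf_image = torch.randn(1, 1, 128, 128)  # stand-in for a CWD image at low SNR
denoised = DnCNN()(tf_image)
print(denoised.shape)                    # denoised image then goes to Inception-v4
```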
2021, 43(8): 2308-2316.
doi: 10.11999/JEIT200671
Abstract:
WiFi-based localization methods suffer from the multipath problem in indoor environments, which leads to poor accuracy. Light Detection And Ranging (LiDAR) based localization methods can achieve good accuracy, but they are not feasible in simple, repetitive scenarios, where scene feature extraction and matching are difficult. Therefore, a novel localization method that fuses WiFi, LiDAR, and a map within a Kalman filter framework is proposed. In this framework, the filter state is the current and historical position sequence of the robot, and the observation consists of two parts. The first is the WiFi fingerprint localization result, based on the proposed distance-weighted WiFi fingerprint matching method over a multi-loop segmentation map. The second is the high-precision relative localization result (such as lateral localization) from LiDAR in a single repetitive scene; using an a priori reference position in the scene map, this lateral result is combined with the map to formulate linear constraints on the robot position. Finally, the Kalman filter yields accurate robot localization. The proposed algorithm is verified in two scenarios, using 2D and 3D LiDAR respectively. Experimental results show that the average localization error is reduced by 70%~80%, demonstrating that the proposed method improves localization accuracy and stability.
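A minimal sketch of the fusion idea in the Kalman framework, reduced to a 2-D position state: a noisy full-position WiFi fix and a precise one-dimensional LiDAR lateral constraint against a map reference line enter as two observation models. All numbers are illustrative.

```python
import numpy as np

P = np.eye(2) * 4.0                 # position covariance
p = np.array([3.0, 1.0])            # current position estimate

def kf_update(p, P, H, z, R):
    """Standard Kalman measurement update for observation z = H p + noise."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    p = p + K @ (z - H @ p)
    P = (np.eye(len(p)) - K @ H) @ P
    return p, P

# (a) WiFi fingerprint fix: observes the full position, with large noise.
p, P = kf_update(p, P, H=np.eye(2), z=np.array([3.6, 1.9]), R=np.eye(2) * 2.0)

# (b) LiDAR lateral constraint: corridor axis along x, so only y is observed,
# with centimetre-level noise; the map supplies the reference value c = 2.0.
H_lat = np.array([[0.0, 1.0]])
p, P = kf_update(p, P, H=H_lat, z=np.array([2.0]), R=np.array([[0.01]]))
print("fused position:", p.round(3))
```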
2021, 43(8): 2317-2323.
doi: 10.11999/JEIT200534
Abstract:
The navigation signal is the hub connecting space satellites and ground users and one of the most important parts of a satellite navigation and positioning system; its quality directly affects subsequent positioning, velocity measurement, timing, and other performance. The 40 m high-gain antenna of the National Time Service Center is used to acquire and compare multiple signal captures from the first Global Positioning System III (GPS III) satellite. Starting from the modulation vector diagram and spectrum distribution at the GPS III L1 frequency, an in-depth analysis of the "sliding" phenomenon in the L1 carrier phase is carried out, and it is concluded that the phenomenon mainly derives from the L1M signal. The C/A code is used to analyze the authorized M-code signal, and the S-curve zero-crossing deviation and the power proportions of the L1 signal components are quantitatively analyzed: the L1M S-curve zero-crossing deviation reaches 0.058 ns, and its signal power proportion is up to 6.78%. These results can support research on the modulation methods of new-generation GPS signals and provide references for subsequent BeiDou navigation satellite signal design and signal quality evaluation methods.
2021, 43(8): 2324-2333.
doi: 10.11999/JEIT200059
Abstract:
Due to the interference of atmospheric factors, the ambiguity resolution of network Real-Time Kinematic (RTK) reference stations is affected, and when new satellites rise above the preset cutoff elevation angle, a longer initialization convergence time is required. A fast ambiguity resolution method for network RTK reference stations is proposed. Firstly, an ionosphere-weighted strategy is used to assist the fast resolution of the baseline ambiguities. Then, an Extended Kalman Filter (EKF) is used to estimate the float ambiguity solution, and a partial ambiguity resolution method is adopted. Finally, the ambiguities are fixed by combining Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) with the ratio test. Experimental results show that this method can significantly improve the ambiguity fixing rate of network RTK reference stations and shorten the initialization convergence time.
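A hedged sketch of the final acceptance step mentioned above: after LAMBDA returns the best and second-best integer ambiguity candidates, the ratio test decides whether to fix. The threshold value and all numbers are assumptions for illustration.

```python
import numpy as np

def ratio_test(float_amb, Q_inv, best, second, threshold=3.0):
    """Accept the best integer candidate if the ratio of squared-norm
    residuals (second-best over best) exceeds the threshold."""
    r1 = (float_amb - best) @ Q_inv @ (float_amb - best)
    r2 = (float_amb - second) @ Q_inv @ (float_amb - second)
    return (r2 / r1) >= threshold

a_float = np.array([3.2, -1.9, 7.1])                 # float ambiguities from the EKF
Q_inv = np.linalg.inv(np.diag([0.04, 0.05, 0.03]))   # inverse ambiguity covariance
best, second = np.round(a_float), np.array([3.0, -2.0, 8.0])
print(ratio_test(a_float, Q_inv, best, second))      # True -> fix the ambiguities
```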
2021, 43(8): 2334-2342.
doi: 10.11999/JEIT200618
Abstract:
Compounded plane-wave imaging enables the high-frame-rate acquisition of synchronous ultrasonic samples over the whole field of view. However, classical clutter filters fail to handle these large synchronous imaging datasets. In this study, an improved adaptive clutter rejection algorithm based on Casorati Singular Value Decomposition (Casorati-SVD) is proposed to take full advantage of synchronous datasets. The first step is to construct a Casorati matrix from a block of plane-wave data and perform singular value decomposition on this matrix. The key point is then to adaptively determine the cutoff thresholds according to the Doppler frequency and energy of the component signals, so that the blood flow signal is extracted through an automatically generated filter. Finally, adaptive SVD filtering is performed on each block and the final flow signals are reconstructed from all blocks. To assess its ability in noise suppression, the proposed method is applied to blood flow echoes obtained from a phantom, an arm artery, and a rabbit brain. The results demonstrate that the improved method achieves a 4.4% to 50% higher Signal-to-Noise Ratio (SNR) and a 4.7% to 55.9% higher Contrast-to-Noise Ratio (CNR) than conventional Casorati-SVD methods. In conclusion, this method realizes spatially adaptive filtering and can be significant for the development of clinical blood flow imaging.
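A minimal numpy sketch of the block-wise Casorati-SVD filter described above. The fixed cutoffs here are a simplified stand-in for the paper's adaptive Doppler/energy criterion, and the data are random placeholders.

```python
import numpy as np

def svd_clutter_filter(block, low_cut, high_cut):
    """block: (nz, nx, nt) slow-time ensemble of one spatial block.
    Zero singular components below low_cut (tissue) and above high_cut (noise)."""
    nz, nx, nt = block.shape
    casorati = block.reshape(nz * nx, nt)          # space x slow-time matrix
    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
    s[:low_cut] = 0.0                              # reject high-energy tissue clutter
    s[high_cut:] = 0.0                             # reject low-energy noise
    return ((U * s) @ Vh).reshape(nz, nx, nt)      # retained blood-flow signal

rng = np.random.default_rng(0)
ensemble = rng.standard_normal((16, 16, 64))       # toy ensemble for one block
flow = svd_clutter_filter(ensemble, low_cut=4, high_cut=48)
print(flow.shape)                                  # (16, 16, 64)
```

Running this per block, with cutoffs chosen adaptively per block, is what makes the filtering spatially adaptive.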
2021, 43(8): 2343-2351.
doi: 10.11999/JEIT200562
Abstract:
Considering the problems of inefficient use of the spatial information between features and inadequate fusion of different features, a Correntropy Extreme Learning Machine based on Spatial pyramid matching and local Receptive fields (SR-CELM) is proposed. In the feature extraction part, multi-scale local receptive fields are used to convolve the generated multi-level dictionary feature distribution maps, and local position features and global contour features are introduced. In the feature classification part, a new network is proposed to fuse the features of each part. Based on the traditional extreme learning machine training method, a discriminative constraint is constructed using the correntropy criterion, and a weight update formula is used to solve for the output weights of the new network. To verify the effectiveness of SR-CELM, experiments are performed on the Caltech 101, MSRC, and 15-Scene datasets. The experiments show that SR-CELM can make full use of the discriminative information in the features and improve classification accuracy.
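As background, a sketch of the extreme learning machine output-weight solve that such a model builds on, with a generic correntropy-style (Gaussian-weighted) reweighting added; the exact constraint and update formula of SR-CELM are not reproduced here.

```python
import numpy as np

def elm_output_weights(H, T, C=1.0, sigma=1.0, iters=5):
    """H: hidden-layer outputs (n, L); T: one-hot targets (n, c).
    Ridge solve, then iterative reweighting with Gaussian (correntropy) weights."""
    n, L = H.shape
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
    for _ in range(iters):
        err = np.sum((H @ beta - T) ** 2, axis=1)
        w = np.exp(-err / (2 * sigma ** 2))        # downweight large-error samples
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + np.eye(L) / C, Hw.T @ T)
    return beta

rng = np.random.default_rng(0)
H = np.tanh(rng.standard_normal((100, 20)))        # toy hidden features
T = np.eye(3)[rng.integers(0, 3, 100)]             # one-hot labels
print(elm_output_weights(H, T).shape)              # (20, 3)
```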
2021, 43(8): 2352-2360.
doi: 10.11999/JEIT200915
Abstract:
Considering that the preprocessing stage of most R-wave recognition algorithms affects recognition accuracy and is time-consuming, an algorithm based on Ensemble Empirical Mode Decomposition (EEMD) and signal structure analysis is proposed to recognize the R-waves of noisy ElectroCardioGram (ECG) signals directly. Firstly, the noisy ECG signal is decomposed into a series of intrinsic mode components by EEMD. These components are then analyzed as independent components to extract the one in which the R-waves are most prominent. Finally, the structure of that component is analyzed to locate the R-waves accurately. Simulation results show that the proposed algorithm performs better in R-wave recognition for noisy ECG signals and demonstrates obvious advantages, especially for abnormal ECG signals.
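An illustrative pipeline in the same spirit, not the paper's exact method: decompose a noisy toy ECG with EEMD, pick the spikiest component by kurtosis, and locate peaks. It assumes the third-party PyEMD package (installed as EMD-signal); the component-selection rule and all parameters are simplifications.

```python
import numpy as np
from PyEMD import EEMD                 # assumed dependency: pip install EMD-signal
from scipy.signal import find_peaks

fs = 360
t = np.arange(0, 5, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 64            # toy spiky "R-waves"
noisy = ecg + 0.2 * np.random.randn(len(t))

imfs = EEMD().eemd(noisy)                          # intrinsic mode components
kurt = [((m - m.mean()) ** 4).mean() / m.var() ** 2 for m in imfs]
best = imfs[int(np.argmax(kurt))]                  # component where spikes dominate
r_peaks, _ = find_peaks(best, distance=int(0.4 * fs),
                        height=0.5 * best.max())
print(r_peaks / fs)                                # estimated R-wave times, seconds
```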
2021, 43(8): 2361-2369.
doi: 10.11999/JEIT200211
Abstract:
In the process of style transfer, stylized image details are blurred when style elements are evenly distributed over the whole image. Moreover, existing style transfer methods mainly focus on the diversity of transferred styles, ignoring the content structure and details of the stylized images. To this end, a neural style transfer method with structure refinement is proposed. It refines the content structure of the stylized image by adding an edge detection network that extracts the contour edges of the content image to highlight its main objects. By replacing the larger convolution kernels of the conventional convolution layers in the transfer network with smaller ones while keeping the original receptive field unchanged, the number of model parameters is reduced and the transfer speed is improved. Adaptive normalization is applied to the conventional convolution layers to detect particular styles of strokes in the feature channels, producing high nonlinearity while preserving the spatial structure of the content image and thereby refining the structure of the generated image. The method refines the overall structure of the stylized image, makes it more coherent, alleviates the detail blurring caused by uniformly distributed style texture, and improves the quality of image style transfer.
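The kernel-replacement idea can be made concrete: two stacked 3x3 convolutions cover the same 5x5 receptive field as one 5x5 convolution with fewer parameters. Channel sizes below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

big = nn.Conv2d(64, 64, kernel_size=5, padding=2)
small = nn.Sequential(                             # same 5x5 receptive field
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(big), params(small))                  # 102464 vs 73856 parameters
x = torch.randn(1, 64, 32, 32)
print(small(x).shape)                              # torch.Size([1, 64, 32, 32])
```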
2021, 43(8): 2370-2377.
doi: 10.11999/JEIT200539
Abstract:
Considering the severe influence of sparse data, cold start, and irrelevant noisy users on recommendation quality in collaborative filtering, this paper combines user-item rating data with user trust relationship data, and a Biased Dynamic Expert Trust recommendation Algorithm (BDETA) is proposed. Firstly, communities are divided according to the user trust relationship data, and explicit trust values are obtained. Secondly, credibility and implicit trust values are derived from the user-item rating data within each community; the expert trust factor is determined dynamically by combining the trust between users, the explicit trust value, and the implicit trust value, and an expert dataset is determined for each community according to the recommendation ability of each user. Finally, the different rating criteria of users in the community dataset are combined to predict ratings for the target users. Experimental results on the real-world FilmTrust dataset show that the algorithm can effectively alleviate the cold-start and data-sparsity problems of collaborative filtering, better meet users' personalized recommendation requirements, and perform well on MAE and RMSE, the evaluation metrics commonly used for recommender systems.
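A hedged sketch of the final prediction step: the target user's rating is predicted from expert ratings, corrected for each expert's rating bias and weighted by a trust factor. The trust factors are taken as given here, whereas BDETA derives them dynamically; all numbers are illustrative.

```python
import numpy as np

def predict_rating(user_mean, expert_ratings, expert_means, trust):
    """Bias-corrected, trust-weighted prediction for one item."""
    dev = expert_ratings - expert_means            # remove each expert's rating bias
    return user_mean + np.sum(trust * dev) / np.sum(trust)

expert_ratings = np.array([4.0, 3.5, 5.0])         # experts' ratings of the item
expert_means = np.array([3.8, 3.0, 4.5])           # experts' average ratings
trust = np.array([0.9, 0.4, 0.7])                  # dynamic expert trust factors
print(predict_rating(3.2, expert_ratings, expert_means, trust))
```

Subtracting each expert's mean before weighting is what lets users with different rating scales contribute to one prediction.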
2021, 43(8): 2378-2385.
doi: 10.11999/JEIT200757
Abstract:
The Cutset-type Possibilistic C-Means clustering (C-PCM) algorithm significantly reduces the coincident clustering phenomenon of the Possibilistic C-Means clustering (PCM) algorithm by introducing the cut-set concept into PCM, and it is also strongly robust to noise and outliers. However, C-PCM still suffers from center migration on datasets with small targets. To solve this problem, a Semi-Supervised Cutset-type Possibilistic C-Means (SS-C-PCM) clustering algorithm is proposed, which introduces a semi-supervised learning mechanism into the objective function of C-PCM and utilizes prior information to guide the clustering process. Meanwhile, to improve the efficiency and accuracy of color image segmentation, a differential evolutionary superpixel-based Semi-Supervised Cutset-type Possibilistic C-Means (desSS-C-PCM) clustering algorithm is proposed, in which the Differential Evolutionary Superpixel (DES) algorithm is used to obtain the spatial neighborhood information of an image, which is then integrated into the objective function of the semi-supervised C-PCM to improve segmentation quality. Simultaneously, the color histogram is used to reconstruct the objective function so as to reduce the computational complexity of the algorithm. Experiments on artificial data clustering and color image segmentation show that, compared with several related algorithms, the proposed algorithms can effectively improve both the clustering of datasets with small targets and the execution efficiency.
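A minimal numpy sketch of the possibilistic typicality update at the core of the PCM family, with a cutset step (suppressing weak memberships) in the spirit of C-PCM; the threshold alpha and all data are assumptions, and the semi-supervised terms are not reproduced.

```python
import numpy as np

def pcm_typicality(X, centers, eta, m=2.0, alpha=0.1):
    """X: (n, d) samples; centers: (c, d); eta: (c,) bandwidths.
    Returns the (n, c) typicality matrix with a cutset applied."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)     # squared distances
    t = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))             # PCM typicality
    t[t < alpha] = 0.0                             # cutset: drop weak memberships
    return t

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
t = pcm_typicality(X, centers, eta=np.array([2.0, 2.0]))
print(t.shape, round(float(t.max()), 3))
```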
2021, 43(8): 2386-2394.
doi: 10.11999/JEIT200675
Abstract:
To improve the quality of the images generated by image translation models, the generator of the translation model is improved to obtain high-quality generated images, diversified image translation is explored, and the generative capability of the translation model is expanded. For the generator improvement, the dynamic receptive field mechanism of the Selective Kernel Block (SKBlock) is used to obtain and fuse the multi-scale information of each upsampling feature in the generator. With the help of this multi-scale feature information and the dynamic receptive field, the Selective Kernel Generative Adversarial Network (SK-GAN) is constructed. Compared with the traditional generator, SK-GAN improves the quality of the generated images by using the dynamic receptive field to obtain multi-scale information. For diversified image translation, the Selective Kernel Generative Adversarial Network with Guide (GSK-GAN) is proposed, based on SK-GAN, for the sketch-to-realistic-image synthesis task. GSK-GAN uses a guide image to steer the translation of the source image and extracts the guide image features through a guide image encoder; the information in these features is then transmitted to the generator by a Parameter Generator (PG) and Feature Transformation (FT). In addition, a dual-branch guide image encoder is proposed to improve the editing ability of the translation model, and random style image generation is realized by using the latent variable distribution of the guide image. Experimental results show that the improved generator helps to improve the quality of the generated images and that SK-GAN obtains reasonable results on multiple datasets. GSK-GAN not only ensures the quality of the generated images but also generates images in more styles.
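A simplified selective-kernel block to illustrate the dynamic receptive field mechanism referred to above: two branches with different receptive fields are fused by channel-wise softmax attention. Branch choices and sizes are assumptions, not the paper's SKBlock configuration.

```python
import torch
import torch.nn as nn

class SKBlock(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.b3 = nn.Conv2d(ch, ch, 3, padding=1)                  # small receptive field
        self.b5 = nn.Conv2d(ch, ch, 3, padding=2, dilation=2)      # larger receptive field
        self.fc = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                nn.Linear(ch // reduction, 2 * ch))

    def forward(self, x):
        u3, u5 = self.b3(x), self.b5(x)
        s = (u3 + u5).mean(dim=(2, 3))                             # global descriptor
        a = self.fc(s).view(-1, 2, u3.size(1)).softmax(dim=1)      # branch attention
        a3, a5 = a[:, 0, :, None, None], a[:, 1, :, None, None]
        return a3 * u3 + a5 * u5                                   # dynamic fusion

x = torch.randn(2, 32, 16, 16)
print(SKBlock(32)(x).shape)                                        # torch.Size([2, 32, 16, 16])
```

The attention weights depend on the input, so each channel effectively selects its own receptive field per image.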
2021, 43(8): 2395-2403.
doi: 10.11999/JEIT200869
Abstract:
In text classification tasks, texts from different domains are often expressed similarly and are correlated, which can be exploited to alleviate the shortage of labeled training data. Texts from different domains can be combined through multi-task learning, improving both the training accuracy and speed of the model. A Recurrent Convolution Multi-Task Learning (MTL-RC) model for text multi-classification is proposed, which jointly models the texts of multiple tasks and takes advantage of multi-task learning, Recurrent Neural Networks (RNN), and Convolutional Neural Networks (CNN) to capture the correlations between multi-domain texts and the long-term dependencies within a text, while also extracting local text features. Extensive experiments on multi-domain text classification datasets show that the proposed MTL-RC achieves an average accuracy of 90.1% for text classification in different domains, 6.5% higher than the single-task learning model STL-LC. Compared with the mainstream multi-task learning models Fully-Shared Multi-Task Learning (FS-MTL), Adversarial Multi-Task Learning (ASP-MTL), and Indirect Communication for Multi-Task Learning (IC-MTL), its accuracy is 5.4%, 4%, and 2.8% higher, respectively.
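An architecture-level sketch of the recurrent-convolutional multi-task idea: a shared LSTM-plus-Conv1d encoder (long-term dependencies plus local features) feeds one classification head per task. Layer sizes and pooling are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MTLRC(nn.Module):
    def __init__(self, vocab, emb=128, hid=64, n_tasks=3, n_cls=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hid, batch_first=True,
                           bidirectional=True)                   # long-term dependence
        self.conv = nn.Conv1d(2 * hid, hid, kernel_size=3,
                              padding=1)                         # local features
        self.heads = nn.ModuleList(
            [nn.Linear(hid, n_cls) for _ in range(n_tasks)])     # task-specific heads

    def forward(self, tokens, task_id):
        h, _ = self.rnn(self.emb(tokens))                        # (B, T, 2*hid)
        f = torch.relu(self.conv(h.transpose(1, 2)))             # (B, hid, T)
        pooled = f.max(dim=2).values                             # max-over-time pooling
        return self.heads[task_id](pooled)                       # logits for this task

tokens = torch.randint(0, 1000, (4, 20))
print(MTLRC(1000)(tokens, task_id=1).shape)                      # torch.Size([4, 2])
```

The shared encoder is where the cross-domain correlation is learned; only the small heads are task-specific.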
2021, 43(8): 2404-2413.
doi: 10.11999/JEIT200591
Abstract:
Generative Adversarial Networks (GAN) for Low-Dose CT (LDCT) image noise reduction have certain performance advantages and have become a research hotspot in CT image noise reduction in recent years. However, when the intensity of the noise and artifact distribution changes across LDCT images of different doses, the noise reduction performance of a GAN is unstable and its generalization ability is low. To overcome these shortcomings, this paper first designs a noise level estimation subnet with an encoder-decoder structure to generate the noise maps corresponding to LDCT images of different doses; each map is subtracted from the original input image to suppress the noise initially. Secondly, the backbone of the noise reduction network is designed as a multi-encoder U-Net structure that is optimized through adversarial training to further suppress CT image noise. Finally, several loss functions are designed to constrain the parameter optimization of each functional module, thereby further guaranteeing the performance of the LDCT image noise reduction network. Experimental results show that, compared with currently popular algorithms, the proposed network achieves better noise reduction while retaining the important original information of LDCT images.
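A minimal sketch of the first stage described above: a small encoder-decoder predicts a noise map that is subtracted from the input LDCT image. The architecture is a toy stand-in for the paper's subnet, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1))

    def forward(self, x):
        noise_map = self.dec(self.enc(x))    # dose-dependent noise estimate
        return x - noise_map, noise_map      # initial noise suppression

ldct = torch.randn(1, 1, 64, 64)             # toy LDCT slice
denoised, noise = NoiseEstimator()(ldct)
print(denoised.shape, noise.shape)
```

Conditioning the rest of the pipeline on the estimated noise map is what lets one network cope with several dose levels.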
2021, 43(8): 2414-2420.
doi: 10.11999/JEIT200756
Abstract:
For discovering time-varying causal relations between time series, a common approach is the sliding-window method with a Granger causality test on every window. However, its performance is sensitive to the window size, and an unsuitable size is likely to lead to poor performance. Therefore, a different-region balance method is proposed. The variation degree of the time series in the current sliding window W (called the variation bound Sw) is first computed, together with the degree Su in the front neighbor region U of W. A forward exploring strategy is then adopted: when Su≤Sw, a different-length-region balance test is carried out, i.e., causal-relation tests are run respectively in window W, in the combined region of W and U, and in the combination of W and its back neighbor region V; when Su>Sw, the same tests are used but with region V taking the same length as region U. Finally, the test results from all regions are synthesized to give the final result. The new method combines the results from regions of different lengths to reduce its sensitivity to the window size, guaranteeing the accuracy and stability of the final results. Experiments on one simulated dataset and four real datasets show that the new method can discover time-varying causal relations between time series effectively, and outperforms the compared methods in balancing high accuracy and stability.
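A sketch of the per-region Granger test this method runs repeatedly: does adding lagged x improve the prediction of y over lagged y alone (an F-test on the residual sums of squares)? The balance strategy would apply this to W, W+U, and W+V and synthesize the outcomes; the lag order and data below are illustrative.

```python
import numpy as np
from scipy import stats

def granger_f_test(y, x, p=2):
    """p-value of H0: x does not Granger-cause y (lag order p)."""
    n = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(n), lags_y])              # restricted model
    X_f = np.column_stack([X_r, lags_x])                     # full model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    F = ((rss(X_r) - rss(X_f)) / p) / (rss(X_f) / (n - X_f.shape[1]))
    return 1 - stats.f.cdf(F, p, n - X_f.shape[1])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.roll(x, 1) * 0.8 + 0.3 * rng.standard_normal(200)     # y follows x with lag 1
print(granger_f_test(y[1:], x[1:]))                          # small p-value: causality
```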
2021, 43(8): 2421-2429.
doi: 10.11999/JEIT200558
Abstract:
Pseudoconvex optimization problems are a special kind of nonconvex optimization problem that often appears in various scientific and engineering applications, so they have great research value. Considering the shortcomings of existing neural network models for solving nonsmooth pseudoconvex optimization problems, a new single-layer recurrent neural network model based on differential inclusion theory is proposed. Theoretical analysis proves that the state solution of the neural network converges to the feasible region in finite time and remains there thereafter, and that it finally converges to an optimal solution of the original optimization problem. The validity of the proposed theory is verified through numerical experiments. Compared with existing neural networks, the proposed model has a simple structure, does not need penalty parameters to be computed in advance, and imposes no special requirements on the choice of the initial point.
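A conceptual sketch only: Euler simulation of a projected-gradient flow minimizing a pseudoconvex objective over a box. The paper's model is a nonsmooth differential inclusion; this smooth stand-in merely illustrates how such network dynamics drive the state into the feasible set and then to the minimizer. The objective ||x||^2/(1+||x||^2) is a monotone transform of a convex function, hence pseudoconvex but not convex.

```python
import numpy as np

def flow(x0, grad, project, dt=0.01, steps=5000):
    """Integrate dx/dt = -grad(x), projecting onto the feasible set each step."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = project(x - dt * grad(x))
    return x

grad = lambda x: 2 * x / (1 + x @ x) ** 2     # gradient of ||x||^2 / (1 + ||x||^2)
project = lambda x: np.clip(x, 0.5, 3.0)      # feasible box [0.5, 3]^2
print(flow([2.5, 1.7], grad, project))        # approaches [0.5, 0.5], the constrained minimizer
```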
2021, 43(8): 2430-2438.
doi: 10.11999/JEIT200579
Abstract:
DNA strand displacement technology is widely used in biological computing and performs excellently in computing power and information processing. However, using DNA Strand Displacement (DSD) technology for some computations, such as signal amplification, restoration, and comparison, not only increases the number of DNA strands but also brings additional computational cost. Therefore, to reduce the number of DNA strands used, a Winner-Take-All (WTA) neural network based on DNA strand displacement is constructed. Firstly, the logic operations AND, NAND, and OR are realized through neurons, and linearly inseparable problems are solved by cascading the neurons into a WTA neural network. By comparing the results with those of other methods, the effectiveness of the approach is demonstrated, and stable, intuitive results are obtained in Visual DSD. Then, to test the scalability of the neuron cascade, a three-input majority voter is designed and a scientist-classification task is carried out. The paper shows how the molecular system demonstrates the ability to compute in a brain-like way, and its accuracy is shown to be higher than that of other methods.
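A toy sketch of the winner-take-all computation such a DNA circuit implements: each "neuron" accumulates weighted inputs and only the largest activation survives, which suffices for majority voting. The weights and bias stand in for strand concentrations and are illustrative assumptions, not the paper's design.

```python
import numpy as np

def wta(inputs, weights, bias):
    """Return the index of the winning neuron (all others annihilated)."""
    activations = weights @ inputs + bias      # strand-displacement 'summation'
    return int(np.argmax(activations))

# Three-input majority voter from two competing neurons:
# neuron 0 counts the ones, neuron 1 counts the zeros (via negation + bias).
weights = np.array([[1.0, 1.0, 1.0],
                    [-1.0, -1.0, -1.0]])
bias = np.array([0.0, 3.0])
for bits in [(1, 1, 0), (0, 1, 0), (1, 1, 1)]:
    print(bits, "-> majority", 1 - wta(np.array(bits), weights, bias))
```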