2021 Vol. 43, No. 5
2021, 43(5): 1199-1211.
doi: 10.11999/JEIT200843
Abstract:
The core research content of radar countermeasures is the game between jamming strategies and anti-jamming strategies. As a hotspot in the field of electronic warfare, radar countermeasures have attracted much attention from scholars. This paper surveys how cooperative and non-cooperative game methods are employed to analyze radar anti-jamming while probing targets. Different radar systems use cognitive techniques to perceive and learn the complex electromagnetic environment, and to reasonably allocate transmit power, control coding sequences, design waveforms, investigate detection and tracking methods, and allocate radar-communication resources. In this way, radar can not only reduce power consumption but also search for and track targets without being detected by the enemy, achieving optimal performance in the complex and changeable modern battlefield environment. Finally, game theory in cognitive radar anti-jamming is summarized and its prospects are outlined, and some potential problems and challenges are pointed out.
2021, 43(5): 1212-1218.
doi: 10.11999/JEIT200060
Abstract:
Spaceborne Scanning Synthetic Aperture Radar (ScanSAR) operates in Burst mode. While providing wide-swath mapping capability, this mode also causes inherent scalloping in the image, which seriously affects its visual quality and quantitative applications. Based on an analysis of the azimuth statistical characteristics of ScanSAR images, and addressing the shortcomings of existing filtering models such as poor stability and high time complexity, an improved Kalman filtering model is proposed that filters the azimuth mean and standard deviation of the image to correct scallop stripes. Correction results on real ScanSAR images acquired by the GF-3 satellite verify the effectiveness and efficiency of the improved algorithm. Furthermore, experimental results on complex scenes such as buildings and land-sea boundaries demonstrate the strong robustness of the improved algorithm.
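As a rough illustration of the idea (not the paper's exact model), the sketch below smooths the per-azimuth-line mean and standard deviation of an image with a simple random-walk Kalman filter and rescales each line accordingly; the noise variances and the synthetic scalloping pattern are assumed values.

```python
import numpy as np

def kalman_smooth(profile, q=1e-4, r=1e-2):
    """1D Kalman filter over an azimuth profile (random-walk state model).
    q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = profile[0], 1.0            # initial state estimate and variance
    out = np.empty_like(profile)
    for k, z in enumerate(profile):
        p = p + q                     # predict (state assumed near-constant)
        g = p / (p + r)               # Kalman gain
        x = x + g * (z - x)           # update with measurement z
        p = (1.0 - g) * p
        out[k] = x
    return out

def descallop(img):
    """Rescale each azimuth line so its mean/std follow the smoothed profiles."""
    mu = img.mean(axis=1)                     # per-azimuth-line mean
    sd = img.std(axis=1) + 1e-12              # per-azimuth-line std
    mu_s, sd_s = kalman_smooth(mu), kalman_smooth(sd)
    return (img - mu[:, None]) / sd[:, None] * sd_s[:, None] + mu_s[:, None]

img = np.random.gamma(4.0, 25.0, (512, 512))               # stand-in SAR image
img *= (1 + 0.2 * np.sin(np.arange(512) * 0.2))[:, None]   # synthetic scalloping
corrected = descallop(img)
```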
2021, 43(5): 1219-1227.
doi: 10.11999/JEIT200080
Abstract:
In the problem of one-class classification, a One-Class Classifier (OCC) tries to identify samples of a specific class, called the target class, among samples of all other classes. Traditional one-class classification methods design a classifier using all training samples and ignore the underlying structure of the data, so their classification performance degrades seriously on data with complex distributions. To overcome this problem, an ensemble one-class classification method based on a Beta process max-margin one-class classifier is proposed in this paper. In the method, the input data is partitioned into several clusters with the Dirichlet Process Mixture (DPM), and a Beta Process Max-Margin One-Class Classifier (BPMMOCC) is learned in each cluster. By ensembling several simple classifiers, complex nonlinear classification can be implemented to enhance performance. Specifically, the DPM and BPMMOCC are jointly learned in a unified Bayesian framework to guarantee separability in each cluster. Moreover, in the BPMMOCC, a feature selection factor, which obeys the Beta process prior, is added to reduce feature redundancy and improve classification results. Experimental results on synthetic data, benchmark datasets, and real Synthetic Aperture Radar (SAR) data demonstrate the effectiveness of the proposed method.
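A minimal sketch of the divide-and-ensemble idea, with KMeans standing in for the DPM partition and a one-class SVM standing in for the BPMMOCC expert in each cluster (both substitutions are ours, not the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans          # stand-in for the DPM partition
from sklearn.svm import OneClassSVM         # stand-in for the per-cluster BPMMOCC

rng = np.random.default_rng(0)
target = np.vstack([rng.normal(0, 1, (200, 2)),     # two-mode target class
                    rng.normal(5, 1, (200, 2))])

# 1) partition the target-class training data into clusters
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(target)

# 2) train one max-margin one-class classifier per cluster
experts = [OneClassSVM(gamma="scale", nu=0.05).fit(target[km.labels_ == c])
           for c in range(km.n_clusters)]

def predict(x):
    """Accept a sample if the expert of its nearest cluster accepts it."""
    c = km.predict(x)
    return np.array([experts[ci].predict(xi[None])[0] for ci, xi in zip(c, x)])

test = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(10, 1, (10, 2))])
print(predict(test))   # +1 = target class, -1 = outlier
```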
2021, 43(5): 1228-1234.
doi: 10.11999/JEIT200044
Abstract:
Compared with Linear Frequency Modulation (LFM) signals, Hyperbolic Frequency Modulation (HFM) signals offer good pulse-compression performance and Doppler invariance, so they are widely used in scenarios with severe Doppler effects such as radar detection and underwater acoustic detection; among the related problems, parameter estimation of HFM signals is particularly important. In view of this, this paper proposes a fast likelihood-based parameter estimation algorithm for HFM signals. Firstly, the Cramer-Rao lower bound of the HFM signal is derived as the performance benchmark for parameter estimation. Then, assuming Gaussian random noise, the likelihood function of the HFM signal is constructed, an improved fitness function is proposed by exploiting data vectorization, and the Global best guided Artificial Bee Colony (GABC) algorithm is used to optimize the fitness function and estimate the HFM signal parameters. Finally, Monte Carlo simulations show that, compared with the original method, the mean square error of the parameter estimates is closer to the Cramer-Rao lower bound when the signal-to-noise ratio exceeds 3 dB, and the amount of computation is about one-third that of the original method, improving convergence speed while maintaining estimation accuracy.
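The following sketch illustrates the signal model and the likelihood-based fitness: under white Gaussian noise, maximizing the likelihood of a candidate parameter pair reduces to maximizing its correlation with the data. A coarse grid search stands in for the GABC optimizer, and the sampling rate, phase convention, and parameter ranges are assumptions.

```python
import numpy as np

fs = 10_000.0                       # sampling rate (Hz), assumed
t = np.arange(1024) / fs

def hfm(f0, k):
    """Unit-amplitude HFM signal with instantaneous frequency f0/(1 - k*f0*t),
    i.e. phase = -(2*pi/k) * ln(1 - k*f0*t) (one common convention)."""
    return np.exp(-1j * 2 * np.pi / k * np.log(1 - k * f0 * t))

true_f0, true_k = 1000.0, 2e-5
noise = (np.random.randn(t.size) + 1j * np.random.randn(t.size)) * 0.3
x = hfm(true_f0, true_k) + noise

def fitness(f0, k):
    """Under white Gaussian noise, the ML estimate of (f0, k) maximizes the
    correlation |<s(f0,k), x>|, computed here in vectorized form."""
    return np.abs(np.vdot(hfm(f0, k), x))

# coarse grid search as a stand-in for the GABC optimizer in the paper
f0s = np.linspace(800, 1200, 81)
ks = np.linspace(1e-5, 3e-5, 81)
F, K = np.meshgrid(f0s, ks)
scores = np.vectorize(fitness)(F, K)
i = np.unravel_index(scores.argmax(), scores.shape)
print("estimated f0 =", F[i], " k =", K[i])
```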
2021, 43(5): 1235-1242.
doi: 10.11999/JEIT200114
Abstract:
Sparse recovery Space-Time Adaptive Processing (STAP) can reduce the requirement for clutter samples and effectively suppress clutter with limited training samples in airborne radar. In currently available sparse recovery STAP methods, the whole space-time plane is discretized uniformly into small grid points; however, in non-sidelooking STAP radar the clutter ridge does not lie exactly on the pre-discretized grid points, and this dictionary mismatch significantly degrades STAP performance. In this paper, a gridless sparse recovery STAP method based on Atomic Norm Minimization (ANM-STAP) is proposed, which exploits the low-rank property of the clutter covariance matrix. In the proposed method, the clutter spectrum is estimated precisely in the continuous space-time plane without dictionary mismatch. Numerical results show that the proposed method outperforms sparse recovery STAP methods with discretized dictionaries.
2021, 43(5): 1243-1250.
doi: 10.11999/JEIT200166
Abstract:
In view of the detection and tracking of aerial targets, the theory and feasibility of detecting aerial targets with ground-based synthetic aperture microwave measurement technology are discussed. The detection principle is outlined, the target detection probability is estimated, and the relationship between system performance and related factors is analyzed in terms of the detection probability. Meanwhile, the feasibility of the technology is analyzed, and experiments in which aerial targets are detected by a ground-based synthetic aperture microwave radiometer are performed. Both theoretical and experimental results show that detecting aerial targets with a ground-based synthetic aperture microwave radiometer is feasible.
2021, 43(5): 1251-1257.
doi: 10.11999/JEIT200081
Abstract:
Different sensor placements have a great influence on the accuracy of Phase-Difference-Of-Arrival (PDOA)-based hit position estimation. Compared with single-epoch target localization, hit position estimation is more complex because speed and its direction must additionally be considered. In order to choose an adequate sensor geometry for source localization, a method of evaluating sensor placement by sensitivity analysis is proposed, which provides a theoretical foundation for analyzing the accuracy of hit position estimation. The sensitivities of three typical sensor geometries are then analyzed and compared against the Cramer-Rao Lower Bound, and computer simulations verify the effectiveness of the proposed method.
2021, 43(5): 1258-1266.
doi: 10.11999/JEIT200099
Abstract:
A new Non-Local Means (NLM) despeckling algorithm with Adaptive Filtering Strength (AFS-NLM) is proposed to better reduce multiplicative speckle while preserving edges in SAR images. A modified Kuan filtering coefficient, which better characterizes the homogeneous and edge regions of a SAR image, is formed by using the local mean and variance computed from the Frost-filtered image to improve the estimation of SAR scene parameters. An improved NLM filter adapted to the multiplicative noise characteristics is then constructed from a new similarity measure estimated by the local mean ratio and a new adaptive decay factor estimated by the improved Kuan filtering coefficient. On this basis, a new weighted filtering model that automatically adjusts the filtering strength is formed: improved NLM filters controlled by skewed smoothing parameters and skewed edge-protection parameters replace the local pixel average and pixel gray value of the classic Kuan filter model as the weighting terms, and an adaptive adjustment factor constructed from the improved Kuan filter coefficient weights the two terms. Experimental results and comparisons with several recent advanced despeckling algorithms show that the proposed algorithm achieves better speckle suppression and edge preservation.
2021, 43(5): 1267-1274.
doi: 10.11999/JEIT200445
Abstract:
In view of the limitation of phased arrays in suppressing range-dependent interference, a joint design of transmit and receive beamforming via the Alternating Direction Method of Multipliers (ADMM) is proposed for Low Probability of Intercept (LPI) Frequency Diverse Array Multiple-Input Multiple-Output (FDA-MIMO) radar in the presence of clutter. The joint design aims to maximize target parameter estimation performance while minimizing the transmit energy at the target region, which enhances LPI capability. Using a weighted sum of the performance metrics, the original problem is first recast as a multiple-ratio Fractional Programming (FP) problem. Subsequently, an iterative algorithm is developed: at each iteration, the transmit beamforming matrix is optimized with the ADMM method and a quadratic approximation algorithm. Moreover, the computational complexity of the proposed algorithm is discussed. Numerical simulations demonstrate the effectiveness of the proposed algorithm.
2021, 43(5): 1275-1281.
doi: 10.11999/JEIT200138
Abstract:
Waveform optimization can effectively suppress interference and significantly improve radar performance. Taking polarimetric radar as the object of study and the output Signal-to-Clutter-plus-Noise Ratio (SCNR) as the figure of merit, an optimization problem for joint transmit waveform and receive filter design under both energy and similarity constraints is constructed. An optimization procedure that sequentially improves the SCNR of the transmit signal and receive filter is then developed. Each iteration requires solving one convex and one hidden-convex optimization problem, so the resulting computational complexity is linear in the number of iterations and polynomial in the receive filter length. Finally, the convergence of the algorithm and the properties of the optimized waveform in the ambiguity domain are analyzed through numerical experiments. Results show that, compared with existing methods, the proposed approach significantly improves the SCNR.
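A simplified sketch of this kind of sequential (alternating) SCNR improvement, with an assumed diagonal target response, random clutter responses, and only the energy constraint (the paper's similarity constraint and polarimetric structure are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                   # code length, assumed
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # target response (diagonal model)
C = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
     for _ in range(3)]                  # clutter patch responses, assumed
sig2, E = 0.1, float(N)                  # noise power and energy budget

def scnr(s, w):
    num = np.abs(np.vdot(w, a * s)) ** 2
    den = sum(np.abs(np.vdot(w, Ci @ s)) ** 2 for Ci in C) + sig2 * np.vdot(w, w).real
    return num / den

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
s *= np.sqrt(E) / np.linalg.norm(s)
for _ in range(30):
    # receive-filter step: w = R(s)^{-1}(a . s) is MVDR-optimal for fixed s
    R = sum(np.outer(Ci @ s, (Ci @ s).conj()) for Ci in C) + sig2 * np.eye(N)
    w = np.linalg.solve(R, a * s)
    # waveform step: rank-one generalized Rayleigh quotient -> s ~ B^{-1}(a* . w)
    Q = sum(Ci.conj().T @ np.outer(w, w.conj()) @ Ci for Ci in C)
    B = Q + (sig2 * np.vdot(w, w).real / E) * np.eye(N)
    s = np.linalg.solve(B, a.conj() * w)
    s *= np.sqrt(E) / np.linalg.norm(s)   # enforce the energy constraint
print("SCNR after alternating optimization:", scnr(s, w))
```

Each block update maximizes the SCNR with the other variable fixed, so the objective is non-decreasing across iterations, mirroring the sequential-improvement property claimed in the abstract.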
2021, 43(5): 1282-1288.
doi: 10.11999/JEIT200038
Abstract:
An interactive multi-criteria decision-making approach based on Belief Intervals (BI-TODIM) is proposed to solve the fusion decision problem for heterogeneous information comprising mixed-type data and expert knowledge. According to the construction theorem of trust intervals and the grey relation method, trust intervals for the mixed-type data of unknown targets are constructed, the equivalence between trust intervals and intuitionistic fuzzy numbers is clarified, a recognition decision model for mixed-type data and expert knowledge is established, and a unified expression of feature-layer and decision-layer information is realized. The shortcomings of the belief-function-based Technique for Order Preference by Similarity to Ideal Solution (BF-TOPSIS), such as ranking inversion and high complexity, are analyzed. To address them, an order relation for interval numbers is defined, and the BI-TODIM recognition decision method, together with a method for calculating unknown weights based on intuitionistic fuzzy entropy, is proposed. The effectiveness of the proposed method in resolving ranking inversion and fusing heterogeneous information is verified by an example and a target identification case, which demonstrate low time complexity, good stability, and high recognition accuracy.
2021, 43(5): 1289-1297.
doi: 10.11999/JEIT200175
Abstract:
In order to resolve the problems of spectrum shortage, high power consumption, and excessive load at base stations, a Simultaneous Wireless Information and Power Transfer (SWIPT)-based Robust Energy-Efficiency Algorithm (SREA) under imperfect channel state information is proposed to maximize the total Energy Efficiency (EE) in Non-Orthogonal Multiple Access (NOMA)-assisted Device-to-Device (D2D) networks. Considering users' Quality of Service (QoS) constraints and maximum transmit power constraints, a robust EE-maximization resource allocation model is established under random channel uncertainties. The original NP-hard problem is transformed into a deterministic convex optimization problem by Dinkelbach's method and variable substitution, and analytical solutions are obtained through Lagrange dual theory. Simulation results demonstrate that the proposed algorithm can effectively improve the system EE and the robustness of D2D users while ensuring the communication quality of cellular users.
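The Dinkelbach step itself is easy to illustrate. The toy sketch below maximizes a single-link EE ratio rate/power; the channel constants and the grid-search inner solver are assumptions, not the paper's NOMA-D2D model.

```python
import numpy as np

# Toy single-link energy-efficiency problem:
#   maximize  EE(p) = B*log2(1 + g*p/n0) / (p + pc)  over  0 <= p <= pmax
B, g, n0, pc, pmax = 1.0, 4.0, 1.0, 0.5, 10.0   # assumed system constants

rate = lambda p: B * np.log2(1.0 + g * p / n0)
power = lambda p: p + pc

def dinkelbach(tol=1e-8):
    lam = 0.0                                 # current EE estimate
    grid = np.linspace(0.0, pmax, 100001)     # inner problem solved by search
    while True:
        # inner problem: maximize rate(p) - lam * power(p)
        vals = rate(grid) - lam * power(grid)
        p = grid[np.argmax(vals)]
        if vals.max() < tol:                  # F(lam) = 0  <=>  lam is the optimal EE
            return lam, p
        lam = rate(p) / power(p)              # Dinkelbach update

ee, p_opt = dinkelbach()
print(f"optimal EE = {ee:.4f} at p = {p_opt:.3f}")
```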
2021, 43(5): 1298-1305.
doi: 10.11999/JEIT190990
Abstract:
Repeat Accumulate (RA) codes are a special kind of Low Density Parity Check (LDPC) code that not only retain the advantages of LDPC codes but also allow differential encoding. To address the high encoding complexity and long encoding delay of LDPC-coded cooperative systems, Quasi-Cyclic RA (QC-RA) codes are introduced. Firstly, the joint parity-check matrix corresponding to the QC-RA codes adopted by the sources and relays is derived. Secondly, the joint check matrix is designed with the Common Difference Construction (CDC) method, and it is proved that a matrix designed this way contains no short cycles of length 4 or 6. Theoretical analysis and simulation results show that the system achieves better Bit Error Rate (BER) performance than the corresponding point-to-point system under the same conditions. The simulations also demonstrate that multi-source multi-relay coded cooperation with CDC-constructed QC-RA codes obtains higher coding gain than with generally constructed or Z-type-constructed QC-RA codes.
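In a Tanner graph, a length-4 cycle exists exactly when two rows of the parity-check matrix share ones in two or more columns, which is easy to test. The sketch below uses a toy circulant construction whose shift differences are distinct (illustrative only, not the paper's CDC design):

```python
import numpy as np

def has_girth4(H):
    """A 4-cycle exists in the Tanner graph iff two rows of H share ones in
    two or more columns, i.e. some off-diagonal entry of H @ H.T is >= 2."""
    G = (H @ H.T).astype(int)
    np.fill_diagonal(G, 0)
    return bool((G >= 2).any())

def circ(n, shift):
    """n x n circulant permutation matrix: row i has a one at column (i+shift) mod n."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

# Toy quasi-cyclic matrix: shift differences between the two block rows are
# 0, 6, 4 (mod 7), all distinct, so no row pair shares two columns.
n = 7
H = np.block([[circ(n, 0), circ(n, 1), circ(n, 3)],
              [circ(n, 0), circ(n, 2), circ(n, 6)]])
print("girth-4 cycles present:", has_girth4(H))   # expected: False
```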
2021, 43(5): 1306-1314.
doi: 10.11999/JEIT200104
Abstract:
For amplify-and-forward relay networks in which both the source node and the relay node are powered by harvested energy and the information for the two destination nodes must be kept secret from each other, an algorithm is proposed to maximize the long-term average secrecy rate by jointly optimizing the transmission power of the source node and the relay node. Since the energy arrivals and channel states are stochastic processes, the problem is a stochastic optimization problem. The Lyapunov optimization framework is used to transform the long-term problem into a per-slot "virtual queue drift plus penalty" minimization problem under battery operation and energy-use constraints, and the transformed problem is solved. Simulation results show that the proposed algorithm offers significant advantages over the comparison algorithms in long-term average secrecy rate. Furthermore, the algorithm makes its decision using only the current battery state and channel state, making it practical and of low complexity.
2021, 43(5): 1315-1322.
doi: 10.11999/JEIT200183
Abstract:
In consideration of the improper power allocation and insufficient relay selection in the current Z-Forward (ZF) scheme, an efficient Decision-Threshold-aided Fast Z-Forward (DT-FZF) scheme is proposed to improve power and transmission efficiency. When the absolute value of the Log-Likelihood Ratio (LLR) of a source-relay reception is less than the decision threshold, the relay remains quiet; otherwise, it directly sends the truncated LLR to the destination. In addition, the proposed DT-FZF scheme subsumes the Amplify-and-Forward (AF), Decode-and-Forward (DF), Piecewise-Forward (PF), and ZF schemes, each of which can be viewed as a special case. At a Bit Error Rate (BER) of 10^-3, the DT-FZF scheme outperforms the ZF scheme by approximately 0.8 dB in a three-relay system.
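The relay decision rule is straightforward to sketch; the threshold and clipping values below are illustrative, and the BPSK-over-AWGN source-relay link is an assumed toy model:

```python
import numpy as np

def relay_forward(llr, threshold=1.0, clip=4.0):
    """Decision-threshold-aided forwarding: stay silent (output 0) when
    |LLR| < threshold, otherwise forward the truncated (clipped) LLR."""
    llr = np.asarray(llr, dtype=float)
    out = np.clip(llr, -clip, clip)        # truncation of reliable LLRs
    out[np.abs(llr) < threshold] = 0.0     # 0 marks "relay remains quiet"
    return out

# toy source-relay link: BPSK over AWGN, LLR = 2*y/sigma^2
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10)
y = (1 - 2 * bits) + rng.normal(0, 0.8, bits.size)
llr = 2 * y / 0.8**2
print(np.round(relay_forward(llr), 2))
```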
2021, 43(5): 1323-1330.
doi: 10.11999/JEIT200137
Abstract:
For the temporal features of trajectory intersection sequences and the spatial correlation of the actual road network, a trajectory prediction method based on Deep Belief Networks and SoftMax (DBN-SoftMax) is proposed. First, considering the sparsity of trajectories over an intersection set and the limited generalization of general feature-learning methods to new features, the strong unsupervised feature-learning ability of the Deep Belief Network (DBN) is used to extract local spatial features of the trajectory. Second, considering the temporal features of the trajectory, logistic regression and a linear combination of the current trajectory set in the road-network feature space are used to predict the trajectory. Finally, based on the idea of word embedding in natural language processing and the contextual relationship of intersections in actual trajectories, a set of intersection vectors is used to represent the spatiotemporal traffic relationships between intersections. Experimental results show that the model not only extracts trajectory features effectively but also achieves good prediction performance on road networks with complex topology.
2021, 43(5): 1331-1338.
doi: 10.11999/JEIT200129
Abstract:
The one-to-one charging method of mobile chargers in Wireless Rechargeable Sensor Networks (WRSNs) suffers from low charging efficiency and the lack of a directional charging model. To cope with these problems, a one-to-many directed charging scheduling scheme based on Maximizing Utility Charging (MUC) is proposed. In this scheme, the directed coverage subsets with the largest charging gain in the network are first found; then the charging anchor points are determined according to the directed coverage subsets and the charger's movement path is planned; finally, the charging time is optimized under the constraints of mobile charger energy and charging cycle. Experimental results show that, compared with the Average Energy Charge (AEC) and Fixed Energy Charge (FEC) charging-time optimization schemes, the charging efficiency of this scheme is increased by 13.7% and 32.7%, respectively. Compared with the Maximum Node Coverage (MNC) and Maximum Average Gain Coverage (MAGC) subset screening schemes, the charging efficiency is increased by 4.4% and 35.9%, respectively. In addition, the number of starved nodes in the network is significantly reduced compared with the MNC and MAGC schemes.
2021, 43(5): 1339-1348.
doi: 10.11999/JEIT200429
Abstract:
The heterogeneous integration of Cellular Vehicle-to-everything (C-V2X) and Vehicular Ad-hoc NETworks (VANET) can effectively increase network capacity. However, channel conflicts caused by the coexistence of different networks on unlicensed frequency bands decrease system throughput and increase user access delay, which cannot satisfy Quality of Service (QoS) requirements. Considering this problem, a time-frequency resource allocation method based on personalized QoS is proposed. Firstly, throughput and delay models of C-V2X and VANET are established to determine the mathematical relationship between user data transmission time configuration and throughput and delay. Then, based on these models, a Delay-Throughput Joint Optimization Algorithm (DT-JOA) is formulated to optimize throughput and delay in the heterogeneous network according to users' personalized QoS requirements. Finally, the joint delay-throughput optimization is solved using Particle Swarm Optimization (PSO). Simulation results show that the proposed algorithm can meet users' personalized QoS requirements and significantly improve the comprehensive performance of the heterogeneous network.
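A minimal global-best PSO sketch over a one-dimensional airtime-allocation variable; the throughput and delay expressions and the weights are assumed stand-ins for the paper's C-V2X/VANET models:

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):
    """Toy stand-in for the paper's model: x is the fraction of airtime granted
    to one network. Throughput rises and delay falls with airtime (assumed forms)."""
    thr = np.log1p(8 * x)             # diminishing-returns throughput
    delay = 1.0 / (0.05 + x)          # access delay shrinks as airtime grows
    return 1.0 * thr - 0.05 * delay   # weighted delay-throughput trade-off

# standard global-best PSO over x in [0, 1]
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, n)
v = np.zeros(n)
pbest, pval = x.copy(), objective(x)
g = pbest[np.argmax(pval)]
for _ in range(iters):
    r1, r2 = rng.uniform(size=n), rng.uniform(size=n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
    x = np.clip(x + v, 0.0, 1.0)
    f = objective(x)
    better = f > pval
    pbest[better], pval[better] = x[better], f[better]      # personal bests
    g = pbest[np.argmax(pval)]                              # global best
print(f"best airtime fraction = {g:.3f}, objective = {objective(g):.3f}")
```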
2021, 43(5): 1349-1356.
doi: 10.11999/JEIT200185
Abstract:
In view of the problem of fixed redundancy parameters in traditional cascading failure models, this paper comprehensively considers the different attack levels of nodes and the dynamic changes of the network topology during the failure process, and establishes a cascading failure model based on Dynamic control of node Redundancy Capacity (DRC). A critical factor θ of the network phase transition is defined to measure the probability that a node failure triggers a cascading failure; the correlation between network robustness and θ is analyzed, and an analytic expression for θ is derived in detail by incorporating the degree distribution function. Based on this analytic expression, two network robustness enhancement strategies are proposed. Simulation results show that, in both model networks and real networks, the robustness of the target network can be effectively improved by adjusting the initial load parameter τ of nodes according to the degree of the nodes under attack. The failure propagation range of the DRC model is significantly reduced compared with the Motter-Lai (ML) model.
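For reference, a sketch of the Motter-Lai baseline with a fixed redundancy parameter tau (the DRC model would instead adjust capacity per node according to its degree and the attack level); loads are approximated here by betweenness centrality:

```python
import networkx as nx

def cascade(G, attacked, tau=0.3):
    """Motter-Lai-style cascade: load = betweenness centrality, capacity =
    (1 + tau) * initial load. Remove the attacked node, then repeatedly
    recompute loads and fail every overloaded node until none remain."""
    load = nx.betweenness_centrality(G)
    cap = {v: (1 + tau) * load[v] for v in G}      # uniform redundancy tau
    H = G.copy()
    H.remove_node(attacked)
    failed = {attacked}
    while True:
        load = nx.betweenness_centrality(H)
        over = [v for v in H if load[v] > cap[v]]
        if not over:
            return failed
        H.remove_nodes_from(over)
        failed.update(over)

G = nx.barabasi_albert_graph(200, 2, seed=1)       # toy scale-free network
hub = max(G.degree, key=lambda d: d[1])[0]         # attack the largest hub
print("failed nodes:", len(cascade(G, hub)), "of", G.number_of_nodes())
```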
2021, 43(5): 1357-1364.
doi: 10.11999/JEIT200163
Abstract:
The security of cryptosystems is threatened by fault attacks, and the implementation of fault attacks on crypto chips has become an important research direction in the field of cryptography and hardware security. Pulsed laser injection offers high accuracy owing to its high temporal and spatial resolution. In this paper, the principle and method of laser injection attacks are described in detail, and experiments are carried out on a Micro-Controller Unit (MCU) running the AES-128 algorithm as an example, with the SRAMs of the MCU as the attack targets. A differential fault attack and a subkey-expansion attack are successfully implemented, and the complete 16-byte keys are recovered in each case; the latter attack is implemented by laser for the first time. The research shows that laser injection attacks can meet the requirements of various fault-attack models, including accurate localization of critical data, error injection at any operation, and generation of single-bit flips. The laser injection attacks and ciphertext collection can be completed automatically in a short time in a nearly real-life scenario, posing a great threat to crypto chips.
2021, 43(5): 1365-1371.
doi: 10.11999/JEIT200057
Abstract:
The security of the Chaos-based high-security, high-speed block cipher with a two-module Feistel structure (CFE) is analyzed. The results show that integral, meet-in-the-middle, invariant, interpolation, and circular-shift attacks are not applicable for analyzing its security, and that the cipher resists related-key attacks. Furthermore, a 5-round impossible differential characteristic is constructed and used for distinguishing attacks; the lower bound on the number of active S-boxes is 6, and the characteristic probability is about 2^-21. A 5-round zero-correlation linear characteristic also exists.
2021, 43(5): 1372-1380.
doi: 10.11999/JEIT200079
Abstract:
The Cipher Specific Programmable Logic Array (CSPLA) is a data-stream-driven cryptographic processing structure. This paper investigates the relation between cryptographic mapping energy efficiency and array structures of different scales. First, based on the specific hardware structure of CSPLA and block ciphers, an energy efficiency model for block cipher algorithm mapping on this structure is established, and the related factors affecting energy efficiency are analyzed. Then the basic process of algorithm mapping on the array structure is discussed and a mapping algorithm is proposed. Finally, several typical block cipher algorithms are mapped onto arrays of different scales. The results show that a larger-scale CSPLA does not necessarily bring higher energy efficiency: the best energy efficiency is achieved when the CSPLA scale is about 4×4 to 4×6, and the scale parameter of the array should match the specific hardware resource constraints and cryptographic algorithm parameters. The optimal energy efficiency of the AES algorithm is 33.68 Mbps/mW, and CSPLA shows better energy efficiency characteristics than other cryptographic processing structures.
2021, 43(5): 1381-1388.
doi: 10.11999/JEIT200174
Abstract:
Fully Homomorphic Encryption (FHE) allows data to be outsourced to commercial cloud environments and processed while remaining encrypted, which diminishes privacy concerns. To meet the optimization requirements of large-integer multiplication in fully homomorphic encryption, an operand merge algorithm for the butterfly operation unit of a Number Theoretic Transform (NTT) multiplier is proposed. By using a fast modulo-operation algorithm, the operands of the Radix-16 and Radix-32 units are reduced to 43.8% and 39.1%, respectively. The hardware architecture of the NTT Radix-32 unit is designed, implemented, and synthesized using a 90 nm process technology. The results show that the circuit reaches a maximum frequency of 600 MHz with a die area of 1.714 mm², and that the optimization algorithm improves the computational efficiency of the NTT multiplier butterfly operation.
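A radix-2 NTT sketch below shows the butterfly unit whose modular operations such an optimization targets (the paper works at radix-16/32); the modulus 12289 and generator 11 are standard NTT-friendly choices, not taken from the paper:

```python
# Iterative radix-2 decimation-in-time NTT over Z_q.
q = 12289                            # NTT-friendly prime: q - 1 = 2^12 * 3

def ntt(a, root):
    """Cooley-Tukey NTT; root is a primitive n-th root of unity mod q."""
    n = len(a)
    a = a[:]
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        wlen = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w % q   # one butterfly: the unit whose
                a[k] = (u + v) % q               # modular multiply/add/subtract
                a[k + length // 2] = (u - v) % q # operands the merge optimizes
                w = w * wlen % q
        length <<= 1
    return a

n = 8
root = pow(11, (q - 1) // n, q)      # 11 is a primitive root mod 12289
print(ntt([1, 2, 3, 4, 0, 0, 0, 0], root))
```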
2021, 43(5): 1389-1396.
doi: 10.11999/JEIT200065
Abstract:
Benefiting from progress in computer hardware and computing power, natural and simple dynamic gesture recognition has received much attention in human-computer interaction. In view of the accuracy requirements of dynamic gesture recognition in human-computer interaction, a dynamic gesture recognition method that combines the Two-stream Inflated 3D (I3D) Convolutional Neural Network (CNN) with the Convolutional Block Attention Module (CBAM-I3D) is proposed, and the relevant parameters and structures of the I3D network model are improved. To improve the convergence speed and stability of the model, Batch Normalization (BN) is used to optimize the network, which shortens training time. Experimental comparisons with various two-stream 3D convolution methods are performed on the open-source Chinese Sign Language (CSL) recognition dataset. The results show that the proposed method recognizes dynamic gestures well, reaching a recognition rate of 90.76%, higher than other dynamic gesture recognition methods, which verifies its validity and feasibility.
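A 2D PyTorch sketch of the CBAM block (channel attention followed by spatial attention); the paper applies the same idea to the inflated 3D convolutions of I3D, and the reduction ratio and kernel size below are common defaults, not the paper's settings:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention (2D sketch of the idea inflated to 3D in the paper)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(               # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention from average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 28, 28)     # toy feature map
print(CBAM(64)(feat).shape)           # torch.Size([2, 64, 28, 28])
```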
2021, 43(5): 1397-1404.
doi: 10.11999/JEIT200144
Abstract:
In view of the missed alarms and false alarms caused by aircraft appearing at different scales in aircraft target detection for remote sensing images, an automatic aircraft target detection algorithm based on a Multi-Scale Circle Frequency Filter (MSCFF) and a Convolutional Neural Network (CNN) is proposed, exploiting the shape characteristics and gray-scale variations of aircraft targets. Firstly, the multi-scale circle frequency filter filters out the complex background of remote sensing images and extracts candidate aircraft target regions at different scales. Then, a CNN model is constructed to classify the candidate regions effectively, and finally the aircraft target positions are determined accurately. The algorithm is experimentally verified on real remote sensing images, achieving an aircraft detection rate of 94.38% and a false alarm rate of 3.76%. The experimental results fully verify the effectiveness of the proposed algorithm, which can provide important technical support for airport supervision, military reconnaissance, and other applications.
2021, 43(5): 1405-1413.
doi: 10.11999/JEIT200167
Abstract:
In order to improve the accuracy and interpretability of grading malignant lung nodules, a method is proposed to grade lung nodules automatically using Computed Tomography (CT) signs. Firstly, feature sets for the CT signs are extracted by combining radiomics features with the higher-order features extracted by a convolutional neural network. Then, an ensemble classifier is optimized by an evolutionary search mechanism over the mixed feature sets and used to produce quantitative scores for 7 CT signs. Finally, the 7 quantitative scores are fed into the optimized multi-classifier to grade the malignant nodules. In the experiments, 2000 lung nodule samples from the LIDC-IDRI data set are used to train and test the proposed method. The results show that the recognition accuracy of the 7 CT signs exceeds 0.9642, while the grading accuracy reaches 0.8618, the precision 0.8678, the recall 0.8617, and the F1 score 0.8627. Compared with typical algorithms, the proposed method not only achieves high accuracy but also quantitatively analyzes the CT signs, making the malignancy grading more interpretable.
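A minimal sketch of the two-stage pipeline, assuming random forests as the per-sign scorers and logistic regression as the final grader (the paper's evolutionary search over ensemble configurations is omitted); the array names and shapes are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def train_grader(radiomics, cnn_feats, sign_labels, grades):
    # Mixed feature set: handcrafted radiomics + CNN features.
    X = np.hstack([radiomics, cnn_feats])
    # One binary scorer per CT sign; its class-1 probability serves
    # as the quantitative score for that sign.
    scorers = [RandomForestClassifier(n_estimators=100).fit(X, sign_labels[:, i])
               for i in range(sign_labels.shape[1])]
    scores = np.column_stack([s.predict_proba(X)[:, 1] for s in scorers])
    # Final multi-class grading on the 7 sign scores.
    grader = LogisticRegression(max_iter=1000).fit(scores, grades)
    return scorers, grader
```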
2021, 43(5): 1414-1423.
doi: 10.11999/JEIT200140
Abstract:
In view of the fact that general tracking algorithms cannot handle the particular difficulties of aerial imagery, such as low resolution, large field of view and frequent viewpoint changes, an Unmanned Aerial Vehicle (UAV) tracking algorithm combining target saliency with an online-learned interference factor is proposed. Because deep features from generically pre-trained models cannot effectively discriminate aerial targets, the tracker selects the salient features of each convolutional filter according to the importance of the back-propagated gradient, thereby highlighting the aerial target. In addition, it makes full use of the rich context of consecutive video frames and learns an interference factor for the dynamic target online by keeping the target appearance model as similar as possible to the current frame, achieving reliable adaptive matching and tracking. On the challenging UAV123 dataset, the tracking success rate and precision of the algorithm are 5.3% and 3.6% higher, respectively, than those of the Siamese-network baseline, and the speed reaches an average of 28.7 frames per second, which basically meets the accuracy and real-time requirements of aerial target tracking.
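The gradient-based saliency selection can be sketched directly: back-propagate the tracker's matching score to an intermediate feature map and keep the channels with the largest mean gradient magnitude. This is a minimal PyTorch illustration of the idea; the number of retained channels is an arbitrary choice, not the paper's setting:

```python
import torch

def select_salient_channels(features, score, keep=64):
    # features: intermediate conv feature map (1, C, H, W), on the graph
    # score: scalar matching/classification score for the target
    grads, = torch.autograd.grad(score, features, retain_graph=True)
    importance = grads.abs().mean(dim=(2, 3)).squeeze(0)  # per-channel weight
    topk = importance.topk(min(keep, importance.numel())).indices
    return features[:, topk], topk
```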
2021, 43(5): 1424-1431.
doi: 10.11999/JEIT200102
Abstract:
Considering the limitations of a single-scale Convolutional Neural Network (CNN) for ship image classification, a self-adaptive entropy-weighted decision fusion method based on a multi-scale CNN is proposed. Firstly, the multi-scale CNN is used to extract multi-scale features from ship images of different sizes, and the optimal models of the different sub-networks are trained. Then, the test-set ship images are evaluated on these optimal models, and the probabilities output by the Softmax function of the multi-scale CNN are used to calculate information entropies, from which adaptive weights are assigned to the different input ship images. Finally, self-adaptive entropy-weighted decision fusion is carried out over the Softmax outputs of the different sub-networks to obtain the final ship classification. Experiments are performed on the VAIS (Visible And Infrared Spectrums) dataset and a self-built dataset, on which the proposed method achieves average accuracies of 95.07% and 97.50%, respectively. The experimental results show that the proposed method outperforms the single-scale CNN classification method and other state-of-the-art methods.
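The fusion step itself is simple to state: a sub-network whose softmax output has low entropy is more confident about the current image and should receive a larger weight. The sketch below assumes inverse-entropy weighting, one plausible choice; the paper's exact weighting formula may differ:

```python
import numpy as np

def entropy_weighted_fusion(prob_list, eps=1e-12):
    # prob_list: softmax outputs of the sub-networks, each (n, classes)
    weights = []
    for p in prob_list:
        h = -np.sum(p * np.log(p + eps), axis=-1, keepdims=True)  # entropy
        weights.append(1.0 / (h + eps))         # confident -> heavier weight
    wsum = sum(weights)
    fused = sum(w / wsum * p for w, p in zip(weights, prob_list))
    return fused / fused.sum(axis=-1, keepdims=True)
```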
2021, 43(5): 1432-1440.
doi: 10.11999/JEIT200193
Abstract:
Considering the insufficient mining of subspace information and the inaccurate propagation between nodes in existing manifold-ranking saliency detection algorithms, an image saliency detection algorithm based on a low-rank background constraint and multi-cue propagation is proposed. Primary visual priors such as color, location and boundary-connectivity priors are fused into a high-level background prior, which constrains the low-rank decomposition of the feature matrix, strengthens the separation between the low-rank and sparse matrices, and fully describes the structural information of the subspace so that foreground and background are separated efficiently. Cues of rareness perception and local smoothness are then introduced to improve the reconstruction of the propagation matrix: they strengthen the propagation capacity of nodes whose color features occur with low probability, enhance the relevance of local regions and characterize node properties more accurately, yielding compact and continuous salient regions. Experimental results on three benchmark datasets and an application to image retrieval demonstrate the effectiveness and robustness of the proposed algorithm.
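The propagation stage rests on the classical manifold-ranking closed form f = (I - alpha*S)^(-1) y over a superpixel graph. The sketch below shows just that solve, assuming the affinity matrix W has already been built from the paper's fused cues:

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    # W: symmetric superpixel affinity matrix (n, n)
    # y: seed/query indicator vector (n,)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetric normalization
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)
```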
2021, 43(5): 1441-1447.
doi: 10.11999/JEIT200049
Abstract:
The detection of P and T waves is an important basis for the clinical diagnosis of cardiovascular disease. Because of their low energy and complex morphology, P and T waves are extremely susceptible to noise, so the accuracy of existing detection algorithms still needs improvement. In this paper, a P- and T-wave detection algorithm based on the stationary and continuous wavelet transforms is proposed. First, the stationary wavelet transform is used to smooth the ElectroCardioGram (ECG) signal and eliminate the effect of jagged burrs on peak detection. Then, the multi-scale information of the continuous wavelet transform is used to extract the main components of the P and T waves. Finally, according to a translation-correction rule, the time shift of the zero crossings of the P and T waves is corrected to improve detection accuracy. The algorithm is verified on the MIT-BIH arrhythmia database: the P-wave error rate, sensitivity and positive predictivity reach 0.23%, 99.85% and 99.90%, and the corresponding T-wave figures reach 0.27%, 99.85% and 99.87%.
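A minimal sketch of the two wavelet stages with PyWavelets, under the assumption that the "stable" transform denotes the stationary (undecimated) wavelet transform; the wavelet names, decomposition level and scale are illustrative, and the paper's translation-correction rule is omitted:

```python
import numpy as np
import pywt

def p_t_candidates(ecg, fs, wave_width_s=0.1):
    # Stage 1: stationary wavelet transform; the deepest approximation
    # acts as a smoothed ECG free of jagged burrs.
    n = len(ecg) - len(ecg) % 8          # SWT needs length % 2**level == 0
    coeffs = pywt.swt(np.asarray(ecg[:n], dtype=float), 'db4', level=3)
    smooth = coeffs[0][0]                # (cA3, cD3) is the first tuple
    # Stage 2: single-scale CWT tuned to P/T-wave width; zero crossings
    # of the coefficients mark candidate wave peaks.
    c, _ = pywt.cwt(smooth, [wave_width_s * fs], 'mexh')
    return np.where(np.diff(np.sign(c[0])) != 0)[0]
```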
2021, 43(5): 1448-1456.
doi: 10.11999/JEIT200176
Abstract:
In order to solve the problem that eyeglasses often reduce the performance of face recognition, and motivated by the success of deep convolutional neural networks in super-resolution, this paper proposes ERCNN (Eyeglasses Removal CNN), an automatic eyeglasses removal method for fine-grained face recognition. Specifically, the ERCNN network, built from convolution layers, pooling layers, MFM (Max Feature Map) feature selection modules and deconvolution layers, automatically learns the mapping between facial images with eyeglasses and their counterparts without eyeglasses, realizing end-to-end eyeglasses removal. Massive facial images captured by surveillance equipment and collected from the Internet form the training set, and the SLLFW data set is established as the test set for eyeglasses removal and face recognition. The experiments show that the proposed method removes eyeglasses from real facial images more effectively than traditional methods and scores better on the evaluation indices. In addition, several face recognition methods are tested on the facial images of the SLLFW data set. At a FAR (False Accept Rate) of 1%, the TAR (True Accept Rate) of the Sphereface method reaches 90.05%, 91.14% and 92.33% on F-SLLFW, H-SLLFW and R-SLLFW respectively, which is 3.92%, 3.08% and 1.26% higher than Sphereface without eyeglasses removal; at a FAR of 0.1%, the TAR of Sphereface increases by 10.06%, 3.08% and 1.26% respectively. The proposed method can therefore clearly improve the accuracy of fine-grained face recognition.
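The MFM module and the overall encoder-decoder shape translate into a few lines of PyTorch. The sketch below is a toy configuration; layer counts, channel widths and kernel sizes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MFM(nn.Module):
    # Max Feature Map: split channels in half, take the elementwise max.
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return torch.max(a, b)

class ERCNNSketch(nn.Module):
    # Toy encoder-decoder in the spirit of the described network:
    # conv + MFM downsampling, transposed-conv upsampling.
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), MFM(),    # -> 32 ch
            nn.Conv2d(32, 128, 3, stride=2, padding=1), MFM(),  # -> 64 ch
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):          # x: (N, 3, H, W), H and W divisible by 4
        return self.decode(self.encode(x))
```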
2021, 43(5): 1457-1464.
doi: 10.11999/JEIT190980
Abstract:
In order to meet the increasing performance demands on time in various application fields, an optimized frequency steering algorithm is designed and implemented, divided into two parts: paper time scale calculation and physical signal realization. The ALGOS algorithm is adopted for the paper time scale: an accurate and reliable time scale is calculated from real-time atomic clock data and Circular T data, which ensures an accurate, real-time steering reference. The real-time physical signal is realized with an optimal Linear Quadratic Gaussian (LQG) control algorithm combined with a Kalman filter. By adjusting the parameters in real time, the optimal frequency steering value is generated and sent to the frequency adjustment device, finally yielding a high-precision time signal; the whole steering system is closed-loop. A test platform is built on the timekeeping system and atomic clock ensemble, the algorithm is used to steer a hydrogen maser for 140 days, and the performance of the output physical signal is evaluated. Experimental results show that the algorithm effectively improves the accuracy and stability of the output signal: compared with Coordinated Universal Time (UTC), the output time signal maintains a time deviation within ±3 ns, and its stability is better than 5×10^-16 at 30 days.
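A compact sketch of the Kalman-plus-steering loop for a two-state clock model (phase and frequency offset). The noise parameters and the fixed steering gains below are placeholders standing in for the LQG design actually evaluated in the paper:

```python
import numpy as np

def steer_clock(measurements, tau, q_phase=1e-22, q_freq=1e-26, r=1e-18):
    # Two-state clock model: x = [phase offset, frequency offset].
    F = np.array([[1.0, tau], [0.0, 1.0]])       # state transition over tau
    H = np.array([[1.0, 0.0]])                   # only phase is measured
    Q = np.diag([q_phase, q_freq])               # process noise (illustrative)
    x = np.zeros(2)
    P = np.eye(2)
    gains = np.array([0.5 / tau, 0.5])           # assumed LQG-like gains
    corrections = []
    for z in measurements:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        K = P @ H.T / (H @ P @ H.T + r)           # Kalman gain
        x = x + (K * (z - H @ x)).ravel()         # update
        P = (np.eye(2) - K @ H) @ P
        u = -gains @ x                            # frequency steering value
        x[1] += u                                 # apply the correction
        corrections.append(u)
    return np.array(corrections)
```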
2021, 43(5): 1465-1471.
doi: 10.11999/JEIT191057
Abstract:
In this paper, an enhanced method for fine time-frequency synchronization over cascaded Precision Time Protocol (PTP) fiber links is proposed. Starting from PTP synchronization, the method combines synchronous-Ethernet clock transfer with a multi-level cascaded fine clock synchronization technique based on the digital dual-mixer time difference method, thereby improving and enhancing PTP. On this basis, a fiber cascade of multi-level time-frequency equipment realizes multi-node, large-span, networked time-frequency signal transmission and synchronized output. The method solves the problem that synchronization accuracy deteriorates stage by stage in a multi-level cascade, achieves nanosecond-level system time synchronization, and ensures efficient synchronization and coordination of all parts of the system under a highly unified time scale. The feasibility and effectiveness of the method are verified by design and test.
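For reference, these are the per-hop quantities every two-way PTP exchange yields; a cascade then has to keep the residual per-hop offsets from accumulating, which is what the dual-mixer refinement addresses. The helper below is the textbook computation, assuming a symmetric path:

```python
def ptp_offset_delay(t1, t2, t3, t4):
    # t1: master Sync send time       t2: slave receive time
    # t3: slave Delay_Req send time   t4: master receive time
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# An n-hop cascade accumulates the per-hop offsets, e.g.:
# total_offset = sum(ptp_offset_delay(*hop)[0] for hop in hops)
```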
2021, 43(5): 1472-1479.
doi: 10.11999/JEIT200045
Abstract:
The BallistoCardioGram (BCG) can be used for contactless detection of vital signs. In beat-to-beat heart rate extraction from BCG, a low mean absolute error is essential for accurately deriving the user's Heart Rate Variability (HRV) indicators. To overcome the accuracy limitations of most current beat-to-beat heart rate methods, a BCG acquisition system based on a piezoelectric ceramic sensor is designed. A suitable structure for the sensor's shell and a suitable sampling frequency increase the sensitivity of the sensor and the time resolution of the BCG signal. Through analysis of the BCG, the components best suited for extracting the beat-to-beat cardiac cycle are identified. An adaptive template matching algorithm using AP clustering is then proposed to extract the cardiac cycle information accurately. Analysis of 5741 heartbeats from 15 subjects shows that the average error of the beat-to-beat heartbeat cycle is 0.48%, the Mean Absolute Error (MAE) is 3.78 ms, and the heartbeat coverage is above 95%, which is better than comparable work.
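A minimal sketch of adaptive template matching, assuming that AP denotes affinity propagation: candidate heartbeat segments are clustered, the exemplar of the largest cluster becomes the template, and beats are located by normalized cross-correlation. Segment extraction, window length and the 0.6 threshold are illustrative choices:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def build_template(segments):
    # segments: candidate heartbeat windows, shape (n_beats, win_len)
    ap = AffinityPropagation(random_state=0).fit(np.asarray(segments))
    labels, counts = np.unique(ap.labels_, return_counts=True)
    return ap.cluster_centers_[labels[np.argmax(counts)]]

def match_beats(signal, template, threshold=0.6):
    # Normalized cross-correlation; local maxima above threshold = beats.
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    scores = np.array([np.dot(
        (signal[i:i + n] - signal[i:i + n].mean()) /
        (signal[i:i + n].std() + 1e-12), t) / n
        for i in range(len(signal) - n)])
    return np.array([i for i in range(1, len(scores) - 1)
                     if scores[i] > threshold
                     and scores[i] >= scores[i - 1]
                     and scores[i] >= scores[i + 1]])
```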
2021, 43(5): 1480-1484.
doi: 10.11999/JEIT200103
Abstract:
The microwave window is a key component of the space traveling wave tube and an important element for transmitting high-frequency electromagnetic waves in the tube. Microwave windows are generally brazed with silver solder and stored in a vacuum cabinet; after some time in storage, the window surface turns black. Analysis shows that rubber products placed in the same vacuum cabinet cause sulfidation of the silver-brazed joints of the high-frequency components, leading to storage failure. Experimental results and theoretical analysis indicate that the sulfur in the rubber is relatively stable under atmospheric conditions, but in vacuum it sublimates readily from the rubber, filling the cabinet with sulfur vapor. This vapor reacts with the silver solder to form Ag2S; therefore, sulfur-containing rubber should not be placed in a vacuum cabinet.
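The atmosphere-versus-vacuum difference can be made quantitative with the Hertz-Knudsen (Langmuir) expression for the maximum sublimation flux into vacuum; the function below only encodes the formula, and any sulfur vapor-pressure value plugged in must come from published tables:

```python
import math

def langmuir_flux(p_vap_pa, molar_mass_kg_per_mol, temp_k):
    # Maximum sublimation mass flux into vacuum, kg m^-2 s^-1:
    # J = p * sqrt(M / (2 * pi * R * T))
    R = 8.314  # J mol^-1 K^-1
    return p_vap_pa * math.sqrt(
        molar_mass_kg_per_mol / (2 * math.pi * R * temp_k))

# Illustrative use (the vapor pressure here is hypothetical): even a tiny
# equilibrium pressure sustains a steady mass loss into vacuum, whereas at
# atmosphere the net flux is suppressed by the back-pressure of the gas.
# S8 has M ~ 0.257 kg/mol:
# print(langmuir_flux(1e-4, 0.257, 300.0))
```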