2015 Vol. 37, No. 5
2015, 37(5): 1017-1022.
doi: 10.11999/JEIT141124
Abstract:
Increasing the integration time is a principal means of improving passive radar performance, but range and Doppler migration can occur for high-speed, accelerating targets. Moreover, for signals that are non-uniformly sampled in slow time, such as the China Mobile Multimedia Broadcasting signal, the most widely used migration-compensation algorithms, such as the Keystone transform and the Radon-Fourier transform, are inapplicable. This paper employs a long-time coherent integration algorithm based on two-step Doppler processing, which applies to both uniformly and non-uniformly sampled signals, and proposes a modified algorithm that can detect targets of higher speed and acceleration while improving computational efficiency. The characteristics and difficulties of non-uniformly sampled signals are analyzed first; then, based on the Doppler processing of such a signal, the cause of migration and the principle of migration compensation are demonstrated. Finally, simulations and real-data processing confirm the effectiveness of the proposed method.
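As an illustrative sketch of the core difficulty above (not the authors' two-step algorithm): when the slow-time samples are non-uniform, an FFT cannot be applied directly, but the Doppler spectrum can still be evaluated by direct correlation with complex exponentials at the actual sample instants. All values below are hypothetical.

```python
import numpy as np

def doppler_spectrum(samples, times, freqs):
    """Doppler spectrum of a non-uniformly sampled slow-time signal,
    evaluated by direct correlation with complex exponentials (a NUDFT)."""
    # samples: complex slow-time sequence; times: sample instants (s);
    # freqs: Doppler frequencies (Hz) at which to evaluate the spectrum.
    basis = np.exp(-2j * np.pi * np.outer(freqs, times))
    return basis @ samples

# Simulated target with a 120 Hz Doppler shift, observed at random
# (non-uniform) slow-time instants where an FFT is not applicable.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 0.5, 256))          # non-uniform slow time
x = np.exp(2j * np.pi * 120.0 * t)               # target phase history
f = np.linspace(0.0, 250.0, 501)                 # Doppler search grid
spec = np.abs(doppler_spectrum(x, t, f))
f_hat = f[np.argmax(spec)]                       # estimated Doppler
```

Because the correlation is evaluated at the true sample times, the integration remains coherent regardless of the sampling grid, at the cost of an explicit matrix-vector product.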
2015, 37(5): 1260-1265.
doi: 10.11999/JEIT141129
Abstract:
A truth table is an important tool for representing the logical causal relationships between inputs and outputs, and truth-table reduction is of great significance in the analysis and design of digital logic circuits. In this paper, the Multiple-Input Multiple-Output (MIMO) truth table is treated as a Logical Information System (LIS), and the traditional truth-table reduction problem is converted into the discovery of minimal rules in the LIS. A Granular Computing (GrC) method is then introduced. First, the logical information system is hierarchically granulated. Second, the Granular Matrix (GrM) is defined and manipulated to represent knowledge at different granularities; together with heuristic information hidden in the matrix, a fast parallel reduction algorithm for the MIMO truth table is proposed. A Light-Emitting Diode (LED) digital display is used to illustrate the computing process. A mathematical proof and a complexity analysis establish the efficiency and validity of the proposed algorithm.
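As a minimal sketch of what truth-table reduction means (a classical one-step implicant merge in the Quine-McCluskey style, not the granular-matrix algorithm of the paper): two terms that differ in exactly one input bit can be merged into one term with a don't-care in that position.

```python
def merge_once(terms):
    """One pass of implicant merging: combine pairs of terms (strings over
    '0', '1', '-') that differ in exactly one fully specified bit,
    replacing that bit with a don't-care '-'."""
    merged, used = set(), set()
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            diff = [k for k in range(len(a)) if a[k] != b[k]]
            if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
                k = diff[0]
                merged.add(a[:k] + '-' + a[k + 1:])
                used.update((a, b))
    # Terms that merged are covered by the new implicants; keep the rest.
    return sorted(merged | (set(terms) - used))

# Minterms of f(a,b,c) = a'b' + ab over inputs abc: 000, 001, 110, 111.
reduced = merge_once(['000', '001', '110', '111'])
```

Here the four rows collapse to the two rules `00-` and `11-`, i.e. the output depends only on the first two inputs.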
2015, 37(5): 1023-1030.
doi: 10.11999/JEIT140899
Abstract:
This paper presents a high-resolution imaging method based on Sparse Bayesian Learning (SBL) for passive radar compressed-sensing imaging. Under a one-snapshot echo model, the proposed method first accounts for the frequency-dependent statistics of the target scattering centers and recasts passive radar imaging as a joint Multiple Measurement Vector (MMV) sparse optimization problem. A hierarchical Bayesian framework with a sparsity-inducing prior on the target is then established, and the MMV problem is solved efficiently using SBL theory. Unlike previous sparse-recovery algorithms that rely on a deterministic assumption about the target, the proposed method makes better use of the target's prior information and offers the advantages of adaptive parameter estimation (including the parameters of the target's prior model and the unknown noise power) as well as high-resolution imaging. Simulation results show the effectiveness of the proposed method.
2015, 37(5): 1031-1037.
doi: 10.11999/JEIT140973
Abstract:
Transient interference can dramatically degrade the performance of over-the-horizon radar. Traditional suppression methods require the sea clutter to be suppressed first, and they can only suppress strong transient interference; they cannot mitigate weak interference or noise. A novel interference-suppression method based on low-rank matrix completion is proposed. First, the method detects the interference via the Teager-Kaiser operator and excises the contaminated data. Then, exploiting the fact that the Hankel matrix of the clutter-plus-target signal is low-rank, matrix completion is used to recover the interference-removed signal. The proposed algorithm suppresses not only strong interference but also weak interference and noise, improving the signal-to-noise ratio of the echo. Simulation and experimental results demonstrate the effectiveness of the proposed method.
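The Teager-Kaiser detection step can be sketched directly, since the discrete operator has a simple closed form, psi[n] = x[n]^2 - x[n-1]x[n+1]; the signal and transient below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1], for interior samples only."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# A weak sinusoidal 'clutter' background with one strong transient sample.
n = np.arange(400)
echo = 0.1 * np.sin(2 * np.pi * 0.01 * n)
echo[200] += 5.0                                  # transient interference
psi = teager_kaiser(echo)
hit = int(np.argmax(np.abs(psi))) + 1             # +1 restores the offset
```

The operator responds to the product of instantaneous amplitude and frequency, so an isolated impulsive sample produces a sharp spike in `psi` that localizes the interference for excision.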
2015, 37(5): 1038-1043.
doi: 10.11999/JEIT140911
Abstract:
A beampattern design method for airborne Multiple-Input Multiple-Output (MIMO) radar based on prior information is proposed to suppress sidelobe clutter in non-homogeneous clutter environments. Using two-dimensional (2D) prior information about the target and clutter in the spatial and Doppler domains, a cost function for the correlation matrix of the transmit waveform is established under the criterion of maximizing the output Signal-to-Clutter-plus-Noise Ratio (SCNR) after space-time matched filtering, and it can be solved by Semi-Definite Programming (SDP). Simulation results indicate that the transmit beampattern optimized by the proposed method effectively increases the output SCNR after the space-time 2D matched filter in non-homogeneous clutter environments.
2015, 37(5): 1044-1050.
doi: 10.11999/JEIT141222
Abstract:
A non-sidelooking array configuration leads to heterogeneous airborne radar echo data, which seriously degrades the clutter-suppression performance of traditional Space-Time Adaptive Processing (STAP) algorithms. To solve this problem, a heterogeneous clutter-suppression method that is robust to array errors is proposed. First, a clutter representation basis is constructed from a priori information such as the system parameters. Then, the test data are fitted in an iterative least-squares manner that accounts for array errors, for which a closed-form solution is derived and used. Finally, pulse-Doppler processing and constant-false-alarm-rate detection are performed on the residual data. The proposed method requires no training samples and effectively suppresses heterogeneous clutter for airborne non-sidelooking radar without an elevation degree of freedom. Simulation results verify the validity of the proposed method.
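The fit-and-residual idea above can be sketched in one dimension (this is an illustrative stand-in, not the paper's clutter basis or its iterative, error-aware solver): the data are least-squares fitted to a basis built from prior knowledge, and whatever the basis cannot represent, here a weak target, survives in the residual.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
t = np.arange(n)
# Clutter basis from prior information: complex exponentials at assumed
# clutter normalized frequencies (an illustrative stand-in).
clutter_freqs = [0.05, 0.08, 0.11]
basis = np.exp(2j * np.pi * np.outer(t, clutter_freqs))
clutter = basis @ (rng.standard_normal(3) + 1j * rng.standard_normal(3)) * 10
target = 0.5 * np.exp(2j * np.pi * 0.25 * t)      # weak target at f = 0.25
data = clutter + target + 0.01 * (rng.standard_normal(n)
                                  + 1j * rng.standard_normal(n))
# Least-squares fit of the data to the clutter basis; the residual keeps
# what the basis cannot represent (the target plus noise).
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
residual = data - basis @ coef
spec = np.abs(np.fft.fft(residual))
f_hat = np.fft.fftfreq(n)[np.argmax(spec)]
```

After the fit, Doppler processing of the residual recovers the target frequency even though the raw data are dominated by clutter twenty times stronger.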
2015, 37(5): 1051-1057.
doi: 10.11999/JEIT140832
Abstract:
To handle the small-sample-support problem in heterogeneous clutter environments, a fast-convergence Space-Time Adaptive Processing (STAP) algorithm based on a low-rank approximation of the weight matrix is proposed. Unlike the traditional Low-Rank Approximation (LRA) algorithm for STAP, the weight matrix is reconstructed so that its numbers of columns and rows are equal or nearly equal, by exploiting the special Kronecker structure of the space-time steering vector, which reduces both the sample requirement and the computational load. By approximating the adaptive weight matrix with the low-rank method, the original quadratic optimization problem is transformed into a bi-quadratic one. Experimental results verify that the Improved LRA (ILRA) method improves the convergence rate and reduces the computational load.
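The low-rank-approximation ingredient can be sketched with the truncated SVD (the Eckart-Young optimum); the reshaping of a length-NM weight vector into an N-by-M matrix, and the rank value, are illustrative assumptions, not the ILRA procedure itself.

```python
import numpy as np

def low_rank_approx(W, r):
    """Best rank-r approximation of W in the Frobenius norm (Eckart-Young),
    obtained from the truncated singular value decomposition."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r, :]

# A length-64 space-time weight vector reshaped into an 8x8 matrix
# (possible because the space-time steering vector is a Kronecker product).
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank 2
W_noisy = W + 1e-6 * rng.standard_normal((8, 8))
W2 = low_rank_approx(W_noisy, 2)
err = np.linalg.norm(W2 - W) / np.linalg.norm(W)
```

A rank-r factorization of an N-by-M weight matrix has only r(N+M) free parameters instead of NM, which is the source of the reduced sample requirement claimed above.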
2015, 37(5): 1058-1064.
doi: 10.11999/JEIT141226
Abstract:
Active radar stealth technology is an important branch of target radar signature control (also called stealth) technology. A quantitative analysis is made of an active radar stealth device described in a United States patent; the device is a kind of smart skin. First, the basic working principle of the device is described, and the electromagnetic-field constraint equations are given. Second, the quantitative relationship among the transfer phase, incident wavelength, incident angle, spatial geometry, and other factors is derived. Finally, the influence of incident wavelength, incident angle, and spatial position on the stealth performance is examined by numerical simulation, and some meaningful conclusions are drawn.
2015, 37(5): 1065-1070.
doi: 10.11999/JEIT141041
Abstract:
This paper proposes a new signal-reconstruction method for signals with missing samples obtained by narrow-band radar. For a narrow-band radar system, the target echoes can be assumed to follow a complex Gaussian distribution. On this premise, the probabilistic model relating the observed signal with missing samples to the unknown complete signal is first formulated. The posterior distribution of the complete signal is then obtained via Bayes' theorem. Finally, the maximum-likelihood estimates of the model parameters are obtained with the Expectation-Maximization (EM) algorithm, yielding the reconstruction of the complete signal. The advantage of the method is that the complete signal is reconstructed using only the observed signal with missing samples, under the complex Gaussian assumption; no other signals or prior information are needed in the parameter-learning process. Experiments on measured data and comparisons with other state-of-the-art approaches show that the proposed method achieves good reconstruction performance.
2015, 37(5): 1071-1077.
doi: 10.11999/JEIT140737
Abstract:
The wideband cross-ambiguity function is commonly adopted for parameter estimation in wideband noise radar, but for maneuvering targets it requires a three-dimensional search over range, velocity, and acceleration, which imposes a huge computational burden. A novel method based on the conjugate noise group is proposed to address the parameter estimation of maneuvering targets. First, multiple channels are set up according to the echo stretching effect, and in each channel the signals within the noise group are cut to a fixed length for mixing. The Doppler phase is then estimated from the mixed signal by the Fractional Fourier Transform (FrFT). A phase-compensation function is constructed from the Doppler phase, and the delay is estimated by the Frequency-domain Scale Correlation (FSC) algorithm applied to the compensated noise-group signal. Finally, the range, velocity, and acceleration are obtained from the two simultaneous equations of the Doppler phase and delay. The proposed method avoids both the three-dimensional search and the reconstruction of the echo signal in the time domain, greatly reducing computation compared with the wideband cross-ambiguity function method. Because the whole algorithm can be implemented with the Fast Fourier Transform (FFT), it is feasible for real-time processing. Simulation results demonstrate the effectiveness and superiority of the proposed method.
2015, 37(5): 1078-1084.
doi: 10.11999/JEIT141061
Abstract:
The extraction of the precession and structure parameters of ballistic targets is critical to successful ballistic-target identification. This paper proposes a method to deduce the precession and structure parameters of a cone target from Inverse Synthetic Aperture Radar (ISAR) sequences. Time-frequency ISAR images of the symmetric cone target are simulated by the range-instantaneous-Doppler algorithm, using electromagnetic simulation data and the micro-motion characteristics of the target, with the spin of the rotationally symmetric cone ignored. The locations of strong scattering sources are extracted by the CLEAN algorithm. In addition, the formula for the projection of the sliding-type strong scattering sources onto the imaging plane is derived, providing a mathematical reference for the imaging simulation and parameter estimation of the precessing cone target. The effect of viewing angle on the strong scattering sources is analyzed, and a method for inverting the precession and structure parameters is given. Finally, simulation results show that the imaging algorithm and the projection formula are correct, and the precession and structure parameters are inverted from the extracted locations of the strong scattering sources.
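The CLEAN step used above to extract scattering-source locations can be sketched in one dimension (a minimal version with unit loop gain and a toy point-spread function, not the authors' 2D implementation): iteratively find the strongest sample, record it as a point source, and subtract the shifted, scaled point-spread function.

```python
import numpy as np

def clean_1d(dirty, psf, n_iter, gain=1.0):
    """Minimal CLEAN: repeatedly take the strongest residual sample as a
    point source and subtract the (shifted, scaled) point-spread function.
    A classical implementation would use a loop gain below 1."""
    residual = dirty.astype(float).copy()
    center = len(psf) // 2
    sources = []
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))
        amp = gain * residual[k]
        sources.append((k, amp))
        lo = max(0, k - center)
        hi = min(len(residual), k - center + len(psf))
        residual[lo:hi] -= amp * psf[lo - (k - center):hi - (k - center)]
    return sources, residual

# Two point scatterers blurred by a triangular point-spread function.
psf = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
scene = np.zeros(64)
scene[20], scene[41] = 3.0, 2.0
dirty = np.convolve(scene, psf, mode='same')
sources, _ = clean_1d(dirty, psf, n_iter=2)
positions = sorted(k for k, _ in sources)
```

Two iterations recover both scatterer positions exactly because the point sources are well separated relative to the PSF width.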
2015, 37(5): 1085-1090.
doi: 10.11999/JEIT140818
Abstract:
The Random Pulse Repetition Interval Radon-Fourier Transform (RPRI-RFT) is proposed for long-time coherent integration of echo pulses and Blind-Speed SideLobe (BSSL) suppression, making it well suited to low-SNR target detection and parameter estimation under Doppler ambiguity for long-range stealth targets. By analyzing the quantitative relation between the random jitter of the PRI and both the mean of the Doppler-ambiguity sidelobes and the variance of the random-noise spectrum modulation, it is shown that increasing the number of pulses reduces the effect of the modulation noise. Furthermore, to handle the range-cell migration caused by the increased number of pulses, RPRI-RFT provides an effective means of coherently integrating the echo pulses. Both theoretical analysis and simulation results show that RPRI-RFT reduces random noise while suppressing blind-speed sidelobes, significantly improving the detection and measurement capability of low-pulse-repetition-frequency radar against long-range, weak, high-speed, multiple targets.
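The effect of PRI randomization on Doppler-ambiguity sidelobes can be sketched numerically (an illustrative simulation with hypothetical parameters, not the RPRI-RFT itself): with uniform PRI, the slow-time spectrum of a target repeats at multiples of the PRF, while random jitter leaves the true peak intact and smears the replicas into a noise-like floor.

```python
import numpy as np

def doppler_response(times, f_dopp, f_grid):
    """Magnitude of the normalized slow-time Doppler spectrum, evaluated
    directly so that arbitrary (jittered) pulse times are allowed."""
    x = np.exp(2j * np.pi * f_dopp * times)
    m = np.exp(-2j * np.pi * np.outer(f_grid, times))
    return np.abs(m @ x) / len(times)

rng = np.random.default_rng(5)
n, pri = 64, 1e-3                                  # 1 kHz nominal PRF
f_d = 300.0                                        # target Doppler (Hz)
grid = np.linspace(0.0, 2500.0, 2501)
t_uniform = np.arange(n) * pri
t_jitter = t_uniform + rng.uniform(-0.5, 0.5, n) * pri  # random PRI jitter
resp_u = doppler_response(t_uniform, f_d, grid)
resp_j = doppler_response(t_jitter, f_d, grid)
# The first Doppler replica sits at f_d + PRF = 1300 Hz.
alias_u = resp_u[np.argmin(np.abs(grid - (f_d + 1000.0)))]
alias_j = resp_j[np.argmin(np.abs(grid - (f_d + 1000.0)))]
peak_j = resp_j[np.argmin(np.abs(grid - f_d))]
```

The jittered train keeps full coherent gain at the true Doppler (the phases cancel exactly there) while the ambiguous replica collapses toward the noise floor, which is the mechanism behind blind-speed-sidelobe suppression.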
2015, 37(5): 1091-1096.
doi: 10.11999/JEIT140985
Abstract:
Based on the equivalent scattering-point model, the relationship between the micro-Doppler frequencies and the motion parameters of a precessing cone-shaped target is established. Because the micro-Doppler frequency modulation induced by precession is approximately sinusoidal, an approach is proposed to extract the micro-Doppler frequency of the precessing target based on instantaneous-frequency estimation and RANdom SAmple Consensus (RANSAC). In this method, the target echo is first divided into several segments. Each segment is then approximated as the sum of several Linear Frequency Modulation (LFM) components, and an algorithm based on the extended Relax method is used to estimate the instantaneous frequency of each LFM signal. The micro-Doppler curve of each equivalent scattering point is then estimated by the RANSAC algorithm. In the simulation experiments, the performance of the proposed method is evaluated on both simulated data and electromagnetic-computation data.
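The RANSAC step can be sketched for the sinusoidal micro-Doppler model (a simplified setting with the precession rate omega assumed known, so the model f(t) = A sin(omega t) + B cos(omega t) + C is linear in its coefficients; all data values are hypothetical):

```python
import numpy as np

def ransac_sinusoid(t, f, omega, n_trials=200, tol=0.5, rng=None):
    """RANSAC fit of f(t) ~ A*sin(omega*t) + B*cos(omega*t) + C.
    Each trial solves a tiny least-squares problem on a random minimal
    sample of 3 points, then counts inliers; the best trial is refit."""
    rng = np.random.default_rng(rng)
    X = np.column_stack([np.sin(omega * t), np.cos(omega * t),
                         np.ones_like(t)])
    best_inliers, best_coef = np.zeros(len(t), bool), None
    for _ in range(n_trials):
        idx = rng.choice(len(t), size=3, replace=False)
        coef, *_ = np.linalg.lstsq(X[idx], f[idx], rcond=None)
        inliers = np.abs(X @ coef - f) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_coef = inliers, coef
    # Refit on all inliers of the best trial.
    coef, *_ = np.linalg.lstsq(X[best_inliers], f[best_inliers], rcond=None)
    return coef, best_inliers

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 200)
f_true = 3.0 * np.sin(4.0 * t) + 1.0            # micro-Doppler curve
f_obs = f_true + 0.05 * rng.standard_normal(200)
f_obs[::20] += 8.0                               # gross outlier estimates
coef, inliers = ransac_sinusoid(t, f_obs, omega=4.0, rng=3)
```

The minimal-sample voting makes the fit insensitive to the gross outliers that inevitably appear among segment-wise instantaneous-frequency estimates, which is exactly why RANSAC is attractive in this role.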
2015, 37(5): 1097-1103.
doi: 10.11999/JEIT140924
Abstract:
In a high Pulse Repetition Frequency (PRF) mode, radars may suffer from range ambiguity, which poses a significant challenge to detecting and tracking weak targets. To address this problem, a novel approach that can handle ambiguous data from weak targets is proposed within the Track-Before-Detect (TBD) framework. The main idea is that, without a pre-detection and ambiguity-resolution step at each time step, range-ambiguity resolution and target detection are transformed into the decision on the target's true track. First, space-time relative information is obtained by a multiple-hypothesis ranging procedure, in which all the ambiguous measurements are handled in a batch. Next, based on the correlation in the time and PRF domains, the track is detected with a TBD method while the ambiguous data are unfolded. Unlike classical methods, the new approach transforms range-ambiguity resolution into the decision on the target's real track, offering a new route to this problem and avoiding loss of track for weak targets with low Signal-to-Noise Ratio (SNR). An application example is given to analyze and compare the performance of the proposed approach against an existing one, and the simulation results illustrate its effectiveness.
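The underlying multiple-hypothesis idea can be sketched with the classical multi-PRF range-unfolding scheme (a textbook illustration of range ambiguity, not the paper's batch TBD procedure): each PRF yields a residue range, and the true range is the candidate consistent across all PRFs.

```python
def unfold_range(ambiguous, unamb, max_range, tol=1.0):
    """Each PRF i yields an ambiguous range r_i and an unambiguous
    interval R_i; the true range lies in {r_i + k*R_i}. Return the
    smallest candidate consistent (within tol) with every PRF."""
    candidates = [r + k * R
                  for r, R in zip(ambiguous, unamb)
                  for k in range(int(max_range // R) + 1)
                  if r + k * R <= max_range]
    for c in sorted(candidates):
        if all(any(abs(c - (r + k * R)) < tol
                   for k in range(int(max_range // R) + 1))
               for r, R in zip(ambiguous, unamb)):
            return c
    return None

# True range 137 km seen with unambiguous intervals 30 km and 41 km:
# the measured ambiguous ranges are 137 % 30 = 17 and 137 % 41 = 14.
r_true = unfold_range([17.0, 14.0], [30.0, 41.0], max_range=300.0)
```

For a weak target the residues themselves are unreliable at any single time step, which motivates the paper's move of deferring the consistency decision to the track level.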
2015, 37(5): 1104-1110.
doi: 10.11999/JEIT140692
Abstract:
For range-spread targets observed by wideband radar, whose echoes are difficult to integrate for detection because of the target's large velocity, a Hough Transform Detector (HTD) is developed. Adjacent high-resolution range profiles of a range-spread target exhibit high correlation coefficients. Exploiting this characteristic, the signal energy is integrated by the Hough transform in the two-dimensional plane of cross-correlation order and correlation time, and target detection is then performed. Theoretical analysis shows that the HTD does not depend on the target's scatterer distribution or motion information and possesses Constant False Alarm Rate (CFAR) behavior. Computer simulations show that the HTD achieves better detection performance than the Non-Scatterer-Density-Dependent Generalized Likelihood Ratio Test (NSDD-GLRT) detector.
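The energy-integration mechanism of a Hough transform can be sketched with the standard normal-form line parameterization, rho = x cos(theta) + y sin(theta) (a generic point-set illustration, not the paper's cross-correlation-order/correlation-time plane):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Accumulate points into a (theta, rho) Hough array using the normal
    line parameterization rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = max(np.hypot(x, y) for x, y in points) + rho_res
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + max_rho) / rho_res).astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc, thetas

# 20 collinear points on the line y = 5 plus two outliers.
pts = [(float(x), 5.0) for x in range(20)] + [(3.0, 9.0), (11.0, 1.0)]
acc, thetas = hough_lines(pts, rho_res=0.1)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
peak_votes = int(acc[i, j])
theta_peak = thetas[i]
```

All points on one trajectory vote for the same accumulator cell, so the integrated count grows with the number of aligned samples while isolated noise spreads over many cells; that concentration is what gives the detector its integration gain.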
2015, 37(5): 1111-1115.
doi: 10.11999/JEIT140955
Abstract:
The multi-baseline phase unwrapping problem can be solved by finding the optimal solution of an L1-norm optimization. However, there are two problems: the huge memory required, and the difficulty of processing interferograms with severe noise. In order to decrease the memory requirement of the L1-norm method, a cost function based on the L-norm is employed to approximate the L1-norm. Consequently, the objective function of the improved multi-baseline phase unwrapping takes the form L-norm + L1-norm, and the size of the new optimization variable is decreased by 57%. The performance of the proposed algorithm is validated on a real dataset with severe noise, and the experiment demonstrates that the proposed algorithm not only produces good phase unwrapping results for interferograms of good quality, but also performs filtering in noisy regions.
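For readers unfamiliar with the underlying operation, the simplest single-baseline, one-dimensional case of phase unwrapping can be sketched as follows (this is only the textbook itinerant-jump removal, not the paper's L1-norm multi-baseline method):

```python
import math

def unwrap_1d(phases):
    """Remove 2*pi jumps from a wrapped 1-D phase sequence."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # shift each sample by the multiple of 2*pi that keeps
        # successive differences inside (-pi, pi]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

true_phase = [0.5 * i for i in range(10)]  # steadily growing absolute phase
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_1d(wrapped)
```

The multi-baseline formulation replaces this local rule with a global optimization over several interferograms, which is where the memory cost the paper attacks comes from.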
2015, 37(5): 1116-1121.
doi: 10.11999/JEIT140928
Abstract:
In wide-field, super-high-resolution spaceborne Synthetic Aperture Radar (SAR), the traditional hyperbolic slant range model is no longer applicable, because the azimuth variation of the equivalent velocity causes azimuth defocus. For this problem, this paper proposes a new acceleration slant range model that accounts for the azimuth variation of the equivalent velocity. Based on this model, the corresponding signal processing and imaging method is given. Firstly, the azimuth variation of the velocity is eliminated by azimuth resampling in the time domain. Secondly, the third-order and fourth-order errors are eliminated in the 2D frequency domain. Finally, the Range Migration Algorithm (RMA) is used to obtain the final image. The simulation results validate the effectiveness of the new acceleration slant range model and imaging algorithm.
2015, 37(5): 1122-1127.
doi: 10.11999/JEIT141140
Abstract:
In this paper, the traditional change detection method based on local statistical features is extended to a two-dimensional feature space, and a SAR image change detection method based on the comparison of two-dimensional probability density functions is proposed. In this method, the values of adjacent pixels are combined to build a two-dimensional observation vector. Then, in each temporal image, the Probability Density Function (PDF) of the vector is estimated by a two-dimensional Gram-Charlier expansion. On this basis, change detection is performed by computing the K-L divergence between the PDFs of the different temporal images. Experimental results show that the proposed algorithm performs better than the traditional method.
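The change indicator at the core of this approach is the K-L divergence between the two dates' PDFs. A minimal sketch on toy discrete histograms (the paper estimates continuous 2-D PDFs via the Gram-Charlier expansion, which is omitted here):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """K-L divergence D(p || q) between two discrete PDFs."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def change_indicator(p, q):
    """Symmetrized divergence: large values flag pixels whose local
    statistics differ between the two acquisition dates."""
    return kl_divergence(p, q) + kl_divergence(q, p)

same = change_indicator([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25])
differ = change_indicator([0.70, 0.10, 0.10, 0.10], [0.10, 0.10, 0.10, 0.70])
```

Thresholding such an indicator map then yields the binary change mask.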
2015, 37(5): 1128-1134.
doi: 10.11999/JEIT140923
Abstract:
Endmember extraction methods based on the geometric distribution of hyperspectral images are usually divided into projection algorithms and maximum-simplex-volume algorithms; the former have lower computational complexity while the latter have better precision. A Fast endmember extraction method based on the Cofactors of a determinant Algorithm (FCA) is proposed. The algorithm combines the two kinds of algorithms, giving it both high speed and high accuracy for endmember extraction. FCA finds the maximum-volume simplex by projecting pixels onto vectors composed of the cofactors of the elements of the endmember determinant. Besides, FCA is flexible in the endmember search, as it can use higher-purity pixels to replace the endmembers extracted in the previous iteration, which ensures that all the endmembers extracted by FCA are vertices of the simplex. Theoretical analysis and experiments on both simulated and real hyperspectral data demonstrate that the proposed algorithm is a fast and accurate endmember extraction algorithm.
2015, 37(5): 1135-1140.
doi: 10.11999/JEIT140876
Abstract:
When a Choi-Williams Hough (CWH) transform is used to estimate the parameters of Linear Frequency Modulated Continuous Wave (LFMCW) signals and the signal observation time is longer than one period, the output SNR at the true parameter value does not increase with the observation time, and multiple peaks appear in the time-frequency image. By virtue of the energy concentration of the CWH for LFMCW signals and the coherent integrator in signal processing, a multiple-period LFMCW signal parameter estimation method based on the Period CWH (PCWH) is studied. Firstly, the PCWH formula for multiple-period LFMCW signals is given. Then the relationship among the output SNR of the PCWH, the observation time, and the sample signal SNR is analyzed. Finally, the estimation accuracy formula of the PCWH is derived. Numerical simulation shows the effectiveness of the proposed method and that the PCWH is superior to the CWH for estimating multiple-period LFMCW signals.
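The gain the PCWH exploits is that of coherent integration over repeated periods. A minimal time-domain sketch (the PCWH itself operates in the CWH time-frequency plane, which is not reproduced here; signal and noise values are synthetic):

```python
import math
import random

random.seed(0)

def fold_periods(signal, period):
    """Coherently average an integer number of periods of a signal."""
    n = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n)) / n
            for i in range(period)]

period = 16
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
# 64 noisy periods of the same waveform
noisy = [clean[i % period] + random.gauss(0, 1.0) for i in range(period * 64)]

one_period = noisy[:period]
averaged = fold_periods(noisy, period)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

Averaging 64 aligned periods reduces the noise power by a factor of 64, which is why the multi-period output SNR keeps growing with observation time under the PCWH.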
2015, 37(5): 1141-1148.
doi: 10.11999/JEIT141019
Abstract:
A novel video stabilization algorithm based on preferred feature trajectories is presented. Firstly, Harris feature points are extracted from the frames, and foreground feature points are eliminated via the K-Means clustering algorithm. Then, effective feature trajectories are obtained via spatial motion consistency, to reduce false matches, and temporal motion similarity, for long-time tracking. Finally, an objective function is established that accounts for both the smoothness of the feature trajectories and the degradation of video quality, in order to find a set of transformations that smooth out the feature trajectories and yield a stabilized video. As for the blank areas produced by image warping, optical flow between the defined area of the current frame and the reference frame is used as a guide to erode them, and mosaicing based on the reference frame is used to obtain a full-frame video. The simulation experiments show that the blank area of the stabilized video with the proposed method is only about 33% of that with Matsushita's method, that the method is effective for dynamic complex scenes and multiple large moving objects, and that it obtains content-complete video. The proposed method not only improves the visual effect of the video, but also reduces motion inpainting.
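The core idea of trajectory smoothing can be illustrated with the simplest possible stand-in, a centered moving average over a 1-D trajectory (the paper instead solves an objective trading smoothness against quality degradation; the jittery values below are illustrative):

```python
def smooth_trajectory(track, radius=2):
    """Smooth a 1-D feature trajectory with a centered moving average."""
    n = len(track)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out

def total_variation(track):
    """Sum of absolute frame-to-frame jumps; lower means smoother."""
    return sum(abs(b - a) for a, b in zip(track, track[1:]))

jittery = [0.0, 1.2, 0.1, 1.1, 0.2, 1.3, 0.1]
smoothed = smooth_trajectory(jittery)
```

The per-frame transformations are then chosen so that the warped trajectories follow the smoothed ones.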
2015, 37(5): 1149-1153.
doi: 10.11999/JEIT141185
Abstract:
In traditional panoramic image mosaic algorithms, Harris corner extraction and Scale-Invariant Feature Transform (SIFT) feature matching are the most commonly used methods for resolving the overlapping parts of images. In a vehicle panoramic image mosaic algorithm, however, four fisheye-distorted images around the car must be spliced, and feature-extraction-based algorithms have high computational complexity and low efficiency and cannot satisfy the real-time requirements of vehicle equipment. To address this problem, a mosaic algorithm for panoramic images specially designed for the vehicle system is proposed and simulated in MATLAB. The result maximizes the efficiency of the algorithm and meets the real-time requirements of the vehicle system, so that real traffic information can be displayed to protect the safety of the driver.
2015, 37(5): 1154-1159.
doi: 10.11999/JEIT141083
Abstract:
To improve the accuracy and real-time performance of lower limb surface ElectroMyoGraphic (EMG) gait recognition, this paper presents a pattern recognition method that optimizes the Support Vector Machine (SVM) with the Particle Swarm Optimization (PSO) algorithm. Firstly, the integrated EMG and variance values are extracted as feature samples from the de-noised EMG signals. Then, the SVM penalty parameter and kernel function parameter are optimized by PSO. Finally, the constructed SVM classifiers are trained and tested using the EMG sample data of the gait movements. The experimental results show that, for five normal walking gaits of the lower extremity, the recognition rate of the PSO-SVM classifier is significantly higher than that of the non-parameter-optimized SVM classifier, with an average recognition rate of up to 97.8%, and the classification accuracy and adaptability are also improved.
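The two features named in the abstract, integrated EMG and variance, are straightforward to compute over sliding windows. A minimal sketch (window size and signal values are illustrative; the PSO-SVM training itself is omitted):

```python
def iemg(window):
    """Integrated EMG: sum of absolute amplitudes over a window."""
    return sum(abs(x) for x in window)

def variance(window):
    m = sum(window) / len(window)
    return sum((x - m) ** 2 for x in window) / len(window)

def extract_features(signal, win=4):
    """Slide a fixed-size window and emit (IEMG, variance) pairs."""
    return [(iemg(signal[i:i + win]), variance(signal[i:i + win]))
            for i in range(0, len(signal) - win + 1, win)]

emg = [0.1, -0.4, 0.3, -0.2, 0.9, -1.1, 0.8, -0.7]
features = extract_features(emg)
```

Each (IEMG, variance) pair would then form one training sample for the SVM classifier.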
2015, 37(5): 1160-1166.
doi: 10.11999/JEIT140997
Abstract:
To reduce the side effects of the background information included in the outer parts of tracking rectangles, a weighted block compressed sensing feature extraction method based on normalized gradient features is proposed. The compressed sensing measurement matrix is converted into a block diagonal matrix, and appropriate weights are assigned to the blocks according to their importance. This reduces the measurement matrix size, weakens background interference, and simplifies feature extraction. The extracted features are then fed into a Bayesian classifier with adaptive prior probabilities, proposed to make full use of existing tracking results. To some extent, the classifier with variable prior probabilities can predict the direction of the moving targets and reduce the ambiguity of target candidates. The classification function of each frame changes according to the results of the previous tracking step, improving the classification accuracy. In experiments against four state-of-the-art tracking algorithms on 8 commonly used tracking test sequences, the proposed target tracking algorithm shows higher accuracy and stability in terms of tracking results and success rate.
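The block-diagonal, weighted measurement can be sketched as projecting each block of the feature vector with its own random rows and scaling by a per-block weight. This is only an illustration under assumed parameters (±1 sensing rows, two measurements per block, centre-heavy weights), not the paper's actual measurement matrix:

```python
import random

random.seed(1)

def block_measure(feature, block_sizes, weights, m_per_block=2):
    """Project each block with its own random rows, scaled by a weight.
    Equivalent to applying one weighted block-diagonal measurement matrix."""
    out, start = [], 0
    for size, w in zip(block_sizes, weights):
        block = feature[start:start + size]
        for _ in range(m_per_block):
            row = [random.choice((-1, 1)) for _ in block]  # +/-1 sensing row
            out.append(w * sum(r * x for r, x in zip(row, block)))
        start += size
    return out

x = [float(i) for i in range(12)]
# centre block (likely target) weighted more than border blocks (background)
y = block_measure(x, block_sizes=[4, 4, 4], weights=[0.5, 1.0, 0.5])
```

Down-weighting the border blocks is what suppresses the background pixels that leak into the tracking rectangle.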
2015, 37(5): 1167-1172.
doi: 10.11999/JEIT141077
Abstract:
To address the lack of effective methods in traditional immune network algorithms for guiding memory cell determination, a dynamic recognition neighborhood based immune network classification algorithm is proposed. The algorithm uses a kernel function representation scheme to describe the antibody-antigen affinity, and constructs a dynamic recognition neighborhood using pairwise antigens to guide the antibody population evolution, in which the antibody nearest to the pairing antigen is determined to be the memory cell. The algorithm is applied to multi-class and high-dimensional classification problems to analyze its classification performance. Furthermore, the algorithm is used to classify many standard datasets to evaluate its overall performance. The results show that the proposed algorithm achieves better classification performance, which indicates that the dynamic recognition neighborhood based training method can guide memory cell generation effectively and improve the algorithm's performance significantly.
2015, 37(5): 1173-1179.
doi: 10.11999/JEIT140907
Abstract:
In order to reduce the computational complexity, an improved decoding algorithm based on a layer-wise decomposition transform is proposed for Reed-Solomon (RS) codes in this paper. Firstly, the received codewords are split into a number of sub-sequence codewords by layer-wise decomposition. Random or burst errors are dispersed across the different sub-sequences, narrowing the search areas for burst or random errors. Secondly, appropriate rules are developed to determine the number of errors. To help locate the error pattern of each sub-sequence, an adaptive iterative method is used to solve the key equation according to the dimension of the adjoint matrix. Finally, the correct codewords are obtained by subtracting the error estimate from the received sequence. The tests show that, on the premise of detecting all errors, the order of the polynomial is reduced and the computational complexity is lowered. The error-correction rate of the proposed algorithm is higher than that of the DFT (Discrete Fourier Transform) and BM (Berlekamp-Massey) algorithms. In tests on two-dimensional codes in particular, the error-correction efficiency is improved by one order of magnitude.
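The dispersal of a burst error across sub-sequences can be illustrated with simple modular interleaving (a sketch of the idea only; the paper's layer-wise decomposition details differ, and the symbol values below are made up):

```python
def split_layers(seq, depth):
    """Layer-wise split: symbol i goes to sub-sequence i % depth."""
    return [seq[k::depth] for k in range(depth)]

received = list(range(20))
received[6:9] = [-1, -1, -1]  # a burst of 3 corrupted symbols in a row

layers = split_layers(received, depth=3)
errors_per_layer = [sum(1 for s in layer if s == -1) for layer in layers]
```

A 3-symbol burst that would overwhelm a single short decoder lands as a single error in each of the 3 sub-sequences, each of which is easy to correct.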
2015, 37(5): 1180-1186.
doi: 10.11999/JEIT141073
Abstract:
The downlink transmission performance of the massive MIMO Time Division Duplex (TDD) system is bottlenecked by channel reciprocity errors, called antenna reciprocity errors, which are hard to calibrate completely in practical systems. In order to avoid degrading the downlink transmission performance, a linear robust precoding algorithm is proposed that maximizes each user's average Signal to Leakage and Noise Ratio (SLNR) by using the statistical characteristics of the antenna reciprocity errors. To further reduce the equivalent noise power at the users, the linear robust precoding algorithm is extended into a nonlinear robust precoding algorithm by vector perturbation. Lattice reduction aiding is also used to reduce the complexity of the perturbation vector search, making the nonlinear robust precoding algorithm feasible for massive MIMO. Simulation results show that the proposed linear and nonlinear robust precoding algorithms achieve better performance than the traditional Zero Forcing (ZF) and SLNR precoding algorithms when antenna reciprocity errors exist.
2015, 37(5): 1187-1193.
doi: 10.11999/JEIT140994
Abstract:
The security performance of a communication system degrades dramatically when the covariance-based Channel State Information (CSI) is imperfect at the transmitter. To overcome this problem, a robust Artificial Noise (AN) aided transmit method is proposed. The objective is to jointly design the transmit beamforming vector and the AN covariance under imperfect covariance-based CSI at the transmitter, such that the Worst-Case Secrecy Rate (WCSR) of the system is maximized. The secrecy rate maximization problem is non-convex; due to this intractability, it is recast into a series of SemiDefinite Programs (SDPs) using the SemiDefinite Relaxation (SDR) technique and Lagrange duality. Simulation results demonstrate that the proposed method provides substantial performance improvements over the existing method.
2015, 37(5): 1194-1199.
doi: 10.11999/JEIT140986
Abstract:
In the multi-cell MIMO (Multiple-Input Multiple-Output) cooperative communication system, the outage probability and network throughput of interference alignment applied by cooperative Base Stations (BSs) and users are investigated when the locations of the BSs follow a Poisson Point Process (PPP) distribution, and analytical expressions for these performance metrics are derived under the conditions of perfect and imperfect Channel State Information (CSI), respectively. The monotonic relationships between the system performance and the cooperation parameters are also analyzed. The simulation analyses reveal that, under perfect CSI, the network throughput improves with increasing BS density, number of cooperative BSs, and number of antennas. Under imperfect CSI, considering both the resource overhead of channel training and limited feedback and the channel distortion induced by quantized CSI, there exists an optimal number of BSs that maximizes the network throughput. When the number of antennas is small or the velocity of the mobile user is not high, more BSs should participate in the cooperation; when the number of antennas or the velocity of the mobile user is large, the number of cooperative BSs should be appropriately reduced.
2015, 37(5): 1200-1206.
doi: 10.11999/JEIT140933
Abstract:
FlexRay is becoming the in-vehicle communication network of the next generation. To resolve the problem of Frame IDentification (FID) assignment in FlexRay Static Segment Scheduling (FSSS), an Automatic Model Coefficient Matrix Generating (AMCMG) algorithm is proposed to obtain the coefficient matrix automatically based on the characteristics of the period distribution. A large-scale programming model of the FSSS can thus be generated automatically, the scheduling properties of all kinds of messages can be derived, and the minimum number of FIDs required by the system can be determined quickly. To assign the phase of each message and obtain the complete scheduling table, a Phase Reserving based FID Assignment (PRFIDA) algorithm is designed according to the compatibility of message scheduling across different periods, which preserves the optimality of the preceding programming. Finally, the simulation results demonstrate that the AMCMG algorithm builds the scheduling model rapidly and correctly, and the PRFIDA algorithm realizes the FID assignment optimally based on the known scheduling properties of the messages.
2015, 37(5): 1207-1213.
doi: 10.11999/JEIT140935
Abstract:
In order to compensate an outage cell autonomously, this paper derives a solution for Cell Outage Compensation (COC) in Self-Organizing Networks (SON) based on the joint adjustment of power and tilt. Firstly, the paper takes the power and tilt as the optimization variables. Then it defines the rational objectives and the evaluation index of the COC and analyzes the optimization model. Finally, a compensation mechanism based on the Genetic optimization Algorithm (GA) is proposed. The simulation results under the Time Division Long Term Evolution (TD-LTE) scenario show that the proposed solution is better than three other methods in terms of coverage, interference, and throughput.
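A minimal real-coded genetic algorithm of the kind used for such compensation can be sketched as follows. The fitness function here is a toy quadratic stand-in for the paper's coverage/interference objective, and all names (`target`, `coverage_fitness`, the GA parameters) are illustrative assumptions, not the paper's settings.

```python
import random

def ga_optimize(fitness, n_genes, bounds, pop_size=30, gens=60,
                mut_rate=0.2, seed=1):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation.  `bounds` is (lo, hi) for every gene; returns
    the best chromosome seen (elitist bookkeeping)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)   # tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            child = [g1 if rng.random() < 0.5 else g2
                     for g1, g2 in zip(p1, p2)]         # uniform crossover
            if rng.random() < mut_rate:                  # Gaussian mutation
                i = rng.randrange(n_genes)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.5)))
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

# Toy stand-in objective: the neighbours' (power, tilt) adjustments
# should approach some target compensation setting.
target = [2.0, -1.5, 1.0, 0.5]
def coverage_fitness(x):
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, target))

best = ga_optimize(coverage_fitness, n_genes=4, bounds=(-3.0, 3.0))
```

In a real COC setting the chromosome would encode the power and tilt deltas of the cells neighbouring the outage, and the fitness would come from a coverage/interference/throughput evaluation of the network model.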
2015, 37(5): 1214-1219.
doi: 10.11999/JEIT140615
Abstract:
Based on distributed networks with a Super-Peer Architecture (SPA), this paper proposes an Efficient Method for Skyline Recommendation in Distributed Networks (EMSRDN), which handles u skyline recommendation requests by prestoring w skyline snapshots. EMSRDN fully considers the storage and communication characteristics of SPA networks and uses the map/reduce distributed computation model. The algorithm can quickly produce the optimal w skyline snapshots through a phase that heuristically constructs the initial snapshot set. Detailed theoretical analysis and extensive experiments demonstrate that the proposed EMSRDN algorithm is both efficient and practical.
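The skyline operator at the core of such snapshots keeps only the non-dominated points of a data set. A minimal sketch (smaller-is-better in every dimension; the snapshot construction and map/reduce distribution are the paper's contribution and are not shown here):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (here: smaller is better)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Return the skyline: the subset of points dominated by no other
    point in the set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, among hotels described as (price, distance), a hotel belongs to the skyline exactly when no other hotel is both cheaper and closer.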
2015, 37(5): 1220-1226.
doi: 10.11999/JEIT140874
Abstract:
How to mitigate privacy attacks related to the ubiquitous presence of caching poses a challenge to content delivery in Content Centric Networking. Based on a trade-off between content distribution performance and user privacy, a collaborative caching strategy for privacy protection is proposed. First, privacy metrics are designed and the rationality of the proposed strategy is demonstrated by applying the concept of information entropy. Then, an anonymity domain is constructed to increase the uncertainty about which nearby consumer recently requested certain cached content. When making the caching decision, in order to eliminate cache redundancy and privacy leaks, hottest on-path caching and collaborative hash caching are proposed for the vertical request path and the horizontal anonymity domain, respectively. Simulation results show that the strategy decreases request latency, increases the cache hit ratio, and enhances the protection of user privacy while improving the efficiency of content distribution.
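The entropy-based privacy reasoning can be illustrated with a generic degree-of-anonymity measure (a standard formulation, not necessarily the paper's exact metric): the attacker's uncertainty is maximal when every consumer in the anonymity domain is equally likely to have requested the cached content.

```python
import math

def anonymity_entropy(probs):
    """Shannon entropy (bits) of the attacker's probability
    distribution over candidate requesters."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def degree_of_anonymity(probs):
    """Entropy normalised by its maximum log2(n) for an anonymity set
    of size n: 1.0 means the requester is perfectly hidden, 0.0 means
    the attacker identifies the requester with certainty."""
    n = len(probs)
    return anonymity_entropy(probs) / math.log2(n) if n > 1 else 0.0
```

Enlarging the anonymity domain, or making request probabilities more uniform across its consumers, raises this entropy and thus the privacy guarantee.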
2015, 37(5): 1227-1233.
doi: 10.11999/JEIT140884
Abstract:
Multicast is a widely applied communication technology, and multicast source authentication is an important problem in multicast security; it is especially challenging over a noisy channel. To solve this problem, a chained multicast source authentication technique based on threshold cryptography is proposed. Firstly, the security assumptions and security model of chained multicast source authentication are given, based on the security requirements of multicast source authentication and the Dolev-Yao model. Then, a new multicast source authentication protocol adapted to the noisy channel is designed using threshold secret sharing. Finally, the security of the proposed protocol is analyzed. Simulation results show that the protocol resists packet loss well while maintaining good communication performance.
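The threshold secret sharing building block works as follows. This is a standard Shamir (t, n) sketch over a prime field, independent of the paper's protocol details: authentication data split into n shares survives the loss of up to n - t packets, which is what gives such protocols their loss resistance.

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n, seed=None):
    """Split `secret` into n shares, any t of which reconstruct it:
    evaluate a random degree-(t-1) polynomial with constant term
    `secret` at x = 1..n (all arithmetic mod P)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):    # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than t shares reveal nothing about the secret, so dropped packets degrade neither security nor, up to the threshold, verifiability.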
2015, 37(5): 1234-1240.
doi: 10.11999/JEIT140851
Abstract:
To deal with the problem of signal estimation for Wireless Sensor Networks (WSN) in an untrustworthy environment where malicious nodes tamper with the measured data, two reputation-based algorithms, namely the Reputation-based diffusion Least Mean Square (R-dLMS) algorithm and the Reputation-based diffusion Normalized Least Mean Square (R-dNLMS) algorithm, are proposed. The proposed algorithms assign an appropriate reputation value to each node according to its contribution to the whole network, and minimize the reputation values of malicious nodes to reduce their impact on the network. Simulation results show that the proposed algorithms greatly improve performance compared with their reputation-free counterpart, and that the R-dNLMS algorithm further improves on the R-dLMS algorithm.
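The adapt-then-combine idea with reputation weighting can be sketched on a toy scalar problem. The reputation rule below (inverse deviation from the current consensus) is an illustrative stand-in for the paper's contribution-based rule, and the network is fully connected for simplicity.

```python
import random

def reputation_dlms(n_nodes, w0=5.0, mu=0.05, steps=400,
                    malicious=(2,), seed=3):
    """Toy scalar diffusion-LMS sketch: each node adapts on its own
    measurement (biased for malicious nodes), then the estimates are
    combined with reputation weights that penalise nodes whose
    intermediate estimate strays from the consensus."""
    rng = random.Random(seed)
    w = [0.0] * n_nodes
    rep = [1.0] * n_nodes
    for _ in range(steps):
        # adapt: one LMS step on each node's local measurement
        psi = []
        for k in range(n_nodes):
            bias = 10.0 if k in malicious else 0.0
            d = w0 + bias + rng.gauss(0, 0.1)
            psi.append(w[k] + mu * (d - w[k]))
        # reputation: inverse distance to the plain average
        mean = sum(psi) / n_nodes
        rep = [1.0 / (1e-3 + abs(p - mean)) for p in psi]
        # combine: reputation-weighted consensus shared by all nodes
        combined = sum(r * p for r, p in zip(rep, psi)) / sum(rep)
        w = [combined] * n_nodes
    return w[0], rep

estimate, reputations = reputation_dlms(5)
```

With one malicious node injecting a +10 bias, the reputation weighting pulls the network estimate close to the true value 5.0, whereas an unweighted average of the measurements would settle near 7.0.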
2015, 37(5): 1241-1247.
doi: 10.11999/JEIT140902
Abstract:
In RFID (Radio Frequency IDentification) applications, security threats are on the rise, and the demand for tag privacy protection is becoming increasingly urgent. In an RFID system, privacy must be protected not only against outside attackers but also against the readers themselves. In many representative existing studies, the tags are anonymous and untraceable only with respect to outside attackers. In this paper, a list-based protocol and a key-updating protocol are proposed, which allow RFID tags to authenticate to readers without revealing the tag identity or any other information that would allow tags to be traced. Compared with the scheme proposed by Armknecht et al., the two schemes achieve anonymous and untraceable authentication of RFID tags without the help of an anonymizer. Furthermore, the key-updating scheme ensures that revoked tags can no longer be tracked.
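One generic way to obtain key-updated untraceability is a one-way key evolution: after every authentication the tag hashes its key forward, so responses made under old keys cannot be linked to the current key. The sketch below illustrates that pattern only; it is not the paper's protocol, and the `Tag`/`Reader` classes and the linear key search are illustrative assumptions (real schemes add resynchronisation handling).

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

class Tag:
    """Toy key-updating tag: answers a reader challenge with a keyed
    hash, then irreversibly evolves its key along a hash chain."""
    def __init__(self, key):
        self.key = key
    def respond(self, challenge):
        resp = h(self.key + challenge)
        self.key = h(b"update" + self.key)   # one-way key evolution
        return resp

class Reader:
    """Reader holding the synchronised key of each enrolled tag; it
    identifies a tag by trying every stored key and then evolving the
    matching key to stay in sync."""
    def __init__(self):
        self.keys = {}                        # tag_id -> current key
    def enroll(self, tag_id, key):
        self.keys[tag_id] = key
    def identify(self, challenge, response):
        for tag_id, key in self.keys.items():
            if h(key + challenge) == response:
                self.keys[tag_id] = h(b"update" + key)
                return tag_id
        return None
```

Revocation then amounts to deleting the tag's key at the reader: past transcripts reveal nothing about the evolved keys, so a revoked tag cannot be tracked forward.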
2015, 37(5): 1248-1254.
doi: 10.11999/JEIT141017
Abstract:
To meet the requirement of watermarking in the encrypted domain, a novel scheme for robust and separable watermarking in encrypted images is proposed based on Compressive Sensing (CS). Firstly, the content owner divides the original image into non-overlapping blocks, and an edge-detection method is used to classify the blocks as significant or insignificant. The former are encrypted with a traditional method; the latter are encrypted with CS, which leaves space for embedding data. Then, the binary watermark is permuted with the data-hiding key and embedded into the encrypted image. Image-content recovery and watermark extraction are separable, and the attributes of each block can be recovered from the pixel distribution of the watermarked image, which avoids transmitting the attribute information. Furthermore, the watermark is embedded four times in the encrypted image, which provides robustness. Experimental results show that the proposed scheme is robust and secure against moderate attacks.
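The permute-and-embed-redundantly idea can be sketched on a flat carrier array. This is illustrative only: the CS encryption and block classification are omitted, LSB replacement stands in for the paper's embedding, and the four redundant copies are recovered by majority vote.

```python
import random

def embed_watermark(carrier, bits, key, copies=4):
    """Permute the watermark with `key`, then embed `copies` redundant
    copies into the LSBs of consecutive carrier samples."""
    rng = random.Random(key)
    order = list(range(len(bits)))
    rng.shuffle(order)
    permuted = [bits[i] for i in order]
    out = list(carrier)
    for c in range(copies):
        for j, b in enumerate(permuted):
            pos = c * len(bits) + j
            out[pos] = (out[pos] & ~1) | b
    return out

def extract_watermark(marked, n_bits, key, copies=4):
    """Majority-vote across the redundant copies, then invert the
    key-driven permutation."""
    votes = [0] * n_bits
    for c in range(copies):
        for j in range(n_bits):
            votes[j] += marked[c * n_bits + j] & 1
    permuted = [1 if 2 * v > copies else 0 for v in votes]
    rng = random.Random(key)
    order = list(range(n_bits))
    rng.shuffle(order)
    bits = [0] * n_bits
    for j, i in enumerate(order):
        bits[i] = permuted[j]
    return bits
```

Because each bit is stored four times, a moderate number of flipped samples is outvoted at extraction time, which is the intuition behind the scheme's robustness claim.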
2015, 37(5): 1255-1259.
doi: 10.11999/JEIT140930
Abstract:
A tapered finline array plays an important role in waveguide-based spatial power combining. This paper presents a simplified model of the tapered finline array by treating it as a tapered TE-mode wave-impedance transformer. The spectral-domain admittance method is applied to derive the propagation constant. Based on the theory of small reflections, a new compact and broadband Hecken finline taper array is proposed. The structure is simulated and optimized with HFSS, and by introducing a slotline-to-microstrip transition, the finline array is fabricated for X-band. Back-to-back measurements of the optimized 22 Hecken finline taper arrays agree with the theoretical values, showing a reflection coefficient below -12 dB and an insertion loss below 1 dB across X-band (8~12 GHz). The analysis and design method presented here provides a guideline for designing waveguide-based power combiners, and the structure is a promising circuit for power combining.
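The theory of small reflections used in such taper designs sums the step reflections of a discretized impedance profile with their round-trip phase delays. The sketch below uses an exponential profile as a simple stand-in for the Hecken profile (whose exact shape involves Bessel-function weighting and is not reproduced here).

```python
import cmath
import math

def taper_reflection(z_profile, beta, dz):
    """Input reflection of a stepped impedance taper by the theory of
    small reflections: Gamma ~ sum_i Gamma_i * exp(-2j*beta*z_i)."""
    gamma = 0j
    for i in range(len(z_profile) - 1):
        step = ((z_profile[i + 1] - z_profile[i])
                / (z_profile[i + 1] + z_profile[i]))
        gamma += step * cmath.exp(-2j * beta * i * dz)
    return gamma

def exponential_taper(z1, z2, n):
    """Exponential impedance profile from z1 to z2 over n points."""
    return [z1 * math.exp(math.log(z2 / z1) * k / (n - 1))
            for k in range(n)]
```

At low frequency the step reflections add up to roughly (1/2)ln(Z2/Z1); once the taper is at least half a wavelength long the phased contributions largely cancel, which is the broadband matching behaviour the taper exploits.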
2015, 37(5): 1266-1270.
doi: 10.11999/JEIT141121
Abstract:
Aiming at the essential problems in the design of adaptive spatial steganography, this paper proposes an adaptive spatial steganographic algorithm that requires no synchronized side information, combining the Canny edge-detection algorithm with the Syndrome-Trellis Code (STC). Firstly, the parameters of the Canny algorithm are derived from factors including the length of the secret message and the cover image itself; the Canny algorithm is then used to select the edge region of the cover image. Moreover, the embedding distortions of edge and non-edge pixels are defined separately. Finally, the STC is used to embed the secret message into multiple Least Significant Bit (LSB) planes of the selected pixels. Experimental results illustrate that, at four embedding rates, the proposed method resists common universal steganalysis better than three existing methods, and is comparable to Spatial-UNIversal WAvelet Relative Distortion (S-UNIWARD) at small embedding rates.
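The edge-adaptive embedding idea can be sketched as follows. This is illustrative only: a one-pixel horizontal gradient computed on the 7 most significant bit-planes stands in for Canny (so that LSB embedding cannot change the edge mask, which is the key to needing no side information), and plain LSB replacement stands in for STC.

```python
def edge_mask(img, w, h, thresh=32):
    """Mark pixels whose horizontal gradient, computed on the 7 MSBs
    (LSBs ignored), exceeds `thresh`.  `img` is a flat grayscale list."""
    msb = [p & ~1 for p in img]
    mask = []
    for y in range(h):
        for x in range(w):
            g = abs(msb[y * w + x] - msb[y * w + x - 1]) if x > 0 else 0
            mask.append(g >= thresh)
    return mask

def embed(img, w, h, bits, thresh=32):
    """Write the message bits into the LSBs of the edge pixels, in
    scan order."""
    slots = [i for i, m in enumerate(edge_mask(img, w, h, thresh)) if m]
    assert len(slots) >= len(bits), "not enough edge pixels"
    out = list(img)
    for b, i in zip(bits, slots):
        out[i] = (out[i] & ~1) | b
    return out

def extract(img, w, h, n_bits, thresh=32):
    """Recompute the same edge mask on the stego image and read the
    LSBs back in the same order."""
    slots = [i for i, m in enumerate(edge_mask(img, w, h, thresh)) if m]
    return [img[i] & 1 for i in slots[:n_bits]]
```

Because the mask depends only on the untouched MSBs, sender and receiver derive identical pixel selections from the image alone, mirroring the paper's goal of adaptivity without synchronized side information.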