2014 Vol. 36, No. 6
2014, 36(6): 1271-1277.
doi: 10.3724/SP.J.1146.2013.01246
Abstract:
To construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaos with Markov properties is studied and used to build the encryption algorithm. First, the key stream generator is designed using Markov chaos with changeable parameters and an improved spatiotemporal chaos. Then, a true uniform random number generator is used to disturb the original key of the algorithm, which dynamically changes the mixing matrix and the key stream. Finally, the diffusion function is built by two iterations of a round function composed of different kinds of additions over different groups, increasing the complexity of cryptanalysis. Experiments indicate that the key stream possesses good statistical properties and that the characteristics of the original image are broken, making the cipher image indistinguishable. Further analysis indicates that the proposed algorithm can resist known attacks such as differential attacks and is more efficient than existing algorithms based on hyperchaos. Additionally, the proposed algorithm is easy to realize and satisfies security and efficiency requirements, which indicates promising applications.
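As a toy illustration of the chaotic key-stream idea, the sketch below generates a key stream from a plain logistic map (a generic chaotic map standing in for the paper's Markov chaos and spatiotemporal lattice) and XORs it with the pixels; all parameter values are made up:

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from a logistic map (a generic chaotic
    map used here as a stand-in for the paper's Markov chaos)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantize the state to one byte
    return out

# Encrypt/decrypt a toy 4x4 "image" by XOR with the keystream.
image = np.arange(16, dtype=np.uint8).reshape(4, 4)
ks = logistic_keystream(x0=0.3141, r=3.9999, n=image.size)
cipher = image ^ ks.reshape(image.shape)
plain = cipher ^ ks.reshape(image.shape)   # XOR is its own inverse
```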
2014, 36(6): 1278-1284.
doi: 10.3724/SP.J.1146.2013.01528
Abstract:
To ensure the quality of the watermarked image and improve the embedding capacity, an adaptive reversible image watermarking method based on integer transform is proposed, which defines a new generalized integer transform algorithm. With this method, image blocks of arbitrary size are transformed, producing redundancy that can be used for watermark embedding. In addition, the parameter m used in the integer transform is adaptively selected according to the variance of each image block, allowing more data bits to be embedded into smooth blocks while avoiding the large distortion generated by complex ones; thus the algorithm ensures both a higher embedding capacity and better watermarked-image quality. Experimental results show that, compared with similar algorithms, the proposed method has a larger maximal embedding capacity; taking Lena as the host image, the real payload reaches up to 2.36 bpp. The proposed integer transform algorithm is simple; through adaptive integer transformation and data embedding, the quality of the watermarked image is assured and the method offers a large real payload.
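The generalized integer transform itself is not reproduced in the abstract; the classic difference-expansion transform below illustrates how an invertible integer transform frees redundancy for reversible embedding (the pixel values and helper names are illustrative, and overflow handling is omitted):

```python
def de_embed(a, b, bit):
    """Embed one bit into a pixel pair via classic difference expansion
    (illustrative; the paper's generalized integer transform differs)."""
    avg = (a + b) // 2          # integer average, preserved by the transform
    d = a - b
    d2 = 2 * d + bit            # expand the difference and append the bit
    a2 = avg + (d2 + 1) // 2
    b2 = avg - d2 // 2
    return a2, b2

def de_extract(a2, b2):
    """Recover the embedded bit and the original pixel pair."""
    avg = (a2 + b2) // 2
    d2 = a2 - b2
    bit = d2 & 1
    d = d2 // 2                 # floor division, valid for negatives too
    a = avg + (d + 1) // 2
    b = avg - d // 2
    return a, b, bit

a2, b2 = de_embed(100, 97, 1)
a, b, bit = de_extract(a2, b2)
```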
2014, 36(6): 1285-1291.
doi: 10.3724/SP.J.1146.2013.01082
Abstract:
To extract image details accurately and improve the effect of image enhancement, an adaptive image enhancement algorithm with variable weighted matching based on morphology is proposed. With this method, extended omni-directional multi-scale structuring elements are constructed and used to decompose the image into details of different scales and directions through the top-hat transformation. The proposed algorithm breaks with the traditional morphological practice of averaging the detail weights over all directions, and instead adjusts the weight of each detail direction based on a dynamic-characteristic analysis of the local gray level. In the enhancement process, an adaptive gain function matched to the structural features of the extracted details is constructed to realize adaptive image enhancement. Experimental results show that, by exploiting the autocorrelation of the image, the algorithm highlights image details more effectively than the traditional morphological enhancement method, and can suppress noise to some extent.
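A numpy-only sketch of white top-hat detail extraction with a single flat structuring element (one scale and no directionality; the paper uses omni-directional multi-scale elements with adaptively weighted details):

```python
import numpy as np

def _windows(img, k):
    """Stack of k*k shifted views of img (edge-padded), one per SE offset."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    return np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(k) for j in range(k)])

def dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element."""
    return _windows(img, k).max(axis=0)

def erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element."""
    return _windows(img, k).min(axis=0)

def white_tophat(img, k=3):
    """Bright details smaller than the structuring element: img - opening."""
    opening = dilate(erode(img, k), k)
    return img - opening

# A flat background with one bright single-pixel detail:
img = np.zeros((7, 7))
img[3, 3] = 5.0
detail = white_tophat(img)
enhanced = img + 0.5 * detail     # fixed gain standing in for the adaptive one
```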
2014, 36(6): 1292-1298.
doi: 10.3724/SP.J.1146.2013.01220
Abstract:
To achieve a smooth and continuous panorama stitching effect, an image fusion algorithm based on the contrast pyramid, combining color space conversion and the Contourlet transform, is proposed, taking the characteristics of panorama stitching into account. First, the luminance information of the images is obtained via the HSI transform. Then a contrast-pyramid-based Contourlet transform is used to decompose the luminance information into sub-band information. Finally, the images are reconstructed by fusing the different sub-bands. Experimental results show that the proposed algorithm, which leverages the contour features of the Contourlet transform and the detail information of the images, achieves good panorama stitching fusion.
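A minimal pyramid-fusion sketch, assuming power-of-two image sizes and substituting a simple Laplacian-style difference pyramid for the contrast pyramid and Contourlet stages described above (the fusion rule, keep-the-stronger-detail, is also a simplification):

```python
import numpy as np

def down(img):
    """2x downsample by 2x2 average pooling (sizes assumed powers of two)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """2x nearest-neighbour upsample."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid(img, levels=2):
    pyr, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))   # detail (Laplacian-like) level
        cur = small
    pyr.append(cur)                   # coarse residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = detail + up(cur)
    return cur

def fuse(a, b, levels=2):
    pa, pb = pyramid(a, levels), pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)   # keep stronger detail
             for x, y in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))             # average coarse level
    return reconstruct(fused)

a = np.arange(64, dtype=float).reshape(8, 8)
```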
2014, 36(6): 1299-1306.
doi: 10.3724/SP.J.1146.2013.00942
Abstract:
The main idea of Matching Pursuit (MP) is to obtain a locally optimal solution by iteration, so as to gradually approximate the original signal. To cope with the intersection of different atom sets, which may degrade the classification performance of conventional MP methods, a new matching pursuit algorithm suitable for supervised classification is proposed. The criterion for atom selection consists of two parts: on one hand, using the same atom set within a class captures the intra-class structure of similar signals for class representation; on the other hand, selecting the atom sets independently for each class further strengthens the discrimination between classes. Analysis on a toy example indicates that this scheme reduces the factors common to different classes and highlights the discrimination between signals, which may boost classification performance. Finally, experiments on benchmark image databases and measured radar emitter signals verify that the proposed algorithm achieves better robustness against noise and occlusion than conventional MP-related methods.
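The greedy MP iteration described above can be sketched as follows (a generic single-dictionary MP; the paper's per-class atom-set selection is omitted):

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy MP: at each step pick the dictionary atom (unit-norm column
    of D) most correlated with the residual, then deflate the residual."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))     # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]   # remove its contribution
    return coeffs, residual

# With an orthonormal dictionary (identity), MP recovers the signal exactly.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -1.0])
coeffs, residual = matching_pursuit(x, D, n_iter=4)
```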
2014, 36(6): 1307-1311.
doi: 10.3724/SP.J.1146.2013.01099
Abstract:
To solve the model-fitting problem when samples carry different confidence levels, a Neural-Network (NN) based twice-learning method is proposed. It is pointed out that the real model is a variation of the experimental model; the neural network approximating the mathematical expectation of the real model is regarded as the best network fusing the information of the prior samples and the real samples. In the first learning stage, the neural network is trained on the prior samples only, and the error capacity intervals of the soft points, determined by the information of the hard points, are calculated. Then both prior samples and real samples are included in the training set, and the input-objective errors during NN training are modified using the soft-point error capacity intervals and the hard-point error-sensitivity coefficients. The expected network is generated by the second learning stage, with accurate fitting to the real samples and effective utilization of the prior samples. Compared with Knowledge-Based Neural Networks (KBNN), this method is simpler, easier to manipulate, and has clear logical significance.
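A heavily simplified stand-in for the two-stage idea, using weighted least squares instead of a neural network and fixed weights instead of error capacity intervals (all sample values below are made up):

```python
import numpy as np

# Prior samples come from an approximate model; a few real samples are exact.
x_prior = np.linspace(0.0, 1.0, 20)
y_prior = 2.0 * x_prior + 0.1          # biased prior model
x_real = np.array([0.2, 0.5, 0.8])
y_real = 2.0 * x_real                  # scarce but exact real measurements

# Stage 1: learn from the prior samples only.
c1 = np.polyfit(x_prior, y_prior, 1)

# Stage 2: retrain on both sets, weighting the real samples more heavily
# (a crude stand-in for the paper's error-capacity-interval correction).
x_all = np.concatenate([x_prior, x_real])
y_all = np.concatenate([y_prior, y_real])
w = np.concatenate([np.ones_like(x_prior), 10.0 * np.ones_like(x_real)])
c2 = np.polyfit(x_all, y_all, 1, w=w)

# Error against the true model (y = 2x) at a real sample point:
err1 = abs(np.polyval(c1, 0.5) - 1.0)
err2 = abs(np.polyval(c2, 0.5) - 1.0)
```

The second fit tracks the real samples far more closely while still using the prior data.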
2014, 36(6): 1312-1320.
doi: 10.3724/SP.J.1146.2013.00860
Abstract:
Latent tree-structured graphical models explore the latent relationships among variables by introducing hidden nodes, and can therefore better model the correlations among variables. In the learning of tree-structured graphical models, the quantity of useful features extracted from the observation data reflects the model's capability to capture deep relationships among variables. However, existing algorithms learn the hidden tree only from statistics computed directly from the observation data and ignore the differing features within the data. To address this insufficiency, a new algorithm is proposed for learning latent tree-structured graphical models based on fuzzy multi-feature recursive grouping. First, the original observation data are transformed into multiple features by fuzzy membership functions, forming multi-dimensional fuzzy feature vectors. Then, the distances between the fuzzy feature vectors are computed and synthesized into a fuzzy multi-feature distance matrix over all variables. Finally, based on this distance matrix, the latent tree graphical model is constructed by the recursive-grouping algorithm. The proposed algorithm is applied to stock return data modeling and temperature data modeling, demonstrating its effectiveness.
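A sketch of the fuzzy multi-feature distance matrix step, assuming Gaussian membership functions and a Euclidean distance between feature matrices (both illustrative choices; the recursive-grouping stage is not reproduced):

```python
import numpy as np

def fuzzy_features(x, centers, width=1.0):
    """Map each sample of one variable to Gaussian membership degrees."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fuzzy_distance_matrix(data, centers):
    """Pairwise distances between variables in fuzzy-feature space.
    data: (n_vars, n_samples)."""
    feats = [fuzzy_features(row, centers) for row in data]
    n = len(feats)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.linalg.norm(feats[i] - feats[j])
    return D

data = np.array([[0.0, 1.0, 2.0],
                 [0.0, 1.0, 2.0],     # identical to variable 0
                 [5.0, 6.0, 7.0]])    # far from the first two
centers = np.array([0.0, 2.5, 5.0])   # membership-function centers
D = fuzzy_distance_matrix(data, centers)
```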
2014, 36(6): 1321-1326.
doi: 10.3724/SP.J.1146.2013.01244
Abstract:
There are few studies, either domestic or international, on vehicle recognition methods based on grille region characteristics, and their classification efficiency and accuracy are low. Based on characteristic parameters of structure, shape, and texture, a vehicle grille recognition method using an improved C-Support Vector Classification (C-SVC) with an optimal parameter searching algorithm is proposed, in which efficiency and precision are controlled from two directions: on one hand, based on the Mahalanobis distance and the 3σ principle, combined with weighted judgment, the sample data are sorted, accelerating the training and testing of the Support Vector Machine (SVM) and improving generalization efficiency; on the other hand, when setting the kernel function parameters, an optimal-parameter iterative searching algorithm based on prior knowledge is designed to improve classification accuracy. Experiments show that the accuracy of the grille recognition method reaches 97.53%, with higher accuracy and a lower false detection rate. The method is also shown to optimize classification efficiency and meet real-time recognition requirements.
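The Mahalanobis-distance-based sample sorting mentioned above can be sketched as follows (the weighting and screening steps, and the SVM itself, are omitted; the sample values are made up):

```python
import numpy as np

def mahalanobis_sort(X, mean, cov):
    """Sort samples by Mahalanobis distance to the class mean; samples far
    from the mean (potential outliers or boundary cases) come last."""
    inv = np.linalg.inv(cov)
    diff = X - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)   # squared distances
    order = np.argsort(d2)
    return X[order], np.sqrt(d2[order])

X = np.array([[0.1, 0.0], [3.0, 3.0], [0.5, 0.4]])
mean = np.zeros(2)
cov = np.eye(2)      # with identity covariance this reduces to Euclidean
Xs, d = mahalanobis_sort(X, mean, cov)
```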
2014, 36(6): 1327-1333.
doi: 10.3724/SP.J.1146.2013.01218
Abstract:
A new method, weighted Local Binary Pattern (LBP) with adaptive threshold, is proposed to address two shortcomings of LBP and Center-Symmetric Local Binary Pattern (CS-LBP): the inflexible threshold and the failure to discriminate among sub-patches with different textures. First, the image is divided into several sub-images, and LBP or CS-LBP texture histograms are extracted from each sub-image using the adaptive threshold. Then the algorithm adaptively weights the LBP or CS-LBP histograms of the sub-patches, using information entropy as the basis, and concatenates all histograms to form the final texture descriptor. Finally, the efficiency of the algorithm is improved by speeding up the computation of the image average. Experimental results on face databases show that higher recognition accuracy is obtained when the proposed method is used with nearest-neighbor classification.
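A minimal LBP sketch in which a fixed threshold stands in for the adaptive one (entropy weighting and sub-image concatenation are omitted):

```python
import numpy as np

def lbp_codes(img, threshold=0.0):
    """8-neighbour LBP codes for interior pixels; `threshold` plays the
    role of the paper's adaptive threshold (fixed here for brevity)."""
    H, W = img.shape
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes |= ((nb - c) >= threshold).astype(np.int32) << bit
    return codes

flat = np.ones((3, 3))
peak = np.zeros((3, 3))
peak[1, 1] = 1.0
c_flat = lbp_codes(flat)   # every neighbour >= centre -> all bits set
c_peak = lbp_codes(peak)   # centre brighter than all neighbours -> code 0
```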
2014, 36(6): 1334-1339.
doi: 10.3724/SP.J.1146.2013.01242
Abstract:
Most spatial spectrum estimation methods fail when only one valid snapshot is available. To deal with this issue, a pseudo covariance matrix construction model utilizing the array's received signal is proposed. Theoretical analysis shows that the presented model, which subsumes existing methods as special cases, is more flexible and general. Combining it with the beamforming idea, a method based on weighted summation is proposed that accounts for both the signal-to-noise ratio and the array's degrees of freedom, so that estimation performance is enhanced to some degree. Theoretical analysis and simulation results verify the correctness and effectiveness of the proposed model and method.
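One concrete instance of building a pseudo covariance matrix from a single snapshot is subarray (spatial-smoothing) averaging, sketched below; the paper's general model and its SNR/degree-of-freedom weighted summation are not reproduced:

```python
import numpy as np

def pseudo_covariance(x, m):
    """Build a pseudo covariance from ONE array snapshot x (length N) by
    averaging outer products of its length-m subarrays (spatial smoothing),
    one concrete instance of the construction model discussed here."""
    N = len(x)
    L = N - m + 1
    R = np.zeros((m, m), dtype=complex)
    for i in range(L):
        sub = x[i:i + m][:, None]
        R += sub @ sub.conj().T
    return R / L

# Single snapshot of one unit-amplitude plane wave on an 8-element ULA
# with half-wavelength spacing, arriving from 20 degrees.
N, m = 8, 4
theta = np.deg2rad(20.0)
x = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))
R = pseudo_covariance(x, m)
```

For a single noiseless plane wave the result is Hermitian and rank one, as a covariance-based estimator expects.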
2014, 36(6): 1340-1347.
doi: 10.3724/SP.J.1146.2013.00798
Abstract:
De-noising is an important application of wavelet analysis, which has an advantage over traditional filtering methods owing to its well-localized time-frequency property. A central issue in signal de-noising research is how to strike a good balance between shrinking the noise and preserving the singularity features of the signal. This paper presents a wavelet de-noising method based on an adaptive threshold function. By tuning the parameter of the threshold function, the noisy wavelet coefficients are shrunk while the signal details are preserved as much as possible on the small scales of the wavelet transform, while on the large scales the noise coefficients are removed to the maximum extent. Simulation results on the blocks and bumps signals and on sonar returns from underwater targets demonstrate that the proposed method preserves signal singularity features significantly better than traditional threshold filtering.
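An illustrative tunable threshold function: here the parameter alpha interpolates between hard (alpha = 0) and soft (alpha = 1) thresholding. The paper's adaptive function differs in form, but is tuned across scales in the same spirit:

```python
import numpy as np

def tunable_threshold(w, t, alpha):
    """Shrink wavelet coefficients w with threshold t. alpha in [0, 1]
    interpolates between hard (alpha=0) and soft (alpha=1) thresholding."""
    return np.where(np.abs(w) <= t, 0.0,
                    np.sign(w) * (np.abs(w) - alpha * t))

w = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])   # toy wavelet coefficients
hard = tunable_threshold(w, t=0.8, alpha=0.0)
soft = tunable_threshold(w, t=0.8, alpha=1.0)
```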
2014, 36(6): 1348-1354.
doi: 10.3724/SP.J.1146.2013.01038
Abstract:
The update vector of the Least Mean Square (LMS) algorithm is an estimate of the gradient vector, so its convergence rate is limited by the method of steepest descent. Starting from a discussion of basic LMS, a direction-optimization method for the LMS algorithm is proposed to remove this speed constraint. In the proposed method, the update vector closest to the Newton direction is chosen based on an analysis of the error signal. On this basis, a Direction-Optimized LMS (DOLMS) algorithm is proposed and extended to a variable step-size DOLMS algorithm. Theoretical analysis and simulation results show that the proposed method converges faster and has lower computational complexity than the traditional block LMS algorithm.
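For reference, the baseline LMS update whose gradient direction the paper optimizes, in a system-identification toy example (filter and signal values are made up):

```python
import numpy as np

def lms(x, d, taps=4, mu=0.05):
    """Standard LMS: the weight update follows the instantaneous gradient
    estimate e[n] * x[n], i.e. the steepest-descent direction."""
    w = np.zeros(taps)
    err = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]   # most recent samples first
        y = w @ xn
        err[n] = d[n] - y
        w += mu * err[n] * xn              # gradient-direction update
    return w, err

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown system to identify
d = np.convolve(x, h)[:len(x)]             # noise-free desired signal
w, err = lms(x, d)
```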
2014, 36(6): 1355-1361.
doi: 10.3724/SP.J.1146.2013.00629
Abstract:
The regularization parameter of a sparse representation model is determined by the unknown noise level and sparsity, and it directly affects the performance of sparse reconstruction. However, optimization algorithms for sparse representation in which the parameter is set by expert reasoning, prior knowledge, or experiment cannot set the parameter adaptively. To solve this issue, a sparse Bayesian learning algorithm is proposed that sets the parameter adaptively without prior knowledge. First, the parameters in the model are given probabilistic descriptions. Second, within the Bayesian learning framework, the joint problem of parameter setting and sparse recovery is transformed into a convex optimization problem: the sum of a series of mixed L1 norms and a weighted L2 norm. Finally, parameter setting and sparse recovery are achieved by iterative optimization. Theoretical analysis and simulations show that the proposed algorithm is competitive with, and even better than, other iterative reweighted algorithms without automatic parameter adjustment when the ideal parameter is known, and that its reconstruction performance is significantly better when non-ideal parameters are chosen for the other algorithms.
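A sketch of the sparse Bayesian learning idea: the per-coefficient hyperparameters alpha are re-estimated from the posterior rather than hand-tuned, so no regularization parameter needs to be chosen. The fixed noise precision beta and the update form (classic RVM-style fixed point) are assumptions of this sketch, not the paper's exact algorithm:

```python
import numpy as np

def sbl(Phi, t, n_iter=100, beta=1e4):
    """Sparse Bayesian learning with a Gaussian prior of precision alpha_i
    on each coefficient; alpha is re-estimated each iteration."""
    N, M = Phi.shape
    alpha = np.ones(M)
    PtP, Pt = Phi.T @ Phi, Phi.T @ t
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * PtP)  # posterior cov
        mu = beta * Sigma @ Pt                              # posterior mean
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)  # re-estimate
    return mu

rng = np.random.default_rng(2)
Phi = rng.standard_normal((20, 40))
w_true = np.zeros(40)
w_true[[3, 17]] = [1.0, -2.0]          # 2-sparse ground truth
t = Phi @ w_true
w_hat = sbl(Phi, t)
```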
2014, 36(6): 1362-1367.
doi: 10.3724/SP.J.1146.2013.01164
Abstract:
A weighted estimation method based on the variance of the Time Delay Difference (TDD) is proposed for the TDD estimation of an unknown source. Exploiting the fact that the TDDs of the frequency units occupied by the target's radiated signal are stable while those of noise-only frequency units are random, the method weights the TDD estimate of each frequency unit accordingly, enhancing the contribution of the signal frequency units and achieving TDD estimation of the unknown source. Simulation results show that, compared with the conventional cross-correlation method, the estimation performance is improved by about 3 dB. Theoretical analysis and simulation results both show that the method is more robust than the conventional cross-correlation method.
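The conventional cross-correlation baseline referred to above can be sketched for an integer delay as follows (the paper's per-frequency-unit variance weighting is not reproduced):

```python
import numpy as np

def tdd_xcorr(a, b):
    """Integer delay estimate of b relative to a via FFT cross-correlation
    (zero-padded to 2n to avoid circular wrap-around)."""
    n = len(a)
    A = np.fft.fft(a, 2 * n)
    B = np.fft.fft(b, 2 * n)
    corr = np.fft.ifft(A * np.conj(B)).real   # corr[k] = sum a[m+k] b[m]
    lags = np.concatenate([np.arange(n), np.arange(-n, 0)])
    return -lags[np.argmax(corr)]             # delay of b relative to a

a = np.zeros(64)
a[20:24] = 1.0            # a compact pulse
b = np.roll(a, 5)         # the same pulse delayed by 5 samples
```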
2014, 36(6): 1368-1373.
doi: 10.3724/SP.J.1146.2013.01198
Abstract:
This paper discusses the design of unimodular waveforms with low correlation sidelobes, which are useful for MIMO radar: such waveforms suppress range-sidelobe masking and mutual interference among the different echo signals. First, according to the relationship between the aperiodic correlation sequences and the waveforms' Power Spectral Densities (PSDs), the correlation-property optimization is transformed into a PSD optimization. Then, through PSD approximation, the designed waveforms' PSDs are driven toward ideal ones. Finally, within an alternating-projection framework, the Fast Fourier Transform (FFT) is used to optimize the waveforms. Numerical simulations demonstrate that the proposed method designs waveforms with good correlation properties for MIMO radar and is computationally efficient.
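The alternating-projection loop can be sketched for a single waveform as follows, assuming a flat target PSD over 2N bins (the MIMO cross-correlation constraints of the paper are omitted):

```python
import numpy as np

def design_unimodular(N, n_iter=200, seed=0):
    """Alternate between (i) the set of sequences with the desired flat
    spectrum (low aperiodic autocorrelation sidelobes) and (ii) the set of
    unit-modulus sequences. Length-2N FFTs handle aperiodic correlations."""
    rng = np.random.default_rng(seed)
    s = np.exp(1j * 2 * np.pi * rng.random(N))   # random unimodular start
    target_mag = np.sqrt(N)                      # flat PSD over 2N bins
    for _ in range(n_iter):
        F = np.fft.fft(s, 2 * N)
        F = target_mag * np.exp(1j * np.angle(F))  # project onto target PSD
        y = np.fft.ifft(F)[:N]
        s = np.exp(1j * np.angle(y))               # project onto unit modulus
    return s

s = design_unimodular(64)
ac = np.correlate(s, s, 'full')   # aperiodic autocorrelation, zero lag at 63
```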
2014, 36(6): 1374-1380.
doi: 10.3724/SP.J.1146.2013.01264
Abstract:
Sampling may cause some loss for coherent integration in the time domain, and the computational burden of common coherent integration algorithms is usually heavy. To resolve these issues, a fast algorithm realizing long-term coherent integration in the fast-time frequency domain is proposed. The algorithm first utilizes the non-uniform FFT to accomplish range walk correction and phase compensation in the fast-time frequency domain, and then fulfills the integration via the IFFT. The proposed algorithm avoids the loss entailed by sampling and requires relatively little computation. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed algorithm.
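The underlying principle of phase compensation followed by coherent summation can be shown with a toy example. This is a hedged sketch of the general idea only (one range cell, a single assumed Doppler frequency), not the paper's NUFFT-based algorithm:

```python
import numpy as np

# After range-walk/phase compensation, the echoes of all pulses align in
# phase, so coherent summation accumulates signal amplitude linearly in
# the number of pulses, while noise would add only in power.
M = 64                    # number of integrated pulses
fd = 0.13                 # assumed normalized Doppler frequency
m = np.arange(M)
echo = np.exp(1j * 2 * np.pi * fd * m)        # unit-amplitude pulse returns

# Compensation removes the pulse-to-pulse Doppler phase rotation
compensated = echo * np.exp(-1j * 2 * np.pi * fd * m)
integrated = compensated.sum()

print(abs(integrated))    # ~M: the full coherent integration gain
```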
2014, 36(6): 1381-1388.
doi: 10.3724/SP.J.1146.2013.01147
Abstract:
Owing to the differences among the scatterer distributions observed by wideband radars at different viewing angles, it is necessary to investigate imaging algorithms of radar networks for asymmetrically spinning targets. By making use of the range profile series of a spinning target obtained by two wideband radars at different locations, scatterer association is accomplished based on the micro-motion feature invariance of the asymmetrically spinning target. Then, the three-dimensional image, which provides the real size of the target, is obtained. Simulations demonstrate the high precision of the proposed algorithm and its insensitivity to occlusion effects and RCS fluctuations of the scatterers.
2014, 36(6): 1389-1393.
doi: 10.3724/SP.J.1146.2013.01716
Abstract:
Estimating the precession period from the Radar Cross Section (RCS) sequences of ballistic missile targets is an important means of feature extraction and target identification. The RCS sequence of a ballistic missile target is a nonstationary random process when the target precesses, and the conventional Fourier transform and correlation-type methods need a long observation time and a high data rate to estimate the precession period, which is unacceptable given limited radar resources. A novel method of estimating the precession period from RCS sequences is presented. The proposed method first fits the RCS sequence with trigonometric functions at a set of candidate frequencies, and then selects as the precession frequency the candidate that minimizes the fitting error. The proposed method estimates more accurately and needs less observation time than conventional methods, as validated by simulation results on RCS data.
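The fitting idea described above can be sketched with synthetic data. The signal model, sample times, and candidate grid below are assumptions for illustration; the paper's trigonometric model may differ in detail:

```python
import numpy as np

# For each candidate frequency f, least-squares fit a constant plus a
# sinusoid a + b*cos(2*pi*f*t) + c*sin(2*pi*f*t) to the RCS sequence,
# and keep the frequency with the smallest residual.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 80)           # assumed sample times (s)
f_true = 1.5                            # assumed precession frequency (Hz)
rcs = 3.0 + np.cos(2*np.pi*f_true*t + 0.4) + 0.05*rng.standard_normal(80)

def fit_residual(f):
    # Design matrix for the linear-in-amplitudes sinusoidal model
    A = np.column_stack([np.ones_like(t),
                         np.cos(2*np.pi*f*t),
                         np.sin(2*np.pi*f*t)])
    _, res, _, _ = np.linalg.lstsq(A, rcs, rcond=None)
    return res[0] if res.size else 0.0

candidates = np.arange(0.5, 3.0, 0.05)  # candidate precession frequencies
f_hat = candidates[np.argmin([fit_residual(f) for f in candidates])]
print(round(f_hat, 2))                  # close to the true 1.5 Hz
```

Because the fit is linear in the amplitudes, only a one-dimensional search over frequency is needed, which is what keeps the observation-time requirement low.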
2014, 36(6): 1394-1399.
doi: 10.3724/SP.J.1146.2013.00702
Abstract:
Netted radar systems show great potential in improving the performance of radar detection, tracking, and interference suppression. However, these systems suffer from high autocorrelation sidelobes and cross-correlations of the transmitted waveforms. Meanwhile, they also face a congested spectrum environment, especially when some radars in the network operate in the High Frequency (HF) to Ultra High Frequency (UHF) bands. To solve this issue, a new method for designing sparse frequency unimodular waveforms with low range sidelobes is proposed, which minimizes a new effective penalty function based on requirements on both the Power Spectral Density (PSD) and the Integrated Sidelobe Level (ISL). An iterative algorithm based on the FFT and subspace decomposition is proposed. Numerical examples show that the proposed approach is computationally efficient and flexible in designing sparse frequency waveforms with low autocorrelation sidelobes and cross-correlations.
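The ISL term in the penalty has a simple standard definition that is easy to state in code: the sum of squared magnitudes of the aperiodic autocorrelation at all nonzero lags. The helper below is an illustrative sketch of that definition, not the paper's optimization:

```python
import numpy as np

def isl(x):
    """Integrated Sidelobe Level: squared autocorrelation magnitudes
    summed over all nonzero lags of the sequence x."""
    N = len(x)
    # Full aperiodic autocorrelation; lag 0 sits at index N-1
    r = np.correlate(x, x, mode="full")
    sidelobes = np.delete(r, N - 1)       # drop the mainlobe (lag 0)
    return float(np.sum(np.abs(sidelobes) ** 2))

# The length-13 Barker code has ideal unit-magnitude sidelobes:
# six nonzero-lag values of magnitude 1 on each side of the mainlobe.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
print(isl(barker13))   # → 12.0
```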
2014, 36(6): 1400-1405.
doi: 10.3724/SP.J.1146.2013.01180
Abstract:
Based on the idea of the multiple beam method, the least-squares multi-beam method is developed to resolve the problem that beams may have no intersections under the traditional implementation. The method is used to process data obtained from the OS081H high frequency surface wave radar system, and the results are compared with data obtained from an automatic meteorological station, showing that the method can eliminate the wind direction ambiguity. In addition, the results are compared with those obtained by the maximum likelihood method, and the effect of wind speed on the inversion accuracy is discussed. The results show that both the correlation and the accuracy improve as wind speed increases.
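The core least-squares multi-beam idea, jointly fitting measurements from several beam angles rather than intersecting beam pairs, can be sketched with a generic cosine directional model. The model, beam angles, and data below are assumptions for illustration, not the paper's actual inversion:

```python
import numpy as np

# Generic sketch: a directional quantity measured at several beam angles
# follows s(theta) = A*cos(theta - theta_w). Because
# cos(theta - theta_w) = cos(theta_w)*cos(theta) + sin(theta_w)*sin(theta),
# the direction theta_w is recovered by linear least squares over all
# beams at once; no beam-pair intersection is required.
theta = np.deg2rad(np.array([10.0, 25.0, 40.0, 55.0, 70.0]))  # assumed beams
theta_w = np.deg2rad(33.0)                                    # true direction
y = np.cos(theta - theta_w)                                   # noiseless data

A = np.column_stack([np.cos(theta), np.sin(theta)])
c, s = np.linalg.lstsq(A, y, rcond=None)[0]
est = np.rad2deg(np.arctan2(s, c))

print(round(est, 1))  # → 33.0
```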
2014, 36(6): 1406-1412.
doi: 10.3724/SP.J.1146.2013.01132
Abstract:
In the hyperspectral compressive sensing