2016 Vol. 38, No. 7
2016, 38(7): 1557-1585.
doi: 10.11999/JEIT160483
Abstract:
Let G be a k-chromatic graph. G is called a Kempe graph if all k-colorings of G are Kempe equivalent. Characterizing the properties of Kempe graphs with chromatic number 3 is a hard, unsolved problem. This paper addresses the Kempe equivalence of maximal planar graphs. Our contributions are as follows: (1) observe and study a class of subgraphs, called 2-chromatic ears, which play a critical role in guaranteeing the Kempe equivalence between two 4-colorings; (2) introduce and explore the properties of -characteristic graphs, which clearly characterize the relations among all 4-colorings of a graph; (3) divide the Kempe equivalence classes of non-Kempe 4-chromatic maximal planar graphs into three types, tree-type, cycle-type, and circular-cycle-type, and point out that all three types can coexist in the set of 4-colorings of a single maximal planar graph; (4) study the characteristics of Kempe maximal planar graphs, introduce a recursive domino method to construct such graphs, and propose two conjectures.
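The Kempe operation underlying this equivalence can be sketched in a few lines: pick two colors, find the connected component of the two-colored subgraph containing a chosen vertex, and swap the two colors on that component. The following minimal sketch (the helper name `kempe_swap` and the dict-of-lists graph representation are our own illustration, not the paper's code) shows the operation; a proper coloring stays proper after the swap.

```python
def kempe_swap(adj, coloring, v, a, b):
    """Swap colors a and b on the Kempe chain (the connected component of
    the subgraph induced by colors {a, b}) that contains vertex v."""
    if coloring[v] not in (a, b):
        return dict(coloring)
    new = dict(coloring)
    stack, seen = [v], {v}
    while stack:
        u = stack.pop()
        new[u] = b if coloring[u] == a else a   # flip color on the chain
        for w in adj[u]:
            if w not in seen and coloring[w] in (a, b):
                seen.add(w)
                stack.append(w)
    return new

# Triangle with a proper 3-coloring; swap colors 1 and 2 at vertex 0.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
col = {0: 1, 1: 2, 2: 3}
new = kempe_swap(adj, col, 0, 1, 2)
```

Two k-colorings are Kempe equivalent when a finite sequence of such swaps transforms one into the other (up to color renaming).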
2016, 38(7): 1586-1593.
doi: 10.11999/JEIT151122
Abstract:
Visual saliency is widely applied in computer vision. Image saliency detection has been extensively studied, while there are only a few effective methods for computing video saliency owing to its greater difficulty. Inspired by image saliency methods, this paper proposes a unified spatiotemporal feature extraction and optimization framework for video saliency. First, a spatiotemporal feature descriptor is constructed via region covariance. Then, an initial saliency map is computed from the local contrast of the descriptor. Finally, a local spatiotemporal optimization framework that considers the previous and next frames of the current one is modeled to obtain the final saliency map. Extensive experiments on two public datasets demonstrate that the proposed algorithm not only outperforms state-of-the-art methods but is also highly extensible.
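As a rough illustration of the descriptor step: a region covariance descriptor summarizes a patch by the covariance matrix of per-pixel feature vectors. The feature set below (coordinates, intensity, absolute gradients) is a common textbook choice and an assumption on our part, not necessarily the spatiotemporal feature set the paper uses.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor of a grayscale patch.
    Per-pixel feature vector: (x, y, I, |dI/dy|, |dI/dx|)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))   # row and column derivatives
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                  np.abs(gy).ravel(), np.abs(gx).ravel()])
    return np.cov(F)   # 5x5 symmetric positive semi-definite matrix
```

The descriptor is a small fixed-size matrix regardless of patch size, which makes local-contrast comparisons between regions cheap.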
2016, 38(7): 1594-1601.
doi: 10.11999/JEIT151145
Abstract:
A new salient region detection algorithm is proposed based on the KL divergence between color probability distributions of super-pixels and the merging of multi-scale saliency maps. Firstly, multi-scale super-pixel segmentations of an input image are computed. At each segmentation scale, an undirected closed-loop connected graph is constructed, in which the nodes are the super-pixels and the adjacent regions are expanded reasonably according to the total number of super-pixels. Then, all the color values in each super-pixel are clustered in terms of their discriminative power to obtain the statistical probability distribution of the cluster labels for each super-pixel. Next, the edges between all adjacent super-pixel pairs are weighted with the harmonic mean of the KL divergences of their probability distributions, and the multi-scale saliency maps are calculated according to boundary connectivity and region contrast. The final saliency map is obtained by computing and optimizing the mean of the saliency maps at different scales. Experimental results on large benchmark datasets demonstrate that the proposed algorithm outperforms several state-of-the-art methods, achieves higher precision and recall rates, and produces smooth saliency maps.
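The edge-weighting step is easy to make concrete. KL divergence is asymmetric, so a symmetric edge weight is formed here as the harmonic mean of the two directed divergences; the smoothing constant `eps` is our own guard against zero bins, not a parameter from the paper.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Directed KL divergence D(p || q) between two histograms."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def harmonic_kl(p, q):
    """Symmetric edge weight: harmonic mean of the two directed KL divergences."""
    d1, d2 = kl(p, q), kl(q, p)
    return 2 * d1 * d2 / (d1 + d2) if d1 + d2 > 0 else 0.0
```

The harmonic mean is dominated by the smaller of the two directed divergences, so an edge is weighted as "similar" unless both directions disagree.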
2016, 38(7): 1602-1608.
doi: 10.11999/JEIT151064
Abstract:
A robust object tracking method is proposed to deal with technical issues during tracking. Firstly, a global template based on sparse representation is used to describe object appearance, while positive and negative modules are built to separate the object from the background. Then, Random Projection (RP) is used to reduce the dimension of the modules and candidate objects, which relieves the computational burden. Furthermore, a Particle Filter (PF) is used as the object motion model, and a multi-normal resampling method is used to maintain the diversity of particles. To alleviate the module drift problem, the positive module is divided into a static module and a changeable module, which are updated in different ways, and the sparse reconstruction error is used to determine whether the object is occluded. Experimental results on numerous challenging videos show that the proposed method has better accuracy and stability than state-of-the-art tracking methods.
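The dimension-reduction step above is the standard random projection idea: multiply high-dimensional appearance vectors by a random matrix whose entries are drawn once and reused. A minimal sketch, with Gaussian entries and Johnson-Lindenstrauss scaling (the seed and scaling convention are our assumptions):

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X (n samples, d features) to k dimensions with a
    Gaussian random matrix scaled by 1/sqrt(k)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R
```

Because R is fixed for the whole sequence, templates and candidates land in the same low-dimensional space and can be compared there directly.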
2016, 38(7): 1609-1615.
doi: 10.11999/JEIT151108
Abstract:
In order to improve the stability and accuracy of object tracking under different conditions, an online object tracking algorithm based on the Gray-Level Co-occurrence Matrix (GLCM) and third-order tensors is proposed. First, the algorithm extracts gray-level information of the target area to describe two highly discriminative features of the target via the GLCM, fuses dynamic information about target changes using third-order tensor theory, and constructs a third-order tensor appearance model of the object. Then, it expands the appearance model using bilinear space theory and implements incremental learning: the model is updated online through its characteristic-value description, so the computation of model updating is greatly reduced. Meanwhile, a static observation model and an adaptive observation model are constructed, and stable secondary combined tracking of the object is achieved by dynamic matching of the two observation models. Experimental results indicate that the proposed algorithm can effectively track moving objects in a variety of challenging scenes, with an average tracking error of less than 9 pixels.
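The GLCM itself is a standard texture statistic: a normalized histogram of gray-level pairs at a fixed pixel offset. The sketch below computes it and two common derived features, contrast and energy; the paper does not say which two GLCM features it uses, so this pair is our illustrative assumption.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    img must contain integer gray levels in [0, levels)."""
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_features(M):
    """Two common GLCM statistics: contrast and energy."""
    i, j = np.indices(M.shape)
    contrast = float(np.sum((i - j) ** 2 * M))
    energy = float(np.sum(M ** 2))
    return contrast, energy
```

A perfectly uniform patch has zero contrast and maximal energy, which is why these statistics discriminate textured targets from flat background.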
2016, 38(7): 1616-1623.
doi: 10.11999/JEIT151449
Abstract:
In a visual tracking system, the feature description plays the most important role. Multi-cue fusion is an effective way to solve the tracking problem under many complex conditions. Therefore, a perceptive deep neural network based on multiple parallel networks that can be triggered adaptively is proposed. Then, using multi-cue fusion, a new tracking method based on deep learning is established, in which the target can be adaptively fragmented. The fragmentation decreases the input dimension, thus reducing the computational complexity. During the tracking process, the model can dynamically adjust the weights of the fragments according to their reliability, which improves the flexibility of the tracker in dealing with complex circumstances such as target posture change, illumination change, and occlusion by other objects. Qualitative and quantitative analyses on challenging benchmark video sequences show that the proposed method can track the moving target robustly.
2016, 38(7): 1624-1630.
doi: 10.11999/JEIT151001
Abstract:
As visual tracking algorithms based on the traditional co-training framework suffer from poor robustness in complex environments, an optimized compressive tracking algorithm under a novel co-training criterion is proposed. Firstly, spatial layout information and an online feature selection technique based on maximizing entropy energy are used to improve the discriminative capacity of the compressive sensing classifier, and two independent classifiers are constructed from structural compressive features selected from the gray intensity space and the local binary pattern space respectively. Secondly, on the basis of a classifier collaboration strategy obtained by calculating the confidence score distribution entropy of the candidate samples, complementary features are adaptively fused, which reinforces the robustness of the tracking results. Thirdly, assisted by a cascaded Histograms of Oriented Gradients (HOG) classifier, the collaborative appearance model is accurately updated by a novel co-training criterion with sample-selecting ability, which alleviates the error-accumulation problem of co-training updates. Comparative experimental results on extensive challenging sequences demonstrate that the proposed algorithm performs better than similar tracking algorithms.
2016, 38(7): 1631-1637.
doi: 10.11999/JEIT151050
Abstract:
Considering the contradiction between positioning accuracy and computational efficiency in previous indoor positioning algorithms, a two-stage positioning algorithm (LANDMARC-SROMP CS) is put forward, combining LANDMARC with Compressive Sensing based on Regularized Orthogonal Matching Pursuit optimized by the Simulated annealing algorithm (SROMP). First, the LANDMARC location algorithm is used to lock onto the target area quickly; then, in the locked area, Compressive Sensing (CS) theory is introduced to estimate the target position. In this stage, virtual reference tags are first constructed according to the scale of the locked area; then the measurement matrix is constructed from the received signal strength data of the virtual reference tags, where the signal strength data are calculated with an indoor propagation loss model trained by a new relevance vector machine algorithm based on mixed kernel functions. Finally, the SROMP compressive sensing reconstruction algorithm is used to obtain the position index matrix, and the position of the target can then be obtained through a simple weighted average calculation. The experimental results show that the average positioning error of the proposed algorithm is only 0.6445 m with relatively high computational efficiency, which meets indoor positioning requirements well.
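The first stage is the classic LANDMARC scheme: measure the target's RSS distance to each reference tag in signal space, then take a weighted average of the k nearest tags' positions. A minimal sketch, with the usual inverse-squared-distance weights (the weighting exponent and the small guard constant are conventional choices, not taken from the paper):

```python
import numpy as np

def landmarc_locate(target_rss, ref_rss, ref_pos, k=4):
    """Classic LANDMARC stage: k nearest reference tags in RSS space,
    positions averaged with inverse squared RSS-distance weights."""
    E = np.linalg.norm(ref_rss - target_rss, axis=1)   # RSS-space distances
    idx = np.argsort(E)[:k]
    w = 1.0 / (E[idx] ** 2 + 1e-9)                     # guard against E == 0
    w /= w.sum()
    return w @ ref_pos[idx]
```

Locking onto a coarse area this way keeps the second, CS-based stage small, which is the source of the accuracy/efficiency trade-off the paper targets.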
2016, 38(7): 1638-1644.
doi: 10.11999/JEIT151078
Abstract:
Regular algorithms for target detection and tracking in the image domain are very sensitive to the false alarm rate, and a high false alarm rate limits their real-time performance. In order to reduce the false alarm rate of the original target detection and improve real-time performance, an algorithm for low-speed moving target detection from a single frame image based on Doppler shift estimation is proposed. By transmitting an LFM pulse pair signal, which is insensitive to Doppler shift, the influence of Doppler shift on the image can be ignored, while during detection in the image domain the Doppler shift of the target echo is used to remove static targets and clutter highlights. The false alarm rate of moving target detection is reduced using single-frame data, and moving target detection is achieved from a single frame image, laying a good foundation for subsequent target tracking. First, CFAR detection in the image domain is carried out. Then, the Doppler shift of the beamformed signal at the detected highlights is estimated through time-domain broadband beamforming and complex-correlation frequency measurement, so static targets and clutter highlights are effectively removed from a single frame image. To improve the performance of time-domain broadband beamforming, the filter coefficients are designed by second-order cone programming. A fractional delay of 0.01 sampling intervals is achieved with a 9th-order FIR filter, and the estimation accuracy of the Doppler shift is improved. The validity of the proposed method is verified by computer simulation and pool experiments.
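Complex-correlation frequency measurement is the classic pulse-pair estimator: the Doppler frequency is read off the phase of the lag-one autocorrelation of the complex (baseband) signal. A minimal sketch of that estimator (the function name is ours):

```python
import numpy as np

def pulse_pair_doppler(z, fs):
    """Estimate frequency from the phase of the lag-1 complex
    autocorrelation of the complex signal z sampled at fs."""
    r1 = np.vdot(z[:-1], z[1:])          # sum of conj(z[n]) * z[n+1]
    return np.angle(r1) * fs / (2 * np.pi)
```

Because only one complex multiply-accumulate pass is needed, the estimate is cheap enough to run at every detected highlight in a single frame.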
2016, 38(7): 1645-1653.
doi: 10.11999/JEIT151058
Abstract:
Current No-Reference Image Quality Assessment (NR-IQA) methods are not well consistent with subjective evaluation, so a novel NR-IQA method based on the DIstribution Characteristics of Natural statistics (DICN) is proposed in this paper. In the proposed method, an image is decomposed with wavelets into low-frequency and high-frequency subbands, and the high-frequency subbands are divided into blocks of size 8×8. Amplitude and entropy are extracted from each block, the mean values of their distribution histograms and the skewness are calculated, and the results are taken as the image features. Quality prediction models for five distortion types are built by training the features with Support Vector Regression (SVR). To determine the weights of the different distortions, an SVR-based classifier on the image features is constructed to identify the distortion type. From the five distortion evaluation models, the NR-IQA model based on natural statistical distributions is obtained. Experimental results show that the proposed method performs better than classical methods; it is well consistent with subjective assessment results and reflects human subjective perception well.
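The per-block feature extraction can be sketched directly: slide an 8×8 window over a high-frequency subband and record each block's mean amplitude and histogram entropy. The 16-bin histogram below is our own simplifying assumption; the paper does not specify the binning.

```python
import numpy as np

def block_features(subband, bs=8):
    """Mean amplitude and entropy of each bs-by-bs block of a subband.
    Distribution statistics of these arrays serve as NR-IQA features."""
    h, w = subband.shape
    amps, ents = [], []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            blk = np.abs(subband[y:y + bs, x:x + bs])
            amps.append(float(blk.mean()))
            hist, _ = np.histogram(blk, bins=16)
            p = hist / hist.sum()
            p = p[p > 0]
            ents.append(float(-(p * np.log2(p)).sum()))
    return np.array(amps), np.array(ents)
```

The mean and skewness of the resulting amplitude and entropy arrays are then what gets fed to the SVR models.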
2016, 38(7): 1654-1659.
doi: 10.11999/JEIT151107
Abstract:
Association rule analysis, as one of the significant means of data mining, plays an important role in discovering the implicit knowledge in massive transaction data. To overcome the inherent defects of the classic Apriori algorithm, this paper proposes the Apriori With Prejudging (AWP) algorithm. AWP adds a pre-judging procedure to the self-join and pruning steps of Apriori, reducing and optimizing the frequent k-itemsets using prior probability. In addition, a damping factor and a compensating factor are introduced to correct the deviation caused by pre-judging. AWP simplifies the process of mining frequent itemsets. Experimental results show that the improvements effectively reduce the number of database scans and the running time of the algorithm.
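To make the pre-judging idea concrete, here is a minimal Apriori with one added filter: before scanning the database for a candidate k-itemset, its support is pre-estimated from the supports of its (k-1)-subsets scaled by a damping factor, and candidates whose estimate is already below the threshold are dropped without a scan. This estimate rule is our own illustrative choice, not the paper's exact AWP formula, and the compensating factor is omitted for brevity.

```python
from itertools import combinations

def apriori_awp(transactions, min_sup, damping=0.9):
    """Apriori with an illustrative pre-judging step (a sketch, not the
    paper's exact AWP): candidates whose damped prior support estimate
    falls below min_sup are pruned before any database scan."""
    n = len(transactions)
    transactions = [frozenset(t) for t in transactions]

    def support(s):
        return sum(s <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    freq = {frozenset([i]): sup for i in items
            if (sup := support(frozenset([i]))) >= min_sup}
    all_freq, k = dict(freq), 2
    while freq:
        cands = {a | b for a in freq for b in freq if len(a | b) == k}
        nxt = {}
        for c in cands:
            subs = [frozenset(s) for s in combinations(c, k - 1)]
            if any(s not in freq for s in subs):
                continue                                  # classic pruning
            if damping * min(freq[s] for s in subs) < min_sup:
                continue                                  # pre-judging: skip scan
            if (sup := support(c)) >= min_sup:
                nxt[c] = sup
        all_freq.update(nxt)
        freq, k = nxt, k + 1
    return all_freq
```

With damping below 1 the pre-judge can discard truly frequent itemsets; that bias is exactly what the paper's compensating factor is introduced to correct.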
2016, 38(7): 1660-1665.
doi: 10.11999/JEIT151089
Abstract:
In order to improve the detection of motion curves in omni-directional M-mode echocardiography, this paper studies the related issues and proposes an edge detection algorithm based on fuzzy enhancement and gray system theory for the omni-directional M-mode echocardiography motion curve. Firstly, an improved fuzzy enhancement algorithm is used to enhance edge information while suppressing noise and background. Moreover, the proposed algorithm detects edges in the echocardiography image using a statistic constructed from gray correlation in gray system theory. Finally, the best motion edges are obtained by eliminating noise and connecting broken motion curves. Experimental results show that the proposed algorithm has better accuracy and strong robustness against noise.
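A standard form of fuzzy enhancement (the Pal-King scheme, of which the paper uses an improved variant) maps gray levels to fuzzy memberships, repeatedly sharpens them with the INT operator, and maps back; values above the crossover point are pushed up and values below are pushed down, which strengthens edges relative to background. The linear membership function below is a simplification of ours.

```python
import numpy as np

def fuzzy_enhance(img, iterations=2):
    """Pal-King style fuzzy enhancement sketch: linear membership,
    INT operator applied `iterations` times, then mapped back."""
    x = img.astype(float)
    xmax = x.max()
    u = x / xmax                                   # membership in [0, 1]
    for _ in range(iterations):
        u = np.where(u <= 0.5, 2 * u**2, 1 - 2 * (1 - u)**2)   # INT operator
    return u * xmax
```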
2016, 38(7): 1666-1673.
doi: 10.11999/JEIT151079
Abstract:
In this paper, an over-complete dictionary with a nonorthogonal factor is first learned from ElectroEncephaloGram (EEG) signals with spatio-temporal characteristics, and it is then used to sparsely represent multichannel EEG signals so as to retain spatio-temporal correlation information. This helps enhance the performance of joint reconstruction of multichannel EEG signals using the Spatio-Temporal Sparse Bayesian Learning (STSBL) algorithm. Multichannel EEG signals from the open eegmmidb database are selected to evaluate the effectiveness of the proposed algorithm. The experimental results show that the designed over-complete dictionary provides STSBL with more valuable information about the spatio-temporal characteristics of multichannel EEG signals. Compared with existing conventional compressed sensing techniques for reconstructing multichannel EEG signals, the signal-to-noise ratio of the proposed method increases by 12 dB and the reconstruction time decreases by 0.75 s, significantly improving the performance of joint reconstruction of multichannel EEG signals.
2016, 38(7): 1674-1681.
doi: 10.11999/JEIT151130
Abstract:
A novel mathematical index for edge detection is constructed to indicate both conspicuous and inconspicuous edges in a gray-level image. The index, called the Sum of Gradient Direction (SGD), is derived from the basic idea that the gradient directions of points surrounding a real edge point have good consistency, while those surrounding a noise point have poor consistency. Based on the SGD index, a new adaptive thresholding method to detect edges is proposed. Extensive experiments show that the SGD index can distinguish both conspicuous and inconspicuous edge points from noisy points, and that the proposed edge detector, which uses the SGD to regulate the gradient threshold, can detect weak edges and suppress noisy points at the same time.
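One plausible reading of the consistency idea (our own sketch, not the paper's exact SGD formula) is to sum the unit gradient vectors over a 3×3 neighbourhood and take the norm: aligned directions near a real edge reinforce each other, while the scattered directions around a noise point cancel.

```python
import numpy as np

def sgd_index(img):
    """Gradient-direction consistency score per pixel: the norm of the sum
    of unit gradient vectors over each 3x3 neighbourhood (range [0, 9])."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy) + 1e-12          # avoid division by zero
    ux, uy = gx / mag, gy / mag

    def box3(a):                            # 3x3 neighbourhood sum, edge-padded
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    return np.hypot(box3(ux), box3(uy))
```

A weak but coherent edge scores high even though its gradient magnitudes are small, which is exactly the property the adaptive threshold exploits.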
2016, 38(7): 1682-1688.
doi: 10.11999/JEIT151076
Abstract:
Constructing deterministic measurement matrices is significant for the promotion and application of Compressed Sensing (CS) theory. Originating from algebraic coding theory, a construction algorithm for a Binary Sequence Family (BSF) based deterministic measurement matrix is presented. Coherence is an important criterion for describing the property of a matrix: lower coherence leads to higher reconstruction performance. The coherence of the proposed measurement matrix is derived to be smaller than that of the corresponding Gaussian and Bernoulli random matrices. Theoretical analysis and simulation results show that the proposed matrix obtains better reconstruction results than the corresponding Gaussian and Bernoulli random matrices. The proposed matrix also makes hardware realization convenient by means of Linear Feedback Shift Register (LFSR) structures, thus being conducive to practical compressed sensing.
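As a rough illustration of the LFSR idea (not the paper's actual BSF construction; the taps, register length, and matrix sizes below are hypothetical), one can generate a ±1 stream with a linear feedback shift register, reshape it into a measurement matrix, and measure its coherence:

```python
import numpy as np

def lfsr_bits(taps, state, n):
    """Fibonacci LFSR: output the oldest state bit, feed back the XOR
    of the tapped bits (tap indices are into the newest-first state)."""
    state, out = list(state), []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def bsf_like_matrix(m, n):
    """Toy BSF-style matrix: a degree-5 LFSR stream (x^5 + x^3 + 1)
    mapped to +/-1 and reshaped; columns are normalized to unit norm."""
    bits = lfsr_bits((1, 4), [1, 0, 0, 0, 0], m * n)
    A = (2.0 * np.array(bits) - 1.0).reshape(m, n)
    return A / np.sqrt(m)

def coherence(A):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = A.T @ A
    np.fill_diagonal(G, 0.0)
    return float(np.abs(G).max())
```

The LFSR structure is what makes such matrices cheap to realize in hardware: each new measurement row needs only shifts and XORs, not stored random numbers.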
2016, 38(7): 1689-1695.
doi: 10.11999/JEIT151101
Abstract:
Based on multi-scale resampling, an Exponential-Like Kernel (ELK) function is designed and evaluated with local feature extraction in kernel regression and Support Vector Machine (SVM) classification. The ELK is a one-parameter kernel whose distribution is controlled only by the resolution of analysis. On blocks and Doppler signals with noise, Nadaraya-Watson regression with the ELK mainly shows more noise and step error than with a Gaussian kernel, while it has better precision and is more robust than LOcally WEighted Scatterplot Smoothing (LOWESS). SVM tests on data sets from the UCI Machine Learning Repository demonstrate that the ELK achieves nearly the same classification accuracy as the RBF kernel, and that its locality results in more detailed margin hyperplanes and, consequently, a large number of support vectors when classification accuracy is low. Moreover, the insensitivity of the ELK to the adjustment coefficient in kernel methods shows its potential to facilitate parameter optimization. The ELK, as a single-parameter kernel with significant locality, is expected to be widely used in related kernel methods.
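For context, a minimal Nadaraya-Watson regression can be sketched with a stand-in exponential kernel exp(-|u|/h); the paper's ELK, built by multi-scale resampling, is not reproduced here, and h merely plays the role of its single resolution parameter:

```python
import numpy as np

def nw_regress(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression: prediction at each evaluation
    point is the kernel-weighted average of the training targets. The
    kernel here is a stand-in exp(-|u|/h), not the paper's ELK."""
    d = np.abs(x_eval[:, None] - x_train[None, :])  # pairwise distances
    w = np.exp(-d / h)                              # kernel weights
    return (w * y_train[None, :]).sum(axis=1) / w.sum(axis=1)
```

With a small h the estimate is dominated by nearby samples, which is the locality the abstract attributes to the ELK.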
2016, 38(7): 1696-1702.
doi: 10.11999/JEIT151029
Abstract:
Since conventional methods for Frequency Hopping (FH) signal parameter estimation suffer from performance degradation in alpha-stable noise environments, a Cauchy-based maximum likelihood estimation method is introduced in this paper. The FH signal is decomposed onto the two-dimensional envelope-versus-frequency plane, and a maximum-likelihood function based on the Cauchy distribution is established to extract the frequency parameter directly. Exploiting the short-time stationarity of FH signals, the maximum-likelihood function is windowed to estimate the specific hop frequencies and their sequence, after which the hop timing and duration can be estimated. Simulation results show that, compared with the fractional lower-order statistics and Myriad-filter based time-frequency analysis methods, the proposed method improves the estimation accuracy of FH signal parameters and is robust to alpha-stable distributed noise.
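The Cauchy-likelihood idea can be sketched as a grid search over candidate hop frequencies. This is a simplified illustration, not the paper's windowed estimator: the per-candidate amplitude is a least-squares fit rather than a full Cauchy ML fit, and the scale gamma is assumed known.

```python
import numpy as np

def cauchy_freq_estimate(x, freqs, gamma=1.0):
    """Pick the frequency minimizing a Cauchy negative log-likelihood
    of the residual: sum log(gamma^2 + |r|^2). The log grows slowly,
    so impulsive noise samples do not dominate the fit (unlike a
    Gaussian/least-squares criterion)."""
    n = np.arange(len(x))
    best_f, best_nll = None, np.inf
    for f in freqs:
        s = np.exp(2j * np.pi * f * n)
        a = (s.conj() @ x) / len(x)          # LS amplitude (simplification)
        r = np.abs(x - a * s)
        nll = np.sum(np.log(gamma**2 + r**2))
        if nll < best_nll:
            best_f, best_nll = f, nll
    return best_f
```

Applied per analysis window, such an estimator yields the hop-frequency sequence from which timing and duration follow.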
2016, 38(7): 1703-1709.
doi: 10.11999/JEIT151030
Abstract:
Existing spectrum detection methods cannot take full advantage of the angle dimension. To sense the spectrum more comprehensively, a signal model is established based on the sparsity of the angle dimension, and the reconstruction result is derived by the Sparse Bayesian Learning (SBL) algorithm. By integrating a binary hypothesis test into the iterative procedure of SBL, a decision test combined with an adaptive threshold is derived. The proposed pruning step accepts the active components of the model and transforms the sparse recovery into a detection problem for signals from different angles. Therefore, the algorithm can sense the spectrum blindly with a constant false-alarm rate as well as estimate the accurate angle of each incident signal. Numerical simulation results verify that the adaptive threshold improves reconstruction accuracy at low computing cost. Moreover, the proposed algorithm achieves better estimation and detection performance than previous algorithms.
2016, 38(7): 1710-1716.
doi: 10.11999/JEIT151066
Abstract:
The performance of the Time-Frequency Auto-Regressive Moving Average (TFARMA) model method degenerates in SαS (symmetric alpha-stable) distribution environments. Hence, a Fractional Lower Order Time-Frequency Auto-Regressive Moving Average (FLO-TFARMA) model algorithm based on the fractional lower-order covariation is proposed. The parameter estimation of the FLO-TFARMA model is introduced, the time-frequency distribution based on the FLO-TFARMA model is given, and the FLO-TFARMA algorithm is compared with the existing TFARMA algorithm in detail. The simulation results show that the FLO-TFARMA model method performs better than the TFARMA model method in SαS distribution environments, and the time-frequency spectrum of the FLO-TFARMA method is more distinct when the characteristic exponent α is smaller.
2016, 38(7): 1717-1723.
doi: 10.11999/JEIT151034
Abstract:
Speech Bandwidth Extension (BWE) is a technique that attempts to improve speech quality by recovering the missing High Frequency (HF) components using the correlation that exists between the Low Frequency (LF) and HF parts of the wide-band speech signal. Gaussian Mixture Model (GMM) based methods are widely used, but they recover the missing HF components on the assumption that the LF and HF parts obey a Gaussian distribution and are linearly related, leading to distortion of the reconstructed speech. This study proposes a new speech BWE method, which uses two Gaussian-Bernoulli Restricted Boltzmann Machines (GBRBMs) to extract the high-order statistical characteristics of the spectral envelopes of the LF and HF parts respectively. Then, high-order features of the LF are mapped to those of the HF using a Feedforward Neural Network (FNN). The proposed method learns the deep relationship between the spectral envelopes of the LF and HF parts and can model the distribution of spectral envelopes more precisely by extracting their high-order statistical characteristics. The objective and subjective test results show that the proposed method outperforms the conventional GMM based method.
2016, 38(7): 1724-1730.
doi: 10.11999/JEIT151019
Abstract:
The issue of automatically recognizing a target from its Full-Polarization High Range Resolution Profiles (FPHRRPs) with consecutive observations is considered. The prior information contained in a multi-view FPHRRP sample is hierarchical: all the entries contained in the sample originate from the same target; the entries within a single view are associated with the same target pose; and the multiple views under the same polarization mode are correlated. To efficiently utilize this prior information for target recognition, a novel joint sparse representation based multi-view FPHRRP target recognition method is proposed. The presented method assumes that all the entries within a multi-view FPHRRP sample share a common sparsity pattern in their sparse representation vectors at the atom level, which has the advantage of exploiting the aforementioned information to enhance recognition performance. Experiments are conducted using a synthetic vehicle target dataset. The results show that the proposed method achieves promising recognition accuracy and is robust with respect to noisy observations.
2016, 38(7): 1731-1737.
doi: 10.11999/JEIT151131
Abstract:
There are two issues in Sparse Reconstruction (SR) algorithms for the Multiple Measurement Vectors (MMV) model: high computational complexity, and redundant support sets that cannot be effectively removed. In order to simultaneously improve the efficiency and accuracy of SR for the MMV model, a Fast Orthogonal Matching Pursuit algorithm based on a Bayesian Test (FOMP-BT) is presented in this paper. Firstly, the total number of iterations and the computation per iteration are reduced through a new atomic group selection and warm-start matrix inversion, thus improving the efficiency of the algorithm. Secondly, the idea of a Bayesian test is used to eliminate the redundant support set, improving the reconstruction accuracy. Finally, theoretical analysis of the algorithm is carried out in terms of parameter selection and computational complexity. The simulation results show that the proposed algorithm has high accuracy, fast speed, and good robustness to noise.
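The paper's FOMP-BT (atomic-group selection, warm-start inversion, Bayesian-test pruning) is not reproduced here; as background, a baseline Simultaneous OMP for the MMV model, which exploits the shared row support across measurement vectors, can be sketched as:

```python
import numpy as np

def somp(A, Y, k):
    """Baseline Simultaneous OMP for the MMV model Y = A @ X with a
    row-sparse X: at each step pick the atom with the largest total
    correlation across all measurement vectors, then refit jointly."""
    residual, support = Y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual).sum(axis=1)  # energy per atom
        corr[support] = 0.0                        # never repick an atom
        support.append(int(np.argmax(corr)))
        Xs, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ Xs
    support = sorted(support)
    Xs, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    return support, Xs
```

Summing correlations over all measurement vectors is what couples the recovery across channels; FOMP-BT's contributions sit on top of this skeleton (faster iterations and a statistical test to prune redundant atoms).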
2016, 38(7): 1738-1744.
doi: 10.11999/JEIT151036
Abstract:
Due to the low Signal-to-Clutter-plus-Noise Ratio (SCNR), residual stationary targets in a clutter-suppressed multichannel Ultra-High Frequency (UHF) band Synthetic Aperture Radar (SAR) image may lead to an unacceptable false alarm rate. A moving target screening method is presented in this paper, which can determine whether a target detected by the Constant False Alarm Rate (CFAR) detector is really moving. A moving target data recovery method is described, which can recover the Doppler phase history of any isolated target within a full-K SAR image. The recovered data are processed again into a sub-image by range-Doppler processing, and the sub-image is refocused with azimuth autofocus processing. The sub-image does not change after refocusing if the target in it is stationary, and it is refocused if the target is moving. False moving targets can be eliminated by detecting this change. The proposed method is demonstrated on simulated and real SAR Ground Moving Target Indication (GMTI) data.
2016, 38(7): 1745-1751.
doi: 10.11999/JEIT151152
Abstract:
Netted radar often suffers from congested spectrum assignment, high autocorrelation sidelobes, and cross-interference among transmitted waveforms. In this paper, the Joint Optimization Relaxed Alternating Projection (JORAP) method is introduced to design sparse frequency waveforms with low auto- and cross-correlation sidelobes under the unimodular constraint. Firstly, the original optimization problem is converted into a spectrum approximation problem via the FFT relation between the aperiodic correlation function and the power spectral density. Secondly, the different design requirements are synthesized via a multi-objective optimization mechanism. Next, the projection space is exploited using its relaxation factor and acceleration factor. Finally, the iterative optimization is conducted by FFT and accelerated alternating projection. Simulations demonstrate that this algorithm, without computing conjugate gradients, can avoid local stagnation and obtain efficient performance, which makes it more convenient for engineering than some prevalent alternating projection or cyclic algorithms.
2016, 38(7): 1752-1757.
doi: 10.11999/JEIT151003
Abstract:
The diagonal loading method can be exploited to improve the performance of Space-Time Adaptive Processing (STAP) in the face of limited training data. However, the diagonal loading level is not easily determined in practice. To solve this problem, an adaptive parameter estimation method based on the received radar data is proposed. The diagonal loading problem is first transformed into a Tikhonov regularization problem. Then, Generalized Cross Validation (GCV) is introduced to construct the optimization problem. Finally, the secant method is utilized to solve the optimization problem and calculate the loading parameter. The performance of the method is demonstrated using both simulated and measured data. The results show that the method can improve radar moving target detection performance in a limited-sample-support environment.
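The GCV step can be sketched for a generic Tikhonov problem min ||Ax - b||^2 + lam*||x||^2. This is a simplified illustration: a grid search via the SVD stands in for the paper's secant-method root finding, and the radar-specific covariance structure is not modeled.

```python
import numpy as np

def gcv_lambda(A, b, grid):
    """Pick the Tikhonov / diagonal-loading parameter by Generalized
    Cross Validation: minimize m*||b - A x_lam||^2 / trace(I - H_lam)^2,
    evaluated cheaply through the SVD filter factors f_i = s_i^2/(s_i^2+lam)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    m = len(b)
    best = None
    for lam in grid:
        f = s**2 / (s**2 + lam)                    # filter factors
        # residual norm^2: filtered part + component of b outside range(A)
        res2 = np.sum(((1 - f) * Utb)**2) + (b @ b - Utb @ Utb)
        g = m * res2 / (m - np.sum(f))**2
        if best is None or g < best[1]:
            best = (lam, g)
    return best[0]
```

The attraction of GCV here is that it needs no knowledge of the noise level, matching the paper's goal of choosing the loading level adaptively from the received data.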
2016, 38(7): 1758-1764.
doi: 10.11999/JEIT151110
Abstract:
The performance of low-frequency Ultra-Wide-Band (UWB) Synthetic Aperture Radar (SAR) is seriously affected by narrow-band Radio Frequency Interference (RFI) in the VHF/UHF band. RFI suppression by placing notches over the energy spectra to remove the RFI spikes results in energy loss of the wide-band signal and raises the range-direction sidelobes. This paper presents an approach to suppress the range-direction sidelobes caused by adaptive filtering. After clipping the main energy of strong scatterers in the range-compressed domain before RFI estimation, the interference signals are estimated using an Adaptive Line Enhancer (ALE) and subtracted from the radar echo. The clipping approach makes use of the different time-domain characteristics of the wide-band signal and the narrow-band interference, and by using pulse compression technology the method can be implemented efficiently. To investigate the performance of RFI suppression, clipping of strong scatterers is combined with the ALE based on the Normalized Least Mean Square (NLMS) algorithm. Simulation results and experimental data tests, compared with conventional algorithms, suggest that the sidelobes of strong scatterers can be reduced effectively during RFI suppression.
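The ALE building block can be sketched on its own (the clipping step and the SAR processing chain are omitted; filter order, delay, and step size below are arbitrary choices for illustration):

```python
import numpy as np

def ale_nlms(x, order=16, delay=4, mu=0.5, eps=1e-8):
    """Adaptive Line Enhancer with NLMS adaptation: an FIR filter
    predicts the current sample from a delayed copy of the input.
    The narrow-band interference is predictable across the delay and
    ends up in the prediction; the wide-band signal is not, and stays
    in the prediction error, which is the RFI-suppressed output."""
    w = np.zeros(order)
    narrow = np.zeros(len(x))
    for n in range(delay + order - 1, len(x)):
        u = x[n - delay - order + 1 : n - delay + 1][::-1]  # delayed taps
        y = w @ u                                           # prediction
        e = x[n] - y
        w += (mu / (eps + u @ u)) * e * u                   # NLMS update
        narrow[n] = y
    return narrow, x - narrow
```

Subtracting the predicted narrow-band component from the echo is exactly the "estimate and subtract" suppression the abstract describes, with clipping applied beforehand so strong scatterers do not bias the adaptation.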
2016, 38(7): 1765-1772.
doi: 10.11999/JEIT151155
Abstract:
A new deception jamming method for multi-channel SAR-GMTI is studied. The SAR signal received by the jammer is modulated by a cosinusoidal phase in range, and by a cosinusoidal phase in azimuth based on the jammer's rotational motion, so that 2-D cosinusoidal phase deception jamming can be realized at the same time. The countering performance against GMTI is analyzed using the tri-channel interference cancellation technique. The method can produce a 2-D netted multi-false-target jamming effect, so that ground moving targets and stationary targets can be protected at the same time. Theoretical analysis and computer simulation justify the validity and efficiency of the proposed method.
2016, 38(7): 1773-1780.
doi: 10.11999/JEIT150933
Abstract:
In order to improve the efficiency and quality of image fusion, a new image fusion algorithm based on four-direction Sparse Representation (SR) and the fast Non-Subsampled Contourlet Transform (NSCT) is proposed. The proposed method first obtains a series of low- and high-frequency sub-bands of the source images via fast NSCT decomposition. Then an adaptive DCT over-complete dictionary is used for fast four-direction sparse representation and coefficient fusion of the low-pass sub-band, while a Gaussian-weighted regional energy based fusion rule is used in the high-pass sub-bands. The fast NSCT modifies the tree-structured filter bank of the traditional NSCT into a multi-channel structure, saving about half of the time. The fast SR fusion method adopts a four-direction sparse representation for coefficient fusion instead of the traditional sliding-window method, further improving the efficiency of the algorithm. The experimental results show that the proposed fast fusion algorithm improves efficiency by nearly 20 times without sacrificing fusion quality.
2016, 38(7): 1781-1787.
doi: 10.11999/JEIT151198
Abstract:
For the DVB-S2 standard LDPC code, to achieve an efficient FPGA-based encoding architecture, a fast pipelined parallel and recursive encoding algorithm is proposed, which significantly improves the encoding speed and information throughput. At the same time, a parallel shift operation and parallel XOR processing structure is introduced to calculate the intermediate encoding variables, which effectively improves the degree of encoding parallelism and reduces the occupied storage resources. In addition, for dynamic adaptive encoding, the storage structure, the effective reuse of the data storage units, and the RAM address generator are optimized, further improving the utilization of FPGA resources. Experiments on a Stratix IV series FPGA for the DVB-S2 standard LDPC code show that the proposed method can achieve a system clock frequency of 126.17 MHz and an information throughput of more than 20 Gbps.
2016, 38(7): 1788-1793.
doi: 10.11999/JEIT151087
Abstract:
As the Long and Short Code Direct Sequence Code Division Multiple Access (LSC-DS-CDMA) signal contains the long and short PN codes of multiple users, the existing methods of blind PN code estimation for the Direct Sequence Code Division Multiple Access (DS-CDMA) signal are no longer applicable. A Pseudo-Noise (PN) code estimation method based on matrix completion and triple correlation is therefore proposed. Firstly, the LSC-DS-CDMA signal is represented as a matrix model with missing data for the multi-user short codes, and the composite code matrix estimation is modeled as a blind source separation problem. Secondly, matrix completion theory is used to estimate the composite code subspace, and a composite code sequence estimation method is proposed based on the singular value thresholding algorithm and the Fast-ICA algorithm. Finally, a delayed triple correlation algorithm is presented to estimate the long and short PN codes from the composite code sequences based on the shift-and-add property of m-sequences. Simulations show that the bit error rate of the long and short code sequences can be reduced to 0.1% when the SNR is above -2 dB.
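The matrix-completion step can be sketched with the standard Singular Value Thresholding (SVT) iteration on a toy low-rank matrix; the data, threshold and step size below are illustrative assumptions, not the paper's CDMA-specific setup.

```python
import numpy as np

# Minimal sketch of Singular Value Thresholding (SVT) for matrix
# completion, the step used to recover the composite-code matrix;
# toy low-rank data, not a real CDMA signal.
def svt_complete(M, mask, tau=100.0, step=1.2, iters=300):
    """Recover a low-rank matrix from the entries where mask is True."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y += step * mask * (M - X)                      # correct observed entries
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 matrix
mask = rng.random((20, 20)) < 0.6                                # observe ~60% of entries
X = svt_complete(A * mask, mask)
err = np.linalg.norm(X - A) / np.linalg.norm(A)
```

With `tau` on the order of `5 * sqrt(n1 * n2)` (the value recommended for SVT), the iteration approximates the minimum-nuclear-norm completion, which recovers the missing entries of a sufficiently low-rank matrix.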
2016, 38(7): 1794-1799.
doi: 10.11999/JEIT151068
Abstract:
Scrambler reconstruction based on the Walsh-Hadamard transform is a promising way to recover feedback relationships: it picks out the optimal solution under a maximum-count rule. However, its computational complexity increases markedly with the transform degree. To reduce this complexity, a scrambler reconstruction method with real-time testing is proposed. During the Walsh-Hadamard transform, candidate solutions are tested in real time; as soon as the feedback polynomial is detected, the transform is terminated. With real-time testing, the computational complexity is reduced by about 50% on average.
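The early-termination idea can be sketched with a fast Walsh-Hadamard transform that checks a stopping condition after each butterfly stage. The stopping rule below (a simple magnitude threshold) is a hypothetical stand-in for the paper's feedback-polynomial detector.

```python
# Minimal sketch of a fast Walsh-Hadamard transform with an early
# termination test, illustrating the "real-time test" idea; the stopping
# rule (a magnitude threshold) is a hypothetical stand-in for the
# paper's feedback-polynomial detector.
def fwht_early_stop(a, threshold):
    """In-place FWHT over a list whose length is a power of two.
    After each butterfly stage, stop if some coefficient already
    dominates (|coeff| >= threshold)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        if max(abs(x) for x in a) >= threshold:  # real-time test
            return a, True                       # detected: terminate early
        h *= 2
    return a, False
```

Each stage costs O(n) operations, so terminating halfway through the log2(n) stages saves roughly half the work, which matches the reported average saving of about 50%.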
2016, 38(7): 1800-1807.
doi: 10.11999/JEIT151043
Abstract:
A ubiquitous network is a typical heterogeneous network, and secure switching between its component networks is a hot research topic. This paper analyzes EAP-AKA, the protocol used during handoff across heterogeneous networks. The protocol has high authentication delay and is exposed to several security threats, such as user identity disclosure, man-in-the-middle attacks, and DoS attacks. Moreover, the access point of the access network is not verified, leaving the user open to attack even after the heavy authentication procedure. To address these vulnerabilities, an improved secure authentication protocol for ubiquitous networks based on EAP-AKA is proposed, extending the applicability of the traditional EAP-AKA protocol from 3G systems to ubiquitous networks. The new protocol reduces authentication delay and effectively protects the identities of users and access points. To avoid leakage of the main session key, the Diffie-Hellman algorithm is used to generate a fresh symmetric key in each run. Mutual authentication between the user endpoint and the home network is also achieved. Experiments and analysis verify the effectiveness and efficiency of the proposed protocol.
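The per-run key generation can be illustrated with a textbook ephemeral Diffie-Hellman exchange. The tiny group parameters below (p=23, g=5) are a deliberately insecure toy, and SHA-256 key derivation is an illustrative choice, not the paper's exact construction.

```python
import hashlib
import secrets

# Minimal sketch of deriving a fresh symmetric session key per
# authentication run via Diffie-Hellman. Toy group parameters
# (insecure, for illustration only); real deployments use large
# standardized MODP groups.
def dh_session_key(p=23, g=5):
    a = secrets.randbelow(p - 2) + 1     # user-endpoint private exponent
    b = secrets.randbelow(p - 2) + 1     # home-network private exponent
    A, B = pow(g, a, p), pow(g, b, p)    # exchanged public values
    shared_ue = pow(B, a, p)             # computed by the user endpoint
    shared_hn = pow(A, b, p)             # computed by the home network
    assert shared_ue == shared_hn        # both sides agree on the secret
    return hashlib.sha256(str(shared_ue).encode()).digest()  # session key

key = dh_session_key()
```

Because the exponents are drawn fresh in every run, a compromised session key does not expose past or future sessions, which is the forward-secrecy property the protocol relies on.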
2016, 38(7): 1808-1815.
doi: 10.11999/JEIT151095
Abstract:
Crowdsourcing is a new distributed problem-solving pattern brought by the Internet. However, crowdsourcing applications suffer from intrinsic incentive problems, as both workers and the requester are selfish and aim to maximize their own benefit. This paper makes the following key contributions: a reputation-based incentive model is designed using repeated game theory, based on a thorough analysis of current research on reputation and incentive mechanisms; and a punishment mechanism is established to counter selfish workers. Experimental results show that the model efficiently motivates rational workers and counters selfish ones. With appropriately set punishment parameters, the overall performance of the crowdsourcing system can reach up to 90%, even when the fraction of selfish workers is 20%.
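A reputation-with-punishment rule of this general kind can be sketched as follows; the update rule, parameter values, and eligibility threshold are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a reputation-based punishment rule in a repeated
# game: reputation rises additively on honest work, is slashed
# multiplicatively on shirking, and low-reputation workers become
# ineligible for tasks. All parameter values are illustrative.
def update_reputation(rep, honest, gain=0.1, slash=0.5, floor=0.0, cap=1.0):
    rep = rep + gain if honest else rep * (1 - slash)
    return min(cap, max(floor, rep))

def eligible(rep, threshold=0.3):
    return rep >= threshold

rep = 0.5
for honest in [True, True, False, False]:
    rep = update_reputation(rep, honest)
```

The asymmetry (slow additive gain, fast multiplicative loss) is what makes defection unprofitable for a rational worker in the repeated game: rebuilding a slashed reputation costs more rounds than the one-shot gain from shirking.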
2016, 38(7): 1816-1822.
doi: 10.11999/JEIT150864
Abstract:
Recently, complex networks have become increasingly popular in various areas of science and engineering, and synchronization is one of the hot topics in their investigation. This paper focuses on modified function projective synchronization of two complex networks with known or unknown parameters. Based on Lyapunov stability theory and adaptive control techniques, an adaptive synchronization controller is developed to realize modified function projective synchronization between the two networks. Numerical examples demonstrate the effectiveness of the proposed method.
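The core idea of function projective synchronization, that a response state tracks a time-varying scaling of the drive state, can be sketched for a pair of scalar systems far simpler than the networks in the paper; the dynamics, scaling function, and gain below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of function projective synchronization: the response
# state y is driven to track alpha(t) * x via error feedback. The control
# law cancels the known dynamics and leaves e' = -k * e for the error
# e = y - alpha(t) * x. Toy scalar systems, not the paper's networks.
def simulate(T=20.0, dt=1e-3, k=5.0):
    alpha = lambda t: 2.0 + np.sin(t)    # time-varying scaling function
    dalpha = lambda t: np.cos(t)
    f = lambda x: -x                     # drive dynamics:    x' = f(x)
    g = lambda y: -0.5 * y               # response dynamics: y' = g(y) + u
    x, y, t = 1.0, -3.0, 0.0
    while t < T:
        e = y - alpha(t) * x                                  # projective error
        u = dalpha(t) * x + alpha(t) * f(x) - g(y) - k * e    # control law
        x += dt * f(x)                                        # Euler steps
        y += dt * (g(y) + u)
        t += dt
    return abs(y - alpha(t) * x)         # final synchronization error

err = simulate()
```

Substituting the control law shows the error obeys e' = -k e, so it decays exponentially; a Lyapunov function V = e^2 / 2 certifies this, which is the scalar analogue of the paper's Lyapunov-based design.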
2016, 38(7): 1823-1830.
doi: 10.11999/JEIT151074
Abstract:
Testing a sandbox's authentication mechanism requires first recognizing the sandbox's interception, i.e., the set of system functions intercepted by the sandbox. Existing hook recognition methods and tools mainly detect the existence of a hook and lack the ability to recognize sandbox interception. This study proposes a sandbox interception recognition method based on function injection, which recognizes the system functions a sandbox intercepts by analyzing their execution traces. First, the method injects and executes system functions in an untrusted process to record their traces. Then, based on the features of intercepted system-function traces, an address-space finite state automaton is designed, and intercepted system functions are identified by analyzing the traces against it. Next, the function sets are traversed to identify the full set of system functions intercepted by the target sandbox. Finally, a prototype, SIAnalyzer, is implemented and tested against the Chromium sandbox and the Adobe Reader sandbox. The results show that the proposed method is effective and practical.
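The trace-classification idea can be sketched as a small automaton over the module owning each address in a call trace; the region labels and the "intercepted" pattern (a detour through an unknown module before reaching the system DLL) are hypothetical illustrations, not the paper's actual automaton.

```python
# Minimal sketch of classifying a call trace by the address-space region
# of each step. A direct call goes caller -> system; an intercepted call
# detours through an unknown (hook) region first. Labels are hypothetical.
def is_intercepted(trace):
    """trace: sequence of region labels such as 'caller', 'unknown', 'system'.
    Returns True if the call reached the system API via an unknown region."""
    seen_unknown = False
    for region in trace:
        if region == 'unknown':
            seen_unknown = True
        elif region == 'system':
            return seen_unknown   # reached the real API: was it detoured?
    return False

direct = is_intercepted(['caller', 'system'])
hooked = is_intercepted(['caller', 'unknown', 'system'])
```

Running such a classifier over the recorded trace of every injected system function yields the intercepted-function set for the target sandbox.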
2016, 38(7): 1831-1837.
doi: 10.11999/JEIT151104
Abstract:
In the processing of quantized time signals, traditional encoding at high frequency suffers from a high Bit Error Rate (BER) that degrades the data's quantization accuracy. This paper presents an analytical model of the BER mechanism based on an analysis of the causes of bit errors, accounting for both data-latch effects and delay-mismatch effects under different state patterns. A same-frequency coding mode with low BER is then analyzed, based on a comparison of binary and Gray coding. The circuit and layout of a Time-to-Digital Converter (TDC) using the same-frequency coding mode are implemented in a TSMC 0.35 μm CMOS process. Test results for the Multi-Project Wafer (MPW) chip show that the BER of the same-frequency coding mode is effectively reduced compared with traditional encoding modes under the same conditions.
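The binary-versus-Gray comparison underlying the low-BER coding mode can be shown in a few lines: adjacent Gray codes differ in exactly one bit, so a mistimed latch can corrupt at most one bit, whereas adjacent binary codes can flip many bits at once.

```python
# Minimal sketch of why Gray coding lowers latch-induced bit errors:
# adjacent Gray codes differ in exactly one bit, while adjacent binary
# codes can flip several bits simultaneously (e.g. 7 -> 8 flips four).
def to_gray(n):
    return n ^ (n >> 1)

def hamming(a, b):
    return bin(a ^ b).count('1')

max_binary = max(hamming(i, i + 1) for i in range(15))
max_gray = max(hamming(to_gray(i), to_gray(i + 1)) for i in range(15))
```

A latch sampling mid-transition sees some old and some new bits; with Gray coding the resulting code is always one of the two neighbors, bounding the error to one count.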
2016, 38(7): 1838-1842.
doi: 10.11999/JEIT151063
Abstract:
To solve the deployment problem for nodes with unequal sensing radii in a mobile sensor network, an Autonomous Deployment Algorithm (ADA) based on the Voronoi-Laguerre (VL) diagram is proposed. First, the VL diagram is used to partition the target area, allocating the coverage tasks among the sensor nodes. Then, each node assigned a coverage sub-region determines its candidate target location for the next round by constructing its VL polygon. A node without a sub-region computes a virtual repulsive force from the geometric relationship between its neighbors' sensing circles and the target area's borders, and uses it to determine its target point. Each node updates its position round by round to improve the network coverage. Simulation results show that ADA has clear advantages in network coverage rate, deployment speed, uniformity of node distribution, and so on.
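The Voronoi-Laguerre (power diagram) partition that makes unequal radii tractable can be sketched in a few lines: a point belongs to the node minimizing the power distance |p - c|^2 - r^2 rather than the plain Euclidean distance, so larger-radius nodes claim larger cells. The grid-free point query and toy node parameters below are illustrative.

```python
# Minimal sketch of the Voronoi-Laguerre (power) partition used to
# allocate coverage: a point p belongs to the node minimizing the power
# distance |p - c_i|^2 - r_i^2, so nodes with larger sensing radii get
# larger cells. Toy node parameters, not a full deployment round.
def power_owner(p, nodes):
    """nodes: list of ((x, y), r). Return the index of the owning node."""
    return min(
        range(len(nodes)),
        key=lambda i: (p[0] - nodes[i][0][0]) ** 2
                      + (p[1] - nodes[i][0][1]) ** 2
                      - nodes[i][1] ** 2,
    )

nodes = [((0.0, 0.0), 2.0), ((4.0, 0.0), 1.0)]
owner_mid = power_owner((2.0, 0.0), nodes)  # plain Voronoi would split here
```

Under plain Voronoi the midpoint (2, 0) is equidistant from both nodes, but the power distance awards it to the larger-radius node, which is exactly the bias that matches coverage tasks to sensing capability.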