2016 Vol. 38, No. 3
2016, 38(3): 509-516.
doi: 10.11999/JEIT150676
Abstract:
Unmanned Aerial Vehicle (UAV) images are characterized by very high spatial resolution and consequently by abundant edge and texture information. Conventional stitching methods, which use Speeded Up Robust Features (SURF) and kd-tree based nearest-neighbor matching, face new challenges when processing UAV images. In this paper, a fast feature extraction and matching algorithm is proposed for more efficient stitching of UAV images. First, the Local Difference Binary (LDB) algorithm is used to describe features, which reduces the descriptor dimension without sacrificing discriminability. Then, Locality-Sensitive Hashing (LSH) replaces the kd-tree search structure, achieving nearest-neighbor matching more efficiently. Experimental results demonstrate that, compared with the conventional stitching method, the proposed method achieves higher accuracy and greater efficiency, making it more applicable to rapid mapping of UAV images.
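As an illustration of the binary-descriptor-plus-LSH matching stage described above, here is a minimal OpenCV sketch. It uses ORB binary descriptors as a stand-in for LDB (LDB is not bundled with stock OpenCV), and the image paths are hypothetical:

```python
import cv2

orb = cv2.ORB_create(nfeatures=2000)
img1 = cv2.imread("uav_frame_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
img2 = cv2.imread("uav_frame_b.jpg", cv2.IMREAD_GRAYSCALE)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN with an LSH index replaces the kd-tree used for float descriptors.
FLANN_INDEX_LSH = 6
matcher = cv2.FlannBasedMatcher(
    dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12,
         multi_probe_level=1),
    dict(checks=50))
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive correspondences; LSH may return
# fewer than two neighbors, so incomplete pairs are skipped.
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.7 * n.distance]
# cv2.findHomography on the matched keypoints (with cv2.RANSAC) would then
# estimate the stitching transform.
```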
2016, 38(3): 517-522.
doi: 10.11999/JEIT150700
Abstract:
With the development of big data and information technology, a better understanding of users' trajectories is of great importance for the design of many applications, such as personalized recommendation, behavioral targeting, and computational advertising. In this paper, drawing on the theory of urban computing based on big data, a model for recognizing the veracity of user information on social media networks is proposed. The behavioral characteristics of users' trajectories are analyzed based on context awareness, and a model for recognizing the truth of social roles is formalized and built, overcoming the subjectivity of recognizing users' roles. Furthermore, experiments are conducted on large-scale, real-world datasets. The results show that the proposed model offers a powerful ability to recognize true social roles.
2016, 38(3): 523-531.
doi: 10.11999/JEIT150645
Abstract:
To address the poor performance of traditional clustering algorithms on insufficient or noisy datasets, a Knowledge Transfer Clustering Algorithm with Privacy Protection (KTCAPP) is proposed. It builds on the classical Fuzzy C-Means (FCM) technique by leveraging two kinds of knowledge: the historical class centers and the historical class memberships. The performance of KTCAPP is enhanced by using auxiliary knowledge from historical datasets to guide the current clustering task when the data are insufficient or noisy. In addition, KTCAPP provides good privacy protection, because the algorithm uses only the historical class centers and memberships, which do not expose the raw data. Experimental results show that the proposed algorithm is effective.
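The abstract does not give the exact objective, so the following sketch assumes a simple quadratic transfer term, J = Σ u^m ||x - v||² + λ Σ ||v - v_hist||², which yields the center update used below; the memberships follow the standard FCM rule:

```python
import numpy as np

def ktcapp_like_fcm(X, hist_centers, lam=0.5, m=2.0, iters=100):
    """FCM with an assumed quadratic knowledge-transfer term pulling each
    center toward its historical counterpart; only centers/memberships
    from history are used, never the historical raw data."""
    V = hist_centers.copy()                      # warm start from history
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))        # unnormalized memberships
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        # center update from dJ/dv = 0 for the assumed objective
        V = (Um.T @ X + lam * hist_centers) / (Um.sum(0)[:, None] + lam)
    return U, V
```

With lam = 0 this reduces to plain FCM; larger lam trusts the historical centers more, which is the intended behavior when the current data are scarce or noisy.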
2016, 38(3): 532-540.
doi: 10.11999/JEIT150633
Abstract:
The Support Vector Machine (SVM) classification algorithm introduces a penalty factor to handle overfitting and nonlinearity. This is beneficial for seeking the optimal solution, because it allows some samples to be misclassified; however, it also means the misclassified samples are distributed disorderly, which increases the training burden. To solve these problems, following the large-margin classification idea and the principle that intraclass samples should be compact while interclass samples should be well separated, this research proposes a new classification algorithm, the Intraclass-Distance-Sum-Minimization (IDSM) based classification algorithm. The algorithm constructs a training model based on the principle of minimizing the sum of intraclass distances and finds the optimal projection rule analytically. The optimal projection rule is then applied to project the samples so that intraclass intervals become small and interclass intervals become large. A kernel version of the algorithm is also provided to solve high-dimensional classification problems. Experimental results on a large number of UCI datasets and on the Yale face database indicate the superiority of the proposed algorithm.
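A hedged sketch of an analytical projection in the spirit described: minimize the within-class scatter of the projected samples relative to the between-class scatter, solved as a generalized eigenproblem (the paper's exact model and constraint may differ):

```python
import numpy as np
from scipy.linalg import eigh

def idsm_like_projection(X, y, dim=1):
    """Projection minimizing summed intraclass distances relative to
    interclass spread: smallest generalized eigenvectors of (Sw, Sb)."""
    mu = X.mean(axis=0)
    classes = np.unique(y)
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in classes)
    Sb = sum((y == c).sum() * np.outer(X[y == c].mean(0) - mu,
                                       X[y == c].mean(0) - mu)
             for c in classes)
    # Sb is rank-deficient, so regularize it for the generalized eigensolver.
    vals, vecs = eigh(Sw, Sb + 1e-8 * np.eye(len(mu)))
    return vecs[:, :dim]        # columns with the smallest Sw/Sb ratio

# Z = X @ idsm_like_projection(X, y); classify by nearest projected class mean.
```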
2016, 38(3): 541-548.
doi: 10.11999/JEIT150530
Abstract:
An aerial camera takes photographs from the sky. Conventional auto-exposure algorithms are unsuitable for some special scenes, such as those with a bright or dark background and a few strong interference points, and unsuitable exposure causes information loss in the images. To solve this problem, an algorithm based on gray values is proposed. First, it preliminarily regulates the exposure using the histogram information near gray levels 0 and 255. It then locates the regions of no interest in the image through convolution of the histogram, and weights the regions in different proportions according to their areas in the entire image. Finally, the weighted average is computed and used as the feedback signal controlling the exposure. Experimental results indicate that the image entropy of the regions of interest increases by more than 10% against dark or bright backgrounds. The method is suitable for many scenes and meets the demands of aerial cameras.
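A minimal sketch of one feedback step, assuming illustrative tail thresholds and region weights (the paper's exact histogram convolution and weighting are not reproduced):

```python
import numpy as np

def exposure_feedback(gray, current_exposure, target=118.0):
    """One step of histogram-weighted auto-exposure control for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    # Preliminary regulation: heavy mass near 0 or 255 signals gross
    # under-/over-exposure and triggers a coarse correction first.
    if hist[:8].sum() / total > 0.25:
        return current_exposure * 1.5
    if hist[248:].sum() / total > 0.25:
        return current_exposure / 1.5
    # Down-weight gray levels occupying a large share of the frame,
    # treated here as regions of no interest (e.g., uniform background).
    weights = 1.0 / (1.0 + 50.0 * hist / total)
    levels = np.arange(256)
    measured = np.sum(levels * hist * weights) / np.sum(hist * weights)
    return current_exposure * target / measured   # weighted mean drives feedback
```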
2016, 38(3): 549-556.
doi: 10.11999/JEIT150410
Abstract:
The traditional Bag-Of-Words (BOW) model easily confuses different action classes because it lacks information about the distribution of features, and the vocabulary size of the BOW model strongly affects the recognition rate. To reflect the distribution of interest points, the positional relationships among interest points in a local spatio-temporal region are computed as distribution-consistency features, and appearance features are fused with them to build an enhanced BOW model. SVM is adopted for multi-class recognition. Experiments are carried out on the KTH dataset for single-person action recognition and on the UT-Interaction dataset for multi-person abnormal action recognition. Compared with the traditional BOW model, the enhanced BOW algorithm not only greatly improves the recognition rate but also reduces the influence of the vocabulary size on the recognition rate. The experimental results demonstrate the validity and good performance of the proposed algorithm.
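For reference, the baseline BOW pipeline that the paper enhances looks roughly like the following scikit-learn sketch (the distribution-consistency features would be concatenated to these histograms; descriptor extraction is assumed done elsewhere):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(all_descriptors, k=400):
    """Cluster local spatio-temporal descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(all_descriptors)

def bow_histogram(clip_descriptors, vocab):
    """Normalized word-frequency histogram for one video clip."""
    words = vocab.predict(clip_descriptors)
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / h.sum()

# vocab = build_vocabulary(np.vstack(train_clip_descriptors))
# X = np.array([bow_histogram(d, vocab) for d in train_clip_descriptors])
# clf = SVC(kernel="rbf").fit(X, y)   # multi-class handled one-vs-one internally
```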
2016, 38(3): 557-564.
doi: 10.11999/JEIT150693
Abstract:
To deal with the inconsistency between the training process and the decision process in the Generalized Eigenvalue Proximal Support Vector Machine (GEPSVM), an improved eigenvalue proximal support vector machine, called IGEPSVM for short, is proposed. First, IGEPSVM is formulated for the binary classification problem, and then Multi-IGEPSVM is presented for multi-class classification problems based on the one-versus-rest strategy. The main contributions of this paper are as follows. The generalized eigenvalue decomposition problems are replaced by standard eigenvalue decomposition problems, leading to simpler optimization problems. An extra parameter is introduced, which can adjust the performance of the model and improve the classification accuracy over GEPSVM. A corresponding multi-class classification algorithm, not studied for GEPSVM, is proposed. Experimental results on several datasets illustrate that IGEPSVM is superior to GEPSVM in both classification accuracy and training speed.
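A minimal sketch of one plausible reading of the standard-eigendecomposition formulation: the proximal plane for class A minimizes ||Aw + eb||² - ν||Bw + eb||² under a unit-norm constraint, with ν playing the role of the extra parameter (the paper's exact objective may differ):

```python
import numpy as np
from scipy.linalg import eigh

def igepsvm_like_plane(A, B, nu=0.5):
    """Proximal plane (w, b) for class A as the smallest eigenvector of
    G - nu*H, a standard symmetric eigenproblem (no generalized solve)."""
    Ha = np.hstack([A, np.ones((A.shape[0], 1))])   # augmented class A
    Hb = np.hstack([B, np.ones((B.shape[0], 1))])   # augmented class B
    G, H = Ha.T @ Ha, Hb.T @ Hb
    vals, vecs = eigh(G - nu * H)
    z = vecs[:, 0]                                  # smallest eigenvalue
    return z[:-1], z[-1]

def classify(x, planes):
    """Assign x to the class whose proximal plane is nearest."""
    d = [abs(w @ x + b) / np.linalg.norm(w) for w, b in planes]
    return int(np.argmin(d))
```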
2016, 38(3): 565-570.
doi: 10.11999/JEIT150686
Abstract:
In practical applications, the adaptive weight of a sidelobe canceller cannot be updated frequently. Spatially nonstationary interference therefore leads to a mismatch between the weight and the snapshots, which seriously degrades the cancellation performance of the sidelobe canceller. A null-broadening algorithm is proposed for the sidelobe canceller, which tapers the conventional beamformer output and the covariance matrix of the auxiliary elements simultaneously. The taper vector and matrix depend only on the element locations and the null width, meaning that they can be designed offline, which makes them quite suitable for practical use. Simulation results verify the effectiveness of the proposed method.
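A sketch of the classic Mailloux-Zatman covariance matrix taper, one common way to broaden adaptive nulls; the paper additionally tapers the conventional beamformer output, which is left as a comment since its exact form is not given in the abstract:

```python
import numpy as np

def mz_taper(n_elem, width):
    """Covariance Matrix Taper broadening nulls by `width` (sine space)
    for a half-wavelength ULA; np.sinc is the normalized sinc."""
    idx = np.arange(n_elem)
    return np.sinc(width * (idx[:, None] - idx[None, :]))

def canceller_weights(R_aux, p, width):
    """Sidelobe-canceller weights with a tapered auxiliary covariance.
    R_aux: auxiliary-channel covariance; p: main/auxiliary cross-correlation.
    (The paper also applies a matching taper vector to the main-channel
    term; that step is omitted here.)"""
    R_t = R_aux * mz_taper(R_aux.shape[0], width)   # Hadamard product
    return np.linalg.solve(R_t, p)
```

Because the taper depends only on element spacing and the desired null width, it can indeed be precomputed offline and reused as the abstract notes.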
2016, 38(3): 571-577.
doi: 10.11999/JEIT150705
Abstract:
The target appearance model is crucial for tracking. In this paper, a Real-time SuperPixel-based Tracking (RSPT) method is proposed within a tracking-by-detection framework by exploiting superpixels, a mid-level vision cue. First, a discriminative appearance model is constructed from superpixel features with K-Nearest Neighbor (KNN) learning. The tracking problem is then posed as computing a confidence map and detecting the best target location by maximizing an object-location likelihood function, where the integral-image data structure is adopted for fast detection. Implemented in MATLAB without code optimization, the proposed tracker runs at 19 frames per second on an i5 laptop. Extensive experimental results on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of accuracy and robustness.
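The integral-image trick used for the fast-detection step is standard; a minimal NumPy sketch of scoring every candidate window of a confidence map in O(1) per window:

```python
import numpy as np

def box_sums(conf, h, w):
    """Sum of every h-by-w window of a confidence map via an integral image,
    so each candidate target location costs O(1) instead of O(h*w)."""
    ii = np.pad(conf, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

# scores = box_sums(conf, th, tw)
# best = np.unravel_index(np.argmax(scores), scores.shape)  # top-left corner
```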
2016, 38(3): 578-585.
doi: 10.11999/JEIT150610
Abstract:
Because existing sparse-representation image quality assessment models are based on gray images and lack color information, a full-reference color image quality assessment method based on Non-negative Matrix Factorization (NMF) is proposed. First, training samples are obtained by random sampling from natural color images, and NMF is used to train a feature basis matrix; after Gram-Schmidt orthogonalization, a feature extraction matrix is obtained. Second, according to a visual saliency model, the maximum visual saliency is defined and a two-step significant-difference procedure is used to select visually important areas. Finally, low-dimensional feature vectors are extracted with the feature extraction matrix, and the final color image quality score is computed. Experimental results show that the proposed method performs well on the LIVE, CSIQ, and TID2008 image databases; averaged over the three databases, it outperforms the other methods, indicating better correlation with subjective perception.
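A sketch of the basis-learning and orthogonalization steps, assuming patches are flattened non-negative RGB vectors; QR factorization plays the Gram-Schmidt role here, and the saliency-based area selection is out of scope:

```python
import numpy as np
from sklearn.decomposition import NMF

def feature_extractor(patches, k=8):
    """Learn an NMF basis from sampled color patches and orthogonalize it
    into a feature-extraction matrix. patches: (n_samples, patch_dim) >= 0."""
    model = NMF(n_components=k, init="nndsvda", max_iter=500)
    model.fit(patches)
    basis = model.components_          # (k, patch_dim) non-negative basis
    Q, _ = np.linalg.qr(basis.T)       # orthonormal columns, same span
    return Q.T                         # rows extract k features per patch

# F = feature_extractor(train_patches)
# feats_ref, feats_dist = F @ ref_patch, F @ dist_patch   # compare these
```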
2016, 38(3): 586-593.
doi: 10.11999/JEIT150778
Abstract:
A method for voiced/unvoiced classification and pitch estimation based on the Pitch Estimation Filter with Amplitude Compression (PEFAC) is proposed in this paper. The method first attenuates strong noise components at low frequencies based on PEFAC and extracts the pitch harmonics from noisy speech in the log-frequency domain. Then, the harmonic number associated with each pitch harmonic is determined by a Symmetric average magnitude sum function weighted Impulse-train Matching (SIM) scheme in the time domain. A pitch tracking scheme using dynamic programming is applied to select among the pitch candidates, and a voiced-speech probability is computed from the likelihood ratio of Gaussian Mixture Model (GMM) classifiers based on a 3-element feature vector. Simulation results show that the proposed method efficiently reduces voiced/unvoiced classification and pitch estimation errors, and that it is superior to some state-of-the-art methods in real environments.
2016, 38(3): 594-599.
doi: 10.11999/JEIT150745
Abstract:
In RFID systems, the tag anti-collision algorithm is critical for fast tag identification, especially in mobile scenarios. A Group Strategy for Remaining tags Algorithm (GSRA) is proposed. It is divided into two phases, the identification of remaining tags and the identification of newly arriving tags; the grouped information of the staying tags is stored and updated so as to improve their identification efficiency. Theoretical analysis shows that the system efficiency depends only on the migration rate and the static system efficiency, and is unrelated to the number of tags. Simulation results demonstrate that the system efficiency of the GSRA algorithm reaches 240% of that of the Collision Tree (CT) algorithm at a 20% tag migration rate.
2016, 38(3): 600-606.
doi: 10.11999/JEIT150550
Abstract:
Countering active deception jamming repeated by a Digital Radio Frequency Memory (DRFM) has been a challenging problem. Exploiting the characteristics that Range Gate Pull-Off (RGPO) jamming has a center frequency slightly different from that of the input radar signal, along with evenly spaced harmonics, a novel anti-jamming approach is proposed based on Singular Spectrum Analysis (SSA). First, the difference in singular-value energy distribution between the jamming harmonics and the target echo, obtained through SSA decomposition, is extracted for jamming detection. Then, according to the center-frequency difference, a suitable subspace of singular values is selected for recovering the target echo, so that the jamming is mitigated simultaneously. The proposed approach does not need to estimate noise parameters and has the valuable property of a Constant False-Alarm Rate (CFAR) in the jamming detection stage. The validity of the proposed method is evaluated on experimental data via Monte Carlo simulation.
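The SSA decompose/reconstruct machinery itself is standard; a minimal NumPy sketch (the paper's detection statistic and subspace selection rule are not reproduced):

```python
import numpy as np

def ssa_decompose(x, L):
    """Build the L-by-K trajectory (Hankel) matrix of series x and SVD it."""
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    return np.linalg.svd(X, full_matrices=False)   # U, s, Vt

def ssa_reconstruct(U, s, Vt, idx, N):
    """Rebuild the series from a chosen subset of singular triples by
    anti-diagonal averaging of the rank-reduced trajectory matrix."""
    X = (U[:, idx] * s[idx]) @ Vt[idx, :]
    L, K = X.shape
    y, cnt = np.zeros(N), np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += X[i, j]
            cnt[i + j] += 1
    return y / cnt
```

Detection can then compare the singular-value energy concentration, e.g. s[:r]**2 relative to s**2 overall, against a threshold; reconstruction with the echo-associated triples suppresses the jamming components.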
2016, 38(3): 607-612.
doi: 10.11999/JEIT150575
Abstract:
Existing Gapped-data Amplitude and Phase EStimation (GAPES) approaches to SAR imaging with missing data ignore range migration and phase errors, which degrades the imaging quality. An imaging method for gapped-data spotlight-mode SAR based on GAPES is presented. It corrects the range migration using two-dimensional interpolation, realizes autofocus from the sparse data via a sparse projection approximation subspace tracking algorithm, and ensures the resolution of the image. Simulations and real-data processing results show the validity of the proposed approach.
2016, 38(3): 613-621.
doi: 10.11999/JEIT150782
Abstract:
As a new and special bistatic SAR imaging mode, Missile-borne Bistatic Forward-Looking SAR (MBFL-SAR) can perform Two-Dimensional (2D) imaging during the terminal dive of a missile. However, the double square root and high-order terms in the range history make it difficult to obtain the 2D frequency spectrum effectively, and the changing heights and different velocities of the transmitter and receiver give rise to spatial variance in the echo signal phase. This paper presents a phase space-variance correction method for MBFL-SAR based on a revised equivalent range equation. In this method, the range equation containing the double square root and high-order terms is approximated by an equivalent one containing only a single square root, from which a high-precision 2D frequency spectrum is obtained using the principle of stationary phase. The space-variant phase terms of the 2D frequency spectrum are then compensated accurately through high-order polynomial fitting, followed by focusing of the imaging scene. The method achieves high-precision imaging and is more efficient than the traditional algorithm. Finally, simulation experiments validate the effectiveness of the proposed algorithm.
2016, 38(3): 622-628.
doi: 10.11999/JEIT150555
Abstract:
The issue of Direction-Of-Arrival (DOA) estimation in a low-angle tracking environment for Very High Frequency (VHF) MIMO radar is investigated in this paper. An algorithm for estimating the target elevation is proposed for the case where the height of the reflecting surface is unknown. The algorithm is based on the criterion of maximizing the correlation coefficient between the target echo and the atoms of a given dictionary, where the dictionary grid is iteratively refined to estimate the parameters more precisely. Simulation results indicate the effectiveness of the algorithm.
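The match-and-refine loop can be illustrated with a simple ULA steering dictionary; the paper's composite multipath atoms (direct plus surface-reflected paths) are not reproduced in this sketch:

```python
import numpy as np

def steering(theta_deg, n_elem, d_over_lambda=0.5):
    """ULA steering atoms for a grid of elevation angles (degrees)."""
    n = np.arange(n_elem)
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(2j * np.pi * d_over_lambda *
                  np.outer(n, np.sin(theta)))        # (n_elem, n_grid)

def refine_doa(y, n_elem, lo=-10.0, hi=10.0, iters=5, grid=64):
    """Coarse-to-fine search for the atom most correlated with echo y."""
    for _ in range(iters):
        angles = np.linspace(lo, hi, grid)
        A = steering(angles, n_elem)
        A /= np.linalg.norm(A, axis=0)               # unit-norm atoms
        k = int(np.argmax(np.abs(A.conj().T @ y)))   # correlation coefficient
        step = (hi - lo) / (grid - 1)
        lo, hi = angles[k] - step, angles[k] + step  # shrink grid around peak
    return angles[k]
```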
2016, 38(3): 629-634.
doi: 10.11999/JEIT150539
Abstract:
The MUltiple SIgnal Classification (MUSIC) algorithm is one of the most important techniques for Direction-Of-Arrival (DOA) estimation. However, the method is expensive in practical applications because of the heavy computational cost of its spectral search. To reduce the complexity, a novel efficient estimator based on the Subspace Rotation Technique (SRT) is proposed. The key idea is to divide the noise-subspace matrix along its row direction into two sub-matrices and to apply SRT to obtain a new, rotated sub-noise subspace of reduced dimension. As this rotated sub-noise subspace is also orthogonal to the signal subspace, a new cost function is derived to estimate the DOAs. Theoretical analysis indicates that, compared with MUSIC, the proposed method efficiently avoids redundant computations in the spectral search, especially when a large number of sensors is used to locate a small number of signals. Simulation results verify the effectiveness and efficiency of the new technique.
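For context, the baseline MUSIC search that the SRT estimator accelerates is sketched below; the proposed method would replace Un with the lower-dimensional rotated sub-noise subspace in the same cost function:

```python
import numpy as np

def music_spectrum(R, n_src, grid_deg, d_over_lambda=0.5):
    """Classic MUSIC pseudo-spectrum from a sample covariance R (n x n)."""
    n_elem = R.shape[0]
    eigval, eigvec = np.linalg.eigh(R)        # eigenvalues in ascending order
    Un = eigvec[:, :n_elem - n_src]           # noise subspace
    G = Un @ Un.conj().T                      # projector onto noise subspace
    n = np.arange(n_elem)
    P = np.empty(len(grid_deg))
    for i, th in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(2j * np.pi * d_over_lambda * n * np.sin(th))
        P[i] = 1.0 / np.real(a.conj() @ G @ a)    # peaks at the DOAs
    return P
```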
2016, 38(3): 635-642.
doi: 10.11999/JEIT150659
Abstract:
In multichannel High-Resolution Wide-Swath (HRWS) SAR, achieving high-resolution wide-coverage mapping means the echoes of each channel suffer Doppler ambiguity, so current clutter suppression approaches may not perform well. To solve this problem, this paper proposes a novel clutter suppression approach together with the corresponding double-threshold CFAR detection scheme. First, inspired by Digital Beam Forming (DBF)-based resolution of Doppler ambiguity, adaptive DBF is applied to clutter suppression. The remaining problems are then analyzed, and an improved approach is given that reduces the Degrees Of Freedom (DOFs) and settles the trade-off between computational load and estimation accuracy. Finally, the effectiveness of the proposed approach is demonstrated by simulation results.
2016, 38(3): 643-648.
doi: 10.11999/JEIT150648
Abstract:
Sum-rate maximization is often used as the objective of linear Interference Alignment (IA). However, the sum-rate function is non-convex and hard to optimize directly. The problem is commonly solved through the relationship between mean square error and sum rate, known as the Weighted Minimum Mean Square Error (WMMSE) approach, which relies on knowledge of the channel state information. In real systems, channel estimation errors may cause a significant drop in sum-rate performance. This paper proposes an improved algorithm that accounts for the statistical characteristics of the channel estimation error. Simulation results show that, compared with the usual WMMSE method, the proposed algorithm is robust to channel estimation errors and improves the sum rate efficiently.
2016, 38(3): 649-654.
doi: 10.11999/JEIT150681
Abstract:
In the fifth-generation (5G) mobile communication system, massive MIMO antennas and ultra-dense network deployment are two ways to achieve high throughput. To solve the mobility management problems in ultra-dense clustered networks, this paper presents a handover management algorithm that adjusts the hysteresis margin according to the movement of the terminal equipment. In this algorithm, after the small base stations are clustered, the handover is divided into a pre-handover and an official handover. The pre-handover helps select the best target cell and completes resource reservation and pre-authentication. During the official handover, the hysteresis margin of the handover threshold is adjusted according to the speed of the device. Simulation results show that the algorithm effectively reduces the handover delay and the probability of handover failure.
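A minimal sketch of the speed-adapted trigger condition; the linear speed-to-margin mapping and its constants are hypothetical, not the paper's values:

```python
def hysteresis_db(speed_mps, base_db=3.0, slope_db=0.04, floor_db=0.5):
    """Speed-adapted hysteresis margin (hypothetical mapping: faster
    terminals get a smaller margin so the handover triggers earlier)."""
    return max(floor_db, base_db - slope_db * speed_mps)

def should_handover(rsrp_serving_dbm, rsrp_target_dbm, speed_mps):
    """A3-style event: the target cell must beat the serving cell by the
    speed-adapted margin before the official handover starts."""
    return rsrp_target_dbm > rsrp_serving_dbm + hysteresis_db(speed_mps)
```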
An Optimized Inter-prediction Algorithm for High Efficiency Video Coding Based on Texture Similarity
2016, 38(3): 655-660.
doi: 10.11999/JEIT150672
Abstract:
In this paper, an optimized inter-prediction algorithm is proposed for High Efficiency Video Coding (HEVC) based on video texture similarity. As video resolution increases, spatial statistical redundancy increases as well; HEVC improves the coding efficiency by using large coding units, but this significantly increases the coding complexity. In a video sequence, flat areas and texture areas are highly correlated between adjacent frames. In this paper, previously encoded depth and split-depth information is used to quickly determine the depth of the coding unit currently being encoded. For flat areas, the algorithm predicts the maximum depth, skipping the mode decision for smaller coding units; for texture areas, it quickly determines the minimum depth, skipping the mode decision for larger coding units. Experimental results show that the proposed algorithm reduces the coding complexity by about 50% compared with the original HEVC algorithm, while the average PSNR decreases by only 0.09 dB and the average bit rate increases by about 0.13%.
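A hypothetical rule distilled from the abstract, only to make the depth-range idea concrete (the paper's actual decision logic is richer):

```python
def cu_depth_range(colocated_depth, is_flat_area):
    """Restrict the CU depth search using the co-located CU's depth.
    HEVC CU depths run 0 (64x64 CU) to 3 (8x8 CU). Flat areas cap the
    maximum depth (skip small-CU mode decisions); textured areas raise
    the minimum depth (skip large-CU mode decisions)."""
    if is_flat_area:
        return 0, min(colocated_depth, 3)
    return min(colocated_depth, 3), 3

# min_d, max_d = cu_depth_range(prev_frame_depth[ctu], flat_map[ctu])
# the encoder then evaluates RD cost only for depths in [min_d, max_d]
```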
2016, 38(3): 661-667.
doi: 10.11999/JEIT150562
Abstract:
In entropy coding systems based on context modeling, the context dilution introduced by high-order context models must be alleviated by context quantization to achieve the desired compression gain. This paper therefore proposes an algorithm that performs Context Quantization by Minimizing the Description Length (MDLCQ). With the description length as the evaluation criterion, the Context Quantization Of a Single Condition (CQOSC) is obtained by a dynamic programming algorithm, and the context quantizer for multiple conditions is then designed by applying CQOSC iteratively. The algorithm can not only design an optimized context quantizer for multi-valued sources, but also adaptively determine the importance of each condition so as to design the best model order. Experimental results show that the context quantizer designed by the MDLCQ algorithm clearly improves the compression performance of the entropy coding system.
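A sketch of the single-condition dynamic program, under two simplifying assumptions: contexts are ordered and quantizer cells are contiguous, and the description length of a cell is approximated by the empirical entropy of its pooled counts (the paper's full MDL criterion also charges for model cost):

```python
import numpy as np

def code_length(counts):
    """Approximate code length (bits) of a pooled symbol-count vector."""
    n = counts.sum()
    if n == 0:
        return 0.0
    p = counts[counts > 0] / n
    return -n * np.sum(p * np.log2(p))

def quantize_contexts(counts, n_cells):
    """DP partition of ordered contexts into n_cells contiguous groups
    minimizing total description length. counts: (n_ctx, n_symbols)."""
    n_ctx = counts.shape[0]
    cost = [[code_length(counts[i:j].sum(axis=0)) for j in range(n_ctx + 1)]
            for i in range(n_ctx)]           # cost of pooling contexts i..j-1
    INF = float("inf")
    dp = [[INF] * (n_ctx + 1) for _ in range(n_cells + 1)]
    back = [[0] * (n_ctx + 1) for _ in range(n_cells + 1)]
    dp[0][0] = 0.0
    for m in range(1, n_cells + 1):
        for j in range(1, n_ctx + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + cost[i][j]
                if c < dp[m][j]:
                    dp[m][j], back[m][j] = c, i
    bounds, j = [], n_ctx                    # recover cell boundaries
    for m in range(n_cells, 0, -1):
        bounds.append(j)
        j = back[m][j]
    return sorted(bounds), dp[n_cells][n_ctx]
```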
2016, 38(3): 668-673.
doi: 10.11999/JEIT150739
Abstract:
Blind recognition of Space-Time Block Codes (STBC) is an important new task in cognitive radio systems. Most previous work requires multiple receive antennas; however, in many practical applications, size and power constraints on the receiver favor a single-receive-antenna solution. To solve this problem, an algorithm for blind classification of STBC is proposed. Exploiting the correlation of the symbols within an STBC block, fourth-order statistics are used as features, and the Euclidean distance between statistics is used to classify the different STBCs. The algorithm does not require estimation of the channel, of the signal-to-noise ratio, or of the modulation of the transmitted signals. Monte Carlo simulations show the validity of the algorithm and its low sensitivity to phase noise and Doppler shift.
2016, 38(3): 674-680.
doi: 10.11999/JEIT150747
Abstract:
To recognize the major modulation schemes used in contemporary communication systems, a joint method based on higher-order cumulants and the cyclic spectrum, with a neural network as the intelligent decision algorithm, is proposed for recognizing digital modulation schemes. First, a new feature parameter is extracted from the fourth-order and sixth-order cumulants of the digital signals to separate {BPSK, 2ASK}, {QPSK}, {2FSK, 4FSK}, {MSK}, and {16QAM, 64QAM}; then {OFDM}, {16QAM, 64QAM}, {2ASK, BPSK}, and {2FSK, 4FSK} are classified by further feature parameters from the joint higher-order-cumulant and cyclic-spectrum algorithms. To facilitate engineering implementation, semi-physical simulation and mixed LabVIEW/MATLAB programming are used to validate the proposed algorithms. Simulation results show that the algorithms can recognize the modulations {OFDM, BPSK, QPSK, 2ASK, 2FSK, 4FSK, MSK, 16QAM, 64QAM} at low Signal-to-Noise Ratio (SNR), with an average recognition rate above 94% for SNR of 5 dB or more, which validates the effectiveness of the proposed algorithms.
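The fourth-order cumulants behind such features have standard estimators; a minimal sketch (the paper's specific combined feature parameter and the sixth-order/cyclic-spectrum features are not reproduced):

```python
import numpy as np

def cumulant_features(x):
    """Fourth-order cumulant features of a zero-mean complex baseband
    signal, power-normalized as is common in modulation recognition."""
    x = x - x.mean()
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)          # signal power
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return np.abs(c40) / m21 ** 2, np.abs(c42) / m21 ** 2
```

Distinct constellations (e.g., BPSK vs. QPSK vs. QAM) have distinct theoretical values of these normalized cumulants, which is what makes them usable as decision features for a neural network.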
2016, 38(3): 681-687.
doi: 10.11999/JEIT150660
Abstract:
This paper proposes a novel chaotic communication scheme named Correlation Delay Shift Keying with No Intrasignal Interference (CDSK-NII). By using a repeated chaotic sequence as the reference signal and taking advantage of a zero-sum sequence to keep the reference signal strictly orthogonal to the information-bearing signal, CDSK-NII eliminates intrasignal interference during demodulation. The Bit Error Ratio (BER) of CDSK-NII is analyzed over the AWGN channel and the Rayleigh fading channel. Experimental results show that, thanks to the absence of intrasignal interference, the BER of CDSK-NII is lower than that of CDSK and of Generalized CDSK (GCDSK); as the multiframe length increases, the performance of CDSK-NII improves further, and its BER becomes lower than that of Reference-Adaptive CDSK (RA-CDSK).
2016, 38(3): 688-693.
doi: 10.11999/JEIT150720
Abstract:
An improved weighted bit-flipping decoding algorithm for LDPC codes is presented. The proposed algorithm introduces an updating rule for the variable nodes that efficiently improves the reliability of the flipped bits and reduces the errors caused by loop oscillation. Simulation results show that, over the additive white Gaussian noise channel, the proposed algorithm achieves better BER performance than the Sum-of-Magnitude based Weighted Bit-Flipping (SMWBF) decoding algorithm, with only a small increase in computational complexity.
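For context, the baseline weighted bit-flipping loop that such improvements build on is sketched below; the paper's new variable-node updating rule is not reproduced:

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Basic Weighted Bit-Flipping decoding for BPSK over AWGN
    (bit 0 -> +1, bit 1 -> -1). H: (m, n) binary parity-check matrix,
    y: received soft values of length n."""
    z = (y < 0).astype(int)                          # hard decisions
    # reliability of each check = smallest |y| among its variables
    w = np.array([np.min(np.abs(y[H[m] == 1])) for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H @ z % 2                                # syndrome
        if not s.any():
            break                                    # valid codeword found
        # flipping metric: failed checks vote +w, satisfied checks vote -w
        E = ((2 * s - 1) * w) @ H
        z[np.argmax(E)] ^= 1                         # flip least reliable bit
    return z
```

Flipping one bit per iteration on a fixed metric is what makes the baseline prone to oscillation on short cycles, which is the failure mode the improved updating rule targets.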
2016, 38(3): 694-699.
doi: 10.11999/JEIT150825
Abstract:
A novel Symbol-Variance Feedback Equalizer (SVEF) algorithm is proposed to reduce the computational complexity of the equalizer in Turbo equalization. The algorithm is derived from a Taylor expansion of the Linear Minimum Mean Squared Error (LMMSE) estimation function. In the proposed scheme, initial estimates are obtained from a time-invariant equalizer; the estimates are then weighted by the a priori symbol variances and finally filtered by a time-invariant filter to obtain better estimates. As the time-variant a priori symbol variances are exploited, the performance of the proposed equalizer is much closer to that of the exact MMSE linear equalizer. Simulation results show that, compared with the conventional time-invariant MMSE Turbo equalization, the Signal-to-Noise Ratio (SNR) loss of the proposed scheme on the Proakis C channel is reduced from 0.83 dB to 0.17 dB, and its computational complexity can be reduced to logarithmic order by an implementation based on the fast Fourier transform.
2016, 38(3): 700-706.
doi: 10.11999/JEIT150576
Abstract:
In monitoring and controlling power distribution networks, condition monitoring and fault tolerance for towers and other facilities have become urgent problems in power systems. Because of limitations such as its linear topology, the existing monitoring system cannot maintain timely transmission of distributed measurements when a fault occurs, which may result in serious power system accidents affecting electric power production. Against the background of using sensors to monitor overhead transmission lines, a fault tolerance mechanism for sensor deployment is proposed. First, following the N-x principle, the number of backup nodes and cellular-enabled modules is minimized to achieve cost minimization. Second, the counting constraint of the N-x principle and the delay constraint are integrated into a mathematical optimization model. Based on this model and a clustering algorithm, a fault tolerance mechanism is built for sensors monitoring overhead transmission lines in the smart grid. Finally, simulation experiments show that a sensor monitoring network deployed with this mechanism can tolerate faults effectively at minimized cost.
2016, 38(3): 707-712.
doi: 10.11999/JEIT150754
Abstract:
Bidirectional Label Switched Paths (LSPs) are an important part of the Multi-Protocol Label Switching-Transport Profile (MPLS-TP) networking technology. However, existing algorithms for establishing bidirectional LSPs suffer from redundancy in operations, control overhead, and waiting time of data packets. To address this problem, a novel algorithm based on single trips of control packets, the Efficient Algorithm for Establishing Bidirectional LSPs (EAEBL), is proposed in this article. While still completing the establishment of bidirectional LSPs, EAEBL needs to transfer each control packet over only a single trip, so the redundant operations and control overhead are reduced and the conveyance of data packets is accelerated. Theoretical analysis verifies the effectiveness of EAEBL. Simulation results show that, compared with three existing algorithms, EAEBL reduces the control overhead and the delay for establishing bidirectional LSPs by at least 14.7% and 50%, respectively; moreover, the waiting time of data packets at the source of the LSPs decreases to nearly zero.
2016, 38(3): 713-719.
doi: 10.11999/JEIT150280
Abstract:
Data in Wireless Sensor Networks (WSN) exhibit correlation and redundancy, and how to effectively reduce the amount of communicated data and extend the network lifetime is one of the research hotspots. A Two-Step data Compression algorithm based on Sequence Correlation (TSC-SC) for WSN is proposed in this paper, in which the cluster heads and the in-cluster nodes run different compression algorithms. To eliminate the spatial correlation of the data and reduce the computational load, the cluster-head nodes first perform a grouping algorithm; the in-cluster nodes then perform classified compression to remove the correlation in the multi-attribute data and pass the compression parameters to the cluster head, which performs classified compression again after decompressing the parameters, further reducing data redundancy and communication energy consumption. A new evaluation model named the Network Compression Energy Ratio (NCER), based on energy discrimination, is also proposed; it evaluates compression algorithms comprehensively by considering both the basic compression requirements and the computational energy consumed in the nodes. Simulation results show that the TSC-SC algorithm effectively reduces the compression ratio and the compression error, that the amount of communicated data and the energy consumption reach a satisfactory level in the network, and that the algorithm can be assessed directly with NCER.
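The concrete compression operators of TSC-SC are not specified in the abstract, so the Python sketch below only mirrors its two-step division of labor: in-cluster nodes exploit sequence (temporal) correlation and forward compact parameters, and the cluster head compresses again across nodes to remove spatial correlation. The delta-threshold coding and grouping rule here are placeholders, not the paper's classifying compression.

def node_compress(readings, eps=0.1):
    """Step 1 (in-cluster node): keep the first value and only those
    deltas that exceed eps, exploiting sequence correlation."""
    base, deltas = readings[0], []
    last = readings[0]
    for i, v in enumerate(readings[1:], 1):
        if abs(v - last) > eps:
            deltas.append((i, v - last))
            last = v
    return base, deltas  # compact "compression parameters"

def head_compress(node_params, eps=0.1):
    """Step 2 (cluster head): nodes with near-equal base values are
    grouped, so spatially correlated nodes share one representative."""
    groups = []
    for base, deltas in sorted(node_params, key=lambda p: p[0]):
        if groups and abs(base - groups[-1][0]) <= eps:
            groups[-1][1].append(deltas)
        else:
            groups.append((base, [deltas]))
    return groups

params = [node_compress([20.0, 20.02, 20.5, 20.5]),
          node_compress([20.05, 20.05, 20.6, 20.6])]
print(head_compress(params))   # both nodes fall into one spatial group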
2016, 38(3): 720-727.
doi: 10.11999/JEIT150664
Abstract:
Construction cost and lifetime are two core problems when building barrier coverage in a Wireless Sensor Network (WSN). For the former, the number of nodes used and the amount of information transferred are the main concerns; for the latter, a network shutdown caused merely by the death of a few specific nodes should be avoided. This paper proposes the Distributed Barrier Coverage Algorithm (DBCA), which constructs distributed 1-barrier coverage using k-hop clustering and path planning. Theoretical analysis and simulation results show that the algorithm effectively reduces the number of nodes used and the amount of information transferred. When 700 nodes are deployed, it outperforms the Optimal Node Selection Algorithm (ONSA) and the Localized Barrier Coverage Protocol (LBCP), reducing the transferred information by 25% and 41.6% and prolonging the lifetime by 44% and 30%, respectively.
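DBCA's exact clustering and path-planning rules are not given in the abstract; the Python sketch below shows only the k-hop clustering ingredient, greedily electing cluster heads so that every node is within k hops of some head, which is the kind of structure the algorithm builds its 1-barrier on. All details are illustrative.

from collections import deque

def k_hop_neighbors(adj, src, k):
    """All nodes reachable from src within k hops (BFS)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

def k_hop_clusters(adj, k):
    """Greedy head election: the uncovered node covering the most
    uncovered k-hop neighbors becomes the next cluster head."""
    uncovered, heads = set(adj), []
    while uncovered:
        head = max(uncovered,
                   key=lambda n: len(k_hop_neighbors(adj, n, k) & uncovered))
        heads.append(head)
        uncovered -= k_hop_neighbors(adj, head, k)
    return heads

# Toy 6-node chain: with k=1, roughly every other node becomes a head.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(k_hop_clusters(chain, k=1))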
2016, 38(3): 728-734.
doi: 10.11999/JEIT150656
Abstract:
Virtual network embedding across multiple domains is studied in a network virtualization environment. A hierarchical virtual resource provisioning architecture with centralized management and distributed control is proposed. On this basis, an effective framework for virtual network embedding across multiple domains is built, and each virtual network request is partitioned with the aim of minimizing the embedding cost. An Optimal Artificial Bee Colony (OABC) algorithm is proposed to solve this problem. Simulation results show that the proposed method outperforms several other methods in average partitioning time, acceptance ratio of virtual network requests, and average extra embedding cost.
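The abstract does not detail OABC's encoding or its improvements over the standard Artificial Bee Colony, so the Python sketch below shows only the baseline ABC loop the algorithm builds on, minimizing a toy stand-in for the embedding cost. The fitness function and all parameters are assumptions.

import random

def cost(x):                      # placeholder for the embedding cost
    return sum(v * v for v in x)

def abc_minimize(dim=4, food=10, limit=5, iters=100, lo=-5.0, hi=5.0):
    srcs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(food)]
    trials = [0] * food

    def search(i):                # neighborhood search around source i
        j = random.randrange(dim)
        k = random.choice([n for n in range(food) if n != i])
        cand = srcs[i][:]
        cand[j] += random.uniform(-1, 1) * (srcs[i][j] - srcs[k][j])
        if cost(cand) < cost(srcs[i]):
            srcs[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    best = min(srcs, key=cost)
    for _ in range(iters):
        for i in range(food):                     # employed-bee phase
            search(i)
        fits = [1.0 / (1.0 + cost(s)) for s in srcs]
        for i in random.choices(range(food), weights=fits, k=food):
            search(i)                             # onlooker-bee phase
        for i in range(food):                     # scout phase
            if trials[i] > limit:                 # abandon stale sources
                srcs[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min([best] + srcs, key=cost)
    return best

print("best cost:", cost(abc_minimize()))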
2016, 38(3): 735-752.
doi: 10.11999/JEIT151356
Abstract:
This paper reviews the history of and open problems in chaotic cipher theory and applications, and reports recent progress on the theoretical design and hardware implementation of ciphers based on high-dimensional chaotic systems, covering basic theory, design methods, typical applications, and ideas for coping with the open problems. On the design of chaotic ciphers and their security evaluation, the following progress is summarized: counteracting the dynamics degradation of digital chaotic systems with anti-control methods; designing degradation-free chaotic systems in the digital domain; proposing a multi-round chaotic stream cipher built from high-dimensional digital chaotic systems with a self-loop feedback mechanism; and evaluating the security of the proposed chaotic ciphers with various methods. On the application and hardware implementation of multimedia secure communication, the following developments are reported: optimizing a cross-platform system for real-time remote chaotic secure communication, targeting the application scenarios of different hand-held and embedded devices such as smartphones, computers, ARM, and FPGA; and establishing a demonstration platform for chaotic secure communication to verify its effectiveness.
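As a review, the paper gives no single algorithm to reproduce, but the keystream idea underlying any chaotic stream cipher can be fixed in a few lines. The Python toy below iterates a one-dimensional logistic map and XORs extracted bytes with the plaintext; it is exactly the kind of low-dimensional, degradation-prone construction the surveyed work argues against, and is shown only to illustrate the principle, not as a secure design.

# Toy chaotic stream cipher: logistic-map keystream XORed with plaintext.
# Deliberately simplistic: the reviewed work advocates HIGH-dimensional,
# degradation-free systems precisely because 1-D maps like this one
# degrade under finite precision and are not secure.

def keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # logistic map iteration
        out.append(int(x * 256) & 0xFF)  # crude byte extraction
    return out

def xor_cipher(data: bytes, x0=0.3571, r=3.99):
    ks = keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"chaotic stream cipher demo"
ct = xor_cipher(msg)
assert xor_cipher(ct) == msg             # same keystream decrypts
print(ct.hex())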
2016, 38(3): 753-757.
doi: 10.11999/JEIT150733
Abstract:
The detection performance of a Passive Radar Network (PRN) is affected by many factors, including the network geometry, the radio propagation environment, system performance, and signal and data processing capability. Optimal deployment of a passive radar network must consider all these aspects, so performance evaluation of the network should be addressed first. Starting from the positioning performance of the passive radar network, a feasible evaluation scheme is first proposed, and the distribution of positioning precision is obtained by Monte-Carlo simulation under a specified network configuration. Then, the experimental scenario and procedure are described, covering the system setup and the illustration and analysis of typical detection results for aircraft. Finally, the positioning accuracy is compared with reference information and simulation results, which validates the proposed scheme for evaluating the positioning performance of a passive radar network.
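The evaluation scheme itself is not spelled out in the abstract; the Python sketch below shows one conventional way to obtain a positioning-precision figure by Monte-Carlo simulation: perturb the site-to-target range measurements, re-solve the position by Gauss-Newton, and report the RMS error. The geometry, noise level, and range-only measurement model are all assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
sites = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])  # km, toy geometry

def solve(ranges, guess):
    """Gauss-Newton fit of a 2-D position to noisy site-target ranges."""
    x = guess.astype(float)
    for _ in range(20):
        d = np.linalg.norm(sites - x, axis=1)
        J = (x - sites) / d[:, None]          # Jacobian of range w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x += dx
    return x

def precision(target, sigma=0.1, trials=500):
    """RMS position error at `target` over Monte-Carlo noise draws."""
    true_r = np.linalg.norm(sites - target, axis=1)
    errs = []
    for _ in range(trials):
        noisy = true_r + rng.normal(0, sigma, size=len(sites))
        est = solve(noisy, target + rng.normal(0, 1, size=2))
        errs.append(np.linalg.norm(est - target))
    return np.sqrt(np.mean(np.square(errs)))

print("RMS position error (km):", precision(np.array([20.0, 30.0])))
# Sweeping `target` over a grid yields the precision distribution.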
2016, 38(3): 758-762.
doi: 10.11999/JEIT150578
Abstract:
The existing Cramér-Rao Bound (CRB) for a near-field single source is based on a uniform linear array, and the characteristics of the CRB and the specific factors that influence it have not been fully analyzed. In this paper, a non-matrix, closed-form expression of the deterministic CRB for the near-field narrowband model with a non-uniform linear array is derived based on the Fisher information matrix, the Schur complement, and the Jacobi transform. The behavior of the bound with respect to several features of interest is discussed, namely the array geometry, the array aperture, the thinning factor, the target signal frequency, and the signal-to-noise ratio. With a reasonable array design, both the direction and the range of the target can be estimated. Monte-Carlo simulation results validate the theoretical analysis and conclusions.
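The paper's contribution is the closed-form, non-matrix CRB expression, which the abstract does not reproduce. As a numerical cross-check of what such a bound measures, the deterministic CRB for a known-waveform near-field source can instead be assembled directly from the Fisher information matrix, as in the Python sketch below; the sensor positions, carrier frequency, and signal model are assumptions.

# Numerical cross-check sketch: deterministic CRB for a near-field source
# observed by a non-uniform linear array, via finite-difference Fisher
# information. The paper's closed form avoids this matrix inversion.
import numpy as np

c, f = 3e8, 1e9                                 # speed, carrier (assumed)
pos = np.array([0.0, 0.12, 0.30, 0.51, 0.60])   # non-uniform sensors (m)

def steer(theta, r):
    """Near-field (spherical-wave) steering vector."""
    d = np.sqrt(r**2 + pos**2 - 2 * r * pos * np.sin(theta))
    return np.exp(-2j * np.pi * f * (d - r) / c)

def crb(theta, r, snr_db=10.0, snapshots=100, h=1e-6):
    # finite-difference derivatives of the steering vector
    da_dth = (steer(theta + h, r) - steer(theta - h, r)) / (2 * h)
    da_dr = (steer(theta, r + h) - steer(theta, r - h)) / (2 * h)
    D = np.column_stack([da_dth, da_dr])
    snr = 10 ** (snr_db / 10)
    fim = 2 * snapshots * snr * np.real(D.conj().T @ D)
    return np.linalg.inv(fim)     # diagonal: CRB(theta), CRB(r)

b = crb(np.deg2rad(20), 5.0)
print("CRB(theta) rad^2:", b[0, 0], " CRB(r) m^2:", b[1, 1])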