2018 Vol. 40, No. 6
2018, 40(6): 1271-1278.
doi: 10.11999/JEIT170877
Abstract:
The distributed swarm operation of unmanned systems, represented by the Unmanned Aerial Vehicle (UAV), is a growth area of future warfare, and swarm situation awareness is an important part of it. Based on a typical situation awareness model, the distributed swarm situation awareness of UAVs is studied. First, at the individual level, the mental model in the dynamic system of Endsley's 1995 Situation Awareness (SA) model is modified into a human-computer intelligent model. Then, according to the UAV swarm type, isomorphic or heterogeneous, and based on team SA and Distributed SA (DSA) theory, UAV swarm SA models are built. Next, the consensus and evaluation problems of UAV swarm SA are analyzed: the consensus formation process of isomorphic UAV swarm SA and the consensus judgment method of heterogeneous UAV swarm SA are given, and the choice of UAV swarm SA evaluation method is discussed. The analysis shows that the proposed swarm situation awareness model corresponds to the characteristics of swarm cooperative combat and has positive significance.
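The consensus formation process for an isomorphic swarm can be illustrated with a minimal distributed-averaging sketch: each UAV repeatedly blends its local situation estimate with those of its neighbors until the swarm agrees. The ring topology, scalar "situation value", and step size are illustrative assumptions, not part of the paper's model.

```python
# Distributed averaging consensus: x_i <- x_i + eps * mean_j (x_j - x_i).
def consensus(estimates, neighbors, rounds=50, eps=0.3):
    x = list(estimates)
    for _ in range(rounds):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i]) / len(neighbors[i])
             for i, xi in enumerate(x)]
    return x

# 4 UAVs on a ring, each starting from a different local estimate;
# with symmetric weights the swarm converges to the global average (2.5).
est = [1.0, 2.0, 3.0, 4.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
out = consensus(est, ring)
```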
2018, 40(6): 1279-1286.
doi: 10.11999/JEIT170883
Abstract:
Assuming that the extension and the measurement number of Extended Targets (ET) are modeled as an ellipse and a Poisson distribution, respectively, a Gaussian Inverse Wishart Probability Hypothesis Density (GIW-PHD) filter can estimate the kinematic and extension states. However, for the number of spatially close targets and the extensions of non-ellipsoidal and occluded targets, the estimates produced by this filter are not accurate enough. To address these problems, an improved GIW-PHD filter is proposed in this paper. First, assuming that the target extension is modeled as a reference ellipse of the same size, a modified Random Matrix (RM) method is obtained by devising a new scatter matrix. Then, by combining the improved RM method with the ET-PHD based on a measurement-number multi-Bernoulli model, the improved GIW-PHD filter is obtained. Simulation and experimental results show that, compared with the traditional GIW-PHD filter, the improved filter obtains more accurate estimates of target number and of the extensions of ellipsoidal and non-ellipsoidal targets with large measurement numbers and extensions.
2018, 40(6): 1287-1293.
doi: 10.11999/JEIT170765
Abstract:
Ordinal regression is a supervised learning problem that resides between classification and regression in machine learning. Many real problems in practice can be modeled as ordinal regression problems because of the ordering information between labels, so ordinal regression has received increasing interest from researchers recently. Extreme Learning Machine (ELM)-based algorithms are easy to train without iterative optimization, can avoid local optima, and reduce training time compared with other learning algorithms. However, ELM-based algorithms applied to ordinal regression have not been exploited much. This paper proposes a new ordered-code-based kernel extreme learning ordinal regression machine to fill this gap, which effectively combines the kernel ELM with error-correcting output codes. The model overcomes the problems of how to obtain high-quality feature mappings in ordinal regression and how to avoid setting the number of hidden nodes manually. To validate the effectiveness of this model, experiments are conducted on numerous datasets. The experimental results show that the model improves accuracy by 10.8% on average compared with traditional ELM-based algorithms and achieves state-of-the-art performance with the least time.
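The ordered-coding idea can be sketched independently of the kernel ELM: a label y in {0..K-1} is encoded as K-1 cumulative bits ("is y > k?"), and a (possibly noisy) predicted bit vector is decoded to the class with the nearest code. The specific code matrix below is one common ordinal choice, assumed here for illustration.

```python
def encode(y, K):
    """Ordered error-correcting output code: bit k answers 'y > k?'."""
    return [1 if y > k else 0 for k in range(K - 1)]

def decode(bits, K):
    """Return the class whose ordered code has minimal Hamming distance."""
    return min(range(K),
               key=lambda c: sum(b != e for b, e in zip(bits, encode(c, K))))

K = 5
codes = [encode(y, K) for y in range(K)]   # [[0,0,0,0], [1,0,0,0], ...]
# A single-bit-flipped version of class 4's code still decodes to 4.
pred = decode([1, 0, 1, 1], K)
```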
2018, 40(6): 1294-1300.
doi: 10.11999/JEIT170956
Abstract:
The visibility of visible-light images is poor under low-light conditions, and if visible and infrared images are fused directly, the quality of the fused image is not ideal. To solve this problem, a modified infrared and visible image fusion approach based on contrast enhancement and multi-scale edge-preserving decomposition is proposed. Firstly, an adaptive enhancement method based on the guided filter is adopted to improve the visibility of dark-region content in the visible image. The input images are then decomposed with a scale-aware edge-preserving filter. Subsequently, saliency maps of the infrared and visible images are calculated on the basis of frequency-tuned filtering. Finally, the fused image is reconstructed with the weighting maps. Experiments show that the proposed scheme not only makes detail information more prominent but also suppresses artifacts effectively.
2018, 40(6): 1301-1308.
doi: 10.11999/JEIT170884
Abstract:
The traditional Local Binary Pattern (LBP) has limited feature discrimination and is sensitive to noise. To alleviate these problems, this paper proposes a method for extracting texture features based on pyramid decomposition and a sectored local mean binary pattern. First, pyramid decomposition is performed on the original image to obtain low-frequency and high-frequency (difference) images at different decomposition levels. To extract robust yet discriminative features, a thresholding technique is further used to transform the high-frequency images into positive and negative high-frequency images. Then, based on local averaging operations, the Sectored Local Mean Binary Pattern (SLMBP) is proposed and used to compute texture feature codes at different decomposition levels. Finally, the texture features are obtained by joint coding across frequency bands and histogram weighting across decomposition levels. Experiments on three publicly available texture databases (Outex, Brodatz, and UIUC) demonstrate that the proposed method effectively improves the classification accuracy of texture images both in noise-free conditions and in the presence of different levels of Gaussian noise.
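The local-averaging idea behind SLMBP can be sketched as follows: instead of comparing each single neighbor to the center pixel as in classical LBP, each of the 8 neighbors is replaced by the mean of a small sector around it before thresholding, which damps pixel-level noise. The 3x3 sector window is an assumption for illustration, not the paper's exact sector geometry.

```python
def local_mean(img, r, c, size=1):
    """Mean over a (2*size+1)^2 window clipped to the image borders."""
    vals = [img[i][j]
            for i in range(max(0, r - size), min(len(img), r + size + 1))
            for j in range(max(0, c - size), min(len(img[0]), c + size + 1))]
    return sum(vals) / len(vals)

def slmbp_code(img, r, c):
    """8-bit code: bit k is set if the k-th neighbor's sector mean >= center."""
    center = img[r][c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for k, (dr, dc) in enumerate(offs):
        if local_mean(img, r + dr, c + dc) >= center:
            code |= 1 << k
    return code
```

On a constant image every sector mean equals the center, so all 8 bits are set and the code is 255.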
2018, 40(6): 1309-1315.
doi: 10.11999/JEIT170789
Abstract:
Since dynamic background may be erroneously detected as a moving object by the Robust Principal Component Analysis (RPCA) algorithm, an RPCA-based moving object detection optimization algorithm is proposed. After detection by the RPCA algorithm, the moving object is separated from the dynamic background according to the Gaussian distribution of the dynamic background in the time domain and the differences in mean and variance between the dynamic background and the moving object over the whole video stream. The results show that the algorithm deals with dynamic background effectively and detects moving objects well.
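The separation criterion can be sketched per pixel: fit a Gaussian (mean, variance) to the pixel's temporal history, and keep a foreground candidate only if its value deviates from the background mean by more than a few standard deviations. The threshold k = 2.5 is an illustrative assumption, not the paper's tuned value.

```python
def refine_foreground(history, candidate, k=2.5):
    """history: past values of one pixel over the video stream.
    Returns True if the candidate value is a genuine moving object,
    False if it is consistent with the Gaussian background model."""
    n = len(history)
    mean = sum(history) / n
    var = sum((v - mean) ** 2 for v in history) / n
    return abs(candidate - mean) > k * var ** 0.5

# A pixel fluctuating around 100 (dynamic background) vs. a bright object.
bg = [100, 102, 98, 101, 99]
obj_flag = refine_foreground(bg, 160)   # large deviation -> moving object
bgd_flag = refine_foreground(bg, 101)   # within the model -> background
```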
2018, 40(6): 1316-1322.
doi: 10.11999/JEIT170937
Abstract:
T-distributed Stochastic Neighbor Embedding (TSNE) is introduced into the cluster ensemble problem, and a cluster ensemble approach based on TSNE is proposed. First, TSNE is utilized to minimize the Kullback-Leibler divergences between the high-dimensional points corresponding to the rows of the hypergraph adjacency matrix and their low-dimensional mapped points, which preserves the structure of the high-dimensional space in the low-dimensional space. Then, a hierarchical clustering algorithm is carried out in the low-dimensional space to obtain the final clustering result. Experimental results on several benchmark datasets indicate that TSNE improves the clustering results of the hierarchical clustering algorithm and that the proposed TSNE-based cluster ensemble method outperforms state-of-the-art methods.
2018, 40(6): 1323-1329.
doi: 10.11999/JEIT170749
Abstract:
Focusing on the problems that indoor personnel positioning methods are seriously influenced by the environment and suffer from large cumulative errors, a position-correction method is proposed that combines prior knowledge of the map with heading recognition. Firstly, the dimension of the feature set is reduced by Linear Discriminant Analysis (LDA). Then, the heading of the underground personnel is classified and special points are marked by combining Random Forest (RF) with a threshold-setting method. Finally, the movement trajectory of the underground personnel, obtained by the Pedestrian Dead Reckoning (PDR) algorithm, is corrected and updated by matching the special points against prior knowledge of the roadway structure. The experimental results show that the LDA pre-processing effectively improves the precision of the classifier by more than 6%. The proposed method effectively reduces the cumulative error with high accuracy and robustness; the activity recognition accuracy reaches 98%, enabling reliable real-time localization.
2018, 40(6): 1330-1337.
doi: 10.11999/JEIT170704
Abstract:
Non-uniform illumination, low brightness, serious color deviation, and halo effects around artificial light sources make haze removal difficult for nighttime images. Existing dehazing methods are mostly designed for daytime images and are not applicable to nighttime images. This paper focuses on nighttime image dehazing. A new nighttime haze model that accounts for varying artificial light sources is introduced, and based on this model a new dehazing framework is proposed. Firstly, the atmospheric light is estimated with a low-pass filtering method; this atmospheric light map can be used to predict the transmission of the night scene accurately. Secondly, to solve the problem of halo effects around artificial light sources in existing dehazing methods, a method is proposed that estimates the distance between scene objects and the artificial light sources based on image chromaticity, so that scene objects near the light source region and objects far from it can be processed separately. Finally, to handle the color cast, an efficient color correction algorithm based on histogram matching is presented. Compared with existing daytime and nighttime dehazing methods, experimental results on a number of examples demonstrate the effectiveness of the proposed nighttime haze model and dehazing method.
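The histogram-matching step in the color correction can be sketched in a self-contained way: the cumulative histogram of a corrected channel is mapped onto that of a reference channel via a lookup table. Flat 8-bit value lists stand in for real image channels; this is an illustration of the general technique, not the paper's exact implementation.

```python
def cdf(values, levels=256):
    """Cumulative distribution of integer pixel values in [0, levels)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total, acc, out = len(values), 0, []
    for h in hist:
        acc += h
        out.append(acc / total)
    return out

def match_histogram(src, ref, levels=256):
    """Map each source level to the reference level with the nearest CDF."""
    cs, cr = cdf(src, levels), cdf(ref, levels)
    lut = [next((r for r in range(levels) if cr[r] >= cs[s]), levels - 1)
           for s in range(levels)]
    return [lut[v] for v in src]
```

For example, `match_histogram([0, 0, 1, 1], [10, 10, 20, 20])` redistributes the source levels onto the reference levels, yielding `[10, 10, 20, 20]`.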
2018, 40(6): 1338-1344.
doi: 10.11999/JEIT170799
Abstract:
To address the inadequacies of the Local Binary Pattern (LBP), Center-Symmetric Local Binary Pattern (CS-LBP), and Histogram of Oriented Gradient (HOG) algorithms, the Center-Symmetric Local Smooth Binary Pattern (CS-LSBP) and the Histogram of Oriented Absolute Gradient (HOAG) are proposed, and a facial expression recognition method based on local texture and local shape features is presented in this paper. Firstly, CS-LSBP and HOAG are used to extract two local features from facial expression images. Then, Canonical Correlation Analysis (CCA) is used to fuse the two local features. Finally, a Support Vector Machine (SVM) performs the expression classification. Experimental results on the JAFFE and Cohn-Kanade (CK) facial expression databases show that the improved feature extraction methods capture image detail more completely and accurately, and that the CCA-based fusion fully exploits the representational ability of each feature. The proposed facial expression recognition method obtains a better recognition effect.
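For context, the CS-LBP operator that CS-LSBP improves upon fits in a few lines: the 8 neighbors are taken in 4 center-symmetric pairs, and each bit records whether the difference across a pair exceeds a small threshold T, giving a compact 4-bit code. The smoothing that distinguishes CS-LSBP is not reproduced here.

```python
def cs_lbp(img, r, c, T=0):
    """4-bit CS-LBP code at pixel (r, c) of a 2-D list image."""
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = 0
    for k, ((dr1, dc1), (dr2, dc2)) in enumerate(pairs):
        if img[r + dr1][c + dc1] - img[r + dr2][c + dc2] > T:
            code |= 1 << k
    return code

# A vertical step edge: the three vertical/diagonal pairs fire, the
# horizontal pair does not, giving code 0b0111 = 7.
edge = [[9, 9, 9],
        [0, 0, 0],
        [0, 0, 0]]
code = cs_lbp(edge, 1, 1)
```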
2018, 40(6): 1345-1352.
doi: 10.11999/JEIT170824
Abstract:
A novel non-contact heart rate estimation method is proposed to deal with heart rate measurement from face videos under motion interference in realistic situations, where existing methods struggle to estimate heart rate accurately. Firstly, the discriminative response map fitting method and the KLT tracking algorithm are used to eliminate the influence of rigid face motion. Then, chrominance features, which are robust to facial movements, are selected to estimate heart rate in two steps, with frequency-domain and spatial-domain weights assigned through spatial gradients to eliminate the influence of non-rigid motion. Finally, an accurate average heart rate value and pulse wave signal waveform are acquired from different face regions. Compared with three other methods, experimental results indicate that the proposed method enhances the consistency between the estimated waveform and the ground-truth waveform and has obvious superiority in the accuracy and robustness of heart rate estimation.
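Once a pulse signal has been extracted from the face, the average heart rate is the dominant frequency of that signal. A plain-DFT sketch of this last step follows; the sampling rate and the 0.7-4.0 Hz heart-rate band are common illustrative choices, not values taken from the paper.

```python
import math

def heart_rate_bpm(signal, fs, lo=0.7, hi=4.0):
    """Return the frequency (in beats/min) with maximal DFT magnitude
    inside the physiologically plausible band [lo, hi] Hz."""
    n = len(signal)
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            mag = re * re + im * im
            if mag > best_mag:
                best_f, best_mag = f, mag
    return best_f * 60.0

# A synthetic 1.2 Hz (72 bpm) pulse sampled at 30 Hz for 10 seconds:
fs, n = 30.0, 300
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(n)]
bpm = heart_rate_bpm(sig, fs)
```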
2018, 40(6): 1353-1359.
doi: 10.11999/JEIT170769
Abstract:
To reduce the effect of noise in low-dose lung CT images on subsequent diagnosis in lung cancer screening, a denoising model for low-dose lung CT based on a deep convolutional neural network is proposed. The input of the model is the complete lung CT image; a pooling layer reduces the dimension of the input, batch normalization counteracts the performance degradation that comes with increasing network depth, and the noise component is learned through residual learning, from which the denoised image is produced. Compared with classical methods, the proposed method achieves a good denoising effect while retaining the details of the lung image, clearly outperforming traditional filtering algorithms.
2018, 40(6): 1360-1367.
doi: 10.11999/JEIT170800
Abstract:
To solve the online learning problem in time-varying scenarios containing outliers, this paper proposes an M-estimator and Variable Forgetting Factor based Online Sequential Extreme Learning Machine (VFF-M-OSELM). The VFF-M-OSELM is developed from the online sequential extreme learning machine and retains its excellent sequential learning ability, but replaces the conventional Least-Squares (LS) cost function with a robust M-estimator-based cost function to enhance the robustness of the learning model to outliers. Meanwhile, a new variable forgetting factor method is designed and incorporated into the VFF-M-OSELM to further enhance the dynamic tracking ability and adaptivity of the algorithm in time-varying systems. Simulation results verify the effectiveness of the proposed algorithm.
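The robustness mechanism can be illustrated with the Huber M-estimator, a standard choice: residuals beyond a threshold delta receive weights shrinking as 1/|e|, so outliers contribute little to the sequential update. The paper's exact estimator and forgetting-factor schedule are not reproduced; delta and the residuals below are illustrative.

```python
def huber_weight(residual, delta=1.0):
    """Huber weight w(e) = 1 for |e| <= delta, delta/|e| otherwise."""
    e = abs(residual)
    return 1.0 if e <= delta else delta / e

# Small residuals keep full weight; gross outliers are down-weighted.
weights = [huber_weight(e) for e in [0.2, -0.8, 5.0, -20.0]]
```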
2018, 40(6): 1368-1374.
doi: 10.11999/JEIT170819
Abstract:
Concerning the real-time constraints on applying Convolutional Neural Networks (CNN) in the embedded field, and the large degree of sparsity in CNN convolution calculations, this paper proposes an FPGA-based implementation of a CNN accelerator to improve computation speed. Firstly, the sparsity characteristics of CNN convolution calculations are identified. Secondly, to exploit parameter sparsity, the CNN convolution calculations are converted into matrix multiplication. Finally, an FPGA-based parallel matrix multiplier is proposed. Simulation results on the Virtex-7 VC707 FPGA show that the design shortens the calculation time by 19% compared with a traditional CNN accelerator. This method of simplifying CNN calculation through sparsity can be implemented not only on FPGAs but also on other embedded platforms.
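The conversion of convolution to matrix multiplication (the "im2col" trick) can be sketched as follows; a sparse kernel then yields zeros in the weight vector, which is the kind of structure a hardware multiplier can skip. This is a generic sketch of the technique, not the accelerator's data layout.

```python
def im2col(img, k):
    """Unfold every k x k patch of a 2-D list image into one row."""
    h, w = len(img), len(img[0])
    return [[img[r + i][c + j] for i in range(k) for j in range(k)]
            for r in range(h - k + 1) for c in range(w - k + 1)]

def conv_as_matmul(img, kernel):
    """2-D valid cross-correlation computed as a single matrix product."""
    k = len(kernel)
    flat = [v for row in kernel for v in row]
    return [sum(p * q for p, q in zip(patch, flat)) for patch in im2col(img, k)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
kern = [[1, 0],
        [0, -1]]   # sparse: half the taps are zero and can be skipped
out = conv_as_matmul(img, kern)   # 2 x 2 output map, flattened
```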
2018, 40(6): 1375-1382.
doi: 10.11999/JEIT170856
Abstract:
An algorithm for Direction Of Arrival (DOA) estimation based on single-snapshot data is proposed for a distributed Two-Dimensional (2-D) array. 2-D Hankel matrices are first constructed from the single observation of every subarray element. Then the azimuth and elevation angles for the different baselines of the distributed 2-D array are estimated using a 2-D state-space balance method. Finally, high-accuracy, unambiguous azimuth and elevation angles are obtained through an ambiguity-resolution algorithm. The proposed algorithm solves both the matching problem of DOA estimates across different baselines and the pairing problem between azimuth and elevation, so the large-aperture characteristic of the distributed array is exploited. Moreover, the algorithm can handle both correlated and uncorrelated signals. Computer simulation results confirm the effectiveness of the proposed algorithm.
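The first step, building a Hankel matrix from a single snapshot, can be sketched directly: successive shifted windows of the observation sequence form the rows. The pencil size L is an illustrative choice, and a real snapshot would be complex-valued.

```python
def hankel(seq, L):
    """L x (len(seq)-L+1) Hankel matrix with H[i][j] = seq[i+j]."""
    n = len(seq)
    return [[seq[i + j] for j in range(n - L + 1)] for i in range(L)]

# Six samples of one subarray's single snapshot, pencil size L = 3:
H = hankel([0, 1, 2, 3, 4, 5], 3)
# Each row is the sequence shifted by one sample; anti-diagonals are constant.
```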
2018, 40(6): 1383-1389.
doi: 10.11999/JEIT170826
Abstract:
Focusing on the problem of poor accuracy and low resolution of traditional Direction Of Arrival (DOA) estimation algorithm in the presence of non-uniform noise, based on the Matrix Complement theory, a Weighted L1 Sparse Reconstruction DOA estimation algorithm is developed under the Second-order Statistical domain (MC-WLOSRSS) in this paper. Following the matrix completion approach, the regularization factor is firstly introduced to reconstruct the signal covariance matrix reconstruction as a noise-free covariance matrix. After that, the multi-vector problem of the noise-free covariance matrix can be transformed into a single vector one by exploiting sum-average operation for matrix in the second-order statistical domain. Finally, the DOA can be complemented by employing the sparse reconstruction weighted L1 norm. Numerical simulations show that the proposed algorithm outperforms the traditional DOA algorithms such as MUltiple SIgnal Classification (MUSIC), Improved L1-SRACV (IL1-SRACV), L1-norm-Singular Value Decomposition (L1-SVD) subspace and sparse reconstruction weighted L1 methods in the following respects: suppressing the influence of the non-uniform noise significantly, bettering DOA estimation performance, as well as improving estimation accuracy and resolution with low Signal-Noise Ratio (SNR).
2018, 40(6): 1390-1396.
doi: 10.11999/JEIT170807
Abstract:
A novel Two-Dimensional Direction Of Arrival (2D-DOA) estimation method based on sparse sampling array optimization is proposed, which combines the Accelerated Proximal Gradient (APG) method with MUltiple SIgnal Classification (MUSIC). First, a 2D-DOA estimation signal model for the sparse array is established, and its low-rank feature and Null Space Property (NSP) are analyzed. Then, a sparse sampling array optimization method based on a Genetic Algorithm (GA) is studied to enhance the performance of Matrix Completion (MC) and DOA estimation. Finally, APG and MUSIC are employed to reconstruct the received signal matrix and estimate the directions of arrival, respectively. Computer simulation results show that, compared with conventional 2D-DOA methods, the proposed method improves the utilization rate of the array and effectively reduces the average sidelobe of the spatial spectrum.
2018, 40(6): 1397-1403.
doi: 10.11999/JEIT170854
Abstract:
To solve the problem that the Compressive Tracking (CT) algorithm cannot adapt to scale changes of the object and ignores sample weights, an optimized compressive tracking algorithm based on particle filtering and sample weighting is presented. Firstly, the compressive feature is improved to build a target appearance model with normalized rectangle features. Then, sample weighting is employed: to increase the precision of the classifier, different weights are assigned to the positive samples according to their distances from the object. Finally, dynamic state estimation is performed in the particle filter framework with the scale-invariant feature integrated. In the particle prediction phase, a second-order autoregressive model is used to estimate and predict the particle state; the particle state is then updated with the observation model, and particle resampling is used to prevent particle degeneracy. Experimental results demonstrate that the improved algorithm adapts to scale changes of the object and improves the accuracy and stability of the compressive tracking algorithm.
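The second-order autoregressive prediction step mentioned above can be sketched as below; the constant-velocity form and the noise scale are assumptions for illustration, not necessarily the paper's exact model.

```python
import numpy as np

def ar2_predict(s_t, s_tm1, sigma, rng):
    """Second-order AR particle prediction: extrapolate each particle with
    its recent velocity (s_t - s_tm1) plus Gaussian process noise sigma."""
    return s_t + (s_t - s_tm1) + sigma * rng.standard_normal(s_t.shape)

rng = np.random.default_rng(0)
particles_t = np.array([[10.0, 20.0], [11.0, 21.0]])    # states at time t
particles_tm1 = np.array([[9.0, 19.0], [10.0, 20.0]])   # states at time t-1
pred = ar2_predict(particles_t, particles_tm1, 0.0, rng)  # noise-free case
# each particle advances by its previous displacement
```

Using two past states lets the prediction carry velocity information, which is what allows the filter to follow smooth scale and position changes between frames.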
2018, 40(6): 1404-1411.
doi: 10.11999/JEIT170792
Abstract:
It is a great challenge to model Takagi-Sugeno (T-S) fuzzy systems on high-dimensional data due to the curse of dimensionality. To this end, a novel T-S fuzzy system modeling method called WOMP-GS-FIS is proposed, which considers feature selection and group sparse coding simultaneously. Specifically, feature selection is performed by a novel Weighted Orthogonal Matching Pursuit (WOMP) method, based on which the fuzzy rule antecedents are extracted and the dictionary of the fuzzy system is generated. Then, a group sparse optimization problem based on group sparse regularization is formulated to obtain the optimal consequent parameters. In this way, the major fuzzy rules are selected by utilizing the group information that exists in T-S fuzzy systems. The experimental results show that the proposed method can not only simplify the rules' structure but also reduce the number of fuzzy rules while preserving good generalization performance, thereby effectively addressing the poor interpretability of fuzzy rules on high-dimensional data.
2018, 40(6): 1412-1418.
doi: 10.11999/JEIT170924
Abstract:
Considering that the classical two-scale model based on the Geometrical Optics-Small Perturbation Method (GO-SPM) is sensitive to the cut-off wave number, a two-scale model derived from the Geometrical Optics-Small Slope Approximation (GO-SSA) is established. In this model, the SPM in the classical two-scale model is replaced by the first-order Small Slope Approximation (SSA1), and the geometrical-optics solution for the specular contribution is modified. Simulations show that GO-SSA achieves the same accuracy as GO-SPM while avoiding the choice of a cut-off wave number. The integral equation of GO-SSA is simplified based on the characteristics of the Elfouhaily wave spectrum. Finally, the full polarimetric scattering characteristics of the Elfouhaily ocean model in monostatic and bistatic cases are simulated and analyzed. The cross-polarization results exhibit a distribution over angle that differs from classical models. In the three-dimensional bistatic scattering results, a scattered direction with minimum value always exists for every polarization; the scattering power in this direction is related to the environmental parameters, which has potential application to parameter inversion.
2018, 40(6): 1419-1425.
doi: 10.11999/JEIT170833
Abstract:
To reduce the Radar Cross-Section (RCS) and improve the operating bandwidth, this paper proposes an innovative Salisbury screen based on a time-controlled surface and studies the frequency shifting of UHF radar signals. First, a reflective modulation board composed of an adjustable impedance sheet, a dielectric spacer, and a grounded slab is presented, exploiting the controllability of its electromagnetic properties. Second, the equivalent circuit of a dynamic two-phase transmission line is established, and an inductance layer is loaded on a periodic Frequency Selective Surface (FSS). Theoretical derivation and simulation results show that the Salisbury screen can realize spectrum shifting of UHF radar signals over a large bandwidth, in multiple directions, and for different polarizations. Moreover, the screen can reduce the RCS and detection probability of long-distance moving targets.
2018, 40(6): 1426-1432.
doi: 10.11999/JEIT170739
Abstract:
Improving the Secrecy Capacity (SC) of wireless communication systems using Artificial Noise (AN) is one of the classic models in the field of physical-layer security communication. Considering the Peak-to-Average Power Ratio (PAPR) of the transmit signal, a power allocation algorithm over the AN subspaces is proposed to reduce the PAPR of the transmit signal based on convex optimization. The algorithm approximates the nonconvex PAPR optimization problem with a series of convex optimization problems, using fractional programming, Difference of Convex (DC) functions programming, and a transformation of the nonconvex quadratic equality constraint. Simulation results show that, compared with benchmark problems, the proposed algorithm reduces the PAPR of the transmit signal and thereby improves the communication performance of the legitimate user.
2018, 40(6): 1433-1437.
doi: 10.11999/JEIT170820
Abstract:
Owing to its wide coverage area, data broadcast is a major service of space systems. However, due to the long distance and the complex, varied climate, data transmission suffers from a large round-trip time and poor error performance. To achieve better performance, a novel and efficient data transmission strategy based on fountain codes and feedback information is proposed. Compared with typical protocols, the proposed strategy uses the feedback information to estimate the channel erasure probability. Besides, a weighted packet-selection vector is introduced into the fountain encoder to ensure that lost packets are retransmitted in an order that jointly considers their loss probability and the number of users that need a retransmission. Simulation results show that, with the proposed scheme, users can receive the data packets reliably while the total number of transmitted packets is less than in traditional protocols.
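The joint weighting of loss probability and user demand described above can be sketched as below; the product weighting rule and all names are a plausible reading of the strategy for illustration, not the paper's exact formula.

```python
def retransmit_order(lost):
    """Rank lost packets for retransmission by loss probability times the
    number of users still missing them. 'lost' maps packet id ->
    (p_loss, n_users). The product weight is an illustrative assumption."""
    return sorted(lost, key=lambda pid: lost[pid][0] * lost[pid][1],
                  reverse=True)

# weights: packet 1 -> 0.2*3 = 0.6, packet 2 -> 0.5*4 = 2.0, packet 3 -> 0.1*10 = 1.0
order = retransmit_order({1: (0.2, 3), 2: (0.5, 4), 3: (0.1, 10)})
```

Prioritizing by this joint weight means a packet needed by many users is repeated early even if its individual loss probability is modest, which is how the scheme cuts total transmissions.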
2018, 40(6): 1438-1445.
doi: 10.11999/JEIT170879
Abstract:
VoIP (Voice over Internet Protocol) is based on voice streams and has the advantages of large data transmission and wide application, but VoIP systems face security threats of data leakage and privacy disclosure. Thus, exploiting the non-ergodicity and redundancy of the fixed codebook search, an information hiding algorithm based on the fixed codebook search process is proposed. The algorithm establishes a functional relationship between the nonzero pulse positions and the secret information. The idea of least-significant-pulse replacement is used in the search process, and a distortion minimization criterion is proposed to control the speech-quality distortion caused by embedding the secret information. The experimental results show that the hiding capacity of the proposed algorithm reaches 400 bit/s and the average PESQ score is 3.45, which indicates that the algorithm has good imperceptibility.
2018, 40(6): 1446-1452.
doi: 10.11999/JEIT170756
Abstract:
Mobile Ad hoc NETworks (MANETs) are vulnerable to various security threats, and intrusion detection is an effective guarantee of their safe operation. However, existing methods mainly focus on feature selection and feature weighting and ignore the potential association among features. To solve this problem, an intrusion detection method for MANETs based on graph theory is proposed. First, nine features are selected as nodes based on an analysis of typical attack behavior, and the edges among nodes are determined according to Euclidean distance so as to build the structure graph. Secondly, the neighborhood-scale attribute of nodes and the closeness attribute among nodes are considered to capture the correlation among nodes (i.e., features); the graph-theoretic statistics of degree distribution and clustering coefficient are then used to realize these two attributes. Finally, comparative experimental results show that, compared with traditional methods, the average detection rate of the new method is improved by 10.15% and the false detection rate is reduced by 1.8%.
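The distance-based edge construction above can be sketched as follows; the threshold eps and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def build_feature_graph(features, eps):
    """Connect feature nodes whose pairwise Euclidean distance falls below
    the assumed threshold eps; returns the symmetric 0/1 adjacency matrix."""
    n = len(features)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < eps:
                A[i, j] = A[j, i] = 1
    return A

feats = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
A = build_feature_graph(feats, 2.0)     # only the first two nodes connect
degrees = A.sum(axis=1)                 # per-node degree, input to the
                                        # degree-distribution statistic
```

Once the adjacency matrix is built, graph statistics such as the degree distribution and clustering coefficient can be computed directly from it, as the abstract outlines.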
Temporal-aware Multi-category Products Recommendation Model Based on Aspect-level Sentiment Analysis
2018, 40(6): 1453-1460.
doi: 10.11999/JEIT170938
Abstract:
Review data on e-commerce websites carries item features and user sentiment. Most existing recommendation studies based on aspect-level sentiment analysis capture users' aspect preferences for items by extracting users' sentiment towards different aspects of items in the review data of a single category, ignoring that items of different categories have different aspects and that users' aspect preferences vary over time. A temporal-aware multi-category products recommendation model based on aspect-level sentiment analysis is proposed, which jointly models user, category, item, aspect, aspect-sentiment, and time in order to discover how users' aspect preferences vary over time across items of different categories. The model can infer users' aspect preferences for items at any time, providing users with explainable recommendations. Experimental results on two real-world data sets show that, compared with other recommendation models based on time or aspect-level sentiment analysis, the proposed model achieves significant improvement in precision and recall for top-N recommendation.
2018, 40(6): 1461-1467.
doi: 10.11999/JEIT170874
Abstract:
The particularity of security threats for Multi-floor building Indoor Wireless Networks (MIWNs) is mainly caused by their stochastic, dynamic, and complex spatial topology. According to the features of MIWNs, such as the randomness of node distribution, the complexity of the spatial structure, and the diversity of loss types, physical-layer security technologies and stochastic geometry theory are utilized to study cooperative secrecy transmission in MIWNs. First, a fundamental system model for MIWNs is proposed based on a multi-floor Poisson point process. On this basis, cooperative transmission is introduced into MIWNs, and an analysis framework to evaluate the secrecy probability of cooperative transmissions in MIWNs is proposed. Then, based on theoretical analyses and simulation results, the influences of the total floor number, the secrecy rate threshold, the floor number of the target user, and the transmit power allocation on the secrecy performance of MIWNs are examined. Finally, the simulations verify that cooperative transmission can effectively improve the secrecy performance of MIWNs.
2018, 40(6): 1468-1475.
doi: 10.11999/JEIT170974
Abstract:
Massive MIMO systems using Space Division Multiple Access (SDMA) can improve system throughput, and the superposition of multi-user downlink signals interferes with the eavesdropper, bringing a natural security gain. However, physical-layer security research on such systems still adopts the traditional artificial noise scheme to improve security, ignoring the security gain brought by multi-user signal interference and resulting in serious power waste. In response to this problem, the impact of multi-user signal interference on the achievable average secrecy rate and average secrecy energy efficiency of the system is analyzed in this paper, and the optimal interval of access users is given. The research shows that the system's security capability is weak when the number of access users is small or large. Therefore, an adaptive secure transmission strategy, combining the transmission of N scrambling beams with user scheduling based on user location, is proposed. Finally, the effectiveness of the theoretical derivation and the proposed strategy is verified through simulation. With the proposed strategy, secure communication can be guaranteed when the system's natural security capability is insufficient.
2018, 40(6): 1476-1483.
doi: 10.11999/JEIT170836
Abstract:
To address the high Range Sidelobe Level (RSL) that results from designing distributed MIMO radar orthogonal phase-coded waveforms and their mismatched filter bank separately, a joint design method of Orthogonal Phase Coded Waveforms and Mismatched Filter Bank (OPCW-MFB) is proposed in this paper. Firstly, the joint design criterion is formulated by controlling the signal-to-noise ratio loss and minimizing the RSL of the mismatched filter bank output. Then, based on the theory of the least-pth minimax algorithm, a double least-pth minimax algorithm with L-BFGS as its sub-algorithm is proposed to solve it. Numerical results show that, compared with separate design methods for OPCW-MFB, the proposed method can further suppress the RSL.
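The least-pth idea underlying the algorithm above can be illustrated numerically: the non-smooth minimax objective max|f_i| is replaced by the smooth p-norm, which converges to the maximum as p grows. The sidelobe values below are toy numbers for illustration only.

```python
import numpy as np

# Approximate the minimax objective max_i |f_i| by the smooth p-norm
# (sum_i |f_i|^p)^(1/p); as p increases, the largest term dominates.
f = np.array([0.5, 2.0, 1.5])       # toy sidelobe magnitudes
approx = {p: np.sum(np.abs(f) ** p) ** (1.0 / p) for p in (2, 8, 64)}
# approx[p] decreases toward max|f_i| = 2.0 as p increases
```

Because the p-norm is differentiable, gradient-based solvers such as L-BFGS can be applied to it directly, which is the role L-BFGS plays as the sub-algorithm in the abstract.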
2018, 40(6): 1484-1491.
doi: 10.11999/JEIT170762
Abstract:
The phase shifter is the steering wheel that controls the beam direction of a Phased Array Antenna (PAA), and it determines the performance of the PAA. The Micro-Electro-Mechanical System (MEMS) phase shifter has obvious advantages for PAAs, but structural deformation caused by the complex working environment and environmental loads of the PAA always exists and has a serious impact on its performance. Therefore, the coupling between the key structural parameters of the MEMS phase shifter and its electrical parameters is studied by transmitting the influence of complex environmental factors on the MEMS structure to the structural and electrical parameters, and the electromechanical integrated model of a distributed MEMS phase shifter is derived. Besides, rapid performance assessment and structural tolerance of the deformed MEMS phase shifter are calculated based on the coupled model. The simulation results show the effectiveness of the coupled model and its engineering application value.
2018, 40(6): 1492-1498.
doi: 10.11999/JEIT170900
Abstract:
To achieve efficient information transmission, existing compression algorithms reduce the compression ratio by increasing complexity. In view of this problem, an array configuration speedup model is proposed in this paper. It is proved that a low compression ratio may not improve transmission efficiency, and the factors that affect the efficiency of information transmission, namely the decompression module throughput and the data-block compression ratio, are identified. Combining these influencing factors with the configuration information, a new lossless compression method is designed and the decompression hardware circuit is implemented, whose throughput reaches 16.1 Gbps. The lossless compression algorithm is tested using AES, A5-1, and SM4. Compared with the mainstream lossless compression algorithms LZW, Huffman, LPAQ1, and Arithmetic coding, the results show that the overall compression ratio is equivalent; however, the compression ratio of the data blocks generated by the proposed algorithm is optimized, which not only meets the demand for acceleration but also provides high-throughput decompression performance. With ideal hardware throughput, the configuration speedup ratio obtained by the lossless compression algorithm is about 8%, 9%, 10%, and 22% higher than LPAQ1, Arithmetic coding, Huffman, and LZW, respectively.
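The trade-off the abstract describes can be illustrated with a minimal sketch of a configuration-load time model. This is an assumed model, not the paper's: transfer and decompression are treated as pipelined stages, so the slower stage dominates, and a lower compression ratio only helps while the decompressor can keep up with the link.

```python
def config_load_time(raw_bits, compression_ratio, link_gbps, decomp_gbps):
    """Time (s) to deliver one configuration block.

    compression_ratio = compressed_size / raw_size (smaller is better).
    Transfer and decompression are pipelined, so the slower stage dominates.
    """
    compressed_bits = raw_bits * compression_ratio
    transfer_s = compressed_bits / (link_gbps * 1e9)
    decompress_s = raw_bits / (decomp_gbps * 1e9)
    return max(transfer_s, decompress_s)

def speedup(raw_bits, compression_ratio, link_gbps, decomp_gbps):
    """Speedup over sending the raw (uncompressed) configuration."""
    uncompressed_s = raw_bits / (link_gbps * 1e9)
    return uncompressed_s / config_load_time(
        raw_bits, compression_ratio, link_gbps, decomp_gbps)
```

With a 16.1 Gbps decompressor the transfer stage dominates and halving the compressed size doubles the speedup; with a slow decompressor the same ratio yields no gain, matching the abstract's claim that a low compression ratio alone need not improve transmission efficiency.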
2018, 40(6): 1499-1504.
doi: 10.11999/JEIT170823
Abstract:
When fitting incoherent scatter power spectra containing pure-growth mode lines and inverting the disturbed ionospheric parameters, the GUISDAP package, which is based on equilibrium incoherent scatter theory, always yields serious errors. Taking the ionospheric parameters at heating-off time as a priori information, a method is proposed that uses least squares to search for the best Gaussian peak with which to modify the measured incoherent scatter spectra, and then obtains the disturbed electron temperature by fitting the modified spectra with the incoherent scatter theory of an electron super-Gaussian distribution. The method is applied to data from an ionospheric heating experiment conducted in Norway in the autumn of 2010. The results indicate that the electron temperature obtained by fitting the modified spectra is about 800 K higher than that at heating-off time, an increase of about 13%~50%, which agrees well with the electron temperature enhancements of ionospheric heating reported in the existing literature. This shows that the method is applicable to the inversion of disturbed ionospheric parameters from incoherent scatter spectra with pure-growth mode lines.
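The core numerical step, a least-squares search for the best Gaussian peak, can be sketched as follows. This is a simplified illustration, not the authors' processing chain (which involves GUISDAP and the full incoherent scatter theory): a grid search over center and width, with the optimal amplitude for each candidate obtained in closed form.

```python
import numpy as np

def gaussian(f, amp, f0, sigma):
    """A Gaussian peak: amp * exp(-(f - f0)^2 / (2 sigma^2))."""
    return amp * np.exp(-0.5 * ((f - f0) / sigma) ** 2)

def best_gaussian_peak(freq, spectrum, centers, widths):
    """Grid-search the Gaussian peak that best fits the spectrum in a
    least-squares sense; returns (amp, f0, sigma)."""
    best = None
    for f0 in centers:
        for sigma in widths:
            basis = np.exp(-0.5 * ((freq - f0) / sigma) ** 2)
            # Optimal amplitude for this (f0, sigma) in closed form.
            amp = basis @ spectrum / (basis @ basis)
            resid = np.sum((spectrum - amp * basis) ** 2)
            if best is None or resid < best[0]:
                best = (resid, amp, f0, sigma)
    return best[1:]
```

The fitted peak would then be subtracted from (or otherwise used to modify) the measured spectrum before the super-Gaussian fit for electron temperature.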
2018, 40(6): 1505-1514.
doi: 10.11999/JEIT170945
Abstract:
With the development of airborne equipment and intercept receiving technology, the survivability of aircraft in electronic confrontation is seriously threatened. This paper first summarizes the concept and basic principles of Radio Frequency (RF) stealth, together with the research status and major contradictions of RF stealth technology. Then, taking the radar RF radiation model as the main line, RF stealth techniques based on power control, waveform design, and environment utilization in the time, spatial, frequency, and energy domains are expounded, and the important research achievements in the field of RF stealth are summarized. Finally, based on an analysis of existing algorithms and research results, the limitations of RF stealth technology and the uniqueness of its evaluation indicators in current research are summarized, and future research directions for RF stealth are forecast.
2018, 40(6): 1515-1519.
doi: 10.11999/JEIT170619
Abstract:
Based on the collective defense mechanism of cyberspace security and its synchronization, uncertainty factors are introduced into the synchronization of cyberspace operation, and an improved synchronization model is established. The stability of cyberspace operation synchronization is analyzed using a Lyapunov function, and synchronization criteria are put forward. Furthermore, the factors that influence synchronization ability and stability are explored, including edge-connection probability, cyberspace scale, standby elements, and uncertainty probability. Finally, simulations are given. Theoretical analysis and simulations show that these factors are negatively correlated with the second eigenvalue and with the ratio of the minimum eigenvalue to the second eigenvalue, and correspondingly with the global and local synchronization stability of the cyberspace ecosystem.
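The eigenvalue quantities the abstract refers to are the standard graph-Laplacian synchronizability indicators. A minimal sketch, assuming an undirected network given by its adjacency matrix: the second-smallest Laplacian eigenvalue governs local synchronization, and the eigenratio of the second-smallest to the largest eigenvalue governs global synchronization (the exact role of each eigenvalue in the paper's model may differ).

```python
import numpy as np

def laplacian_sync_indicators(adj):
    """Return (lambda_2, lambda_2 / lambda_N) for an undirected network.

    lambda_2 is the algebraic connectivity (second-smallest Laplacian
    eigenvalue); the eigenratio lambda_2 / lambda_N is a common
    synchronizability measure (larger means easier to synchronize).
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # graph Laplacian L = D - A
    eig = np.sort(np.linalg.eigvalsh(lap))  # ascending, all real (L symmetric)
    lam2, lam_max = eig[1], eig[-1]
    return lam2, lam2 / lam_max
```

For a complete graph on n nodes the nonzero eigenvalues all equal n, so the eigenratio is 1 (the most synchronizable case); sparser topologies push the ratio toward 0.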
2018, 40(6): 1520-1524.
doi: 10.11999/JEIT170748
Abstract:
Current research on Coarse-Grained Reconfigurable Architecture (CGRA) loop mapping focuses mainly on operation placement and data routing, and seldom addresses data mapping. To solve this problem, a mapping flow based on memory partitioning and path reuse is designed. First, fine-grained memory partitioning is used to find a data placement that improves the parallelism of data access. Second, placement and routing are determined by modulo scheduling. Finally, a routing-overhead model is used to balance memory routing against processing-unit routing, and a path-reuse strategy is introduced to optimize routing resources. Experimental results validate the performance of the proposed approach in terms of initiation interval, instructions per cycle, and execution delay.
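The goal of the memory-partitioning step, letting the accesses of one loop iteration proceed in parallel, can be illustrated with the simplest scheme, cyclic partitioning. This is a generic sketch of the idea, not the paper's fine-grained algorithm: element i is placed in bank i mod N, and an access pattern is conflict-free when its per-iteration offsets land in distinct banks.

```python
def cyclic_bank(index, num_banks):
    """Cyclic memory partitioning: element i is stored in bank i mod N."""
    return index % num_banks

def conflict_free(access_offsets, num_banks):
    """True if the simultaneous accesses {i + off} of one loop iteration
    always hit distinct banks under cyclic partitioning (the bank of
    i + off depends only on off mod N, so offsets suffice)."""
    banks = {off % num_banks for off in access_offsets}
    return len(banks) == len(access_offsets)
```

For example, a stencil reading a[i], a[i+1], a[i+2] is conflict-free with three banks, while a stride-2 pattern a[i], a[i+2], a[i+4] conflicts with two banks; a partitioning pass searches for a scheme (and bank count) that removes such conflicts before modulo scheduling fixes the initiation interval.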