2018 Vol. 40, No. 12
2018, 40(12): 2795-2803.
doi: 10.11999/JEIT180229
Abstract:
This paper presents a novel unsupervised image classification method for Polarimetric Synthetic Aperture Radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, an energy function is designed for unsupervised PolSAR image classification by combining a supervised Softmax Regression (SR) model with a Markov Random Field (MRF) smoothness constraint. In this model, both the pixelwise class labels and the classifiers are taken as unknown variables to be optimized. Starting from initial class labels generated by Cloude-Pottier decomposition and the K-Wishart distribution hypothesis, the classifiers and class labels are iteratively optimized by alternately minimizing the energy function with respect to each. Finally, the optimized class labels are taken as the classification result, and the classifiers for the different classes are obtained as a by-product. The approach is applied to real PolSAR benchmark data. Extensive experiments show that the proposed approach can effectively classify PolSAR images in an unsupervised way and produces higher accuracies than the compared state-of-the-art methods.
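A minimal sketch of the alternating scheme described above (not the authors' code): pseudo-labels and a softmax classifier are optimized in turn, with an ICM-style label update standing in for the MRF term. Feature extraction and the Cloude-Pottier/K-Wishart initialization are outside this snippet, and the smoothness weight `beta` is an assumed parameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def alternate_classify(features, init_labels, shape, n_iter=5, beta=1.0):
    """features: (H*W, d) pixel features; init_labels: (H*W,) classes in 0..K-1."""
    H, W = shape
    labels = init_labels.copy()
    for _ in range(n_iter):
        # Step 1: fit the supervised (softmax) model on the current pseudo-labels.
        clf = LogisticRegression(max_iter=200).fit(features, labels)
        log_prob = clf.predict_log_proba(features)            # (H*W, K)
        # Step 2: ICM-style label update with a Potts/MRF smoothness penalty.
        lab2d = labels.reshape(H, W)
        for i in range(H):
            for j in range(W):
                nb = [lab2d[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= x < H and 0 <= y < W]
                # penalty counts disagreeing 4-neighbours for each candidate class
                penalty = np.array([sum(k != c for k in nb)
                                    for c in range(log_prob.shape[1])])
                lab2d[i, j] = np.argmax(log_prob[i * W + j] - beta * penalty)
        labels = lab2d.ravel()
    return labels, clf
```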
2018, 40(12): 2804-2811.
doi: 10.11999/JEIT180263
Abstract:
To improve the fusion quality of panchromatic and multi-spectral images, a remote sensing image fusion method based on optimized dictionary learning is proposed. First, K-means clustering is applied to the image blocks in the image database, and blocks with high similarity are partly removed to improve training efficiency. While a universal dictionary is obtained, similar dictionary atoms and rarely used dictionary atoms are marked for further processing. Second, the similar and rarely used dictionary atoms are replaced by the panchromatic image blocks that differ most from the original sparse model, yielding an adaptive dictionary. The adaptive dictionary is then used to sparsely represent the intensity component and the panchromatic image; the modulus-maxima coefficients in the sparse coefficients of each image block are separated out as maximal sparse coefficients, and the remaining ones are called residual sparse coefficients. Each part is fused with a different fusion rule to preserve more spectral and spatial detail information. Finally, the inverse IHS transform is employed to obtain the fused image. Experiments demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than its counterparts.
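An illustrative sketch of the pruning step only: training blocks are clustered with K-means and near-duplicate blocks inside each cluster are dropped before dictionary learning. The cluster count, keep ratio, and the "keep the farthest from the centroid" rule are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_blocks(blocks, n_clusters=64, keep_ratio=0.5, seed=0):
    """blocks: (N, d) vectorized image patches; returns a reduced training set."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(blocks)
    kept = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # keep the blocks farthest from the centroid: the least redundant
        # representatives of the cluster
        d = np.linalg.norm(blocks[members] - km.cluster_centers_[c], axis=1)
        order = members[np.argsort(d)[::-1]]
        kept.extend(order[: max(1, int(keep_ratio * len(members)))])
    return blocks[np.sort(kept)]
```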
2018, 40(12): 2812-2819.
doi: 10.11999/JEIT180209
Abstract:
Vehicle detection is one of the hotspots in remote sensing image analysis. The intelligent extraction and identification of vehicles are of great significance to traffic management and urban construction. In the remote sensing field, existing vehicle detection methods based on Convolutional Neural Networks (CNN) are complicated, and most of them perform poorly in dense areas. To solve these problems, an end-to-end neural network model named DF-RCNN is presented to address the detection difficulty in dense areas. First, the model unifies the resolutions of the deep and shallow feature maps and combines them. Then, deformable convolution and RoI pooling are used to learn the geometric deformation of the target by adding only a small number of parameters and computations. Experimental results show that the proposed model achieves good detection performance for vehicle targets in dense areas.
Ambiguity Resolving and Imaging Algorithm for Multi-channel Forward-looking Synthetic Aperture Radar
2018, 40(12): 2820-2825.
doi: 10.11999/JEIT180177
Abstract:
The azimuth resolution of traditional synthetic aperture radar is provided only by the synthetic aperture. In the forward-looking area, however, Doppler diversity is limited, so imaging performance declines rapidly; forward-looking imaging also suffers from a Doppler ambiguity problem. In this paper, an adaptive beamforming method with spatial constraints under an ideal linear track is proposed. The imaging quality of the direct forward region is improved effectively by combining the real-aperture array with the synthetic aperture, and the Doppler ambiguity is resolved using the array spatial domain. First, the echo data are processed by high-squint SAR imaging to obtain the ambiguous images. Then beamforming is performed, and the channel images are weighted and coherently accumulated, so as to resolve the Doppler ambiguity and enhance the azimuth resolution. Simulation confirms the validity of the proposed approach.
2018, 40(12): 2826-2833.
doi: 10.11999/JEIT180039
Abstract:
In multistatic radar, a Censored Data-based Decentralized Fusion (CDDF) rule is proposed to address the fusion of local observations under communication constraints. The local likelihood ratio is calculated from the observation of a moving target immersed in clutter, where each local radar site possesses a coherent multi-channel array. A local radar site transmits if and only if its observation's likelihood ratio exceeds the local threshold, which determines the communication rate. By virtue of the Neyman-Pearson lemma, the global test statistic is obtained by combining the received censored data. The fusion center makes a global decision by comparing the global test statistic with a global threshold. Besides, closed-form expressions for the probability of false alarm and the probability of detection are derived. Numerical simulation shows that CDDF performs better than the "OR" rule, and approaches the performance of Centralized Fusion (CF) as the communication rate increases.
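A schematic Monte-Carlo sketch of the censoring rule, under assumed Gaussian log-likelihood ratios standing in for the paper's clutter/array model: each site computes a local LLR, transmits only if it exceeds its local threshold, and the fusion center sums whatever arrives against a global threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def censored_fusion(llrs, local_thr, global_thr):
    """llrs: (n_sites,) local log-likelihood ratios. Returns (decision, rate)."""
    sent = llrs[llrs > local_thr]        # censoring: only informative sites talk
    return sent.sum() > global_thr, sent.size / llrs.size

# toy run: 5 sites; under H1 the LLR mean shifts upward
llrs_h1 = rng.normal(loc=1.0, scale=1.0, size=5)
decision, comm_rate = censored_fusion(llrs_h1, local_thr=0.5, global_thr=2.0)
print(decision, comm_rate)
```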
2018, 40(12): 2834-2840.
doi: 10.11999/JEIT180079
Abstract:
To improve the resolution of a SAR system, the radar bandwidth should be increased. By means of synthetic bandwidth, a wide bandwidth can be achieved with less hardware complexity. For a frequency-band synthesis SAR system, the frequency difference between subbands must be accurately known. In real measurements, however, the frequency difference may drift and has to be estimated from the raw data. In this manuscript, an effective method is proposed to estimate the frequency difference error and compensate the resulting phase error. The frequency difference drift is estimated from the relation between the interferometric phase of the subband echoes and the frequency difference. Interferometry between the subband images yields an interferometric image in which the phase varies with range, with a slope proportional to the frequency difference, and is redundant along azimuth. Based on this azimuth redundancy, a new vector is formed; it is a sinusoidal signal whose frequency corresponds to the relative range shift, so frequency analysis yields the frequency difference error. With the proposed method, the SAR image is improved. The effectiveness of the method is verified by processing real SAR data.
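A hedged sketch of the estimation idea as read from the abstract: the subband interferogram carries a phase ramp along range that encodes the frequency-difference error, so averaging over azimuth (the redundant axis) and locating the FFT peak of the resulting complex vector estimates the ramp frequency. The averaging choice and the conversion to physical units are assumptions.

```python
import numpy as np

def estimate_phase_ramp(img_a, img_b):
    """img_a, img_b: (n_rg, n_az) co-registered complex subband SAR images."""
    interf = img_a * np.conj(img_b)         # subband interferogram
    v = interf.mean(axis=1)                 # exploit azimuth redundancy
    spec = np.fft.fft(v, n=8 * v.size)      # zero-pad for a finer peak estimate
    k = np.argmax(np.abs(spec))
    cycles_per_sample = np.fft.fftfreq(8 * v.size)[k]
    return cycles_per_sample                # scale by range sampling rate to get Hz
```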
2018, 40(12): 2841-2847.
doi: 10.11999/JEIT180097
Abstract:
In passive bistatic radar systems, multipath clutter with both zero and non-zero Doppler shifts exists in the surveillance channel and affects target detection. Temporal adaptive iterative filters such as Least Mean Square (LMS), Normalized Least Mean Square (NLMS), and Recursive Least Squares (RLS) are often used to reject multipath clutter in passive bistatic radar, but they are only applicable to zero-Doppler-shift multipath clutter. To address both zero and non-zero Doppler shift multipath clutter, and exploiting the orthogonal frequency division multiplexing waveform features of digital broadcast television signals, a clutter rejection algorithm based on carrier-domain adaptive iterative filtering is proposed. The algorithm uses the correlation of multipath clutter with the same Doppler shift at the same carrier frequency in the subcarrier domain to reject both zero and non-zero Doppler shift multipath clutter. Simulation and experimental data processing results show the superiority of the proposed algorithm.
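For context, a generic complex NLMS clutter canceller of the kind the paper builds on, shown here in the time domain; the paper's contribution is to run such a filter per OFDM subcarrier. `x` is the reference-channel signal, `d` the surveillance-channel signal, and the order and step size are illustrative.

```python
import numpy as np

def nlms_cancel(x, d, order=64, mu=0.5, eps=1e-6):
    """Complex NLMS: subtract the clutter predicted from x out of d."""
    w = np.zeros(order, dtype=complex)
    e = np.zeros_like(d)
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]                 # tapped delay line
        y = np.vdot(w, u)                        # clutter estimate (w^H u)
        e[n] = d[n] - y                          # clutter-suppressed output
        w += mu * np.conj(e[n]) * u / (np.vdot(u, u).real + eps)
    return e, w
```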
2018, 40(12): 2848-2853.
doi: 10.11999/JEIT180294
Abstract:
The separable probability is a significant criterion for evaluating the resolution characteristics of SAR distributed targets. By refining the separability condition of targets and taking the statistical characteristics of SAR distributed targets into consideration, a new separability judgment criterion is proposed, and a precise calculation method for the separable probability is derived. In addition, to simplify the calculation, an approximate calculation method with lower computational complexity is presented. The simulation results show that the proposed method is consistent with the actual situation, reflects the effect of the statistical characteristics of SAR distributed targets on the resolution characteristics, and can provide theoretical support for SAR image quality evaluation and system parameter design.
2018, 40(12): 2854-2860.
doi: 10.11999/JEIT180115
Abstract:
To better link scattering centers with target structures, a forward method is presented to derive the component-level 3-D scattering center positions of a radar target under the single- and double-scattering mechanisms, based on the target's geometric model. Under the double-scattering mechanism, the principle and method for determining the equivalent ray position are introduced, especially for the strong-scattering situation. Other, weak-scattering situations are handled by an equivalent transformation to the strong-scattering one. Finally, the position derivation method is applied to models of a right dihedral angle, an obtuse dihedral angle, SLICY, and a T72 tank to derive and analyze their component-level scattering center positions. Corresponding simulated or measured SAR images are used for comparison to validate the accuracy of the position derivation method.
2018, 40(12): 2861-2867.
doi: 10.11999/JEIT180212
Abstract:
This paper proposes threat-assessment-based sensor control using a random-finite-set multi-target filter. First, the general information-theoretic sensor control approach is presented in the framework of the Partially Observable Markov Decision Process (POMDP). Meanwhile, combined with the target movement situation, the factors that affect the target threat degree are analyzed. Then, the multi-target state is estimated with a particle multi-target filter, the multi-target threat level is established according to the multi-target motion situation, and the distribution characteristic of the maximum-threat target is extracted from the multi-target distribution. Finally, the Rényi divergence is used as the evaluation index for sensor control, and the final control policy is solved with maximum information gain as the criterion. Simulation results verify the feasibility and effectiveness of the proposed method.
2018, 40(12): 2868-2873.
doi: 10.11999/JEIT180147
Abstract:
Wi-Fi indoor localization is one of the current research hotspots in mobile computing. However, the conventional location-fingerprinting-based localization scheme does not consider the diversity of Wi-Fi signal distributions in complicated indoor environments, resulting in low robustness of the indoor localization system. To address this problem, a new hybrid hypothesis test of signal distribution for Wi-Fi indoor localization is proposed. Specifically, the Jarque-Bera (JB) test is conducted to examine the normality of the Wi-Fi signal distribution at each Reference Point (RP). Then, according to the different Wi-Fi signal distributions, hybrid Mann-Whitney U test and t-test approaches are used to construct the set of matching reference points and thereby realize area localization. Finally, by calculating the K-Nearest Neighbors (KNN) of the matching reference points in the located area, the location coordinates of the target are obtained. The experimental results indicate that the proposed approach features higher localization accuracy and stronger system robustness compared with conventional Wi-Fi indoor localization approaches.
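A sketch of the hybrid test logic with scipy.stats: the Jarque-Bera test decides whether an RP's RSS samples look Gaussian; Gaussian cases are compared with a Welch t-test, non-Gaussian ones with the Mann-Whitney U test. The 0.05 level and the "fail to reject means match" rule are illustrative assumptions, not the paper's exact procedure.

```python
from scipy import stats

def rp_matches(online_rss, rp_rss, alpha=0.05):
    """True if the online RSS sample is statistically consistent with the RP's."""
    _, p_norm = stats.jarque_bera(rp_rss)
    if p_norm > alpha:                    # normality not rejected: treat as Gaussian
        _, p = stats.ttest_ind(online_rss, rp_rss, equal_var=False)
    else:                                 # distribution-free comparison
        _, p = stats.mannwhitneyu(online_rss, rp_rss, alternative='two-sided')
    return p > alpha                      # fail to reject: distributions match
```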
2018, 40(12): 2874-2880.
doi: 10.11999/JEIT180225
Abstract:
Focusing on the performance degradation of adaptive beamformers due to target steering vector constraint errors, an algorithm for robust beamforming with joint iterative estimation of the steering vector and the covariance matrix is proposed. First, an initial value of the target steering vector is obtained by sparse reconstruction; then, after eliminating the estimated target signal from the sample covariance matrix, the covariance matrix is initialized. Next, based on a steering vector error optimization model, the algorithm uses convex optimization to jointly and iteratively estimate the target steering vector and the interference-plus-noise covariance matrix. Finally, the adaptive weight vector is obtained from the steady estimates of the steering vector and covariance matrix. Simulation results show that the output signal-to-interference-plus-noise ratio is improved in the presence of target steering vector constraint errors.
2018, 40(12): 2881-2888.
doi: 10.11999/JEIT171058
Abstract:
In two-dimensional Direction Of Arrival (DOA) estimation of coherently distributed noncircular sources, exploiting the noncircularity expands the problem dimension and thus causes high complexity, and the existing low-complexity algorithms all require an additional parameter pairing procedure. To solve these problems, a rapid DOA estimation algorithm with automatic pairing is proposed for coherently distributed noncircular sources based on a cross-correlation propagator, considering an L-shaped array. First, the extended array manifold model is established by exploiting the noncircularity of the signal, and it is then proved that approximate rotational invariance relationships hold between the Generalized Steering Vectors (GSVs) of two subarrays of the L-shaped array. At the same time, the additive noise can be eliminated through the cross-correlation matrix of the array output signals. Finally, based on the approximate rotational invariance of the subarrays, the central azimuth and elevation DOAs are obtained by the propagator method. Theoretical analysis and simulation experiments show that, without spectrum searching or eigenvalue decomposition of the sample covariance matrix, the proposed algorithm has low computational complexity, and it automatically pairs the estimated central azimuth and elevation DOAs. In addition, compared with the existing propagator method for coherently distributed noncircular sources, the proposed algorithm achieves higher estimation accuracy at a small complexity cost.
2018, 40(12): 2889-2895.
doi: 10.11999/JEIT180186
Abstract:
A large number of indoor WiFi signals are available for indoor positioning. Although many WiFi indoor positioning techniques have been proposed, their positioning accuracy still does not meet practical application requirements. To address this problem, an Adaptive Affinity Propagation Clustering (AAPC) algorithm is proposed to improve the clustering quality of WiFi fingerprints and thus the positioning accuracy. The AAPC algorithm generates different clustering results by dynamically adjusting its parameters, and cluster validity indices are then used to select the best ones. A large amount of real environmental data was collected and tested. The experimental results show that the clustering results generated by the AAPC algorithm yield higher positioning accuracy.
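A simplified stand-in for the AAPC idea using scikit-learn: sweep the affinity propagation `preference` parameter (which controls the number of clusters) and keep the clustering with the best silhouette score, one common validity index. The actual AAPC adjustment rule and validity index may differ from this sketch.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import silhouette_score

def adaptive_ap(fingerprints, preferences):
    """fingerprints: (n_samples, n_aps) RSS matrix; preferences: values to try."""
    best_model, best_score = None, -1.0
    for p in preferences:
        ap = AffinityPropagation(preference=p, random_state=0).fit(fingerprints)
        if len(set(ap.labels_)) < 2:
            continue                      # silhouette needs at least 2 clusters
        s = silhouette_score(fingerprints, ap.labels_)
        if s > best_score:
            best_model, best_score = ap, s
    return best_model, best_score
```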
2018, 40(12): 2896-2904.
doi: 10.11999/JEIT180241
Abstract:
To solve the problems of current co-saliency detection algorithms, a novel co-saliency detection algorithm is proposed that combines a fully convolutional neural network with a global optimization model. First, a fully convolutional saliency detection network is built based on VGG16Net. The network simulates the human visual attention mechanism and extracts the salient region of an image at the semantic level. Second, based on the traditional saliency optimization model, a global co-saliency optimization model is constructed; it propagates and shares superpixel saliency values within and between images through superpixel matching, so that the final saliency map has better co-saliency values. Third, an inter-image saliency propagation constraint parameter is introduced to overcome the disadvantages of superpixel mismatching. Experimental results on public test datasets show that the proposed algorithm outperforms current state-of-the-art methods in detection accuracy and efficiency, and has strong robustness.
2018, 40(12): 2905-2912.
doi: 10.11999/JEIT180180
Abstract:
To address sound event detection in low Signal-to-Noise Ratio (SNR) noise environments, a method is proposed based on discrete cosine transform coefficients extracted from a multi-band power distribution image. First, using gammatone spectrogram analysis, the sound signal is transformed into a multi-band power distribution image. Next, 8×8 blocking and the discrete cosine transform are applied to analyze this image. From the leading zigzag-scanned discrete cosine transform coefficients, sound event features are constructed. Finally, the features are modeled and detected with a random forest classifier. The results show that the proposed method achieves better detection performance at low SNR compared with other methods.
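An illustrative feature pipeline matching the abstract's description: tile a (gammatone) power image into 8×8 blocks, take the 2-D DCT of each block, and keep the first few JPEG-style zigzag coefficients as features for a random forest. The block size 8 comes from the abstract; the number of kept coefficients (16) is an assumption.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier

# JPEG-style zigzag scan order over an 8x8 block: traverse anti-diagonals,
# alternating direction (even diagonals ascend in column, odd ones descend).
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda t: (t[0] + t[1],
                               t[1] if (t[0] + t[1]) % 2 == 0 else -t[1]))

def block_dct_features(img, n_keep=16):
    """img: 2-D float array (multi-band power image). Returns a flat feature vector."""
    h, w = (s - s % 8 for s in img.shape)     # drop any partial edge blocks
    feats = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(img[i:i+8, j:j+8], norm='ortho')
            feats.append([c[p] for p in ZIGZAG[:n_keep]])
    return np.asarray(feats).ravel()

# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```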
2018, 40(12): 2913-2918.
doi: 10.11999/JEIT171091
Abstract:
A new robust Generalized Synchrosqueezing S-Transform (GSST) is proposed to solve the distortion problem of the SynchroSqueezing S-Transform (SSST) in mixture noise. First, the method improves the Viterbi algorithm to enhance the Time-Frequency (TF) analysis performance of the S-transform in alpha-Gaussian mixture noise. After acquiring the phase locus information of the FM signal, synchrosqueezing is used to improve the time-frequency concentration. The simulation results show that the proposed method can accurately obtain the time-frequency information of FM signals against an alpha-Gaussian mixture noise background at low SNR, with better robustness and applicability than the SSST.
2018, 40(12): 2919-2927.
doi: 10.11999/JEIT180120
Abstract:
The Coherent Plane-Wave Compounding (CPWC) algorithm recombines several plane waves with different steering angles and can achieve high-quality images at a high frame rate. However, CPWC ignores the coherence between the individual plane-wave imaging results. Coherence Factor (CF) weighting can effectively improve imaging contrast and resolution, but it degrades the background speckle quality. A Short-Lag Coherence Factor (SLCF) algorithm for CPWC is therefore proposed. SLCF uses the angular difference parameter to determine the order of the coherence factor and computes the coherence factor only over plane waves with small angular differences. SLCF is then used to weight CPWC to obtain the final images. Simulated and experimental results show that, compared with CPWC, the SLCF-weighted algorithm improves imaging quality in terms of lateral resolution and Contrast Ratio (CR). In addition, compared with CF and Generalized Coherence Factor (GCF) weighting, SLCF achieves better background speckle quality with lower computational complexity.
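A per-pixel sketch of the weighting idea, assuming `imgs` holds the complex low-resolution images from each steering angle: the plain CF uses all angle pairs, while the short-lag variant sums coherence only over pairs whose angle-index difference is at most `lag`. The short-lag normalization shown is one simple choice, not necessarily the paper's.

```python
import numpy as np

def coherence_factor(imgs):
    """imgs: (N_angles, H, W) complex. Returns CF in [0, 1] per pixel."""
    num = np.abs(imgs.sum(axis=0)) ** 2
    den = imgs.shape[0] * (np.abs(imgs) ** 2).sum(axis=0)
    return num / (den + 1e-12)

def short_lag_cf(imgs, lag=2):
    """Coherence over angle pairs with index difference <= lag only."""
    n = imgs.shape[0]
    num = sum(2 * np.real(imgs[i] * np.conj(imgs[j]))
              for i in range(n) for j in range(i + 1, min(i + lag + 1, n)))
    den = n * (np.abs(imgs) ** 2).sum(axis=0)
    return np.clip(num / (den + 1e-12), 0.0, 1.0)

# weighted compound image: short_lag_cf(imgs, lag=2) * imgs.sum(axis=0)
```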
2018, 40(12): 2928-2935.
doi: 10.11999/JEIT180191
Abstract:
A method based on Gaussianization and generalized matching, called the Gaussianization-Generalized Matching (GGM) method, is proposed for nonlinear processing in impulsive noise. The GGM method can be designed from noise samples, aided by nonparametric probability density estimation, so the design is suitable for nonlinear processing under unknown noise models. The GGM method is analyzed in the $\mathrm{S}\alpha\mathrm{S}$ model, and a comparison with another approach based on an unmatched noise-model assumption is presented in Class A noise. The GGM method is applied to the constant false alarm rate technique via the efficacy function. Simulation and analysis results show that the GGM design is sub-optimal, works robustly when the noise model is unknown, and requires only a small number of samples. Thus, the GGM method provides a promising choice when the noise model is unclear or time-varying.
2018, 40(12): 2936-2944.
doi: 10.11999/JEIT180154
Abstract:
Chroma-format-extension video coding is a hot topic in the video coding field. A chroma-extension video coding scheme based on the AVS2 platform is proposed. The most direct solution is pseudo-444/422 coding, in which the chroma components of the input image are down-sampled by averaging adjacent samples while the core coding modules remain 420 coding. Furthermore, this paper seamlessly extends intra prediction and the loop filter to the 444/422 chroma formats to implement 444/422 intra-prediction coding. The experimental results show that, compared with pseudo-444/422 coding at high bit rates, the average U/V BD-rate savings are 31.44%/31.72% and 18.85%/19.30% for 444 and 422 test sequences respectively, with a negligible increase in Y BD-rate (0.5% on average). The modification of the 422 chroma intra-prediction algorithm achieves up to 5.66% Y/U/V BD-rate reduction. The 444/422 intra-prediction coding provides similar or better coding performance than HEVC RExt coding at low bit rates.
2018, 40(12): 2945-2953.
doi: 10.11999/JEIT180077
Abstract:
Focusing on the issue that systematic errors lead to poor robustness and low accuracy in optical flow calculation, a robust optical flow calculation method based on wavelet multi-resolution theory is proposed. Using the multi-resolution characteristics of wavelets, the system error caused by varying illumination conditions and sensor noise is incorporated into the optical flow calculation to improve robustness and estimation accuracy. The total least squares method is then used to solve the over-determined wavelet optical flow equations and obtain the optical flow vector. Compared with the traditional Lucas-Kanade approach, the Horn-Schunck method, and wavelet-based optical flow estimation in omnidirectional images, simulation results show that the proposed algorithm significantly improves the accuracy of optical flow estimation and the robustness of the optical flow field.
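The total-least-squares step can be made concrete: stacking the (wavelet-filtered) gradient constraints as A v ≈ b, the TLS solution comes from the SVD of the augmented matrix [A | b], which, unlike ordinary least squares, accounts for noise in A as well as in b. This is the textbook TLS solve, not the authors' full pipeline.

```python
import numpy as np

def tls_flow(A, b):
    """A: (m, 2) spatial-gradient rows; b: (m,) negated temporal derivatives."""
    M = np.hstack([A, b[:, None]])     # augmented matrix [A | b]
    _, _, vt = np.linalg.svd(M)
    v = vt[-1]                         # right singular vector of smallest sigma
    return -v[:2] / v[2]               # TLS flow estimate (u, v)
```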
2018, 40(12): 2954-2961.
doi: 10.11999/JEIT180192
Abstract:
To solve the problems of network structure change and route failure caused by random node movement, a relative-mobility-prediction-based k-hop clustering algorithm is proposed: the movement of nodes is analyzed and predicted, the cluster structure is adjusted adaptively, and the stability of the cluster structure is improved. First, the Doppler shift is used to calculate the relative moving speed and obtain the link expiration time between nodes. Then, during cluster formation, the MAX-MIN heuristic algorithm selects cluster heads according to each node's average link expiration time. Furthermore, during cluster maintenance, a network-adaptive adjustment method based on node motion is proposed. On the one hand, the node information transmission cycle is adjusted to balance data overhead and accuracy; on the other hand, the cluster structure is adjusted by predicting link disconnections, reducing link reconstruction time and improving network operation quality. Simulation results show that the proposed algorithm effectively prolongs cluster head duration and improves the stability of the cluster structure in dynamic environments.
2018, 40(12): 2962-2969.
doi: 10.11999/JEIT180131
Abstract:
An adaptive virtual resource allocation algorithm based on the Constrained Markov Decision Process (CMDP) is proposed for virtual resource allocation in radio access network slicing. First, in a Non-Orthogonal Multiple Access (NOMA) system, the algorithm takes the user outage probability and the slice queues as constraints and the total slice rate as the reward, and formulates the resource adaptation problem using CMDP theory. Second, the post-decision state is defined to avoid the expectation operation in the optimal value function. Furthermore, to counter the "curse of dimensionality" of MDPs, a basis function for the assignment behavior is designed based on approximate dynamic programming theory to replace the post-decision state space and reduce the computational dimension. Finally, an adaptive virtual resource allocation algorithm is designed to optimize slicing performance. The simulation results show that the algorithm improves system performance and meets the service requirements of the slices.
2018, 40(12): 2970-2978.
doi: 10.11999/JEIT180111
Abstract:
Based on interference cancellation, a low-complexity Iterative Parallel Interference Cancellation (IPIC) algorithm is proposed for the uplink of massive MIMO systems. The proposed algorithm avoids the high-complexity matrix inversion required by linear detection algorithms, so its complexity is kept at only $\mathcal{O}(K^2)$. Meanwhile, a noise prediction mechanism is introduced, and a noise-prediction-aided iterative parallel interference cancellation algorithm is proposed to further improve detection performance. Considering the residual inter-antenna interference, a low-complexity soft-output signal detection algorithm is proposed as well. The simulation results show that the complexity of all the proposed signal detection methods is lower than that of the MMSE detection algorithm. With only a small number of iterations, the proposed algorithm achieves performance quite close to, or even surpassing, that of the MMSE algorithm.
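A bare-bones sketch of the core parallel-cancellation loop for y = Hx + n: start from the matched-filter estimate and, for all users in parallel, cancel the interference reconstructed from the other users' current estimates. The paper adds noise prediction and soft outputs on top of such a loop; this toy version omits both.

```python
import numpy as np

def ipic_detect(H, y, n_iter=4):
    """H: (M, K) complex channel; y: (M,) received vector. Returns soft estimate x."""
    G = H.conj().T
    d = np.real(np.sum(np.abs(H) ** 2, axis=0))    # per-user column energies
    x = (G @ y) / d                                # matched-filter initialization
    for _ in range(n_iter):
        interference = G @ (H @ x) - d * x         # off-diagonal (inter-user) terms
        x = (G @ y - interference) / d             # parallel cancellation update
    # Jacobi-style iteration: converges when H^H H is diagonally dominant,
    # which holds with high probability in the massive MIMO regime (M >> K).
    return x
```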
2018, 40(12): 2979-2985.
doi: 10.11999/JEIT180218
Abstract:
Resource allocation for the Cloud Radio Access Network (C-RAN) is investigated. With the max-min fairness criterion as the optimization criterion and the Energy Efficiency (EE) of C-RAN users as the objective function, the user transmit powers and the Remote Radio Head (RRH) beamforming vectors are jointly optimized by maximizing the EE of the worst link under maximum transmit power and minimum transmit rate constraints. This optimization problem is a nonlinear fractional programming problem. First, the original nonconvex optimization problem is transformed into an equivalent problem in subtractive form. Then, by introducing a new variable, the non-smooth equivalent problem is transformed into a smooth one. Finally, a two-layer iterative power allocation and beamforming algorithm is proposed. The proposed algorithm is compared with a traditional non-EE resource allocation algorithm and an EE maximization algorithm. The experimental results show that the proposed algorithm effectively improves the EE and the fairness of resource allocation.
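The fractional-to-subtractive transform mentioned above is the classic Dinkelbach idea; a generic sketch follows. Maximizing R(p)/P(p) is replaced by a sequence of subtractive problems max R(p) - lam*P(p); here the inner solver is a naive grid search over a scalar power, and the example rate/power functions and circuit-power constant are made up purely for illustration.

```python
import numpy as np

def dinkelbach(rate, power, p_grid, tol=1e-6, max_iter=50):
    """Maximize rate(p)/power(p) over p_grid via Dinkelbach iterations."""
    lam = 0.0
    p = p_grid[0]
    for _ in range(max_iter):
        obj = rate(p_grid) - lam * power(p_grid)   # subtractive subproblem
        p = p_grid[np.argmax(obj)]
        if rate(p) - lam * power(p) < tol:         # converged: lam is the optimal EE
            break
        lam = rate(p) / power(p)
    return p, lam

# toy example: rate(p) = log2(1 + 10 p), power(p) = p + 0.1 (assumed circuit power)
p_opt, ee = dinkelbach(lambda p: np.log2(1 + 10 * p),
                       lambda p: p + 0.1,
                       np.linspace(1e-3, 1.0, 1000))
```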
2018, 40(12): 2986-2991.
doi: 10.11999/JEIT180196
Abstract:
The Lai-Massey structure is a block cipher structure developed from the IDEA algorithm, with FOX as its representative cipher. In this paper, the keys are assumed to be generated independently and uniformly at random, and the provable security of the Lai-Massey structure against differential and linear cryptanalysis is studied from two aspects: the upper bound of the average differential probability and the upper bound of the average probability of linear chains with given starting and ending points. This paper proves that when $r = 2$, the average differential probability is $\le p_{\max}$. When the F function of the Lai-Massey structure is an orthomorphism, this paper proves that when $r \ge 3$, the average differential probability is $\le p_{\max}^2$. A similar conclusion is obtained for linear chains with given starting and ending points.
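For reference, the two bounds just stated can be written in display form. This is only a transcription of the abstract's claims; the notation EDP (expected differential probability over independent, uniform round keys, for $r$ rounds) and $p_{\max}$ (maximal differential probability of the round function F) is assumed here, not taken from the paper.

```latex
\[
  r = 2:\quad \mathrm{EDP} \le p_{\max}, \qquad
  r \ge 3:\quad \mathrm{EDP} \le p_{\max}^{2}
  \quad \text{(when } F \text{ is an orthomorphism)}.
\]
% The analogous upper bounds are stated for the average probability of
% linear chains with given starting and ending points.
```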
2018, 40(12): 2992-2997.
doi: 10.11999/JEIT180189
Abstract:
Based on the theory of Galois rings of characteristic 4, a new class of quaternary sequences with period $2p^2$ is established over $\mathbb{Z}_4$ using generalized cyclotomy, where p is an odd prime. The linear complexity of the new sequences is determined. The results show that the sequences have large linear complexity and resist attacks by the Berlekamp-Massey (B-M) algorithm. They are good sequences from the viewpoint of cryptography.
2018, 40(12): 2998-3006.
doi: 10.11999/JEIT180122
Abstract:
Attribute-based encryption can provide data confidentiality protection and fine-grained access control for fog-cloud computing, but mobile devices in a fog-cloud computing system can hardly bear the heavy computational burden of attribute-based encryption. To address this problem, an offline/online ciphertext-policy attribute-based encryption scheme with verifiable outsourced decryption is presented, based on a prime-order bilinear group. It realizes offline/online key generation and data encryption, and supports verifiable outsourced decryption. Formal security proofs of its selective chosen-plaintext-attack security and verifiability are provided. An improved offline/online ciphertext-policy attribute-based encryption scheme with verifiable outsourced decryption is then presented, which reduces the number of bilinear pairings in the transformation phase from linear to constant. Finally, the efficiency of the proposed scheme is analyzed and verified through theoretical analysis and experimental simulation. The experimental results show that the proposed scheme is efficient and practical.
2018, 40(12): 3007-3012.
doi: 10.11999/JEIT180249
Abstract:
Privacy-preserving aggregate signcryption for heterogeneous systems can ensure the confidentiality and unforgeability of data exchanged between heterogeneous cryptosystems, and it also provides multi-ciphertext batch verification. This paper analyzes the security of a heterogeneous privacy-preserving aggregate signcryption scheme and points out that the scheme cannot resist attacks by a malicious Key Generation Center (KGC), which can forge a valid ciphertext. To remedy the security of the original scheme, a new heterogeneous aggregate signcryption scheme with privacy protection is proposed. The new scheme overcomes the security problems of the original scheme, secures data transmission between certificateless public key cryptography and identity-based public key cryptography, and is proved secure in the random oracle model. Efficiency analysis shows that the new scheme is comparable to the original one.
2018, 40(12): 3013-3019.
doi: 10.11999/JEIT180219
Abstract:
Wireless powered technology is an effective way to extend the lifetime of wireless network nodes. A wireless powered hybrid multiple access system is studied that consists of a base station and multiple users grouped into clusters. Transmission is divided into two phases: the base station broadcasts energy to the users in the first phase, and the users transmit information to the base station in the second phase. Users in different clusters transmit in a time division multiple access manner, while users within the same cluster transmit in a non-orthogonal multiple access manner. Joint allocation of phase durations and transmit powers at the base station and the users is investigated to improve spectrum efficiency and user fairness, respectively. Two algorithms are proposed, which maximize the system throughput and the minimum cluster throughput, respectively. Simulation results show that the two proposed algorithms can effectively increase spectral efficiency and guarantee fairness among user clusters, respectively.
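The core trade-off is that a longer energy-broadcast phase harvests more energy but leaves less time for information transmission. The sketch below optimizes this time split numerically for a single-user toy model with made-up channel constants; the paper's joint time-and-power allocation across multiple NOMA clusters is considerably more involved.

```python
import math

# Assumed toy constants: harvest efficiency, BS power, combined channel gain.
ETA, P, H = 0.6, 10.0, 0.8

def throughput(tau):
    """Harvest for tau, transmit for 1-tau (unit frame, unit bandwidth)."""
    if tau <= 0 or tau >= 1:
        return 0.0
    snr = ETA * P * H * tau / (1 - tau)  # harvested energy / uplink duration
    return (1 - tau) * math.log2(1 + snr)

def golden_max(f, a=1e-6, b=1 - 1e-6, tol=1e-9):
    """Golden-section search for the maximum of a unimodal function."""
    g = (math.sqrt(5) - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) >= f(d):
            b = d
        else:
            a = c
        c, d = b - g * (b - a), a + g * (b - a)
    return (a + b) / 2

tau = golden_max(throughput)
print(f"optimal harvest fraction {tau:.3f}, throughput {throughput(tau):.3f} bit/s/Hz")
```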
2018, 40(12): 3020-3027.
doi: 10.11999/JEIT171085
Abstract:
The Internet of Things (IoT) is becoming a hot research area: tens of billions of devices are being connected to the Internet, driving demand for sensor search services. IoT characteristics (strong spatiotemporal variability of searches, limited sensor resources, and massive heterogeneous dynamic data) challenge search engines to find and select sensors efficiently and effectively. In this paper, a Piecewise-Linear fitting Sensor Similarity (PLSS) search method is proposed. Based on the content values, PLSS computes sensor similarity models to find the most similar sensors. PLSS improves search accuracy and efficiency compared with the FUZZY set algorithm (FUZZY) and the least squares method, and its storage cost is at least two orders of magnitude lower than that of the raw data.
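The abstract does not spell out the PLSS model, but the general idea of piecewise-linear summaries can be sketched as follows: fit a small number of line segments to each sensor's readings and compare the compact fit parameters instead of the raw series, which is also where the storage saving comes from. The equal-length segmentation and Euclidean distance below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def piecewise_fit(values, n_segments=8):
    """Least-squares line per equal-length segment -> 2*n_segments numbers.
    Assumes len(values) >= 2 * n_segments so each segment can be fitted."""
    segs = np.array_split(np.asarray(values, dtype=float), n_segments)
    feats = []
    for seg in segs:
        slope, intercept = np.polyfit(np.arange(len(seg)), seg, 1)
        feats.extend([slope, intercept])
    return np.array(feats)

def distance(a, b):
    """Smaller distance between fitted summaries = more similar sensors."""
    return np.linalg.norm(piecewise_fit(a) - piecewise_fit(b))

rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 6, 400))          # two sensors observing one field
a = base + rng.normal(0, 0.05, 400)
b = base + rng.normal(0, 0.05, 400)
c = rng.normal(0, 1, 400)                      # unrelated noisy sensor
print(distance(a, b) < distance(a, c))         # True: a and b are most similar
```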
2018, 40(12): 3028-3035.
doi: 10.11999/JEIT180207
Abstract:
Under the present network architecture, using hardware systems to realize load balancing for a server cluster is disadvantageous for scalability and service performance, because such a method suffers from several restrictions, including the difficulty of acquiring load-node status and the complexity of redirecting traffic. To solve this problem, a Load Balancing mechanism based on Software-Defined Networking (SDNLB) is proposed. Exploiting the advantages of SDN, such as centralized control and flexible traffic scheduling, SDNLB monitors the running states of servers and the overall network load in real time by means of the SNMP and OpenFlow protocols, computes a weight for each server, and selects the highest-weight server as the target for processing incoming flows. On this basis, SDNLB uses an optimal forwarding path algorithm to schedule traffic, raising the utilization and processing performance of the server cluster. An experimental platform is built to evaluate the overall performance of SDNLB, and the results show that, under the same network load, SDNLB effectively lowers the load on the server cluster, noticeably raises network throughput and bandwidth utilization, and reduces flow completion time and average latency compared with other load balancing algorithms.
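The server-selection step reduces to computing a spare-capacity weight per server from the monitored state and picking the maximum. The sketch below shows one plausible weighting; the particular metrics and coefficients are assumptions, since the abstract does not fix them.

```python
def server_weight(cpu_util, mem_util, conn_ratio, w=(0.4, 0.3, 0.3)):
    """Higher weight = more spare capacity. All inputs normalized to [0, 1];
    the coefficients w are assumed, not taken from the paper."""
    return w[0] * (1 - cpu_util) + w[1] * (1 - mem_util) + w[2] * (1 - conn_ratio)

def pick_server(servers):
    """servers: name -> (cpu, mem, conn). Returns the highest-weight server,
    i.e. the target the controller would steer the next flow to."""
    return max(servers, key=lambda name: server_weight(*servers[name]))

pool = {"srv1": (0.9, 0.5, 0.4), "srv2": (0.3, 0.6, 0.2), "srv3": (0.5, 0.5, 0.5)}
print(pick_server(pool))  # srv2: most spare capacity under these weights
```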
2018, 40(12): 3036-3041.
doi: 10.11999/JEIT180217
Abstract:
A novel power frequency electric field measurement system based on high-performance MEMS electric field sensing chips is developed. Based on the cross-correlation detection principle, a power frequency electric field demodulation algorithm for the MEMS sensing chips that suppresses background interference noise is proposed, and a compact, high-resolution electric field measuring probe is designed. Moreover, the overall system architecture is designed to achieve high-accuracy demodulation of electric field signals. Tests under power lines show that the curves measured by the developed MEMS system are consistent with those of the Narda EFA-300.
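Cross-correlation detection recovers a narrowband signal buried in noise by correlating the measurement with quadrature references at the known power frequency and averaging, so that uncorrelated noise integrates toward zero. A minimal sketch with made-up sampling parameters (not the MEMS system's actual values):

```python
import numpy as np

FS, F0, N = 10_000, 50.0, 100_000          # sample rate, power frequency, samples
t = np.arange(N) / FS
rng = np.random.default_rng(1)
clean = 2.0 * np.sin(2 * np.pi * F0 * t)   # "true" field amplitude = 2.0
noisy = clean + rng.normal(0, 5.0, N)      # heavy broadband interference

# Correlate with quadrature references and average over many periods;
# the noise term averages toward zero while the 50 Hz component survives.
i = 2 * np.mean(noisy * np.sin(2 * np.pi * F0 * t))
q = 2 * np.mean(noisy * np.cos(2 * np.pi * F0 * t))
print(f"recovered amplitude: {np.hypot(i, q):.2f}")  # close to 2.0
```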
2018, 40(12): 3042-3050.
doi: 10.11999/JEIT180170
Abstract:
In advanced applications such as real-time radar imaging and high-precision scientific computing, the design of a high-throughput, reconfigurable Floating-Point (FP) FFT accelerator is significant. Achieving a high-throughput FP FFT with low area and power cost is challenging because FP operations are far more complex than their fixed-point counterparts. To address these issues, a series of mixed-radix algorithms for 128/256/512/1024/2048-point FFTs is proposed, decomposing long FFTs into short ones implemented with cascaded radix-2^k stages so that the complexity of the multiplications is significantly reduced. In addition, two novel fused FP add-subtract and dot-product units with dual-mode functionality are proposed, which can operate either on one pair of double-precision operands or on two pairs of single-precision operands in parallel. On this basis, a high-throughput, dual-mode, variable-length floating-point FFT processor is designed and implemented in SMIC 28 nm CMOS technology. Simulation results show that the throughput and Signal-to-Quantization-Noise Ratio (SQNR) are 3.478 GSample/s and 135 dB in single-channel single-precision mode, and 6.957 GSample/s and 60 dB in dual-channel half-precision mode. Compared with other FP FFT designs, this processor achieves a 12-fold improvement in normalized throughput-area ratio.
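The mixed-radix decomposition rests on the Cooley-Tukey identity: an N-point DFT splits into smaller DFTs glued together by twiddle-factor multiplications, and choosing radix-2^k stages keeps many twiddles trivial. The software sketch below shows the plain radix-2 case in double precision (the processor itself cascades radix-2^k stages in hardware):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey DFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])          # two half-size DFTs
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    # Butterfly: combine the halves with one twiddle multiply per pair.
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

# fft([1, 2, 3, 4, 5, 6, 7, 8]) matches numpy.fft.fft up to rounding.
```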