2017 Vol. 39, No. 2
2017, 39(2): 255-262.
doi: 10.11999/JEIT160359
Abstract:
To address the reduction in search efficiency caused by pseudo collision in traditional intelligent optimization algorithms, this paper proposes a double clonal selection algorithm based on fuzzy non-genetic information memory. Combined with clonal selection theory, a search mechanism based on fuzzy non-genetic information memory is constructed: the non-genetic information generated during antibody evolution is collected, fuzzified, and stored in a memory, and this information then guides the subsequent double-clone search process, reducing pseudo collisions in non-optimal regions and thereby greatly improving global search efficiency. Extensive simulations show that the proposed algorithm has a fast global convergence rate and high global convergence accuracy, and comparative results further demonstrate that it outperforms existing algorithms.
2017, 39(2): 263-269.
doi: 10.11999/JEIT160329
Abstract:
Existing machine-learning-based road segmentation algorithms share an obvious shortcoming: detection performance drops dramatically when the distribution of the training samples does not match that of the target-scene samples. To address this issue, a scene-adaptive road segmentation algorithm based on a Deep Convolutional Neural Network (DCNN) and an auto-encoder is proposed. First, a classic method based on Slow Feature Analysis (SFA) and Gentle Boost is used to generate online samples whose labels carry confidence values. Then, exploiting the automatic feature extraction ability of the DCNN and computing source-target scene feature similarity with a deep auto-encoder, a scene-adaptive classifier with a composite deep structure and its training method are designed. Experiments on the KITTI dataset demonstrate that the proposed method outperforms existing machine-learning-based road segmentation algorithms, raising the detection rate by around 4.5% on average.
2017, 39(2): 270-277.
doi: 10.11999/JEIT160296
Abstract:
Classification based on sparse representation has attracted wide attention because of its simplicity and effectiveness, but how to adaptively build the relationship between dictionary atoms and class labels remains an important open question. Moreover, most sparse representation classification methods must solve a norm-constrained optimization problem, which increases the computational complexity of the classification task. To address these issues, this paper proposes a novel Fisher-constrained dictionary pair learning method that jointly learns a structured synthesis dictionary and a structured analysis dictionary, and then obtains the sparse coefficient matrix directly from the analysis dictionary; the Fisher criterion is used to constrain the coefficients. The new method is applied to image classification, and the experimental results show that it not only improves classification accuracy but also greatly reduces computational complexity, performing better than existing methods.
2017, 39(2): 278-284.
doi: 10.11999/JEIT160260
Abstract:
The applicability of traditional spectral clustering to large-scale data sets is limited by its high complexity. By constructing an affinity matrix between landmark points and data points, the Landmark-based Spectral Clustering (LSC) algorithm can significantly reduce the computational complexity of spectral embedding, so a suitable strategy for generating the landmark points is vital to the clustering result. On big-data problems, the existing generation strategies have deficiencies: random sampling gives unstable results, while the k-means-centers method has unknown convergence time and requires repeated reads of the data. In this paper, a rapid landmark-sampling spectral clustering algorithm based on approximate singular value decomposition is designed, in which the sampling probability of each landmark point is determined by the row norms of the approximate singular vector matrix. Compared with LSC based on random sampling, the clustering result of the new algorithm is more stable and accurate; compared with LSC based on k-means centers, the new algorithm reduces the computational complexity. Moreover, how well the landmark-sampling result preserves the information in the original data is analyzed theoretically, and the performance of the new approach is verified by experiments on several public data sets.
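As a generic illustration of the sampling idea described above (not the authors' exact procedure), the sketch below draws landmark points with probabilities proportional to the squared row norms of an approximate left singular vector matrix obtained by a randomized SVD; the function name and the oversampling margin `k + 5` are illustrative choices.

```python
import numpy as np

def landmark_sample(X, p, k, rng=None):
    """Sample p landmark points from data X (n x d).

    Sampling probabilities follow the squared row norms of an
    approximate rank-k left singular vector matrix of X.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Approximate rank-k SVD via a Gaussian sketch (randomized SVD).
    G = rng.standard_normal((d, k + 5))
    Y = X @ G                       # sketch of the range of X
    Q, _ = np.linalg.qr(Y)          # orthonormal basis of the sketch
    B = Q.T @ X                     # projected data
    Ub, _, _ = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub[:, :k]               # approximate top-k left singular vectors
    probs = np.sum(U**2, axis=1)    # squared row norms
    probs /= probs.sum()
    idx = rng.choice(n, size=p, replace=False, p=probs)
    return X[idx]
```

Rows with large leverage on the dominant singular subspace are preferred, which is what makes the sampled landmarks more stable than uniform random sampling.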
2017, 39(2): 285-292.
doi: 10.11999/JEIT160357
Abstract:
Convolutional Neural Networks (CNN) have recently made breakthrough progress in many areas, such as speech recognition and image recognition. A limiting factor for the use of CNNs in large-scale applications has been their computational expense, especially the spatial-domain computation of linear convolution. The convolution theorem provides a very effective alternative: a linear convolution in the spatial domain can be implemented as a multiplication in the frequency domain. This paper proposes a unified one-dimensional FFT algorithm based on decimation-in-time split-radix-2/(2^a), where a is an arbitrary natural number, and studies the acceleration of convolutional neural networks with the proposed FFT algorithm in a CPU environment. Experimental results on the MNIST and Cifar-10 databases show great improvement over direct-convolution-based CNNs with no loss in accuracy; the radix-2/4 FFT achieves the best time savings, 38.56% and 72.01% respectively. Frequency-domain computation is therefore a very effective way to realize the linear convolution operation.
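The frequency-domain shortcut that the paper builds on can be illustrated with an ordinary radix-2 FFT (NumPy's, not the proposed split-radix-2/(2^a) variant): zero-padding both signals to at least `len(x) + len(h) - 1` makes the circular convolution computed by the FFT equal to the linear one.

```python
import numpy as np

def fft_linear_conv(x, h):
    """Linear convolution of x and h via the convolution theorem.

    Zero-pad both signals to at least len(x)+len(h)-1 so that the
    circular convolution computed by the FFT equals the linear one.
    """
    n = len(x) + len(h) - 1
    N = 1 << (n - 1).bit_length()   # next power of two for a radix-2 FFT
    X = np.fft.fft(x, N)
    H = np.fft.fft(h, N)
    y = np.fft.ifft(X * H)[:n]
    return y.real                   # inputs are real; imaginary part is ~0

# Sanity check against direct convolution:
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])
assert np.allclose(fft_linear_conv(x, h), np.convolve(x, h))
```

For a filter of length M applied to a signal of length L, this replaces O(LM) spatial-domain work with O(N log N) transforms, which is the source of the speedups reported above.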
2017, 39(2): 293-300.
doi: 10.11999/JEIT160361
Abstract:
First, the optimal noise that minimizes the Bayes risk in a binary hypothesis testing problem is analyzed; minimizing the Bayes risk proves equivalent to optimizing the detection probability (P_D) and/or the false alarm probability (P_F). Second, a noise-enhanced model that can increase P_D and decrease P_F simultaneously is established under the premise of maintaining the predefined P_D and P_F. The optimal additive noise of this model is then obtained as a convex combination of the optimal noises of two limit cases: minimizing P_F while maintaining the predefined P_D, and maximizing P_D while maintaining the predefined P_F. Furthermore, sufficient conditions for this model are given, and the additive noise that minimizes the Bayes risk is determined whether or not the prior probabilities are known; when the prior information changes, the corresponding additive noise can be obtained by recalculating a single parameter. Finally, the effectiveness of the algorithm is demonstrated by simulation on a specific detection example.
2017, 39(2): 301-308.
doi: 10.11999/JEIT160436
Abstract:
Adaptive beamforming techniques for conformal arrays suffer from poor universality, difficulty in maintaining the main beam, and high computational cost. A novel robust adaptive beamforming algorithm for conformal arrays based on sparse reconstruction is proposed to alleviate these problems. First, by introducing the Asymptotic Minimum Variance (AMV) criterion, the Interference-Plus-Noise (IPN) covariance matrix is reconstructed in a sparse way. Second, the Steering Vector (SV) of the Signal Of Interest (SOI) is estimated. Finally, the optimal weight coefficients are obtained. Simulation results demonstrate the effectiveness and robustness of the proposed algorithm and show that it achieves better output performance than existing adaptive beamforming methods for conformal arrays over a large range of SOI Signal-to-Noise Ratios (SNR), while needing fewer snapshots, incurring a lower computational cost, and converging faster.
2017, 39(2): 309-315.
doi: 10.11999/JEIT160369
Abstract:
A sparse representation speech denoising method based on an adaptive stopping residue error is proposed. First, an overcomplete dictionary of the clean speech power spectrum is learned with the K-Singular Value Decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is set adaptively according to the estimated cross terms and the noise spectrum, adjusted by a weighting factor, and the Orthogonal Matching Pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesized via the inverse Fourier transform from the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms standard spectral subtraction, a sparse-representation-based speech denoising algorithm, and an AutoRegressive Hidden Markov Model (AR-HMM) based speech denoising method in terms of both subjective and objective measures.
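The OMP stage with a residual-error stopping rule can be sketched as follows; this is textbook OMP with a fixed threshold, whereas the method above sets the threshold adaptively from the estimated cross terms and noise spectrum.

```python
import numpy as np

def omp(D, y, stop_residue, max_atoms=None):
    """Orthogonal Matching Pursuit with a residual-error stopping rule.

    D: dictionary (m x k) with unit-norm columns; y: signal (m,).
    Iterates until ||residual||_2 <= stop_residue or max_atoms are used.
    """
    m, k = D.shape
    max_atoms = max_atoms or m
    support, residual = [], y.copy()
    coeffs = np.zeros(k)
    while np.linalg.norm(residual) > stop_residue and len(support) < max_atoms:
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:            # no new atom improves the fit
            break
        support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    if support:
        coeffs[support] = sol
    return coeffs
```

A larger `stop_residue` leaves more of the (noisy) signal unexplained, which is exactly the knob the adaptive rule tunes per frame.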
2017, 39(2): 316-321.
doi: 10.11999/JEIT160306
Abstract:
Based on a model of pulse wave propagation along a carotid artery segment, the effect of scanning frame rate and beam density on the estimation accuracy of wall displacement, delay time, and linear-regression-based Pulse Wave Velocity (PWV) is investigated quantitatively for regional PWV measurement with the ultrasound transit-time method. Through statistical analysis of variance, the significance levels of the measurement errors and the relative importance of the two influence factors are ascertained. The results show that the frame rate does not significantly affect the wall displacement estimation accuracy (p > 0.05), with relative errors ranging from 0.23 to 0.28. The delay time measurement accuracy is influenced significantly by the frame rate and the beam spacing simultaneously (p < 0.01): the relative errors decrease from 0.99 to 0.06 as the distance from the first beam to the others increases from 2.38 mm to 38 mm, whereas the mean transit time errors increase from 0.19 to 0.43 when the frame rate decreases from 1127 Hz to 226 Hz. The PWV estimation errors, ranging from 7% to 20%, are affected significantly by the number of beams as well as the frame rate when at least 10 beams are used for the regression fitting, the frame rate being the main influence factor in this situation (p < 0.01). Therefore, PWV measurement accuracy can be improved by increasing the frame rate with a proper beam setting. These experimental results should help in exploring new measurement methods to improve PWV accuracy in follow-up work.
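The linear-regression PWV estimate and the quantizing effect of a finite frame rate can be illustrated as below; the beam spacing (2.38 mm) and frame rate (1127 Hz) echo the numbers above, while the assumed true PWV of 6 m/s and the ideal noiseless delays are purely illustrative.

```python
import numpy as np

def pwv_from_delays(distances_m, delays_s):
    """Estimate pulse wave velocity as the slope of a least-squares fit
    of beam distance against measured transit delay (distance = v*t + b)."""
    v, _ = np.polyfit(delays_s, distances_m, 1)
    return v

# Synthetic example: beams every 2.38 mm, assumed true PWV 6 m/s,
# delays quantized to the frame period of a 1127 Hz frame rate.
true_v = 6.0
d = np.arange(1, 11) * 2.38e-3           # distances of beams 2..11 from beam 1
t = d / true_v                           # ideal transit delays
frame_dt = 1 / 1127.0
t_q = np.round(t / frame_dt) * frame_dt  # delays quantized by the frame rate
v_hat = pwv_from_delays(d, t_q)
```

Even with exact wall tracking, the time quantization alone biases the fitted slope, which is why a higher frame rate (finer `frame_dt`) improves the PWV estimate.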
2017, 39(2): 322-327.
doi: 10.11999/JEIT160316
Abstract:
To overcome the large errors and complexity of measuring the wave velocity in landing-point location on a target range, a positioning method that requires no velocity measurement is proposed. Nine acceleration sensors form a Pozidriv-shaped array, which can also be viewed as two sets of five-element cross arrays. A DOA algorithm is used to pre-estimate the wave velocity; this velocity is then used as the initial parameter in the equations to compute an initial position. Finally, the initial position and velocity are fed into a Taylor iterative algorithm to obtain the final location. Because the wave velocity need not be measured, measurement error is reduced, and the wave velocity and position are computed jointly by the iterative algorithm, making landing-point location simpler and more accurate. Simulations verify that the method is feasible and that the iterative algorithm converges within a range of 1000 meters.
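A minimal sketch of the Taylor-series (Gauss-Newton) iteration for jointly refining the landing position and wave velocity is shown below, assuming the impact instant is known so that absolute transit times are available; the function name and any particular sensor layout are hypothetical.

```python
import numpy as np

def taylor_locate(sensors, toa, x0, y0, v0, iters=20):
    """Jointly refine landing position (x, y) and wave velocity v from
    time-of-arrival measurements by Taylor-series (Gauss-Newton) iteration.

    sensors: (n, 2) sensor coordinates; toa: (n,) measured arrival times.
    The initial guess (x0, y0, v0) plays the role of the DOA pre-estimate.
    """
    p = np.array([x0, y0, v0], dtype=float)
    for _ in range(iters):
        dx = p[0] - sensors[:, 0]
        dy = p[1] - sensors[:, 1]
        d = np.hypot(dx, dy)            # distances (must stay nonzero)
        pred = d / p[2]                 # predicted arrival times
        # Jacobian of pred w.r.t. (x, y, v), one row per sensor.
        J = np.column_stack([dx / (p[2] * d),
                             dy / (p[2] * d),
                             -d / p[2] ** 2])
        # Linearized correction: solve J * delta = (measured - predicted).
        delta, *_ = np.linalg.lstsq(J, toa - pred, rcond=None)
        p += delta
    return p
```

With nine sensors and only three unknowns the system is overdetermined, so the least-squares step also averages out independent timing errors.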
2017, 39(2): 328-334.
doi: 10.11999/JEIT160276
Abstract:
A new method for constructing shift sequence sets is proposed. Based on these shift sequences, a new class of Gaussian integer sequence sets with period 2N, whose Zero Correlation Zone (ZCZ) length can be chosen flexibly, is obtained by interleaving one perfect Gaussian integer sequence of period N. The new sequence sets are optimal or almost optimal, as their parameters reach or approach the Tang-Fan-Matsufuji bound. Gaussian integer sequence sets with a zero correlation zone can provide more address choices for high-speed quasi-synchronous spread-spectrum systems.
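As a stand-in for the perfect Gaussian integer sequences used in the construction (which are not reproduced here), the sketch below verifies the defining "perfect" property, zero periodic autocorrelation at every nonzero shift, on a Zadoff-Chu sequence of odd length.

```python
import numpy as np

def periodic_xcorr(a, b):
    """Periodic (cyclic) cross-correlation of two equal-length sequences."""
    N = len(a)
    return np.array([np.sum(a * np.conj(np.roll(b, -tau))) for tau in range(N)])

def zadoff_chu(N, u=1):
    """Zadoff-Chu sequence of odd length N with root u, gcd(u, N) = 1.
    It is 'perfect': all off-peak periodic autocorrelations are zero."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

z = zadoff_chu(31)
r = periodic_xcorr(z, z)
# |r[0]| equals N; every other shift is (numerically) zero.
```

The same `periodic_xcorr` check, applied across an interleaved sequence set, is how one would confirm the ZCZ length of a candidate construction.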
2017, 39(2): 335-341.
doi: 10.11999/JEIT160375
Abstract:
With the substantial growth of data traffic over the past few years, cellular base station deployment has trended toward smaller, denser cells, which places higher demands on backhaul techniques. In this study, WiFi is adopted as a backhaul technique for 5G networks, and a high-speed synchronous backhaul solution is proposed that aggregates multiple WiFi channels with non-contiguous spectrum. Although IEEE 802.11n/ac can aggregate channels with its static/dynamic channel bonding schemes, the spectrum of the bonded channels must be contiguous; moreover, static channel bonding is not flexible enough, and dynamic channel bonding rarely gets a chance to operate when devices are densely deployed. The proposed solution not only extends the transmission bandwidth and improves the network capacity of 5G backhaul networks, but also overcomes these defects of channel bonding in 802.11n/ac. Both analysis and simulations show that the proposed solution outperforms traditional channel bonding and reduces adjacent-channel interference among multiple channels in 5G backhaul networks. Its effectiveness and feasibility are further demonstrated on a prototype verification system.
2017, 39(2): 342-350.
doi: 10.11999/JEIT160429
Abstract:
A general eavesdropping case is considered in which the eavesdropper is equipped with multiple antennas and the transmitter has imperfect Channel State Information (CSI), under two uncertainty models, for both the main and eavesdropper channels. Two robust transmission methods are proposed for multiuser Multiple-Input, Single-Output, Multiple-antenna Eavesdropper (MISOME) systems that guarantee the legitimate user's SINR by optimizing the beamforming and the artificial-noise covariance matrix. For the deterministic uncertainty model, an equivalent form of the Worst-Case Secrecy Rate Maximization (WC-SRM) problem is derived through analysis; this equivalent problem can be recast as a single-variable optimization handled by solving a sequence of convex SemiDefinite Programs (SDPs). For the stochastic uncertainty model, a suboptimal scheme for the Outage-Probability-constrained Secrecy Rate Maximization (OP-SRM) problem is proposed based on the robust design for the WC-SRM problem. Finally, simulation results validate the effectiveness and robustness of the proposed algorithms.
2017, 39(2): 351-359.
doi: 10.11999/JEIT160560
Abstract:
Traditional lossless compression algorithms are not efficient for screen content coding. To take full advantage of the special characteristics of screen content, a lossless compression algorithm based on String Matching with High Performance and Low Complexity (SMHPLC) is proposed, implemented on the basis of LZ4HC (LZ4 High Compression). The main new ideas are: using the pixel rather than the byte as the basic unit for string searching and matching; jointly and optimally coding the three parameters literal length, match length, and offset; and mapping the three parameters. Experimental results show that SMHPLC achieves both high coding efficiency and low complexity. Compared with LZ4HC on the AVS2 common test sequences in the moving text and graphics category, SMHPLC reduces coding complexity by 34.6% and 46.8% while achieving overall bit-rate savings of 22.4% and 21.2% in the YUV and RGB color formats, respectively.
2017, 39(2): 360-366.
doi: 10.11999/JEIT160343
Abstract:
To enable Visible Light Communication (VLC) to provide data transmission that is both high-speed and low in energy consumption, a Carrier-less Position/Phase (CPP) modulation based on Pulse Position Modulation (PPM) is proposed. By utilizing orthogonal filters, the transmission rate of PPM is improved. Because employing CPP modulation in VLC significantly reduces power efficiency, a novel variable bias is presented to reduce the power consumption effectively. Simulation results illustrate that, under bandwidth-constrained conditions, applying the proposed variable bias to the CPP scheme saves 2 dB of SNR relative to a DC bias at the same BER; when slot correlation is further considered, the variable bias improves the BER performance by another 1.5 dB.
2017, 39(2): 367-373.
doi: 10.11999/JEIT160387
Abstract:
To reduce the energy consumption of wireless access networks, a Cooperative Energy-Saving Mechanism based on Benders Decomposition (BD-CESM) is presented to solve the inadequate coverage problem caused by BS dormancy, by means of cooperative BS selection and BS state control. A cooperative BS selection model is designed to obtain appropriate cooperative BSs and corresponding dormant BSs according to the traffic load distribution and the spatially varying SINR. Then a Benders-decomposition-based joint optimization strategy is proposed to balance capacity with throughput by controlling the on-off state of each BS. The simulation results show that up to 42.6% of BSs can be dormant while meeting basic network performance requirements. Furthermore, coverage can be compensated without increasing transmitting power in the proposed mechanism.
2017, 39(2): 374-380.
doi: 10.11999/JEIT160289
Abstract:
In electronic countermeasures, the opponent's bit stream can be captured. However, without any knowledge of the type of data link protocol, existing protocol analysis tools cannot extract useful information from the bit stream. To recover the carried information, the bit stream must first be segmented into frames. According to the general rules of frame structure, a bit stream segmentation algorithm based on data mining is proposed, in which the multi-association rule indicating the beginning of frames is identified using frequent sequence statistics, association analysis and association rule integration. The test results show that this algorithm can extract the valid segmentation flag from an unknown bit stream and segment the bit stream correctly. Compared with similar data-mining-based bit stream analysis algorithms, this algorithm is more efficient and produces a unique, highly reliable result.
Distributed Denial of Service Attack Detection Based on Object Character in Software Defined Network
2017, 39(2): 381-388.
doi: 10.11999/JEIT160370
Abstract:
During a Distributed Denial of Service (DDoS) attack in a Software Defined Network (SDN), the attackers send a large number of data packets and large quantities of new terminal identifiers are generated. Accordingly, network connection resources are occupied, obstructing the normal operation of the network. To detect the attacked target accurately and release the occupied resources, a DDoS attack detection method based on object features using the Growing Hierarchical Self-Organizing Map (GHSOM) is provided. First, a seven-tuple of detection features is proposed to determine whether a target address is under DDoS attack. Then, a simulation platform based on the OpenDayLight controller is built and the GHSOM algorithm is applied to the network. Simulation experiments validate the feasibility of the detection method. The results show that the seven-tuple can effectively confirm whether the target object is under a DDoS attack.
2017, 39(2): 389-396.
doi: 10.11999/JEIT160338
Abstract:
In the searchable encryption services provided by cloud servers, data owners expect that their data files can be partitioned and stored on multiple cloud servers in ciphertext form, so as to improve the search efficiency for authorized users and the processing capacity for big data. To this end, a multi-server multi-keyword searchable encryption scheme based on cloud storage is proposed, and the scheme is proved to be IND-CKA (adaptive Chosen Keyword Attack) secure with a secure trapdoor. Compared with single-server searchable encryption, the proposed scheme not only guarantees data security, but also provides a more accurate retrieval service when the keyword index or any single file does not contain all of the searched keywords.
2017, 39(2): 397-404.
doi: 10.11999/JEIT160449
Abstract:
The limitations of network resources and the decentralization of network management are the two major difficulties for traditional networks in addressing Distributed Denial of Service (DDoS) attacks. Moreover, current defense methods are static and lagging, and are unable to locate attackers accurately. Therefore, a dynamic defense using the two pivotal features of Software Defined Networks (SDN), centralized control and dynamic management, is proposed. An OpenFlow-based switch shuffling model is built, which employs a greedy algorithm to remap user-switch links dynamically. After several shuffles, attackers can be differentiated from legitimate users, and the latter can be provided with low-latency, uninterrupted service. The proposed approach is implemented in Ryu, the open-source SDN controller, and the prototype is tested on a real SDN. The performance test results show that with this approach, attackers can be isolated within a limited number of shuffles and the effects of DDoS attacks on legitimate flows can be reduced. The defense ability test demonstrates that the efficiency of the proposed dynamic approach is independent of the size of the attack flow and depends only on the number of attackers in the single-controller ring topology.
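The shuffling-and-narrowing idea above can be sketched with a toy model: after each random remap, any switch that reports no attack clears its assigned users, so the suspect set shrinks toward the true attackers. The model, round count and names are illustrative assumptions, not the paper's greedy algorithm:

```python
# Hypothetical toy model of shuffle-based attacker isolation: intersect the
# client sets of attacked switches over repeated random remaps.
import random

def isolate_attackers(users, switches, is_attacked, rounds=32, seed=0):
    """is_attacked(switch, assigned_users) reports whether a switch sees
    attack traffic; users assigned to a clean switch are cleared."""
    rng = random.Random(seed)
    suspects = set(users)
    for _ in range(rounds):
        assignment = {u: rng.choice(switches) for u in users}
        for sw in switches:
            clients = {u for u, s in assignment.items() if s == sw}
            if not is_attacked(sw, clients):
                suspects -= clients      # a clean switch clears its clients
    return suspects
```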
2017, 39(2): 405-411.
doi: 10.11999/JEIT160373
Abstract:
The Block Fast Factorized Back-Projection (Block-FFBP) algorithm adopts a subaperture synthesis approach to reduce the computational complexity of the conventional BP algorithm, and partitions the echo data into blocks in range to avoid complicated transforms between polar and Cartesian coordinates. However, Block-FFBP causes the range span of the data blocks to fluctuate and requires extra data length associated with the interpolation kernel, leading to inefficient memory use and degraded imaging speed. A range Bulk processing based FFBP (Bulk-FFBP) algorithm is therefore proposed in this paper. It is implemented in two ways: one based on a series of range pivots, the other using no pivots. The superiority of Bulk-FFBP over Block-FFBP is verified through simulations covering error analysis, imaging evaluation and computational efficiency.
2017, 39(2): 412-416.
doi: 10.11999/JEIT160307
Abstract:
The method for estimating the parameters of the multi-look Pareto distribution based on z ln(z) cannot estimate shape parameters less than 1. To overcome this drawback, it is generalized to z^r ln(z), which widens the range of shape parameters that can be estimated. The expression for the parameter estimate is derived to demonstrate theoretically that the proposed method is able to estimate shape parameters less than 1. The simulation results validate that the z^r ln(z) method estimates the shape parameter more efficaciously when r < 1.
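The statistic the generalized estimator is built on, the sample moment of z^r ln(z), can be computed as follows; the function name is a hypothetical helper, and the closed-form estimator that maps this moment to the shape parameter is derived in the paper, not reproduced here:

```python
# Hypothetical helper: sample estimate of E[z^r ln z], the building block of
# the generalized z^r ln(z) parameter estimation method.
import math

def zr_ln_z_moment(samples, r):
    """Average of z**r * ln(z) over positive samples z."""
    return sum((z ** r) * math.log(z) for z in samples) / len(samples)
```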
2017, 39(2): 417-422.
doi: 10.11999/JEIT160314
Abstract:
A Direct Position Determination (DPD) algorithm based on spatial spectrum analysis is proposed for the asynchronous observation issue in passive fusion location systems with multiple moving sensors. First, a direct determination mathematical model of the asynchronous scene is constructed and a multi-sensor cooperative spatial spectrum function is composed. The position of the emitter is then obtained by a two- or three-dimensional search, and the derivations of the estimator's variance and the Cramer-Rao Lower Bound (CRLB) are given. Finally, Monte Carlo simulations of the asynchronous scene indicate that the accuracy of the proposed method is close to the CRLB and superior to the two-step location method based on location parameters, especially at low SNR.
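The two-dimensional search step above reduces, in the simplest terms, to maximizing the cooperative spatial spectrum over a grid of candidate positions. A minimal sketch, where the spectrum function and names are illustrative assumptions:

```python
# Hypothetical sketch of the DPD grid search: evaluate the cooperative spatial
# spectrum at every candidate (x, y) and return the maximizer.
def dpd_grid_search(spectrum, xs, ys):
    """spectrum(x, y) is the cooperative spatial spectrum value; return the
    grid point where it peaks, i.e. the direct position estimate."""
    return max(((x, y) for x in xs for y in ys),
               key=lambda p: spectrum(p[0], p[1]))
```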
2017, 39(2): 423-429.
doi: 10.11999/JEIT160397
Abstract:
ISAR image jamming is a mainstream topic in radar countermeasures research. This paper proposes a new jamming technique for ISAR imaging based on the Multiple Phase Sectionalized Modulation (MPSM) jamming method. Its basic principle and its intra-pulse and inter-pulse processing are analyzed in detail. The final expression of the MPSM jamming signal after ISAR imaging processing is then derived, and a method for precise control of the jamming pattern through parameter design in both dimensions is provided. Simulations confirm the related jamming effects and show that different jamming patterns and overspread effects can be generated flexibly and controllably with the new method.
2017, 39(2): 430-436.
doi: 10.11999/JEIT160386
Abstract:
As the basis of change detection and image fusion, SAR image registration plays an important role in the interpretation of multi-temporal SAR images. This paper presents a method of SAR image registration based on corner detection using SAR-FAST, a version of Features from Accelerated Segment Test (FAST) customized for SAR images. The proposed method first employs a rolling guidance filter to suppress speckle noise. Second, candidate corner points are determined by quantitative analysis of the dissimilarities between the detection windows on the extended circle and the center window. Finally, false detections are removed by analyzing the intensity distribution properties of the candidate corners. The experimental results show that SAR-FAST detects a sufficient number of corners with stability and high repeatability, and when applied to image registration it also obtains better registration results.
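For context, the classic FAST test that SAR-FAST adapts can be sketched as below: a pixel is a corner if enough contiguous pixels on a surrounding ring differ from the centre. This is the pointwise-contrast baseline; SAR-FAST replaces it with window-based dissimilarities robust to speckle. Threshold, arc length and the function name are illustrative assumptions:

```python
# Hypothetical sketch of a FAST-style corner test on a grayscale image given
# as a list of rows (the standard 16-offset ring at radius 3).
def fast_like_corner(img, x, y, thresh=20, arc=9):
    """True if at least `arc` contiguous ring pixels differ from the centre
    pixel by more than `thresh`."""
    ring = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]
    c = img[y][x]
    diff = [abs(img[y + dy][x + dx] - c) > thresh for dx, dy in ring]
    diff = diff + diff                     # duplicate to handle wrap-around runs
    run = best = 0
    for d in diff:
        run = run + 1 if d else 0
        best = max(best, run)
    return best >= arc
```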
2017, 39(2): 437-443.
doi: 10.11999/JEIT160274
Abstract:
Distributed micro-satellite SAR can substantially miniaturize the size and lower the cost of space-borne SAR systems. However, one of the key issues is to take full advantage of the distributed resources and achieve high-resolution images. In this paper, an approach utilizing LFMCW signals is proposed to realize a distributed micro-satellite SAR system. The signal model and the high-resolution imaging method are studied on the basis of serial formation in azimuth. LFMCW signals are transmitted simultaneously and beat-frequency division signals are received by different micro-satellites. With the crossed receiving technique, different sub-band signals with superposed equivalent phase centers can be acquired through the configuration design of formation flying, and the full-bandwidth signal is then synthesized to obtain a high-resolution image. The proposed method synthesizes the sub-band signals of distributed platforms, providing theoretical support for applying high-resolution LFMCW signals to distributed micro-satellite SAR. The correctness of the theoretical derivations and the effectiveness of the approach are validated by simulation results.
2017, 39(2): 444-450.
doi: 10.11999/JEIT160324
Abstract:
To obtain accurate aerial stitched images, this paper proposes a novel image mosaic method based on the Binary Robust Invariant Scalable Keypoints (BRISK) feature of directed line segments, aiming to overcome the low correct-matching rate and low accuracy of the conventional BRISK algorithm in image mosaicking under scaling, rotation and changes in lighting conditions. The method first uses the BRISK algorithm to acquire rough point matches. It then constructs directed line segments, describes them with the BRISK feature, and matches those directed segments, purifying the point matches by statistical voting. Finally, weighted fusion and luminance equalization are used in image fusion to accomplish the mosaic. The experimental results show that the method is robust and stable with respect to lighting, rotation, resolution and scaling; it has high precision and achieves fine image mosaic results.
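The weighted-fusion step of the pipeline above can be sketched minimally for one scanline of the overlap region: the blending weight slides linearly from one image to the other. The linear ramp and function name are illustrative assumptions:

```python
# Hypothetical sketch of linear weighted fusion across an overlap region:
# blend two overlapping scanlines with weights sliding from image A to B.
def weighted_blend(row_a, row_b):
    """Return the blended scanline; at the left edge only A contributes,
    at the right edge only B, with a linear transition in between."""
    n = len(row_a)
    return [((n - 1 - i) * a + i * b) / (n - 1)
            for i, (a, b) in enumerate(zip(row_a, row_b))]
```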
2017, 39(2): 451-458.
doi: 10.11999/JEIT160351
Abstract:
Since analysis of unambiguous general acquisition for Binary Offset Carrier (BOC) and its derivative signals is still scarce, a new unambiguous algorithm is proposed for all types of BOC signals and all modulation orders. First, a common sub-carrier model is constructed according to the links between different sub-carrier modulations. A general expression of the signals is then obtained from the common sub-carrier model, on the basis of which the sub-carrier is broken down into many half periods. Last, a general acquisition method based on a combined correlation function is proposed. Simulation results indicate that the proposed method can deal with all kinds of BOC modulated signals, including complex ones with different phases or orders. An unambiguous correlation function is obtained, the width of its main peak is narrowed, and the acquisition accuracy is improved.
2017, 39(2): 459-465.
doi: 10.11999/JEIT160347
Abstract:
To address the high computational complexity of ambiguity resolution in attitude measurement, a new attitude measurement method based on antenna configuration is proposed. Within a certain range of measurement error, the antenna array for attitude determination is properly configured and the integer ambiguity of the double-difference carrier phase is determined by the constraint information of the antenna configuration. A short baseline and a middle baseline are used for coarse and precise attitude measurement respectively. This method determines aircraft attitude without solving the ambiguity, avoiding complex ambiguity search algorithms and improving the speed of attitude determination, which makes it suitable for fast attitude measurement of aircraft.
2017, 39(2): 466-473.
doi: 10.11999/JEIT160326
Abstract:
The electromagnetic fields from an oblique lightning channel are studied using FDTD, considering the effects of vertically layered ground conductivity and channel tilt angle. The calculation results show that the initial peak values of the lightning electromagnetic fields increase with the channel tilt angle when the observation point is under the oblique lightning channel, and the rising edges of the fields become steeper. The peak time of the fields grows with the distance between the lightning stroke point on the ground and the observation point. For the fields on the ground surface, the ground conductivity on the same side as the observation point mainly affects the initial peak values of the horizontal electric field and azimuthal magnetic field, while the conductivity on the other side mainly affects the wave-tail amplitudes of those fields. For the fields inside the ground, the vertical electric field decreases with increasing depth, but the underground horizontal electric field and azimuthal magnetic field are basically the same as those on the ground surface.
2017, 39(2): 474-481.
doi: 10.11999/JEIT160395
Abstract:
With the development of communication technology, predistortion circuits for Travelling Wave Tubes (TWTs) are becoming increasingly important. This paper analyzes, for the first time, the principle of a nonlinearity generator based on Schottky diodes and the effects of the zero-bias junction capacitance and series resistance of the diode SPICE (Simulation Program with Integrated Circuit Emphasis) model on the expansion characteristics of the circuit. Conventional micro-strip predistortion circuits, whose absolute and relative bandwidths are less than 1.8 GHz and 4%, work below K band and need isolators to match the input and output ports. Using the Advanced Design System (ADS) software, a micro-strip predistortion circuit is designed for a millimeter-wave-band TWT at a center frequency of 30 GHz, with an absolute bandwidth of 2 GHz and a relative bandwidth of 6.67%. Experimental results show that the gain and phase compression variations change from 7.5 dB and 40°, 7.3 dB and 50°, and 7.1 dB and 59° for the TWT without the linearizer to less than 3.8 dB and 10°, 3.7 dB and 12°, and 2.4 dB and 15° with the linearizer at 29 GHz, 30 GHz and 31 GHz respectively. Two-tone test results show that the Input Power Back-Off (IPBO) required for a 25 dBc Carrier to third-order InterModulation (C/IM3) ratio in the communication system is 17 dB, 18 dB and 18 dB for the TWT but 12 dB, 9 dB and 8 dB for the Linearized TWT (LTWT), i.e. improvements of 5 dB, 9 dB and 10 dB with the linearizer at 29 GHz, 30 GHz and 31 GHz respectively. The linearity of the TWT is clearly improved with the linearizer, which is of great value for engineering applications.
2017, 39(2): 482-488.
doi: 10.11999/JEIT160334
Abstract:
The application of LCLC resonant converters to space Travelling-Wave Tube Amplifiers (TWTAs) is investigated in this paper. Based on the working principles under Zero Current Switching (ZCS) and Zero Voltage Switching (ZVS), the equivalent circuit of each operating mode is derived and the parameters of each mode are calculated. To validate the analysis, PSIM simulations are carried out, and the results agree with the calculated values. Finally, an LCLC resonant converter with a 20 V input, 4600 V output, 200 kHz switching frequency, 280 W output power, and 93.38% efficiency is designed. Both the simulation and the experimental results validate the effectiveness of the analysis.
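As a quick sanity check on the numbers quoted above, each L-C branch of an LCLC tank resonates at the usual f = 1/(2π√(LC)). The component values below are illustrative picks that land near the quoted 200 kHz switching frequency, not the paper's design values:

```python
import math

def resonant_frequency(l_henries, c_farads):
    # series-resonant frequency of an L-C pair: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# illustrative tank values (NOT from the paper) chosen so the series
# branch resonates near the 200 kHz switching frequency in the abstract
Ls, Cs = 10e-6, 63.3e-9
f_s = resonant_frequency(Ls, Cs)
print(f"series-branch resonance: {f_s / 1e3:.1f} kHz")
```

Operating the switches at or near this resonance is what allows the ZCS/ZVS transitions the analysis is built on; the second L-C pair shifts the gain characteristic so the converter can reach the large step-up ratio (20 V to 4600 V) through the transformer.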
2017, 39(2): 489-493.
doi: 10.11999/JEIT160303
Abstract:
To address the problem of describing and encrypting color images on quantum computers, a new method based on qubit phase rotation is proposed. Firstly, the color image is described as a quantum superposition state by mapping pixel gray values to qubit phases, where the ground state denotes the position of a pixel and the corresponding probability amplitude encodes its gray value. Then, based on qubit phase rotation, several simple image processing operations are designed. Finally, a new color image encryption algorithm is proposed, which consists of two processes: scrambling of the pixel positions and rotation of the qubits. The proposed method can be run on future quantum computers, and simulation results on a classical computer show that it is effective.
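The two encryption steps, key-driven scrambling of pixel positions followed by key-driven phase rotation, can be sketched classically for a single color channel. The [0, π/2) phase mapping and every name below are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

HALF_PI = math.pi / 2.0

def gray_to_phase(g):
    # map a gray value in [0, 255] onto a qubit phase in [0, pi/2)
    return g / 256.0 * HALF_PI

def phase_to_gray(theta):
    # inverse mapping; % 256 guards the wrap-around at theta ~ pi/2
    return round(theta / HALF_PI * 256.0) % 256

def encrypt(pixels, seed):
    rng = random.Random(seed)
    # step 1: scramble pixel positions with a key-driven permutation
    perm = list(range(len(pixels)))
    rng.shuffle(perm)
    scrambled = [pixels[i] for i in perm]
    # step 2: rotate each pixel's phase by a key-dependent angle (mod pi/2)
    keys = [rng.uniform(0.0, HALF_PI) for _ in scrambled]
    return [(gray_to_phase(g) + k) % HALF_PI for g, k in zip(scrambled, keys)]

def decrypt(phases, seed):
    # regenerate the same permutation and rotation angles from the key
    rng = random.Random(seed)
    perm = list(range(len(phases)))
    rng.shuffle(perm)
    keys = [rng.uniform(0.0, HALF_PI) for _ in phases]
    grays = [phase_to_gray((t - k) % HALF_PI) for t, k in zip(phases, keys)]
    # undo the position scrambling
    out = [0] * len(phases)
    for dst, src in enumerate(perm):
        out[src] = grays[dst]
    return out
```

On a quantum computer the rotation step would be a phase-rotation gate applied to the superposed pixel states rather than explicit per-pixel arithmetic; this classical sketch only mirrors the algorithm's structure, much like the paper's own classical simulation.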
2017, 39(2): 494-498.
doi: 10.11999/JEIT160272
Abstract:
Based on the signal-subspace vectors obtained from the spatial-frequency decomposition, a novel time-reversal imaging algorithm is proposed. Using the backscattered data recorded by the antenna array, a spatial-frequency multistatic matrix is set up. Singular value decomposition is applied to this matrix to obtain the signal-subspace vectors, which are employed to selectively focus the imaging of targets. The imaging pseudo-spectrum based on the full backscattered data includes the contributions of multiple sub-vectors and can be viewed as the superposition of multiple images, which makes the algorithm statistically stable. The random phases generated by the conventional time-reversal imaging method based on the spatial-spatial decomposition do not arise in this algorithm. It has an excellent capability to resist noise interference and can accurately focus multiple targets even when noise at 10 dB SNR is added to the measured data.
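The signal-subspace step can be sketched numerically for the simplest case, a single point scatterer in the Born approximation, with an arbitrary array geometry and wavelength (all values illustrative): the rank-one multistatic matrix is decomposed by SVD, and the dominant singular vector is correlated with steering vectors over a grid of trial points to form the pseudo-spectrum.

```python
import numpy as np

k = 2 * np.pi / 0.1                                     # wavenumber for a 0.1 m wavelength
elems = np.array([[i * 0.05, 0.0] for i in range(8)])   # 8-element linear array

def steering(x):
    # normalized free-space Green's-function vector from point x to the array
    r = np.linalg.norm(elems - x, axis=1)
    g = np.exp(1j * k * r) / r
    return g / np.linalg.norm(g)

target = np.array([0.2, 0.5])
g_t = steering(target)
K = np.outer(g_t, g_t)          # multistatic matrix of one point scatterer (Born)

# signal subspace from the SVD of the multistatic matrix
U, s, Vh = np.linalg.svd(K)
u1 = U[:, 0]                    # dominant singular vector spans the signal subspace

# pseudo-spectrum: correlation of the subspace vector with trial steering vectors
grid = [np.array([0.2, y]) for y in np.arange(0.3, 0.8, 0.05)]
spec = [abs(np.vdot(u1, steering(p))) ** 2 for p in grid]
best = grid[int(np.argmax(spec))]
print("peak at", best)          # the pseudo-spectrum peaks at the true target
```

With several well-separated scatterers the matrix has one dominant singular vector per target, and forming a pseudo-spectrum from each vector separately is what gives the selective focusing the abstract describes.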
2017, 39(2): 499-503.
doi: 10.11999/JEIT160297
Abstract:
A body biasing linearization technique is proposed to shape the output spectrum mask so as to meet the stringent specification of the Human Body Communication (HBC) standard for Wireless Body Area Networks (WBANs). The body biasing of the buffer transistors is properly designed so that the second-order nonlinearity coefficient is tuned and the Second-order InterModulation product (IM2) at the buffer output is cancelled. A sample HBC transmitter based on body biasing is designed in a 0.35 μm CMOS process with a 1.8 V supply voltage. Simulation results show that an optimum IIP2 of 90 dBm can be obtained and the output transmit spectral mask at 1 MHz is attenuated to -130 dBr (dB relative to the center frequency). Compared with conventional circuits, an improvement of 23 dB in spectrum attenuation is achieved, satisfying the -120 dBr requirement of IEEE 802.15.6 for WBAN.
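The cancellation mechanism can be checked with a memoryless polynomial model of the buffer: in a two-tone test, the component at the difference frequency |f2 - f1| is produced only by the second-order coefficient a2, so tuning a2 to zero, which is what the body-bias adjustment models here, removes IM2 entirely. All coefficients, tone frequencies, and the sampling setup below are illustrative assumptions:

```python
import math

def buffer_out(x, a1=1.0, a2=0.0, a3=-0.1):
    # weakly nonlinear buffer: y = a1*x + a2*x^2 + a3*x^3
    # body biasing tunes a2; a2 = 0 models perfect IM2 cancellation
    return a1 * x + a2 * x * x + a3 * x ** 3

def im2_amplitude(a2_val, amp=0.1, f1=1.00e6, f2=1.05e6, fs=64e6, n=6400):
    # two-tone test: measure the |f2 - f1| product with a one-bin DFT;
    # n/fs spans an integer number of cycles of every tone involved
    fd = f2 - f1
    re = im = 0.0
    for i in range(n):
        t = i / fs
        x = amp * (math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t))
        y = buffer_out(x, a2=a2_val)
        re += y * math.cos(2 * math.pi * fd * t)
        im += y * math.sin(2 * math.pi * fd * t)
    return 2.0 * math.hypot(re, im) / n

print(im2_amplitude(0.05))  # finite IM2 when a2 != 0 (equals a2 * amp^2)
print(im2_amplitude(0.0))   # IM2 vanishes when a2 is tuned to zero
```

The third-order term a3 only generates products at 2f1-f2, 2f2-f1 and the harmonics, never at f2-f1, which is why the difference-frequency bin isolates the second-order nonlinearity that body biasing targets.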
2017, 39(2): 504-508.
doi: 10.11999/JEIT160876
Abstract:
Traveling Wave Tube Amplifiers (TWTAs) are used extensively in radar and communication systems, and their phase stability influences the quality of transmitted signals, the detection accuracy of target parameters, and electromagnetic compatibility. This paper quantitatively analyses the effects of the circuit parameters of the Electronic Power Conditioner (EPC) on the phase stability of the TWTA, and further proposes three design schemes for the EPC to improve it: selecting a reasonable power supply topology; adopting a low-voltage-charging, high-frequency, high-voltage source to raise the stability of the cathode high voltage; and compensating the phase instability caused by droop of the cathode voltage pulse top by adjusting the amplitude of the control pulse. This paper provides a theoretical basis for the research of TWTAs with compact size, high power, and high phase stability.
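The sensitivity of TWT phase to cathode voltage that motivates these schemes follows from the beam velocity scaling v ∝ √V: the electrical length φ = ωL/v then satisfies dφ/φ = -½ · dV/V. A back-of-envelope sketch, where the 7200° electrical length (about 20 wavelengths of slow-wave circuit) is an illustrative assumption:

```python
def phase_shift_deg(phi0_deg, dv_over_v):
    # beam velocity scales as sqrt(V), so the electrical length
    # phi = omega * L / v changes by dphi/phi = -(1/2) * dV/V
    return -0.5 * phi0_deg * dv_over_v

# a tube with ~7200 deg of electrical length (illustrative) and
# 0.1% cathode-voltage ripple suffers ~3.6 deg of phase pushing
print(phase_shift_deg(7200.0, 1e-3))
```

This first-order relation shows why even sub-percent ripple on the cathode high voltage matters, and hence why the EPC schemes above concentrate on stabilizing that voltage and compensating its pulse-top droop.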