2017 Vol. 39, No. 11
2017, 39(11): 2541-2547.
doi: 10.11999/JEIT170161
Abstract:
Considering Dense Small-Cell Networks (DSCNs) with limited pilot resources, channel estimation is carried out using a pilot-reused Minimum Mean Square Error (MMSE) estimator, and exact expressions of the uplink achievable rate are then derived for a maximal ratio combining receiver with arbitrary pilot reuse factors. More severe pilot contamination degrades the uplink net achievable sum rate. To maximize the uplink achievable sum rate, a greedy pilot scheduling algorithm is proposed that uses large-scale fading channel information to reduce pilot contamination. On this basis, a low-complexity semi-dynamic pilot scheduling algorithm is proposed to determine the best pilot reuse factor. Simulation results verify the theoretical derivation and show that the proposed semi-dynamic pilot scheduling algorithm can reduce pilot overhead, mitigate pilot contamination, and boost the uplink net achievable sum rate.
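As a rough illustration of the greedy idea above, the sketch below assigns pilots one user at a time, scoring each candidate pilot by the large-scale fading cross gains of the users that already share it. The matrix `beta`, the `cells` mapping, and the cost proxy are hypothetical stand-ins, not the paper's exact rate metric.

```python
import numpy as np

def greedy_pilot_schedule(beta, cells, n_pilots):
    """Assign each user a pilot greedily (illustrative sketch).

    beta  : (n_bs, n_users) large-scale fading gains
    cells : cells[u] = index of the base station serving user u
    Assumes n_pilots is at least the number of users per cell.
    """
    n_users = beta.shape[1]
    assignment = -np.ones(n_users, dtype=int)
    for u in range(n_users):
        best_p, best_cost = 0, np.inf
        for p in range(n_pilots):
            co = np.where(assignment == p)[0]
            if any(cells[v] == cells[u] for v in co):
                continue  # users in one cell must use distinct pilots
            # contamination proxy: cross gains toward co-pilot users
            cost = sum(beta[cells[u], v] + beta[cells[v], u] for v in co)
            if cost < best_cost:
                best_p, best_cost = p, cost
        assignment[u] = best_p
    return assignment
```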
2017, 39(11): 2548-2555.
doi: 10.11999/JEIT170122
Abstract:
In order to improve the spectrum efficiency of cognitive radio networks, an optimization algorithm for the spectrum sensing time is presented. A longer sensing time has two consequences. On the one hand, the channel parameters are estimated more accurately, which reduces the interference to the authorized users and improves the throughput of the cognitive users. On the other hand, it shortens the transmission time and thus decreases the system throughput. There therefore exists an optimal sensing time that maximizes the throughput. Assuming that the channel state information of the sub-bands is exponentially distributed, a stochastic programming method is proposed to optimize the sensing time of the cognitive radio network. Computer simulation results show that the algorithm is effective and has practical engineering value.
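The sensing-throughput tradeoff itself can be sketched with the common energy-detector model (a generic textbook formulation, not the paper's stochastic-programming method); all parameter values below are placeholders.

```python
import numpy as np
from math import erfc, sqrt
from scipy.stats import norm

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def throughput(tau, T, fs, snr, pd_target):
    """Normalized secondary throughput for sensing time tau within a frame
    of length T: the false-alarm probability needed to guarantee pd_target
    shrinks as the number of sensing samples tau*fs grows."""
    n = tau * fs                       # number of sensing samples
    q_inv_pd = norm.isf(pd_target)     # Q^{-1}(pd_target)
    pf = q_func(sqrt(2 * snr + 1) * q_inv_pd + sqrt(n) * snr)
    return (T - tau) / T * (1 - pf)

# simple 1-D grid search for the throughput-maximizing sensing time
taus = np.linspace(1e-4, 0.019, 400)
tau_opt = max(taus, key=lambda t: throughput(t, T=0.02, fs=6e6,
                                             snr=10 ** (-1.5), pd_target=0.9))
```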
2017, 39(11): 2556-2562.
doi: 10.11999/JEIT170184
Abstract:
In order to utilize the unlicensed band efficiently and improve network capacity, the coexistence problem between Long Term Evolution (LTE) and WiFi must be solved. Licensed-Assisted Access (LAA) and the Dual-Band Femtocell (DBF), which can access both the licensed and unlicensed bands, have recently been proposed. In this paper, considering the case where a DBF partly overlaps with just one WiFi Access Point (AP), a novel DBF unlicensed channel access mechanism is proposed. Then, an optimal traffic balancing scheme over the licensed and unlicensed bands is developed that accounts for the effect of coverage overlap. Numerical results show that if the DBF and WiFi AP are mutually invisible, the proposed scheme outperforms the existing scheme in terms of sum utility and throughput, because spatial reuse gives the DBF an extra fraction of unlicensed channel time. Otherwise, the performance is the same as the existing scheme, that is, the DBF and WiFi AP use the unlicensed band alternately.
2017, 39(11): 2563-2570.
doi: 10.11999/JEIT170391
Abstract:
The design of Directional Modulation (DM) signals with a phased array is one of the important topics in the field of physical-layer security communication. In this paper, a sparse-array synthesis method based on convex optimization is proposed. Firstly, a nonconvex optimization problem is formulated from some basic metrics of the DM signal. Secondly, two different solutions are presented: one is based on Iterative Reweighted l1-norm (IRL) minimization, yielding a superdirective array with inter-element spacing less than half a wavelength; the other is based on Mixed Integer Programming (MIP), yielding a non-superdirective array with inter-element spacing greater than half a wavelength. Finally, the power efficiency of the DM transmitter is optimized based on the MIP algorithm. Simulation results show that the proposed synthesis method provides greater flexibility in controlling the security performance, power efficiency, and sparsity level, while requiring fewer excitations than the uniformly spaced linear array in the benchmark problems.
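For the IRL part, a generic iterative reweighted l1-norm loop looks like the sketch below (real-valued and with a plain equality constraint for simplicity; the paper's DM synthesis adds its own constraints and complex excitations).

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(A, b, n_iter=5, eps=1e-3):
    """Sparse solution of A x = b via iterative reweighted l1-norm
    minimization: entries that stay small get ever larger weights,
    pushing them toward exact zero."""
    n = A.shape[1]
    w = np.ones(n)
    x_val = np.zeros(n)
    for _ in range(n_iter):
        x = cp.Variable(n)
        prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                          [A @ x == b])
        prob.solve()
        x_val = x.value
        w = 1.0 / (np.abs(x_val) + eps)  # reweight for the next pass
    return x_val
```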
2017, 39(11): 2571-2578.
doi: 10.11999/JEIT170058
Abstract:
In order to overcome the accuracy limitation of one-dimensional models in fingerprint localization of mine workers, a two-dimensional fingerprint location database algorithm for mine workers is proposed. The large data-acquisition workload brought by the two-dimensional model is addressed by SVR-Kriging interpolation. Firstly, Gaussian filtering is used to preprocess the fingerprint information of the collected sampling points, and the variogram is fitted by Support Vector Regression (SVR). Then, Kriging interpolation is used to complete the location fingerprint information of the un-sampled areas of the two-dimensional mesh. Finally, the fingerprint location database of the mine workers is established by integrating the location fingerprint information of the sampling points and the interpolation points, laying the foundation for subsequent fingerprint localization of mine workers. The simulation results show that the proposed algorithm can reduce the workload of data acquisition while remaining feasible and effective, and can guarantee high accuracy when positioning is performed with the location fingerprints.
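A minimal sketch of the SVR-Kriging step, assuming 2-D sampling points `pts` with fingerprint values `vals`: SVR fits the empirical semivariogram, and ordinary kriging then interpolates an un-sampled location. Function names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from scipy.spatial.distance import cdist

def fit_variogram(pts, vals):
    """Fit the empirical semivariogram gamma(h) with SVR."""
    d = cdist(pts, pts)
    g = 0.5 * (vals[:, None] - vals[None, :]) ** 2
    iu = np.triu_indices_from(d, k=1)            # each pair once
    svr = SVR(kernel='rbf').fit(d[iu].reshape(-1, 1), g[iu])
    return lambda h: svr.predict(np.asarray(h, float).reshape(-1, 1))

def krige(pts, vals, x0, gamma):
    """Ordinary kriging estimate of the fingerprint value at x0."""
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    A[:n, :n] = gamma(cdist(pts, pts).ravel()).reshape(n, n)
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]                # kriging weights
    return w @ vals
```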
2017, 39(11): 2579-2586.
doi: 10.11999/JEIT170150
Abstract:
Because full protection against multi-link failures incurs high resource redundancy in elastic optical networks, a strategy of Reliability Oriented Probability Protection and Backup Reprovisioning (ROPP-BR) is proposed. In ROPP-BR, adaptively adjusted working and protection link cost functions are devised to effectively select candidate working and protection light-paths that have high reliability and consume few spectrum resources under multi-link failures. The cost functions account for both the spectrum resource consumption and the link failure probability of the working and protection paths. Meanwhile, in order to satisfy differentiated reliability requirements, a reliability-oriented probability protection mechanism is introduced, and a theoretical analysis model is established to estimate the service reliability of light-paths in optical networks with probability protection, so that the probability protection path can be flexibly configured under the reliability requirements of the request. Moreover, to further improve protection sharing efficiency, when a request would be blocked owing to a shortage of idle spectrum resources, a backup reprovisioning method based on the maximum clique is put forward to reconfigure the protection paths and spectrum resources. The simulation results indicate that the proposed ROPP-BR strategy can jointly consider bandwidth blocking probability and service reliability, and effectively reduce the redundancy of protection resources.
2017, 39(11): 2587-2593.
doi: 10.11999/JEIT170068
Abstract:
A maximal-capacity-difference mapping-based secrecy polar coding method is proposed. It improves the secrecy rate by reducing the channel polarization speed. First, the polarized channels are divided into two categories based on the polarization structure: those of good quality and those of bad quality. By analyzing the erasure rates of the polarized channels, a maximal-capacity-difference mapping method is proposed. By improving the capacity of the bad polarized channels and reducing that of the good polarized channels, the channel polarization speed is effectively decreased. Finally, weighting is adopted to modify the maximal-capacity-difference mapping results between the legitimate channels and the wiretap channels, thus implementing secrecy polar coding over multi-input channels. Simulation results verify that, at polarization order n=9 over binary erasure channels, the proposed method increases the secrecy rate to 0.042, compared with 0.029 for the random mapping method and 0.004 for Arikan's method. The proposed method also works in fading channels.
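The erasure rates referred to above follow the standard binary-erasure-channel polarization recursion, shown below; the paper's mapping method then reshapes the capacities before polarization, which this sketch does not attempt.

```python
def bec_polarize(eps, n):
    """Erasure probabilities of the 2**n synthesized channels of a BEC(eps).
    Each split produces a 'bad' channel with erasure 2z - z^2 and a
    'good' channel with erasure z^2."""
    z = [eps]
    for _ in range(n):
        z = [zi for x in z for zi in (2 * x - x * x, x * x)]
    return z

# e.g. at polarization order n = 9, count near-perfect / near-useless channels
zs = bec_polarize(0.5, 9)
good = sum(z < 1e-3 for z in zs)
bad = sum(z > 1 - 1e-3 for z in zs)
```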
2017, 39(11): 2594-2599.
doi: 10.11999/JEIT170113
Abstract:
This paper proposes a Distributed Joint Source-Channel Coding (DJSCC) scheme using Protograph Low Density Parity Check (P-LDPC) codes. In the proposed scheme, the distributed source encoder sends some information bits together with the parity bits to achieve distributed compression and channel error correction simultaneously. Iterative joint decoding is introduced to further exploit the source correlation. Moreover, the proposed scheme is investigated when the correlation between sources is not known at the decoder. Simulation results indicate that the proposed DJSCC scheme obtains relatively large additional coding gains within a relatively small number of global iterations, and that its performance for unknown correlated sources is almost the same as for known correlated sources, since the correlation can be estimated jointly with the iterative decoding process.
2017, 39(11): 2600-2606.
doi: 10.11999/JEIT170084
Abstract:
Most existing parameter estimation algorithms for Frequency Hopping (FH) signals do not consider the structural characteristics of FH signals, and suffer from high computational complexity or low estimation accuracy under low signal-to-noise-ratio conditions. To solve this problem, this paper proposes a parameter estimation algorithm for FH signals in the compressed domain based on a sliding window and an atomic dictionary. The FH signal is acquired by sliding compressive sampling, and the hop timing is coarsely estimated with the sliding window method. The block-diagonal Fourier orthogonal basis is used as the sparse basis to estimate the frequencies of the signal. An atomic dictionary that represents the local time-frequency characteristics of the FH signal is constructed from the estimated frequencies and the coarse hop timing; the hop timing is then estimated accurately by the matching pursuit algorithm. Simulation results show that this algorithm significantly reduces the sampled data and computational complexity while maintaining high estimation accuracy.
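The final refinement step is plain matching pursuit over the constructed atomic dictionary; a generic version, assuming unit-norm dictionary columns, is sketched below.

```python
import numpy as np

def matching_pursuit(y, D, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution.

    y : measurement vector
    D : dictionary with unit-norm columns
    """
    r = y.astype(complex).copy()
    idx, coef = [], []
    for _ in range(n_atoms):
        corr = D.conj().T @ r
        k = int(np.argmax(np.abs(corr)))
        idx.append(k)
        coef.append(corr[k])
        r = r - corr[k] * D[:, k]
    return idx, coef
```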
2017, 39(11): 2607-2614.
doi: 10.11999/JEIT170166
Abstract:
A modified cyclic autocorrelation algorithm is proposed to estimate the useful data period, entire symbol period, chip duration, and guard interval length of MC-CDMA signals. Firstly, the autocorrelation function of the received MC-CDMA signal is computed. Then, Fourier transformation and accumulation in the frequency domain are applied. Finally, by detecting the intervals of the peak pulses in different slices, the parameters mentioned above can be estimated. In addition, a new method for estimating the symbol period, the accumulative average method, is developed: by averaging the amplitudes of the spectral lines in each column parallel to the delay axis, the symbol duration can be obtained. The cyclic autocorrelation expression of the MC-CDMA signal is derived theoretically, and it is proved that an MC-CDMA signal with a cyclic prefix is cyclostationary. The simulation results show that the improved cyclic autocorrelation algorithm is effective in the low signal-to-noise-ratio case.
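The core computation can be sketched as follows: for a fixed delay (lag), the FFT of the lag product of the signal estimates the cyclic autocorrelation over all cyclic frequencies, whose peak spacing reveals the periods. This is a generic estimator, not the paper's full slice-accumulation procedure.

```python
import numpy as np

def cyclic_autocorrelation(x, lag, fs):
    """Estimate the cyclic autocorrelation R(alpha; lag) of x over all
    cyclic frequencies alpha via the FFT of the lag product."""
    prod = x[lag:] * np.conj(x[:-lag])
    R = np.fft.fft(prod) / len(prod)
    alpha = np.fft.fftfreq(len(prod), 1.0 / fs)
    return alpha, R   # peaks of |R| mark the cyclic frequencies
```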
2017, 39(11): 2615-2619.
doi: 10.11999/JEIT170092
Abstract:
A new variant of the constrained longest common subsequence problem is proposed. Given sequences Q and C and a sequence I of specific positions in Q, the matching-path-constrained longest common subsequence problem for Q and C with respect to I is to find a Longest Common Subsequence (LCS) of Q and C such that the positions I in Q lie on the matching path of this LCS. A matching-path-constrained longest common subsequence algorithm is proposed for this problem. Firstly, a new model is defined for the matching-path-constrained longest common subsequence. Secondly, a property of the subsequence is given. Lastly, a general method with O(mnt) time and a fast method with O(mn) time are analyzed, where n, m, and t are the lengths of Q, C, and I respectively.
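For reference, the classic O(mn) dynamic program for the unconstrained LCS length is sketched below; the matching-path constraint of this paper adds bookkeeping on top of this recurrence.

```python
def lcs_length(q, c):
    """Classic dynamic program for the LCS length of sequences q and c."""
    m, n = len(q), len(c)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if q[i - 1] == c[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```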
2017, 39(11): 2620-2626.
doi: 10.11999/JEIT170236
Abstract:
In order to solve the problem of adaptive beamformer performance degradation caused by few snapshots and mainlobe angle mismatch, a robust dominant mode rejection adaptive beamforming algorithm based on a modified covariance matrix is proposed. The algorithm employs the modified covariance matrix to estimate the dominant mode components and reconstructs the covariance matrix. The signal subspace is then obtained by projecting the steering vector of the desired signal onto the eigenvectors of the reconstructed covariance matrix. To enhance performance at low SNR, the algorithm uses the principal eigenvector of a positive definite matrix that integrates over the desired signal region to modify the signal subspace; the calibrated steering vector of the desired signal is then obtained by projecting the presumed steering vector onto the modified signal subspace. Finally, the optimal weights are computed from the reconstructed covariance matrix and the calibrated steering vector. Theoretical analysis and simulation show the effectiveness and robustness of the proposed algorithm.
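The final weight computation is the standard Capon/MVDR formula applied to the reconstructed covariance matrix and calibrated steering vector, as sketched below.

```python
import numpy as np

def mvdr_weights(R, a):
    """Capon/MVDR weights: w = R^{-1} a / (a^H R^{-1} a).

    R : (N, N) covariance matrix (here: the reconstructed one)
    a : (N,) steering vector (here: the calibrated one)
    """
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)
```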
2017, 39(11): 2627-2634.
doi: 10.11999/JEIT170173
Abstract:
To meet the accuracy and real-time requirements of fall detection, an activity model based on attitude angles is first established. A sensor board integrating a tri-axial accelerometer and gyroscope is developed, which captures the accelerations and angular velocities of human activities and transmits them to a smartphone over Bluetooth. Secondly, the three-dimensional attitude angles and the acceleration signal vector magnitude are selected as features for fall detection. The collected data are preprocessed with a Kalman filter to reduce noise and improve the precision of the attitude angle calculation. The k-Nearest Neighbor (k-NN) algorithm and an appropriate sliding window are used to develop the fall detection and alert system. Finally, the experimental results show that the system discriminates falls from activities of daily living with an accuracy of 98.9%, while the sensitivity and specificity are 98.9% and 98.5% respectively, proving that the method has favorable accuracy and reliability.
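A minimal sketch of the classification stage, assuming windowed features have already been extracted (attitude-angle statistics plus the acceleration signal vector magnitude); the function names and the choice k=5 are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def signal_vector_magnitude(acc):
    """Per-sample magnitude of (n, 3) tri-axial accelerations."""
    return np.linalg.norm(acc, axis=1)

def train_fall_detector(features, labels, k=5):
    """features: one row per sliding window (e.g. attitude-angle statistics
    and signal-vector-magnitude peaks); labels: 0 = daily activity, 1 = fall."""
    return KNeighborsClassifier(n_neighbors=k).fit(features, labels)
```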
2017, 39(11): 2635-2643.
doi: 10.11999/JEIT170045
Abstract:
Local similarity measurements are usually used to improve tracking robustness in complex scenes. However, these methods have drawbacks under partial occlusion, deformation, and rotation: they consider only traditional similarity measurements between targets and templates, and the resulting matching errors lead to tracking failure. In this paper, a target tracking algorithm is proposed based on measurements of local difference similarities. The presented method has the following advantages: firstly, both similarities and differences are considered in the measurement; secondly, differential weight learning of the local regions is carried out to improve the accuracy of the sub-block difference measurement; finally, an effective and efficient tracker is designed based on the difference analysis and a simple update scheme within the particle filter framework. Experimental results show that the proposed algorithm performs better than traditional competing methods under various factors such as illumination changes, partial occlusion, and scale changes.
2017, 39(11): 2644-2651.
doi: 10.11999/JEIT170390
Abstract:
Saliency detection aims to find the most important object in an unknown scene automatically, in accordance with human visual attention. To improve the precision of saliency detection, a saliency detection method based on robust foreground seeds via manifold ranking is proposed in this paper. Firstly, two different convex hulls are obtained by the Harris corner and boundary connectivity algorithms, and the initial object region is defined as the intersection of these convex hulls. Secondly, the superpixels inside the convex hull are compared for similarity with the outer edge of the convex hull; superpixels similar to most of the outer edge are removed, yielding more precise foreground seeds. Using an anchor graph, a novel graph construction is built to express the relationship between data nodes. Then, two kinds of saliency results are obtained by ranking on manifolds with the foreground and background seeds respectively. Finally, the saliency map is obtained by optimizing a novel cost function. Experimental results show that the proposed algorithm further improves precision and recall.
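The ranking step uses the standard closed form of ranking on manifolds (Zhou et al.); a direct dense implementation is sketched below, with `W` a hypothetical superpixel affinity matrix and `y` the seed indicator vector.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form ranking on manifolds: f = (I - alpha * S)^{-1} y,
    where S is the symmetrically normalized affinity matrix.
    Assumes a connected graph (all node degrees positive)."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    return np.linalg.solve(np.eye(len(y)) - alpha * S, y.astype(float))
```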
2017, 39(11): 2652-2660.
doi: 10.11999/JEIT170162
Abstract:
In this paper, an adaptive-block person re-identification method based on saliency fusion is proposed to address two problems of block-matching-based person re-identification: the lack of guidance on the rule and size of the blocks, and the differing discriminability of different blocks. Firstly, a heuristic is used to determine the initial clustering centers, and the size and number of blocks are determined automatically according to the image content. Then, the intra-image salience of each block is calculated using the normalized partial Area Under the Curve (pAUC), the salience of each block is learned by a structured SVM, and the weights of the blocks are fused as the basis for matching score fusion. Experiments show that this method achieves better recognition results on commonly used person re-identification data sets.
2017, 39(11): 2661-2668.
doi: 10.11999/JEIT170214
Abstract:
To achieve fast and accurate segmentation of images with complicated backgrounds and weak boundaries, the traditional level set formulation often adopts a re-initialization step, which suffers from heavy computation and inaccurate segmentation. Thus, combined with a saliency detection algorithm, a new variational level set image segmentation method based on the combination of edge information and regional local information is proposed. Firstly, the salient region of the image is detected by a cellular automata model to obtain the initial boundary curve. Then, an improved Distance Regularized Level Set Evolution (DRLSE) model incorporates the local information of the image into the variational energy equation, and the evolution of the curve is guided by the improved energy equation. Compared with DRLSE, the experimental results show that the proposed algorithm requires on average only 2.76% of the former's runtime while further improving segmentation accuracy.
2017, 39(11): 2669-2676.
doi: 10.11999/JEIT170120
Abstract:
In modern life, high stress causes negative emotions and can even lead to various chronic diseases. Psychologists need to understand an individual's stress state in order to provide the corresponding psychological treatment. The traditional psychological method of self-evaluation is subjective, while methods based on physiological polygraphs cannot be used for daily stress assessment because of the bulk of the equipment. For these reasons, a wearable device is used to collect physiological signals, and an individual's stress is assessed according to the relationship between psychological and physiological states. The Montreal Imaging Stress Task (MIST) is used to induce three states: no, moderate, and high stress. The MIST includes both mental and psychosocial stress factors, which is closer to real-life conditions. The experimental data are collected from 39 healthy subjects. Features are extracted from the data, and a random forest is used to select the optimal stress-related feature combination, which is used to train and test a Support Vector Machine (SVM) classifier. The results show that the combination of random forest feature selection and the SVM achieves better performance: the accuracy of three-state detection is improved from 78% to 84%.
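A minimal sketch of the feature-selection-plus-classification pipeline using scikit-learn; the number of kept features and all hyperparameters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def rf_select_then_svm(X, y, n_keep=20):
    """Rank features by random-forest importance, keep the top n_keep,
    and train an SVM classifier on the reduced feature set."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:n_keep]
    clf = SVC(kernel='rbf').fit(X[:, top], y)
    return clf, top   # apply as clf.predict(X_test[:, top])
```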
2017, 39(11): 2677-2683.
doi: 10.11999/JEIT170108
Abstract:
Heart disease has among the highest morbidity and mortality rates. The cardiac structure and mechanical characteristics can be assessed by auscultation, which, compared with echocardiography and nuclear magnetic resonance, has the advantages of speed, low cost, and ease of use. However, the composition of the phonocardiogram is complex, and auscultation is easily affected by the subjectivity of the doctor and by various noises and disturbances, which limits its application. An algorithm for phonocardiogram segmentation and abnormal phonocardiogram screening is presented. Because the cardiac cycle is estimated in advance, 80% of cardiac cycles can be recognized correctly even when random disturbances exist. Time- and frequency-domain diagnostic indexes with high discrimination are also presented, and abnormal heart sounds are recognized by a Support Vector Machine (SVM) with an accuracy of about 92%. The algorithm can be used to assist doctors or in portable phonocardiogram monitoring devices.
Feature Extraction and Classification of Spectrum of Radiated Noise of Underwater High Speed Vehicle
2017, 39(11): 2684-2689.
doi: 10.11999/JEIT170283
Abstract:
In order to improve the classification of underwater high-speed vehicles, a classification method based on the High Speed Characteristic Quantity (HSCQ) of vehicle radiated noise is designed. Firstly, the Detection of Envelope Modulation On Noise (DEMON) spectrum of actual measured radiated noise is analyzed, and the Modulation Distribution Ratio (MDR) of the radiated noise is defined based on the separability of the modulation frequencies in the DEMON spectrum. Then spectrogram feature analysis and feature extraction of the radiated noise of underwater high-speed vehicles are performed based on image edge detection and edge growing, and the Straight-line Characteristic Quantity of the Spectrum (SCQS) of the vehicle radiated noise is analyzed. Finally, combining the analysis results of the two types of characteristic quantity, a new classification method for underwater high-speed vehicles is realized and the HSCQ of the vehicle radiated noise is designed. Analysis of actual measured radiated noise shows that the false-alarm rate for non-high-speed vehicles is 21.4% (using MDR only), 16.3% (using SCQS only), and 4.1% (using HSCQ).
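DEMON analysis itself can be sketched as envelope demodulation followed by a spectrum, as below; band-pass pre-filtering and the paper's MDR/SCQS feature extraction are omitted.

```python
import numpy as np
from scipy.signal import hilbert

def demon_spectrum(x, fs):
    """DEMON sketch: envelope of the radiated-noise signal via the Hilbert
    transform, then an FFT to expose the propeller modulation lines."""
    env = np.abs(hilbert(x))
    env -= env.mean()                        # remove the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec
```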
2017, 39(11): 2690-2696.
doi: 10.11999/JEIT170178
Abstract:
Detection of weak targets in heavy ground clutter is the key issue for Foreign Object Debris (FOD) surveillance radar on airport runways. A novel hierarchical FOD detection method is proposed based on eigenvalue spectrum feature extraction and the Minimax Probability Machine (MPM). The clutter-map Constant False Alarm Rate (CFAR) detection algorithm is first used to categorize radar echoes into two kinds, i.e., background clutter and FOD returns (including false-alarm returns). Then eigenvalue spectrum features are extracted to transform the FOD returns and false-alarm returns into a feature domain where the FOD and false alarms are more distinguishable. Finally, the MPM classifier separates the FOD and false alarms into different classes so as to reduce the false-alarm rate. Experimental results based on measured data show that the proposed method achieves good detection performance.
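A minimal sketch of a clutter-map CFAR over a sequence of power maps, assuming each cell keeps an exponentially smoothed background estimate; the scaling factor `alpha` and smoothing weight `w` are placeholders.

```python
import numpy as np

def clutter_map_cfar(frames, alpha=5.0, w=0.1):
    """Clutter-map CFAR sketch: each range-azimuth cell maintains its own
    background estimate; a cell declares a detection when its power exceeds
    alpha times that background."""
    cmap = frames[0].astype(float)           # initial background estimate
    detections = []
    for f in frames[1:]:
        detections.append(f > alpha * cmap)
        cmap = (1.0 - w) * cmap + w * f      # update background estimate
    return detections
```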
2017, 39(11): 2697-2704.
doi: 10.11999/JEIT170149
Abstract:
Benefiting from the combined processing of echo signals received on spatially separated platforms, bistatic spaceborne SAR has many valuable applications such as surveying, interferometry, target recognition and classification, and disaster monitoring. In order to further improve the imaging performance, this paper presents a bistatic spaceborne Multiple-Input Multiple-Output SAR (MIMO SAR) system that combines Space-Time Coding (STC) and Short-Term Shift-Orthogonal (STSO) chirp waveforms. With the help of digital beamforming techniques on receive, the different transmitted waveforms can be separated and extracted from the mixed echoes, so that this enhanced architecture achieves the advantages of both the bistatic and MIMO configurations through additional spatial degrees of freedom. Furthermore, it offers an opportunity to mitigate the influence of double-bounce scattering by beamforming over multiple SAR images. The theoretical analysis is derived in detail and then validated by simulation experiments.
2017, 39(11): 2705-2715.
doi: 10.11999/JEIT170086
Abstract:
In order to solve the SAR target discrimination problem in real complex scenes, a SAR target discrimination method is proposed based on a Bag-of-Words (BoW) model with multiple low-level feature fusion. In the low-level feature extraction stage of the BoW model, the SAR-SIFT feature is used to describe the shape information of local regions of an image sample. In addition, a set of new local descriptors, derived from the traditional target discrimination features, is used to capture the contrast and texture information of the local regions. For the fusion of the different low-level features in the BoW model, an image-level feature fusion strategy generates the global image feature, realized by the Multiple Kernel Learning (MKL) method with L2-norm regularization. Experimental results on the MiniSAR real SAR dataset show that the proposed BoW-based SAR target discrimination algorithm with multi-feature fusion achieves better discrimination performance than methods based on the traditional discrimination features or on BoW model features using a single low-level descriptor.
2017, 39(11): 2716-2723.
doi: 10.11999/JEIT170079
Abstract:
The Signal-to-Clutter Ratio (SCR) in the reference channel is an important parameter for evaluating the integration loss of passive radar. When the PN sequence of the Digital Terrestrial Television Broadcast (DTTB) illuminator is used for SCR estimation, the received signal may have a fractional delay relative to the local PN sequence, which leads to severe deviation of the SCR estimate. For this problem, a novel algorithm based on compressive sensing is proposed that exploits the sparsity of the signal in the delay dimension. Simulations demonstrate that the proposed algorithm accurately estimates the delay and strength of signals of different strengths, which ensures the accuracy of SCR estimation. Processing of experimental data shows that the overall SCR in the received data is relatively high and the integration loss is trivial, about 0.5 dB. In addition, the SCR decreases with the baseline length, which increases the integration loss.
2017, 39(11): 2724-2732.
doi: 10.11999/JEIT170072
Abstract:
Attribute-Based Encryption (ABE) schemes are widely used in cloud storage for their fine-grained access control. However, in the original ABE schemes, a single authority leads to a trust issue and a computational bottleneck in distributing private keys. To solve these problems, a distributed ABE scheme consisting of a number of central authorities and multiple attribute authorities is constructed over a prime-order bilinear group. Here, a central authority is responsible for establishing the system and generating private keys for users, and a single private key is generated by only one central authority; multiple central authorities are adopted to improve the stability of the system and reduce the computational load on each central authority. The attribute authorities, which are independent of each other, are responsible for managing different attribute domains. Moreover, the ciphertext length of the proposed scheme is independent of the number of attributes and is therefore constant, and decryption requires only two bilinear pairing operations. The scheme is proved selectively secure under the q-Bilinear Diffie-Hellman Exponent (q-BDHE) assumption in the random oracle model. Finally, the functionality and efficiency of the proposed scheme are analyzed and verified. The experimental results show that the proposed scheme achieves both constant-size ciphertexts and fast decryption, which greatly reduces the storage burden and improves system efficiency.
2017, 39(11): 2733-2740.
doi: 10.11999/JEIT161054
Abstract:
Load balancing among multiple controllers is currently a focus in research on Software Defined Networking (SDN) deployment. Considering the time efficiency of load balancing, this paper proposes a Load Balancing mechanism based on a Load Informing strategy (LILB). The mechanism involves four components: load measurement, load informing, balance decision, and switch migration. Owing to the load informing component, an overloaded controller can make load balancing decisions without first collecting the other controllers' load information. To reduce the communication and processing overhead caused by load informing, this paper also proposes an inhibition algorithm that lowers the frequency of load information announcements. Moreover, decision methods are designed for judging overloaded controllers, selecting the switches to migrate and the target controllers, and deciding whether a target controller accepts a migration request, so as to avoid load oscillation among controllers. Meanwhile, an information interaction procedure is designed to achieve smooth switching of controller roles during switch migration. Finally, experiments based on Floodlight and Mininet verify the feasibility and efficiency of the proposed mechanism.
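As a concrete illustration of the balance-decision step, the hypothetical Python sketch below shows how an overloaded controller might pick a switch to migrate and a target controller using only previously received load reports; the threshold, data structures, and selection rules are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a balance decision in the spirit of LILB: an
# overloaded controller decides using only the load reports it already
# holds from the load-informing component (no on-demand collection).
# Names and the threshold are illustrative assumptions.

OVERLOAD_THRESHOLD = 0.8   # fraction of controller capacity

def choose_migration(self_load, self_capacity, switch_loads, peer_loads):
    """
    switch_loads: {switch_id: load contributed by that switch}
    peer_loads:   {controller_id: (load, capacity)} from load-informing reports
    Returns (switch_id, target_controller) or None if migration would not help.
    """
    if self_load / self_capacity <= OVERLOAD_THRESHOLD:
        return None                       # not overloaded: no decision needed

    excess = self_load - OVERLOAD_THRESHOLD * self_capacity
    # Smallest switch whose migration removes the excess, to limit disruption.
    candidates = sorted(switch_loads.items(), key=lambda kv: kv[1])
    switch = next((sw for sw, ld in candidates if ld >= excess), candidates[-1][0])

    # Target: least-utilized peer that stays below threshold after accepting.
    # The same check is how a target judges a migration request, which
    # guards against load oscillation between controllers.
    sw_load = switch_loads[switch]
    target, (t_load, t_cap) = min(peer_loads.items(),
                                  key=lambda kv: kv[1][0] / kv[1][1])
    if (t_load + sw_load) / t_cap > OVERLOAD_THRESHOLD:
        return None                       # accepting would just move the overload
    return switch, target

# Example: controller at 90% of capacity 100, peers known via load informing.
print(choose_migration(90, 100,
                       {"s1": 5, "s2": 12, "s3": 30},
                       {"c2": (40, 100), "c3": (70, 100)}))   # -> ('s2', 'c2')
```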
2017, 39(11): 2741-2747.
doi: 10.11999/JEIT170212
Abstract:
In-network caching is a central topic in research on Information-Centric Networking (ICN). Most traditional caching coordination work applies a uniform caching policy across the whole cache network, but different parts of the network serve quite different purposes, so existing schemes struggle either to achieve comprehensive performance optimization or to scale. In addition, most work fails to address the on-path coupling between content caching and request routing. This paper proposes and evaluates a novel domain-oriented hybrid scheme for coordinated content caching and request search in ICN, which fully exploits content-space partitioning and divides the cache network into a core area and several edge areas. An off-path hash-based coordination scheme is applied in the core area, while an on-path reversion scheme is deployed in the edge areas. A binary tuple is created to record content placement information and guide request routing. Simulation and comparison show that the proposed policies yield a better trade-off between network-centric and user-centric performance than traditional schemes and are scalable enough for use in large ISPs.
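The off-path hash-based coordination in the core area can be sketched as follows; the router names, hash choice, and two-field placement record are assumptions for illustration, not the paper's exact design.

```python
# Illustrative sketch (assumed details, not the paper's exact algorithm):
# the core area partitions the content space by hashing the content name,
# so every core router agrees off-path on who caches what, while edge
# areas fall back to on-path caching. The two-field record returned below
# mirrors the idea of a tuple that guides request routing.
import hashlib

CORE_ROUTERS = ["core-1", "core-2", "core-3", "core-4"]

def core_cache_owner(content_name: str) -> str:
    """Deterministic off-path assignment: every router computes the same owner."""
    digest = hashlib.sha256(content_name.encode()).digest()
    return CORE_ROUTERS[int.from_bytes(digest[:4], "big") % len(CORE_ROUTERS)]

def route_request(content_name, edge_on_path_caches):
    """Check on-path edge caches first; otherwise forward straight to the
    responsible core router instead of flooding or always climbing to the
    content server."""
    for router, cache in edge_on_path_caches.items():
        if content_name in cache:
            return ("edge-hit", router)
    return ("core-forward", core_cache_owner(content_name))

edge_caches = {"edge-a": {"/video/1"}, "edge-b": set()}
print(route_request("/video/1", edge_caches))   # served from an edge cache
print(route_request("/video/2", edge_caches))   # forwarded to its core owner
```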
2017, 39(11): 2748-2754.
doi: 10.11999/JEIT170229
Abstract:
Considering the limits on energy, bandwidth, observation distance, and communication distance in Wireless Sensor Networks (WSN), a distributed sensor allocation algorithm based on a potential game is proposed to solve the multi-target tracking problem. The predicted target coordinates and the Geometric Dilution Of Precision (GDOP) are used to establish a sensor allocation game model that relies only on local information, and the game is proved to be an exact potential game with at least one Nash equilibrium. On this basis, a parallel best response dynamic is proposed as the learning algorithm for finding a Nash equilibrium. It is proved that this learning algorithm drives the game model to a Nash equilibrium and converges faster than the traditional best response dynamic when sensors communicate only with their one-hop neighbors. In addition, a fully distributed decision-maker selection mechanism based on Carrier Sense Multiple Access (CSMA) is proposed, which better suits the self-organizing nature of such networks. Simulation results show that the proposed algorithm has clear advantages in convergence speed, tracking accuracy, and energy efficiency.
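The parallel best response dynamic can be illustrated with the small Python sketch below. It uses an identical-interest toy potential (not the paper's GDOP-based utilities) and, in each round, updates only sensors whose candidate target sets are disjoint, so their simultaneous deviations change the potential additively and play converges to a Nash equilibrium.

```python
# Toy parallel best-response dynamic on an exact potential game.
# Utilities, quality values, and the cost are illustrative assumptions.
import math, random

QUALITY = {                       # QUALITY[s][t]: assumed observation quality
    "s1": {"t1": 0.9, "t2": 0.2},
    "s2": {"t1": 0.8, "t2": 0.7},
    "s3": {"t2": 0.6},
}
COST = 0.3                        # energy cost per active sensor
TARGETS = {"t1", "t2"}

def potential(assign):
    """Tracking quality per target with diminishing returns, minus energy cost."""
    total = 0.0
    for t in TARGETS:
        q = sum(QUALITY[s][t] for s, tt in assign.items() if tt == t)
        total += math.log1p(q)
    return total - COST * sum(tt is not None for tt in assign.values())

def best_response(assign, s):
    options = [None] + list(QUALITY[s])
    return max(options, key=lambda t: potential({**assign, s: t}))

def independent_updaters(order):
    """Greedy independent set: no two chosen sensors share a candidate target,
    so their simultaneous deviations change the potential additively."""
    chosen, used = [], set()
    for s in order:
        if not used & set(QUALITY[s]):
            chosen.append(s)
            used |= set(QUALITY[s])
    return chosen

assign = {s: None for s in QUALITY}
for _ in range(50):
    order = random.sample(list(QUALITY), len(QUALITY))
    updates = {s: best_response(assign, s) for s in independent_updaters(order)}
    assign.update(updates)        # simultaneous update is safe: disjoint targets
    if all(best_response(assign, s) == assign[s] for s in QUALITY):
        break                     # Nash equilibrium: nobody wants to deviate
print(assign, round(potential(assign), 3))
```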
2017, 39(11): 2755-2762.
doi: 10.11999/JEIT170547
Abstract:
Based on Synopsys TCAD 3-D device simulation, the effects of PMOS transistor process parameters on the upset and recovery behavior of a Static Random Access Memory (SRAM) cell are studied in a 65 nm bulk CMOS technology by varying three process parameters. The simulation results show that reducing the deep-P+-well doping, the N-well doping, and the threshold-adjust doping of the PMOS transistor lowers the Linear Energy Transfer (LET) values at which upset and recovery occur, while reducing the deep-P+-well and N-well doping lengthens the upset and recovery times. These conclusions are helpful for optimizing SRAM cell designs that mitigate Single-Event Effects (SEE) and provide useful guidance for radiation-hardened integrated circuits in bulk CMOS processes.
2017, 39(11): 2763-2769.
doi: 10.11999/JEIT170210
Abstract:
A decoupling capacitor selection method based on the maximum time-domain transient noise is proposed to solve the over-design problem caused by the traditional method based on a frequency-domain target impedance. Exploiting the fact that board-level current can be approximated by a series of triangular pulses, the time at which a decoupling capacitor reaches its local maximum transient voltage noise and the condition that the time-domain transient impedance must satisfy are derived. Meanwhile, the time range over which decoupling is required is determined by analyzing the maximum transient voltage noise of the VRM branch. Selection criteria for the decoupling capacitors are then developed from the properties and characteristics of their time-domain transient impedance curves, and the complete decoupling design scheme based on maximum time-domain transient noise is presented. Compared with the traditional frequency-domain scheme on four examples with typical stimulus settings, the proposed algorithm reduces the number of capacitors by more than 24.59% under the same input excitation while still meeting the voltage noise requirement.
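Under the simplifying assumption that a single series-RLC decoupling branch supplies the whole triangular current ramp, the kind of time-domain condition the abstract refers to can be sketched as follows; the component values and function names are illustrative, not taken from the paper.

```python
# Hedged illustration (simplified, not the paper's full method): for a single
# series-RLC decoupling branch supplying a triangular current ramp reaching
# I_max after rise time t_r, the branch voltage during the ramp is
#     v(t) = L*di/dt + R*i(t) + (1/C)*integral(i)
#          = L*I_max/t_r + R*I_max*t/t_r + I_max*t**2 / (2*C*t_r),
# which peaks at t = t_r. The time-domain design condition is then
#     Z_t(t_r) = L/t_r + R + t_r/(2*C)  <=  V_ripple / I_max,
# checked per candidate capacitor instead of enforcing a frequency-domain
# target impedance over the whole band.

def transient_impedance(R, L, C, t_r):
    """Peak time-domain transient impedance of one series-RLC branch."""
    return L / t_r + R + t_r / (2.0 * C)

def meets_budget(R, L, C, t_r, I_max, V_ripple):
    """True if the peak transient noise stays within the ripple budget."""
    return transient_impedance(R, L, C, t_r) * I_max <= V_ripple

# Hypothetical 100 uF decap: ESR 5 mOhm, ESL 1 nH; 2 A ramp in 10 ns,
# 50 mV noise budget (ESL dominates at this rise time).
print(transient_impedance(5e-3, 1e-9, 100e-6, 10e-9))          # ~0.105 ohm
print(meets_budget(5e-3, 1e-9, 100e-6, 10e-9, 2.0, 50e-3))     # False
```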
2017, 39(11): 2770-2776.
doi: 10.11999/JEIT170117
Abstract:
The thermal analysis of liquid-cooled collectors has so far neglected the impact of the flow regime on heat distribution and cooling capacity. A liquid-solid coupled heat transfer simulation method for the collectors of high-power klystrons is therefore presented. Using the CFX solver on the ANSYS Workbench platform, a simplified model of a collector with a two-layer water jacket and ditch grooves is established, and the velocity distribution, temperature rise, and pressure distribution of the flow, as well as the temperature distribution on the collector, are simulated. The results agree with theoretical calculations within reasonable error. These simulation results are valuable for the high-power microwave testing of klystrons and for the optimal design of collectors.
2017, 39(11): 2777-2781.
doi: 10.11999/JEIT161267
Abstract:
In this paper, the design of a Ka-band broadband, high-linearity, high-efficiency slow-wave system for a space Traveling Wave Tube (TWT) is proposed. A dynamic phase-shifting technique is used to achieve low phase distortion together with high efficiency. Based on this design, a Ka-band TWT is developed that realizes a wide band (25~27 GHz), low nonlinear distortion (nonlinear phase shift ≤ 40°, AM/PM conversion ≤ 2.86°/dB, third-order intermodulation of -10.39 dBc), and high efficiency (total efficiency ≥ 51.7%). In a digital transmission system test, the bit error rate Pe is better than 10^-6 with a C/N0 margin of 2.4 dB, meeting the multi-channel parallel transmission requirements of satellite data transmission systems.
2017, 39(11): 2782-2789.
doi: 10.11999/JEIT170107
Abstract:
The interference mechanism of communication equipment under in-band electromagnetic interference is studied in this paper, and two electromagnetic interference prediction models are established: one assumes that in-band interference is sensitive to the field-strength amplitude, the other that it is sensitive to the average power. Sine and AM tests can distinguish which parameter an Equipment Under Test (EUT) is sensitive to, and its interference is then predicted with the corresponding model. Sine and AM continuous-wave tests, in-band dual-frequency tests, and in-band triple-frequency tests are conducted on two typical VHF radios. The experimental results show that EUT1 is sensitive to the field-strength amplitude, with model results slightly greater than 1, whereas EUT2 is sensitive to the average power, with model results all approximately 1. The test results are used to revise and improve the in-band multi-frequency prediction method, which can then effectively forecast communication equipment interference in in-band multi-frequency electromagnetic environments.
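The contrast between the two prediction models can be sketched numerically: under the amplitude model the per-tone field strengths add directly, while under the power model their squares add, so the same multi-tone exposure can interfere with one EUT and not the other. The thresholds and field values below are illustrative assumptions, not measured data from the paper.

```python
# Hedged sketch of the two in-band multi-frequency prediction rules the
# abstract contrasts; the single-tone threshold is normalized to 1.
import math

def amplitude_model(field_strengths, single_tone_threshold):
    """Interference if tone amplitudes, added directly, reach the
    single-tone sensitivity threshold."""
    return sum(field_strengths) >= single_tone_threshold

def power_model(field_strengths, single_tone_threshold):
    """Interference if tone powers (proportional to E^2) add up to the
    single-tone threshold power."""
    return math.sqrt(sum(e * e for e in field_strengths)) >= single_tone_threshold

# Dual-frequency test with each tone at 0.6x the single-tone threshold:
tones = [0.6, 0.6]
print(amplitude_model(tones, 1.0))  # True : 1.2 >= 1, amplitude-sensitive EUT fails
print(power_model(tones, 1.0))      # False: sqrt(0.72) ~ 0.85 < 1, power-sensitive EUT is fine
```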
2017, 39(11): 2790-2794.
doi: 10.11999/JEIT170111
Abstract:
A wideband, high-gain dual-polarized antenna based on split-ring resonators is presented. The antenna consists of two crossed printed dipole antennas fixed vertically on an aluminum plate serving as the ground plane and excited by two similar microstrip baluns. To further improve the gain over a broad bandwidth, split-ring resonators and complementary split-ring resonators are loaded onto the printed dipoles. Measured results show that the proposed antenna achieves a -10 dB return-loss bandwidth of 0.98~2.01 GHz (69%), with port isolation higher than 20 dB across that band. The split-ring resonators raise the maximum gain by up to 4.1 dB, and the antenna height is reduced by about 12% compared with the LPPDs-DPA.