2017 Vol. 39, No. 7
2017, 39(7): 1525-1531.
doi: 10.11999/JEIT160988
Abstract:
An adaptive beamformer based on quaternion-valued widely linear processing is proposed, in which the output of each array element is expressed as a quaternion. By taking into account the array output vector and its three involutions simultaneously, a quaternion-valued augmented signal model is established to exploit the noncircular information of the desired signal. The involution-augmented adaptive beamforming is then realized through quaternion-valued widely linear processing. Compared with conventional quaternion beamformers, the proposed beamformer has improved reception capability for noncircular signals. The array aperture is extended, and hence the degrees of freedom for interference suppression are increased, via widely linear processing. Simulation results illustrate the performance of the proposed beamformer.
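As a minimal illustration of the augmented signal model described above, the three quaternion involutions q^(i) = -iqi, q^(j) = -jqj, q^(k) = -kqk that the beamformer stacks alongside the array output can be sketched as follows (an illustrative sketch, not the authors' implementation; the class and function names are hypothetical):

```python
# Sketch of the three quaternion involutions used to augment a
# quaternion-valued array output for widely linear processing.
class Quaternion:
    def __init__(self, a, b, c, d):
        # represents a + b*i + c*j + d*k
        self.a, self.b, self.c, self.d = a, b, c, d

    def involution_i(self):
        # q^(i) = -i q i : keeps the i-component, negates j and k
        return Quaternion(self.a, self.b, -self.c, -self.d)

    def involution_j(self):
        # q^(j) = -j q j : keeps the j-component, negates i and k
        return Quaternion(self.a, -self.b, self.c, -self.d)

    def involution_k(self):
        # q^(k) = -k q k : keeps the k-component, negates i and j
        return Quaternion(self.a, -self.b, -self.c, self.d)

    def components(self):
        return (self.a, self.b, self.c, self.d)

def augment(q):
    """Stack q with its three involutions, mirroring the augmented model."""
    return [q, q.involution_i(), q.involution_j(), q.involution_k()]
```

Applying `augment` element-wise to the array output vector yields the four-fold augmented vector whose second-order statistics carry the noncircularity information.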
2017, 39(7): 1532-1538.
doi: 10.11999/JEIT160841
Abstract:
To improve the performance of blind equalizers in impulsive noise environments, a novel concurrent blind equalization algorithm based on probability density function matching and fractional lower order moments is presented. The algorithm first applies probability density function matching, making full use of its fast convergence. To address the loss of phase information and the inability to suppress impulsive noise, the fractional lower order moments of the decision signal are combined in parallel into the cost function used to update the weight coefficients of the blind equalizer, further improving both convergence speed and convergence precision. Simulation results show that the algorithm effectively resolves phase rotation and better suppresses impulsive noise. Moreover, it converges quickly, has a small steady-state error, and is strongly robust.
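The fractional-lower-order-moment part of the cost function described above can be sketched as a stochastic-gradient tap update on J(w) = E|e|^p with p < 2, which down-weights impulsive samples compared with the p = 2 (LMS) case (a hedged sketch under simplified real-valued assumptions; the order `p` and step size `mu` are illustrative, not the paper's values):

```python
import numpy as np

def flom_update(w, x, d, p=1.2, mu=0.01):
    """One stochastic-gradient step on the FLOM cost |e|^p.

    w: equalizer tap vector, x: input regressor, d: decision symbol.
    """
    y = np.dot(w, x)        # equalizer output
    e = d - y               # decision error
    # gradient of |e|^p w.r.t. w is -p * |e|^(p-1) * sign(e) * x,
    # so descending the cost adds mu * p * |e|^(p-1) * sign(e) * x
    g = p * np.abs(e) ** (p - 1) * np.sign(e) * x
    return w + mu * g
```

Because |e|^(p-1) grows sublinearly for p < 2, a single impulsive error sample perturbs the taps far less than in a squared-error update.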
2017, 39(7): 1539-1545.
doi: 10.11999/JEIT161137
Abstract:
Based on a dual uniform circular array, a self-calibration algorithm is proposed for the presence of mutual coupling, which can simultaneously calibrate the mutual coupling within each circular array and between the two circular arrays. The algorithm exploits the special structure of the mutual coupling matrix of a dual uniform circular array to decouple the angle information from the mutual coupling coefficients. With a small amount of computation, the signal angles and the mutual coupling coefficients are estimated in turn, completing the cascade estimation. The algorithm reduces the search dimension without any prior information about the mutual coupling coefficient matrix and requires no extra auxiliary source, so it is easy to implement. Theoretical analysis and simulation results illustrate that the new algorithm has high precision and resolution; hence, it can effectively solve the mutual coupling problem of the dual circular array.
2017, 39(7): 1546-1553.
doi: 10.11999/JEIT161171
Abstract:
The key issue in phase imaging is phase retrieval. Owing to the loss of phase information, the phase retrieval problem is usually ill-posed, and how to regularize it with appropriate prior information is an important question. In this work, based on single-shot phase imaging with a coded aperture, a single-shot phase imaging algorithm using structural sparsity is proposed. The algorithm exploits the overlapping structural sparsity of the total variation and expresses this sparsity in convolutional form, making the problem easier to solve; the steepest descent method is then used to solve the resulting optimization problem. Experimental results show that the complex amplitude can be reconstructed from a noisy diffraction pattern using the proposed algorithm.
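A single steepest-descent step on the data-fit term of such a problem can be sketched as follows. This is a deliberately simplified sketch: it assumes a plain Fourier diffraction model and omits the coded aperture and the total-variation prior that the paper's algorithm actually uses.

```python
import numpy as np

def phase_retrieval_step(x, m, step=0.5):
    """One descent step on f(x) = || |F x| - m ||^2.

    x: current complex-amplitude estimate, m: measured diffraction amplitude.
    """
    X = np.fft.fft2(x)
    mag = np.abs(X) + 1e-12          # avoid division by zero
    # Wirtinger gradient of || |X| - m ||^2, pulled back through the FFT
    # (up to a constant normalization factor absorbed into the step size)
    grad = np.fft.ifft2((mag - m) * X / mag)
    return x - step * grad
```

A solution that already matches the measured amplitudes is a fixed point of this step; the full algorithm would interleave such steps with the structural-sparsity regularization.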
2017, 39(7): 1554-1562.
doi: 10.11999/JEIT160908
Abstract:
A novel Bayesian possibilistic clustering method with optimality guarantees, based on probability theory and possibility theory, is proposed. First, the unknown membership degrees and cluster centers are represented as random variables. Given the specific constraints and uncertainty associated with each random variable, an appropriate probability distribution is selected for each, and the Bayesian possibilistic clustering model is formulated. On this basis, a Bayesian possibilistic clustering method with optimality guarantees is derived from Bayesian theory and Monte Carlo sampling within a Maximum-A-Posteriori (MAP) framework. The convergence and computational complexity of the algorithm are then discussed. Experimental results on synthetic and real data sets show that the proposed method extends traditional possibilistic clustering and improves the clustering results.
2017, 39(7): 1563-1570.
doi: 10.11999/JEIT161133
Abstract:
Most zero-shot image classification algorithms with relative attributes do not consider the relationship between attributes and classes; therefore, a new relative attributes method based on shared features is proposed for zero-shot image classification. In analogy to multi-task learning, the object classifiers and attribute classifiers are learned simultaneously, yielding a shared lower-dimensional feature subspace that mines the relationship between attributes and classes. Building on these shared features, a relative attributes model is constructed in which the ranking function for each attribute is learned using the shared features, strengthening the link between attributes and classes. The model is then applied to zero-shot image classification, where the shared features lead to high accuracy. Experimental results demonstrate that the proposed method achieves high relative attribute learning efficiency and zero-shot image classification accuracy.
2017, 39(7): 1571-1577.
doi: 10.11999/JEIT160966
Abstract:
The sparsity constraint in the representation model of L1 trackers gives them good robustness to partial occlusion; however, the tracking speed of the L1 tracker is slow. To address this problem, this paper proposes a coding transfer method for visual tracking. By using a low-resolution dictionary to compute the coefficients of the candidate targets and a high-resolution dictionary to construct the observation likelihood model, the method effectively reduces the computation required during tracking. To improve the precision of coding transfer and the dictionary's ability to overcome background clutter, an online robust discriminative joint dictionary learning model is proposed to update the dictionaries. Experimental results demonstrate that the proposed method has good robustness and superior tracking speed.
2017, 39(7): 1578-1584.
doi: 10.11999/JEIT161044
Abstract:
To accurately locate and track space targets and establish their trajectories, moving target detection based on motion information in star maps is studied. First, a new model characterizing the space moving target is constructed; then an algorithm for moving point target detection is proposed based on the statistics of the correlation coefficient matrix. Building on this detection method, the target motion trajectory is extracted and a velocity estimation model of the moving target is built. This paper also proposes an evaluation method combining detection probability and false alarm probability to verify the approach. Experimental results demonstrate that the proposed method outperforms the compared methods and achieves high detection probability while keeping the false alarm probability low. Compared with simply enlarging the telescope diameter, this method offers a more cost-effective way to improve space target detection capability.
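The intuition behind a correlation-coefficient statistic for star maps can be sketched as follows: the static star background in the same patch of two frames correlates strongly, while a patch crossed by a moving point target decorrelates. This is an illustrative sketch of the basic statistic, not the paper's exact detection algorithm; the patch size and indexing convention are assumptions.

```python
import numpy as np

def patch_correlation(frame1, frame2, top, left, size=8):
    """Pearson correlation coefficient of the same patch in two frames."""
    p1 = frame1[top:top + size, left:left + size].ravel().astype(float)
    p2 = frame2[top:top + size, left:left + size].ravel().astype(float)
    p1 -= p1.mean()
    p2 -= p2.mean()
    denom = np.sqrt((p1 * p1).sum() * (p2 * p2).sum()) + 1e-12
    return (p1 * p2).sum() / denom
```

Collecting such coefficients over a grid of patches yields a correlation coefficient matrix whose low-valued entries flag candidate moving targets.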
2017, 39(7): 1585-1591.
doi: 10.11999/JEIT161121
Abstract:
With the development of information technology and the increasing demand for information security, more reliable identification techniques for identity authentication are urgently needed, making biometric recognition a compelling topic. Among such methods, fingerprint identification attracts much interest owing to its excellent feasibility and reliability. The traditional fingerprint recognition method is based on matching feature points; however, finding the feature points takes a long time, and under blur, scaling, damage, and other degradations the recognition rate drops seriously. To solve these problems, a fouled and damaged fingerprint recognition algorithm named CBF-FFPF (Central Block Fingerprint and Fuzzy Feature Points Fingerprint) is proposed, based on the Convolutional Neural Network (CNN) of deep learning. It combines a small fingerprint sub-block, centered on the fingerprint core point of the thinned image, with a fuzzy graph of the fingerprint feature points as the original image input. The recognition rate of CBF-FFPF is compared with fingerprint identification algorithms based on Kernel Principal Component Analysis (KPCA), the Extreme Learning Machine (ELM), and K-Nearest Neighbors (KNN). Experimental results show that CBF-FFPF achieves a higher recognition rate and better robustness.
2017, 39(7): 1592-1598.
doi: 10.11999/JEIT160984
Abstract:
To improve applicability to different types of images and the integrity of the results, a saliency detection algorithm is proposed that combines adaptive threshold merging with a new background selection strategy. In the segmentation process, a color difference sequence is obtained by selectively fusing the RGB and LAB values of adjacent blocks, and an adaptive threshold is generated from an inverse proportion model of the block area parameter; merging is performed by comparing the color difference sequence against this adaptive threshold. In the background selection process, background regions are obtained from the local background-subject-background relative positions. The results are then optimized at the edges. Compared with other algorithms, the binary saliency map obtained here needs no external thresholding algorithm. Adaptive threshold merging can suppress object details in complex environments and focus the saliency comparison on objects of similar scale.
2017, 39(7): 1599-1605.
doi: 10.11999/JEIT161090
Abstract:
Indirect ImmunoFluorescence (IIF) HEp-2 cell image analysis is an important basis for diagnosing autoimmune diseases. However, owing to large intra-class variation and inter-class similarity, HEp-2 cell staining pattern classification is a difficult problem. This paper presents an effective classification method based on texture and shape information. Drawing on the principle of CLBP, a texture descriptor is proposed that captures the Complete information of the Local Ternary Pattern (CLTP). Moreover, shape information is described using the Improved Fisher Vector (IFV) model with RootSIFT features. Combining the texture and shape information, an SVM classifier is trained, and experiments are conducted on the ICPR 2012 and ICIP 2013 data sets. The results show that this method is superior to other methods in the cell-level test and offers competitive performance.
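The basic local ternary coding that such a descriptor builds on can be sketched as follows: each ring neighbor of a pixel is coded +1, 0, or -1 by comparing it with the center under a tolerance t. This sketch shows only the elementary ternary coding; the "complete" sign/magnitude components and histogramming of the paper's CLTP descriptor are not reproduced, and the tolerance value is illustrative.

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Ternary codes of the 8 ring neighbors of a 3x3 patch's center."""
    c = patch[1, 1]
    # clockwise ring starting at the top-left neighbor
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    codes = []
    for v in ring:
        if v > c + t:
            codes.append(1)      # clearly brighter than center
        elif v < c - t:
            codes.append(-1)     # clearly darker than center
        else:
            codes.append(0)      # within the tolerance band
    return codes
```

The tolerance band is what makes ternary patterns less sensitive to fluorescence noise than binary patterns, which flip on any sign change.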
2017, 39(7): 1606-1611.
doi: 10.11999/JEIT160933
Abstract:
In the squint Terrain Observation by Progressive scans (TOPSAR) mode, different azimuth scatterers have different Doppler center frequencies, which causes azimuth under-sampling and increases the coupling between range and azimuth. Considering the characteristics of the squint TOPSAR echo, this article proposes a new full-aperture imaging algorithm: first, azimuth signal aliasing is removed by introducing a nonlinear walk correction; second, a nonlinear chirp scaling algorithm is applied for azimuth focusing to compensate the Doppler modulation rate; finally, the geometric distortion of the image is eliminated by a 2-D chirp scaling operation. Compared with the traditional algorithm, the proposed algorithm avoids interpolation at the cost of only a small amount of extra data, so less computation is needed. Simulation results prove the effectiveness of the algorithm.
2017, 39(7): 1612-1618.
doi: 10.11999/JEIT160915
Abstract:
To detect concealed body-worn weapons at standoff range, the depolarization effect of radar targets is utilized. By measuring the radar echoes of the object at different polarization directions, detection parameters can be obtained and it can be decided whether the person is carrying a concealed weapon. To verify the effectiveness of the method, a 140 GHz broadband polarimetric radar is designed and used to carry out experimental measurements. The results show that, on the one hand, for firearms, Improvised Explosive Devices (IEDs), and other targets with significant depolarization, the system detects well; on the other hand, for targets with weak depolarization, and as the detection distance increases, both the false alarm probability and the missed detection probability rise and the system performance deteriorates. The performance can be improved by enlarging the transmitting antenna, and the detection performance at a given distance can also be improved by optimizing the detection parameters.
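A depolarization-based decision of this kind can be sketched as comparing the cross- to co-polarized echo power ratio against a threshold (a hedged sketch: the threshold value is illustrative, and the paper derives its actual detection parameters from the measured 140 GHz echoes):

```python
import numpy as np

def depolarization_detect(co_pol_echo, cross_pol_echo, threshold=0.1):
    """True if the target's depolarization ratio exceeds the threshold.

    co_pol_echo / cross_pol_echo: complex echo samples in the two channels.
    """
    co = np.mean(np.abs(co_pol_echo) ** 2)       # co-polarized power
    cross = np.mean(np.abs(cross_pol_echo) ** 2) # cross-polarized power
    ratio = cross / (co + 1e-12)
    return ratio > threshold
```

A metallic weapon with sharp edges scatters substantial power into the cross-polarized channel, while the human body alone leaves the ratio small.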
2017, 39(7): 1619-1625.
doi: 10.11999/JEIT161094
Abstract:
When an airborne weather radar detects a microburst field, the wind field echoes are usually submerged in strong background clutter. This paper proposes a novel microburst wind speed estimation method based on a pre-filtering reduced-rank STAP approach. The method constructs reduced-rank adaptive processors for the distributed meteorological target to achieve clutter suppression and signal matching, and thereby obtains the wind speed of the microburst field. Experimental results show that the proposed method suppresses clutter and yields accurate wind speed estimates with a low computational burden.
2017, 39(7): 1626-1633.
doi: 10.11999/JEIT161060
Abstract:
Based on the physical models and memory mechanisms of the memristor, memcapacitor, and meminductor, this study proposes a unified mem-element model. A universal mem-element emulator is then implemented with a linear Voltage-Controlled Floating Impedance (VCFI) circuit and a current integrator. By choosing the component type (resistor, capacitor, or inductor), the emulator can reproduce the electrical behavior of the memristor, memcapacitor, and meminductor, respectively. Finally, the proposed emulator is applied to an RLC series resonant circuit, and the influence of the mem-elements on the circuit is studied in both the time and frequency domains. PSPICE simulations verify the correctness and effectiveness of the universal emulator.
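The memory mechanism shared by all mem-elements is that the element value depends on the time integral of its input. As a minimal behavioral sketch (not the VCFI emulator circuit), consider a charge-controlled memristor whose memristance moves linearly with accumulated charge and is clipped to its physical range; the rate constant and resistance bounds here are illustrative placeholders.

```python
def memristor_step(memristance, current, dt, k=1e4, r_on=100.0, r_off=16000.0):
    """Advance the memristance by one time step: dM/dt = -k * i(t).

    Positive current drives M toward r_on; the state is the running
    integral of current (charge), which is what gives the element memory.
    """
    m = memristance - k * current * dt
    return min(max(m, r_on), r_off)  # clip to the physical range
```

Replacing the integrated quantity (charge with flux, or flux with time-integrated flux) gives the corresponding memcapacitor and meminductor behaviors, which is the unification the emulator exploits.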
2017, 39(7): 1634-1639.
doi: 10.11999/JEIT161096
Abstract:
Feedback is an efficient topology for noise reduction in analog circuits, while in digital circuits cyclic structures are used only for sequential circuit design owing to their data-keeping property. Few works, however, study the reliability of feedback structures for combinational circuits, especially for low-power applications. Many researchers have turned to circuits based on Markov Random Field (MRF) theory, which can operate at ultra-low supply voltages with high noise immunity, but the MRF-based design methodology lacks a proof for its final feedback structures, so their reliability has not been clearly explained. This paper uses the probabilistic CMOS model to analyze the NAND-NAND based feedback structure. The probability boundedness and monotonic increase properties of the feedback structure are proved, and it is shown that the MRF feedback structure achieves a higher correctness probability than the traditional design. Measurement results support the proof and analysis.
2017, 39(7): 1640-1645.
doi: 10.11999/JEIT161113
Abstract:
At nano-scale technology nodes, Integrated Circuit (IC) reliability issues caused by both aging mechanisms and soft errors become critical. In this paper, the effects of Bias Temperature Instability (BTI), including Negative BTI (NBTI) and Positive BTI (PBTI), on the Soft Error Rate (SER) are analyzed from the points of view of critical charge and delay. First, the way BTI affects critical charge and delay is examined: a delay-increase model is derived, and the process by which the critical charge changes is introduced. Then, using the derived SER computational model that accounts for critical charge, and mapping the changed delay into the electrical masking procedure, the SER is accurately calculated. Experimental results on the ISCAS89 benchmark circuits show that SER estimation considering these two BTI factors has high accuracy.
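The critical-charge dependence of SER described above is commonly captured by the standard empirical exponential model, sketched below; the parameter values are illustrative assumptions, not numbers from the paper:

```python
import math

def ser(flux, area, q_crit, q_s):
    """Standard empirical soft-error-rate model: SER = F * A * exp(-Qcrit/Qs),
    where F is the particle flux, A the sensitive area, Qcrit the critical
    charge, and Qs the charge-collection efficiency. Values are illustrative."""
    return flux * area * math.exp(-q_crit / q_s)

# BTI aging lowers the critical charge, so the SER rises over time
ser_fresh = ser(flux=1.0, area=1.0, q_crit=10.0, q_s=2.0)
ser_aged  = ser(flux=1.0, area=1.0, q_crit=8.0,  q_s=2.0)
```

Under this model, the assumed 20% drop in critical charge multiplies the SER by exp(2/2) ≈ 2.7, which is why the aging-induced critical-charge shift matters for SER estimation.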
2017, 39(7): 1646-1650.
doi: 10.11999/JEIT161088
Abstract:
This paper presents an effective transfer-function based method to predict the jitter of a self-biased PLL due to power supply noise. The supply noise sensitivity of the PLL's replica-biased regulator is derived from small-signal analysis. The analysis shows that there is a tradeoff between the closed-loop bandwidth and the supply noise sensitivity of the regulator. As an example, the supply noise performance of a specific self-biased PLL used as a clock generator in a Phase-Interpolator (PI) based Clock and Data Recovery (CDR) circuit is analyzed, and the proposed method is compared with transient simulation. The results show that the proposed method predicts the period jitter with considerable accuracy and also gives guidelines on how to further improve the noise performance of the Self-Biased PLL (SBPLL).
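The final step of such a transfer-function method, integrating the supply-noise-induced phase PSD shaped by the noise transfer function, can be sketched generically as below; the flat PSD and one-pole |H(f)| are placeholder assumptions, not the paper's replica-bias model:

```python
import numpy as np

def rms_phase_noise(freqs, psd, h_mag):
    """Integrate the supply-noise phase PSD (rad^2/Hz) shaped by the noise
    transfer function magnitude |H(f)| to get the RMS output phase
    deviation, via trapezoidal integration over the frequency grid."""
    shaped = psd * h_mag ** 2
    var = np.sum(0.5 * (shaped[1:] + shaped[:-1]) * np.diff(freqs))
    return np.sqrt(var)

# usage: flat PSD filtered by a one-pole low-pass with a 1 MHz corner
f = np.linspace(1e3, 1e8, 200_001)
h = 1.0 / np.sqrt(1.0 + (f / 1e6) ** 2)
sigma = rms_phase_noise(f, np.full_like(f, 1e-12), h)
```

The RMS phase deviation can then be converted to period jitter through the oscillator frequency, which is the quantity the paper's method predicts.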
2017, 39(7): 1651-1657.
doi: 10.11999/JEIT160993
Abstract:
To handle the growing volume of burst amplitude- and phase-modulated signals of uncertain modulation type, including PSK, APSK, and QAM, a universal algorithm is proposed for feedforward carrier phase recovery. Based on the symmetry of the constellation, a statistic of the wrapped phase is built from the phase information of the constellation points. As a result, blind carrier phase estimation for unknown burst signals is accomplished regardless of modulation type and without a training sequence. Simulations indicate that, compared with current algorithms, the proposed method offers better noise immunity, flexibility, and robustness, making it a good choice for engineering practice on burst signals under non-cooperative conditions.
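A minimal example of a symmetry-based wrapped-phase statistic is the classic M-th power estimator sketched below; the paper's statistic for mixed PSK/APSK/QAM is more general, so treat this as an illustration of the principle only:

```python
import numpy as np

def mth_power_phase_estimate(samples, m=4):
    """Estimate the carrier phase offset for a constellation with m-fold
    rotational symmetry: raising each unit-magnitude sample to the m-th
    power cancels the data phase, leaving m times the offset (the estimate
    is ambiguous modulo 2*pi/m)."""
    unit = samples / np.abs(samples)          # discard amplitude, keep phase
    return np.angle(np.sum(unit ** m)) / m

# usage: QPSK points at angles k*pi/2, rotated by a 0.2 rad carrier offset
rng = np.random.default_rng(0)
data = rng.integers(0, 4, 1000)
rx = np.exp(1j * (np.pi / 2 * data + 0.2))
offset = mth_power_phase_estimate(rx, m=4)
```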
2017, 39(7): 1658-1665.
doi: 10.11999/JEIT160864
Abstract:
To improve the BER performance of MLC NAND Flash, this paper presents optimized shortened polar codes for MLC NAND Flash, obtained by optimizing the shortening pattern. First, a basic shortening pattern is obtained by bit-reversal reordering; then the frozen bits are chosen from the sub-channels with lower capacity to form the optimized shortening pattern, so that the punctured bits are all frozen bits, which significantly improves the error-correction performance. Meanwhile, according to the error asymmetry of the MLC cell, unequal error protection is applied to the LSB and MSB. Simulation results show that, at a frame error rate of 10^-3, the optimized shortened codes outperform LDPC codes and basic shortened polar codes by about 3.72~5.89 dB and 1.47~3.49 dB respectively; compared with optimized shortened codes of the same rate, the new ECC scheme obtains a gain of about 0.25 dB.
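The bit-reversal reordering step that yields the basic shortening pattern can be sketched as follows; shortening a length-8 mother code to length 6 is a hypothetical illustration, not the paper's code parameters:

```python
def bit_reversal_order(n_bits):
    """Bit-reversal permutation of the indices 0 .. 2**n_bits - 1:
    index i maps to the integer whose binary digits are those of i,
    reversed."""
    return [int(format(i, f"0{n_bits}b")[::-1], 2) for i in range(1 << n_bits)]

order = bit_reversal_order(3)        # order for a length-8 mother code
# shorten to length 6: drop the last two positions in bit-reversed order
shortened = sorted(order[-2:])
```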
2017, 39(7): 1666-1672.
doi: 10.11999/JEIT161045
Abstract:
To solve the problem of recognizing frame synchronization words when frame lengths differ, a novel frame synchronization word identification algorithm based on the multi-fractal spectrum is proposed. First, by analyzing the frame structure and the bias of synchronization words and information bits, it is concluded that the bias of the protocol frame is smaller than that of the synchronization words. Then, because the bias distribution can be described by the multi-fractal spectrum, information bits can be effectively deleted through multi-fractal spectrum calculation. Finally, the synchronization word is identified by computing the concentration of fixed-length bit strings in the remaining sequence. Simulation results suggest that the new method achieves higher recognition accuracy than existing algorithms and has significant potential for engineering application.
2017, 39(7): 1673-1680.
doi: 10.11999/JEIT161152
Abstract:
To guarantee the secrecy of downlink confidential information sent to a macrocell user when multiple colluding eavesdroppers are present in dense heterogeneous networks, a cooperative secrecy beamforming scheme resistant to multiple eavesdroppers is proposed. The transmission rate of the confidential information is improved and the eavesdroppers are jammed by jointly optimizing the beamforming vectors at the macrocell base station and the femtocell base stations. To obtain the optimal beamforming vectors, the secrecy rate maximization problem is modeled under quality-of-service and base-station power constraints. The problem is non-convex; due to this intractability, it is recast into a series of SemiDefinite Programs (SDP) using the SemiDefinite Relaxation (SDR) technique and Lagrange duality. Simulation results validate the efficacy and the secrecy of the proposed scheme.
2017, 39(7): 1681-1687.
doi: 10.11999/JEIT161055
Abstract:
It is known that Multiple Access Interference (MAI) can be effectively reduced when Double Waveforms (DW) are used with the chip code in an asynchronous DS-CDMA system. This paper proposes a novel MAI-reduction method based on Waveform Sets (WS). For any number of users, each user chooses a single waveform from a WS that can be designed with only two or three of the seven conventional waveforms, and obtains better Signal to Interference plus Noise Ratio (SINR) and Bit Error Rate (BER) performance than the DW method. An analytical expression for the SINR at the output of each correlation receiver is given, and the BER over an additive white Gaussian noise channel is derived by Improved Gaussian Approximation (IGA). Theoretical analysis and simulation results show that, relative to the existing DW method, the proposed method further improves the SINR and BER performance. The overlap between the simulation curves and the theoretical IGA BER curves shows that the IGA BER is more accurate than the BER obtained by the Gaussian Approximation (GA).
2017, 39(7): 1688-1696.
doi: 10.11999/JEIT161142
Abstract:
Compressed Video Sensing (CVS) is of great significance for scenarios with a resource-deprived video acquisition side, and the reconstruction algorithm is its key technique. The Multi-Hypothesis (MH) prediction based prediction-residual reconstruction framework offers good reconstruction performance. However, most existing multi-hypothesis prediction algorithms operate in the measurement domain and, because of the restriction to non-overlapping block partitioning, cause block artifacts in the predicted frames and reduce reconstruction accuracy. To address this issue, this paper proposes a two-stage Multi-Hypothesis Reconstruction (2sMHR) approach that combines measurement-domain MH prediction with pixel-domain MH prediction. Two implementation schemes, GOP-wise (Gw) and Frame-wise (Fw), are designed for 2sMHR. Simulation results show that the proposed 2sMHR algorithm effectively reduces block artifacts and obtains higher video reconstruction accuracy with lower computational complexity than state-of-the-art CVS prediction methods.
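Measurement-domain MH prediction is usually posed as Tikhonov-regularized least squares over a set of hypothesis blocks, which can be sketched as follows; the sizes and the regularization weight `lam` are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def mh_predict(y, phi, hyp, lam=1e-6):
    """Multi-hypothesis prediction: find weights w so that the measurements
    of the weighted hypothesis blocks, phi @ hyp @ w, match the current
    block's measurements y (ridge-regularized least squares), then return
    the pixel-domain prediction hyp @ w."""
    a = phi @ hyp                                # measure each hypothesis
    w = np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ y)
    return hyp @ w

# toy sizes: a 64-pixel block, 32 measurements, 5 hypothesis blocks
rng = np.random.default_rng(0)
phi = rng.standard_normal((32, 64))              # measurement matrix
hyp = rng.standard_normal((64, 5))               # hypotheses from references
block = hyp @ np.array([0.5, 0.2, 0.0, 0.3, 0.0])  # lies in hypothesis span
pred = mh_predict(phi @ block, phi, hyp)
```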
2017, 39(7): 1697-1703.
doi: 10.11999/JEIT161093
Abstract:
In data-center interconnected Elastic Optical Networks (EON), to reduce the blocking probability and energy consumption of anycast requests, a Collision-aware Spectrum Efficiency First Reconfiguration (CSEFR) strategy is proposed. Light-paths optimized for blocking probability are first calculated and allocated spectrum in First-Fit (FF) mode, and the light-paths are then sorted in ascending order of spectrum efficiency. If the destination of a light-path is a data center supplied by non-renewable energy, the light-path is reserved, and an energy-saving light-path with suboptimal spectrum efficiency is established to a data center supplied by renewable energy and allocated spectrum in Last-Fit (LF) mode. If the destination of a blocking-optimized light-path is a renewable-energy data center and it conflicts with the energy-saving light-paths of other anycasts, those conflicting energy-saving light-paths are reconfigured to their reserved blocking-optimized light-paths. Simulation results show that the proposed CSEFR strategy achieves a better routing tradeoff between blocking probability and energy consumption, and that it is applicable to optical networks configured with different data centers.
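The First-Fit/Last-Fit contrast above can be sketched with a toy contiguous-slot assigner; this is an assumption-level illustration of the two scan directions, not the full routing-and-spectrum-assignment algorithm with continuity constraints along a path:

```python
def assign_spectrum(occupied, need, total, mode="FF"):
    """Find `need` contiguous free slots among 0..total-1, scanning from
    the low end (First-Fit) or the high end (Last-Fit); returns the slot
    indices, or None if the request is blocked."""
    starts = range(total - need + 1)
    if mode == "LF":
        starts = reversed(list(starts))
    for s in starts:
        if all(k not in occupied for k in range(s, s + need)):
            return list(range(s, s + need))
    return None

busy = {2, 3, 9}
ff = assign_spectrum(busy, need=2, total=12, mode="FF")   # low end
lf = assign_spectrum(busy, need=2, total=12, mode="LF")   # high end
```

Keeping the blocking-optimized paths at one end of the spectrum and the energy-saving paths at the other reduces collisions between the two classes, which is the intuition behind using FF for one and LF for the other.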
2017, 39(7): 1704-1710.
doi: 10.11999/JEIT161185
Abstract:
MORUS is an authenticated stream cipher selected as a third-round candidate in the ongoing CAESAR competition. In this work, the security of MORUS-640-128 against collision attacks is evaluated. A partition method is used to find the information leakage between the word differences of the message in the nonlinear function determined by the collision, and necessary conditions for a collision after two steps are proposed for the first time. The distribution of the input difference is determined, and the necessary conditions are then turned into pseudo-Boolean optimization problems. Using mixed integer programming, it is found that the weight of the message difference must be higher than 28, so the collision probability is less than 2^-140, a better upper bound than the 2^-130 of ref. [7]. The result shows that MORUS-640-128 resists collision attacks well.
2017, 39(7): 1711-1718.
doi: 10.11999/JEIT161043
Abstract:
To guarantee the isolation of smart grid services and optimize the allocation of wireless resources, an optimal resource allocation mechanism for electric power wireless virtual networks is proposed. First, a virtualization system model matching the characteristics of the electric power wireless network is proposed, and physical wireless resources are abstracted to realize resource sharing. Then, a wireless resource allocation model is designed that accounts for network cost, profit, service isolation constraints, backhaul bandwidth constraints, and QoS constraints. Finally, a tabu search algorithm based on these models is designed to allocate virtual resources so as to realize service isolation and meet QoS requirements. Simulation results show that the proposed network model and optimal resource allocation mechanism can support QoS requirements, reduce the energy consumption of base stations, and improve the economic benefits of the network.
2017, 39(7): 1719-1726.
doi: 10.11999/JEIT161182
Abstract:
In vehicular heterogeneous networks with noise and interference, current vertical handoff algorithms based on decision trees suffer from low handoff accuracy. In this paper, the decision processes of current algorithms are analyzed in detail and a formulation of the false decision probability is given. First, a Kalman filter is employed to obtain more accurate network attribute values from the predicted values, the current values, and their noise deviations. Second, a probability threshold interval method is proposed to perform a second detection when an attribute value is near the threshold. Simulation results show that the proposed algorithm improves the accuracy of handoff decisions and the total network throughput, and reduces both the ping-pong effect and failed handoffs, while keeping the same order of time complexity as the traditional algorithms.
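The first step, smoothing a noisy attribute such as received signal strength with a Kalman filter before thresholding, can be sketched with a scalar random-walk model; the noise variances `q` and `r` below are assumptions, not values from the paper:

```python
import random

def kalman_step(x_est, p_est, z, q=0.01, r=1.0):
    """One scalar Kalman filter step for a random-walk state: predict,
    then blend the prediction with measurement z via the Kalman gain."""
    p_pred = p_est + q                    # predicted error variance
    gain = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_est + gain * (z - x_est)    # corrected estimate
    return x_new, (1.0 - gain) * p_pred

# usage: track a -70 dBm signal observed through unit-variance noise
random.seed(1)
x, p = -60.0, 10.0                        # deliberately poor initial guess
for _ in range(300):
    x, p = kalman_step(x, p, -70.0 + random.gauss(0.0, 1.0))
```

The filtered value `x` settles near the true level with a much smaller variance `p` than a raw measurement, which is what makes the subsequent threshold decision more reliable.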
2017, 39(7): 1727-1734.
doi: 10.11999/JEIT160820
Abstract:
Control delay, control plane survivability, and control plane redundancy are important criteria for judging network performance in Software Defined Optical Networks (SDON). A survivable controller deployment method with a time-delay constraint is put forward. This method takes full account of network performance factors such as time delay, survivability, and controller redundancy. Under a user-specified delay, to improve control plane survivability, it ensures that each network node has at least two control links; at the same time, the number of deployed nodes is kept to the minimum that covers the entire network, to reduce control plane redundancy. Simulation results show that the method can effectively reduce the control delay, improve the survivability of the control plane, and reduce both the number of controllers and the control redundancy, thereby improving the overall performance of the software defined optical network. By guaranteeing at least two control links, the method provides the same protection as the C-MPC algorithm. Compared with the MCC algorithm, the reliability of the SDON control plane is improved by 20%. Meanwhile, under a 10 ms delay constraint, compared with the C-MPC algorithm, the proposed algorithm reduces the number of deployed controllers by 88% and 75% in the NSF and COST239 networks respectively.
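A greedy set-multicover heuristic gives the flavor of the delay-constrained double-coverage requirement; this is only a sketch under assumed inputs, not the paper's deployment algorithm:

```python
def place_controllers(delay, limit, cover=2):
    """Greedily pick controller sites until every node is within `limit`
    of at least `cover` chosen sites. delay[i][j] is the control delay
    from candidate site i to node j (sites and nodes co-located here)."""
    n = len(delay)
    need = [cover] * n                        # remaining coverage per node
    chosen = []
    while any(need):
        # pick the unchosen site covering the most still-needy nodes
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: sum(1 for j in range(n)
                                     if need[j] and delay[i][j] <= limit))
        chosen.append(best)
        for j in range(n):
            if need[j] and delay[best][j] <= limit:
                need[j] -= 1
    return chosen

# 4-node example: every node is within delay 2 of every candidate site
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
sites = place_controllers(d, limit=2, cover=2)
```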
2017, 39(7): 1735-1740.
doi: 10.11999/JEIT161237
Abstract:
In wireless sensor network localization, the classical MDS-MAP algorithm suffers from large error, and its computational complexity increases sharply with network size. A clustering method based on the residual energy of neighbor nodes is designed; the resulting clusters have an appropriate node degree and cluster size, which reduces the computation and error of the subsequent localization step. Then, for intra-cluster nodes with only connectivity information, the distance between the sink and the other single-hop nodes is obtained with a time-difference ranging method, and a multi-hop distance error correction algorithm is proposed: the distances between nodes in a cluster are obtained from the geometric relationships of neighboring nodes and the node connectivity. Multi-Dimensional Scaling (MDS) is used to calculate the relative coordinates of the nodes in each cluster, and the per-cluster coordinates are merged and converted into absolute coordinates using the anchor nodes, thus localizing the nodes. The proposed method provides more accurate inter-node distance information through energy-based clustering and the multi-hop interval-weighted geometric distance correction algorithm. Compared with the classical MDS algorithm, it further improves the positioning accuracy and reduces the power consumption of wireless sensor network localization.
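The per-cluster relative-coordinate step uses classical MDS, sketched generically below (double-center the squared distances, then eigendecompose); this is the textbook procedure, not the paper's full clustered pipeline:

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Recover relative coordinates from a pairwise distance matrix:
    B = -1/2 * J D^2 J is the centered Gram matrix, whose top `dim`
    eigenpairs give the embedding (unique up to rotation/translation)."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j            # double-centered Gram matrix
    w, v = np.linalg.eigh(b)
    top = np.argsort(w)[::-1][:dim]           # largest eigenvalues
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# usage: a unit square is recovered up to a rigid motion
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
coords = classical_mds(dist)
recovered = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

Because the embedding is only determined up to rotation, reflection, and translation, the anchor-node alignment step described above is what fixes the absolute coordinates.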
2017, 39(7): 1741-1747.
doi: 10.11999/JEIT160971
Abstract:
Most existing searchable encryption schemes support neither fuzzy keyword search nor resistance to threats from a malicious server, whereas cloud computing needs an encryption scheme that tolerates typos and allows verification of the search results. Considering that data is updated frequently in cloud computing, a verifiable fuzzy searchable encryption scheme for dynamic cloud storage is presented. The proposed scheme constructs the fuzzy keyword set using the edit distance technique and builds a secure index based on a pseudorandom function and a random permutation, so as to protect the users' data privacy. An RSA accumulator and a hash function are used to verify the correctness of the search results, in order to detect cheating by a malicious attacker. The security analysis proves that the proposed scheme is privacy preserving and verifiable, and the experimental results show that it is efficient.
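An edit-distance fuzzy keyword set is commonly built with the wildcard technique, where each '*' stands in for one edit (substitution, deletion, or insertion) at that position; a sketch for edit distance 1, assuming this standard construction rather than the paper's exact one:

```python
def fuzzy_keyword_set(word):
    """Wildcard-based fuzzy set for edit distance 1: the exact word, every
    single-position wildcard replacement, and every single-position
    wildcard insertion. Size is 2*len(word) + 2."""
    out = {word}
    for i in range(len(word)):             # substitution or deletion at i
        out.add(word[:i] + "*" + word[i + 1:])
    for i in range(len(word) + 1):         # insertion before position i
        out.add(word[:i] + "*" + word[i:])
    return out

fset = fuzzy_keyword_set("cat")
```

Indexing these wildcard forms instead of enumerating every concrete misspelling keeps the index size linear in the keyword length, which is why the technique suits encrypted indexes.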
2017, 39(7): 1748-1758.
doi: 10.11999/JEIT161004
Abstract:
As a renewable and clean energy source, wind energy has received widespread attention from countries around the world. However, wind farms have been shown to seriously affect air traffic surveillance radar and weather observation radar. Based on the radar scattering characteristics and micro-motion features of wind turbines, the effect of wind farms on air traffic surveillance radar and weather radar is analyzed systematically from the perspectives of influence evaluation and clutter mitigation. This paper gives a comprehensive overview of influence evaluation methods and clutter mitigation technologies, identifies the key technologies and the problems still to be solved, and discusses development trends and prospects.
2017, 39(7): 1759-1763.
doi: 10.11999/JEIT160860
Abstract:
To solve the problem of passive location of an emitter with a fixed Pulse Framework Cycle (PFC), a passive location algorithm based on virtual TDOAs of a moving array is proposed. The signal is received by a single moving sensor, and the position of the emitter is estimated from multiple measurements of pulse arrival time at different sensor locations. A passive location algorithm based on high-order statistics is proposed for both known and unknown pulse framework cycle, and the Cramer-Rao Lower Bound (CRLB) is derived for the two cases. Simulation results show that the location accuracy is close to the CRLBs in the different cases. When the pulse framework cycle is known, the location accuracy of the virtual TDOAs is better than that of the real aperture array; when it is unknown, the accuracy of the virtual TDOAs is slightly lower than that of the real aperture array.
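To illustrate the virtual-TDOA idea, here is a hedged sketch: arrival times measured at successive positions of one moving sensor are differenced against the first position (cancelling the unknown emission time), and the emitter position is fit by Gauss-Newton least squares. The function name `tdoa_locate`, the geometry, and the plain least-squares formulation are assumptions for illustration; the paper's high-order-statistics estimator is not reproduced here:

```python
import numpy as np

C = 3e8  # propagation speed in m/s

def tdoa_locate(sensors, toas, x0, iters=50):
    """Gauss-Newton fit of a 2-D emitter position to virtual TDOAs.

    `sensors` are the successive positions of the moving sensor,
    `toas` the pulse arrival times measured there; differencing
    against position 0 turns them into range differences."""
    x = np.asarray(x0, dtype=float)
    toas = np.asarray(toas, dtype=float)
    d = C * (toas[1:] - toas[0])                  # measured range differences
    for _ in range(iters):
        r = np.linalg.norm(sensors - x, axis=1)   # range to each position
        f = (r[1:] - r[0]) - d                    # TDOA residuals
        # Jacobian of the residuals with respect to the position x
        J = (x - sensors[1:]) / r[1:, None] - (x - sensors[0]) / r[0]
        step, *_ = np.linalg.lstsq(J, f, rcond=None)
        x = x - step
    return x
```

With noise-free arrival times and a non-degenerate track geometry, the iteration recovers the emitter position; the CRLB comparison in the abstract concerns how this accuracy degrades once measurement noise and an unknown PFC enter.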
2017, 39(7): 1764-1768.
doi: 10.11999/JEIT160758
Abstract:
Based on the principle of the complementary antenna, a dual-polarized antenna element is studied that combines a printed dipole antenna and a circular microstrip patch. The printed dipole antenna is the electric current radiation source and the circular microstrip patch is the magnetic current radiation source. The two polarization ports are fed by the electric and magnetic radiation sources, respectively, which radiate dual-polarized electromagnetic fields. Electromagnetic simulation and design optimization of the dual-polarized antenna are carried out using a full-wave electromagnetic simulation technique, and the simulation results show that the isolation of the designed antenna is above 22 dB and the cross-polarization level is lower than -20 dB at the boresight direction within the operational frequency range. The measured results of the fabricated antenna indicate that the Voltage Standing Wave Ratio (VSWR), port isolation, and cross-polarization level satisfy the technical requirements.
2017, 39(7): 1769-1773.
doi: 10.11999/JEIT160819
Abstract:
To enhance long-distance communication performance and jamming capability of shortwave equipment in electronic warfare, the performance of near-ground wideband shortwave phased arrays must be improved. First, the method of moments is adopted to construct the analysis framework; the radiation field of the antenna elements is then decomposed into a free-space part and a Sommerfeld-integral part with the help of the spatial Green's function, where the former can be expressed in closed form and the latter can be approximated by the two-level Discrete Complex Image Method (DCIM). The efficiency of filling the impedance matrix is thereby enormously increased. Finally, based on the impedance matrix and combined with network theory, Quantum-behaved Particle Swarm Optimization (QPSO) is employed to search for the optimal excitation phases, through which high gain and beam scanning are realized. Furthermore, point-to-point sky-wave propagation is implemented flexibly under temporal and spatial variation of the ionosphere parameters, so the array is of great value in practical applications.
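A minimal sketch of the QPSO search the abstract invokes, minimizing a toy cost in place of the paper's array-gain objective. The function `qpso`, the contraction factor `beta`, and all parameter values are illustrative assumptions; only the update rule (stochastic attractor between personal and global bests, quantum-style jump scaled by the distance to the mean best position) is the standard QPSO mechanism:

```python
import numpy as np

def qpso(cost, dim, n=30, iters=200, lo=-np.pi, hi=np.pi, beta=0.75, seed=0):
    """Quantum-behaved PSO: particles have no velocity; each new position
    is drawn around an attractor between its personal best and the global
    best, with a jump proportional to |mbest - x| * ln(1/u)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                     # mean best position
        phi = rng.random((n, dim))
        attractor = phi * pbest + (1 - phi) * g        # stochastic attractor
        jump = beta * np.abs(mbest - x) * np.log(1.0 / rng.random((n, dim)))
        x = attractor + np.where(rng.random((n, dim)) < 0.5, jump, -jump)
        x = np.clip(x, lo, hi)                         # keep phases in range
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()
```

In the paper's setting `cost` would evaluate the array pattern (via the impedance matrix and network theory) for a candidate vector of excitation phases; here a simple quadratic stands in for it.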
2017, 39(7): 1774-1778.
doi: 10.11999/JEIT161084
Abstract:
To solve the problem of robust track-to-track association in the presence of sensor biases and non-identical observations, an anti-bias track association algorithm based on a t-distribution mixture model is proposed. The robust track-to-track association problem is recast as a non-rigid point matching problem. Non-common tracks, which arise from non-identical observations and degrade track-to-track association, are treated as outliers in the point matching. A heavy-tailed t-distribution mixture model is established, which is more robust to such outliers, and its closed-form solution is obtained with the Expectation Maximization (EM) algorithm. A regularization term on the point set is added to the conditional expectation function so that the points exhibit Coherent Point Drift (CPD). Finally, the effectiveness of the proposed algorithm is verified by simulation experiments in the presence of sensor biases and missed detections.
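The robustness the abstract attributes to the heavy-tailed t model comes from the EM E-step weight u_i = (nu + d) / (nu + delta_i^2), which shrinks toward zero for points at large Mahalanobis distance delta_i. A hedged one-component sketch of that mechanism (the function names and the isotropic-covariance simplification are assumptions; the paper's full mixture with the CPD regularizer is not reproduced):

```python
import numpy as np

def t_weights(points, mu, sigma2, nu=3.0):
    """E-step weights of a Student-t model: u_i = (nu + d)/(nu + delta_i^2).
    Outliers (large squared Mahalanobis distance delta_i^2) get small u_i,
    which is what makes a t mixture robust where a Gaussian mixture is not."""
    d = points.shape[1]
    delta2 = np.sum((points - mu) ** 2, axis=1) / sigma2
    return (nu + d) / (nu + delta2)

def robust_mean(points, nu=3.0, iters=20):
    """Iteratively re-weighted mean/scale under an isotropic t model --
    the one-component special case of the EM updates for a t mixture."""
    mu = points.mean(axis=0)
    sigma2 = points.var()
    d = points.shape[1]
    for _ in range(iters):
        u = t_weights(points, mu, sigma2, nu)          # E-step
        mu = (u[:, None] * points).sum(axis=0) / u.sum()       # M-step: mean
        sigma2 = (u * np.sum((points - mu) ** 2, axis=1)).sum() / (d * u.sum())
    return mu
```

With a few gross outliers mixed into the data, this estimate stays near the inlier cluster while the plain sample mean is dragged away, mirroring how non-common tracks are absorbed as outliers in the association step.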