2017 Vol. 39, No. 8
2017, 39(8): 1779-1787.
doi: 10.11999/JEIT161172
Abstract:
This paper proposes a preference-ranking elimination NSGA-II algorithm to address the high computation time of the preference NSGA-II algorithm when optimizing HF network frequency assignment for outstanding coverage of multiple areas. The proposed algorithm ranks and eliminates solutions according to their preference evaluation prior to non-dominated sorting. By eliminating low-ranking solutions, the number of solutions participating in non-dominated sorting is reduced, which decreases both the computation time and the probability of selecting low-ranking individuals for crossover or mutation. The proposed algorithm simultaneously achieves the best performance and the least computation time in 38 of 48 sets of experiments. Under the same iteration count, it saves 27% of the computation time compared with the preference NSGA-II algorithm. Experimental results show that, by adopting preference-evaluation sorting, the proposed algorithm takes less time and obtains better solutions.
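The speed-up described above is easy to illustrate: rank candidates by a preference score and discard the worst-ranked fraction before running the expensive non-dominated sort. The following is a minimal Python sketch of that idea only; the function names, toy objectives, and the 0.6 keep ratio are illustrative, not the paper's actual frequency-assignment model.

```python
def preference_eliminate(population, preference, keep_ratio=0.5):
    """Rank candidates by preference (higher is better) and keep the top share."""
    ranked = sorted(population, key=preference, reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

def dominates(a, b):
    """Minimization: a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(population, objectives):
    """Naive first-front extraction, run only on the pre-filtered survivors."""
    objs = [objectives(p) for p in population]
    return [p for i, p in enumerate(population)
            if not any(dominates(objs[j], objs[i])
                       for j in range(len(population)) if j != i)]

# Toy candidates are (coverage_deficit, interference) pairs to be minimized;
# the preference here simply favours low coverage deficit.
pop = [(3, 1), (1, 4), (2, 2), (5, 5), (4, 0)]
survivors = preference_eliminate(pop, preference=lambda s: -s[0], keep_ratio=0.6)
front = nondominated_front(survivors, objectives=lambda s: s)
print(survivors)
print(front)
```

The non-dominated sort is quadratic in population size, so halving the population before it roughly quarters that cost, which is the source of the reported time savings.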
2017, 39(8): 1788-1795.
doi: 10.11999/JEIT161211
Abstract:
Digital-analogue Hybrid Precoding (HP) can keep performance close to that of fully digital precoding while using fewer Radio Frequency (RF) chains. In a millimeter-wave massive MIMO system, HP can be used to avoid the hardware cost and calibration workload caused by an excessive number of RF chains. Because the conventional HP structure is impractical, this work is based on a simple fixed sub-connected structure. The condition that the analogue precoding matrix must satisfy to maximize the achievable sum rate is derived, transforming the design of the analogue precoding matrix into an optimization problem; the optimal analogue precoding matrix is then obtained with the Bird Swarm Algorithm (BSA). For the case of finite-resolution phase shifters, a straightforward quantization solution and an improved discrete-BSA-based solution are proposed. Simulation results show that the proposed algorithm achieves good performance with a simple structure. With finite-resolution phase shifters, both proposed solutions are effective; furthermore, the discrete-BSA-based solution performs better when the resolution is low.
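The finite-resolution phase-shifter issue can be illustrated with a small sketch: project ideal unit-modulus analogue weights onto a B-bit uniform phase codebook and compare the resulting array gain. This is a generic illustration under assumed parameters (a 16-element uniform linear array, 2-bit shifters), not the paper's BSA-based design.

```python
import cmath, math

def quantize_phase(phase, bits):
    """Round a phase to the nearest level of a uniform 2**bits codebook."""
    step = 2 * math.pi / (2 ** bits)
    return round(phase / step) * step

def array_gain(weights, steering):
    """Normalized beamforming gain |w^H a| / N."""
    inner = sum(w.conjugate() * a for w, a in zip(weights, steering))
    return abs(inner) / len(weights)

N = 16
angle = 0.3  # hypothetical target spatial angle (radians)
steer = [cmath.exp(1j * math.pi * n * math.sin(angle)) for n in range(N)]
ideal = [s / abs(s) for s in steer]    # perfect unit-modulus phase matching
coarse = [cmath.exp(1j * quantize_phase(cmath.phase(s), 2)) for s in steer]
print(round(array_gain(ideal, steer), 3))   # 1.0 with continuous phases
print(round(array_gain(coarse, steer), 3))  # somewhat lower with 2-bit shifters
```

With B-bit shifters the per-element phase error is bounded by pi/2**B, so the gain loss shrinks quickly as resolution grows, which is why the paper's discrete solution matters mainly at low resolution.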
2017, 39(8): 1796-1803.
doi: 10.11999/JEIT161265
Abstract:
A novel two-time-slot Orthogonal Frequency Division Multiplexing with Index Modulation (OFDM-IM) aided cooperative relaying protocol is proposed for Cooperative Cognitive Radio Networks (CCRN). In the proposed scheme, the OFDM-IM technique is used at the Secondary User (SU) to split the transmission space into a signal-constellation domain and an index domain. Specifically, the Secondary Transmitter (ST) of the SU acts as a Decode-and-Forward (DF) relay that transmits the Primary User's (PU) information in the signal-constellation domain, while the SU's information bits are carried in the index domain. Through this design, mutual interference between the PU and SU is avoided. Upper bounds on the Bit Error Probabilities (BEPs) of the PU and SU are analytically derived, and the influence of the ST's location on the BER performance of the PU and SU is also analyzed. Numerical results and discussions substantiate the theoretical analysis and show that the proposed protocol is a viable candidate for OFDM-based CR networks, since it enhances the BER performance of both the PU and the SU.
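The index-domain mapping at the heart of OFDM-IM can be sketched briefly: the choice of which k out of n subcarriers are active carries floor(log2 C(n,k)) extra bits per subblock, on top of the constellation symbols. The lookup-table mapping below is the simplest variant for illustration; practical systems often use the combinatorial method instead.

```python
import itertools, math

def index_bits(n, k):
    """Bits carried by the choice of k active subcarriers out of n."""
    return int(math.floor(math.log2(math.comb(n, k))))

def bits_to_indices(value, n, k):
    """Map an integer (the index-domain bits) to a k-subset of subcarriers
    via a fixed lookup table of combinations."""
    combos = list(itertools.combinations(range(n), k))
    return combos[value]

n, k = 4, 2
p1 = index_bits(n, k)                 # floor(log2 C(4,2)) = 2 index bits
active = bits_to_indices(0b10, n, k)  # which 2 of the 4 subcarriers transmit
print(p1, active)
```

In the paper's protocol the constellation symbols on the active subcarriers carry the PU's relayed data while this subset choice carries the SU's own bits, which is how the two data streams avoid interfering.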
2017, 39(8): 1804-1811.
doi: 10.11999/JEIT161197
Abstract:
A Multiple-Input Multiple-Output (MIMO) downlink communication system employing the Non-Orthogonal Multiple Access (NOMA) technique is studied. Based on a theoretical analysis that takes error propagation into account, a user-matching criterion based on distance and spatial correlation is proposed. In addition, the allocation of the base station's transmit power is optimized. Simulation results show that the proposed method increases the number of subscribers the system can accommodate while guaranteeing good sum-rate performance.
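The power-domain NOMA principle underlying such a system can be sketched with the standard two-user downlink rate expressions, assuming ideal Successive Interference Cancellation (SIC): the far user decodes its own signal treating the near user's as interference, while the near user cancels the far user's signal first. The channel gains and power split below are illustrative numbers, not the paper's optimized allocation.

```python
import math

def noma_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Two-user power-domain NOMA downlink rates (bit/s/Hz), ideal SIC."""
    # Far (weak) user: decodes directly, near user's signal acts as interference.
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near (strong) user: removes the far user's signal first, then decodes.
    r_near = math.log2(1 + p_near * g_near / noise)
    return r_near, r_far

# Hypothetical setup: total power 10, more power to the far (weaker) user.
r_near, r_far = noma_rates(p_near=0.2 * 10, p_far=0.8 * 10, g_near=4.0, g_far=0.5)
print(round(r_near, 3), round(r_far, 3))
```

Error propagation, which the paper's matching criterion accounts for, appears precisely when the near user's SIC step fails, so pairing users with sufficiently distinct channels keeps that failure probability low.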
2017, 39(8): 1812-1818.
doi: 10.11999/JEIT161322
Abstract:
To realize dynamic allocation of network resources, improve resource utilization, and meet the demands of diverse networks, this paper proposes a virtual resource allocation algorithm based on network utility maximization. Spectrum resources are treated as revenue, with differentiated prices set according to the slice networks, while computing resources and backhaul are treated as costs; the different demands of each slice network for computing and spectrum resources are also taken into account. A utility model is then established to maximize network revenue, and a distributed iterative algorithm based on Lagrangian dual decomposition is designed to solve it. Simulation results show that the algorithm improves the percentage of served users and maximizes network revenue.
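The Lagrangian-dual solution pattern can be illustrated on a stripped-down utility problem: maximize sum_i log(x_i) subject to a single capacity constraint. Each user best-responds to a common price, and the price follows a subgradient update on the constraint violation. This is a textbook sketch under assumed step size and capacity, not the paper's multi-resource slice model.

```python
def dual_decompose(capacity, n_users, steps=2000, lr=0.01):
    """Maximize sum_i log(x_i) s.t. sum_i x_i <= capacity via the dual:
    each user's best response to price lam is x_i = 1/lam, and lam is
    raised or lowered according to how much the capacity is violated."""
    lam = 1.0
    for _ in range(steps):
        x = [1.0 / lam] * n_users                     # per-user best response
        lam = max(1e-6, lam + lr * (sum(x) - capacity))  # subgradient price step
    return x, lam

x, lam = dual_decompose(capacity=10.0, n_users=5)
print(round(x[0], 2))   # each allocation converges to capacity / n_users = 2.0
```

The decomposition is what makes the algorithm distributed: users only need the current price, and the price update only needs the aggregate demand.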
2017, 39(8): 1819-1825.
doi: 10.11999/JEIT161159
Abstract:
Because full protection against multi-link failures in elastic optical networks involves high resource redundancy, a Minimum Fault-Risk Loss probability protection (MFRL-PP) strategy is proposed to protect requests with high spectrum efficiency. In MFRL-PP, a link cost function that jointly considers link payload and fault risk is designed to choose a protection lightpath that has minimum fault risk and consumes little spectrum. When the minimum-fault-risk lightpath is unavailable due to a shortage of idle spectrum, a minimum fault-risk loss probability protection mechanism for asymmetric flows is designed: the flow is split into two sub-flows so that protection branch lightpaths with minimum fault-risk loss can be selected more easily, further reducing the failure risk of the probability-protection lightpath. Moreover, in the spectrum allocation phase, a strategy based on the maximum spectrum coincidence degree is put forward to reduce the number of spectrum fragments. Simulation results indicate that the proposed MFRL-PP algorithm improves spectrum utilization and provides a better tradeoff between bandwidth blocking probability and fault-risk degree.
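The composite link-cost idea can be sketched as a shortest-path search whose edge weight mixes payload and fault risk. In the minimal Python illustration below, the alpha/beta weighting and the toy topology are assumptions for demonstration, not the paper's actual cost function.

```python
import heapq

def min_cost_path(graph, src, dst, alpha=1.0, beta=1.0):
    """Dijkstra over a composite cost: graph[u] = [(v, load, risk), ...],
    edge cost = alpha * load + beta * risk."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, load, risk in graph.get(u, []):
            nd = d + alpha * load + beta * risk
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy topology: the route via B is short but risky, via C longer but safe.
g = {"A": [("B", 1, 5), ("C", 2, 1)], "B": [("D", 1, 5)], "C": [("D", 2, 1)]}
path, cost = min_cost_path(g, "A", "D")
print(path, cost)
```

Tuning alpha and beta trades spectrum consumption against fault risk, which is exactly the tradeoff the MFRL-PP cost function is built to balance.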
2017, 39(8): 1826-1834.
doi: 10.11999/JEIT161269
Abstract:
To reduce the high cost of location fingerprint database construction caused by the dense distribution of Reference Points (RPs) and point-by-point Received Signal Strength (RSS) collection in conventional Wireless Local Area Network (WLAN) indoor localization systems, a new database construction approach integrating semi-supervised manifold learning and cubic spline interpolation is proposed. The approach uses a small amount of labeled data together with a large amount of unlabeled data to find the optimal solution of the localization objective function, and relies on the mapping between the high-dimensional signal-strength space and the low-dimensional physical-location space to calibrate the unlabeled data with location coordinates. Extensive experiments demonstrate that the proposed approach guarantees high localization accuracy while significantly reducing the cost of fingerprint database construction.
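The online matching side of such a system can be sketched with the classic weighted k-nearest-neighbour estimator in signal space. This illustrates how a constructed fingerprint database is queried, not the paper's semi-supervised manifold-learning construction itself; the database entries below are made-up RSS vectors.

```python
import math

def knn_locate(fingerprints, rss, k=3):
    """Weighted k-NN in signal space: fingerprints is a list of
    (location, rss_vector); the estimate averages the k closest reference
    points, weighted by inverse signal-space distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(fingerprints, key=lambda fp: dist(fp[1], rss))[:k]
    weights = [1.0 / (dist(fp[1], rss) + 1e-9) for fp in nearest]
    total = sum(weights)
    x = sum(w * fp[0][0] for w, fp in zip(weights, nearest)) / total
    y = sum(w * fp[0][1] for w, fp in zip(weights, nearest)) / total
    return x, y

# Four reference points on a 5 m grid, two access points (RSS in dBm).
db = [((0, 0), [-40, -70]), ((0, 5), [-50, -60]),
      ((5, 0), [-70, -40]), ((5, 5), [-60, -50])]
print(knn_locate(db, [-42, -68], k=2))
```

Interpolating extra virtual fingerprints between measured RPs, as the paper does with cubic splines, densifies `db` without extra site surveying, which is where the cost saving comes from.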
2017, 39(8): 1835-1840.
doi: 10.11999/JEIT161190
Abstract:
The vibrating or rotating parts of traditional non-contact voltmeters are exposed, so they cannot be used in high-risk areas and can hardly measure moving bodies. To solve these problems, this paper develops new non-contact voltmeters based on MEMS electric field sensors. A new detecting electrode connected to the sensor chip is introduced, which effectively enhances sensitivity. Eleven electrodes are placed on a door frame to measure the charge distributions of the head, shoulders, arms, hands, legs, and feet simultaneously. A new application-oriented calibration method using a metal human-body model is proposed, and the voltmeter built in this paper is accurately calibrated. The voltmeters have significant advantages, such as no exposed moving components, safety, and high environmental adaptability, and can therefore be used in harsh environments such as high dust concentrations or high concentrations of flammable gas. Test results show a measurement range of -30 to 30 kV, a voltage resolution better than 50 V, and an uncertainty better than 3%.
2017, 39(8): 1841-1847.
doi: 10.11999/JEIT161233
Abstract:
The Microwave Imager Combining Active and Passive sensors (MICAP) is a new ocean-salinity measurement instrument composed of L-, C-, and K-band radiometers and an L-band scatterometer. The L-band one-dimensional synthetic aperture radiometer is a key part of MICAP, used to realize high-precision measurement of ocean salinity. To demonstrate the concept and performance of MICAP, a ground-based prototype is developed and experiments are carried out. This paper introduces the development and experimental results of the L-band synthetic aperture radiometer in the MICAP prototype. For the first time, a method is proposed to correct the A/D offset error of the digital correlation conversion by means of three-level quantization. These experimental studies provide technical preparation for both the future Chinese ocean salinity mission and the global water cycle observation mission.
2017, 39(8): 1848-1856.
doi: 10.11999/JEIT161224
Abstract:
Algorithms that extract buildings' geometric parameters from a single high-resolution remote sensing image usually suffer from disturbances of background, noise, and intensity similarity, leading to incorrect extraction results. In this paper, a novel variational level-set model based on shape priors is proposed, which integrates edge, gray-level, and shape priors covering both roof and facade, to extract buildings' geometric parameters from a single high-resolution remote sensing image. Experimental results show that the proposed method extracts buildings' geometric parameters accurately. Moreover, it is strongly robust to facade disturbance because it makes fuller use of the whole shape prior.
2017, 39(8): 1857-1864.
doi: 10.11999/JEIT161220
Abstract:
To address the sensitivity problem of the Probabilistic Principal Component Analysis (PPCA) model in HRRP recognition, a modified method is proposed. In this method, the t-distribution is adopted as the basis of the PPCA model instead of the Gaussian distribution, exploiting both the robustness of the t-distribution and the small number of free parameters characteristic of PPCA. Further, to eliminate the target's azimuth sensitivity, a mixture of t-distributions is substituted for the single t-distribution. This modification makes it possible to adequately model the similar densities of HRRPs in different azimuth ranges for clustering and reduces model mismatch, thus improving recognition performance. Parameters are estimated with the EM algorithm to avoid the drawbacks of direct maximum-likelihood estimation and to improve estimation efficiency. Finally, in simulation experiments, the Bayesian rule and the estimated statistical features are used together to test new HRRPs; the results show that this method improves the robustness of the PPCA model under low-SNR conditions.
2017, 39(8): 1865-1871.
doi: 10.11999/JEIT161134
Abstract:
Three-Dimensional (3-D) imaging and 3-D micro-motion feature extraction techniques for ballistic targets based on micro-Doppler (m-D) effect theory can provide significant information for target recognition and ballistic missile defense. This paper introduces the idea of multi-antenna interferometric processing from Interferometric Inverse Synthetic Aperture Radar (InISAR) into the 3-D imaging and 3-D micro-motion feature extraction of ballistic targets. By integrating m-D effect theory with multi-antenna interferometric processing, the true 3-D image of the target is obtained, and the micro-motion and structural parameters of the target are extracted from its reconstructed 3-D coordinates. Simulations validate the effectiveness and robustness of the proposed algorithm.
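The interferometric step can be illustrated in one dimension: under a far-field approximation, the phase difference between two antennas separated by a baseline encodes a cross-baseline coordinate of the scatterer. The round-trip sketch below uses assumed wavelength, baseline, and range values; real processing must also handle phase wrapping when the phase difference leaves (-pi, pi].

```python
import math

def interferometric_coord(phi_diff, wavelength, baseline, slant_range):
    """Far-field approximation: phi_diff ~ 2*pi*baseline*y / (wavelength*range),
    valid only while phi_diff stays inside (-pi, pi] (no wrapping)."""
    return phi_diff * wavelength * slant_range / (2 * math.pi * baseline)

# Forward-simulate the phase difference a scatterer at y = 1.5 m would produce
# for a 1 m baseline at 1 km range, then invert it.
wl, L, R, y = 0.03, 1.0, 1000.0, 1.5
phi = 2 * math.pi * L * y / (wl * R)   # = 0.1*pi, safely inside (-pi, pi]
print(interferometric_coord(phi, wl, L, R))
```

Repeating this measurement along orthogonal baselines is what lets an InISAR system upgrade a 2-D image and a 1-D m-D signature into true 3-D scatterer coordinates.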
2017, 39(8): 1872-1878.
doi: 10.11999/JEIT161384
Abstract:
Because of its compact size, light weight, and low cost, FMCW SAR has developed rapidly in recent years with the rise of small Unmanned Aerial Vehicles (UAVs). However, its signal processing differs considerably from that of pulsed SAR, because the stop-and-go assumption no longer holds, which causes confusion in real-data processing; deeper research into FMCW SAR is therefore significant. Most existing work focuses on data focusing, while little research concentrates on raw data simulation, especially for moving targets. In view of this, this paper first derives an accurate 2-D spectrum model that accounts for the intra-pulse motion of FMCW SAR, and then proposes a highly efficient raw data simulation method for FMCW SAR with moving targets. Point-target and real-scene raw data simulation experiments are carried out to validate the method and analyze its efficiency; the results show that it is highly efficient compared with conventional methods.
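The departure from stop-and-go can be sketched directly in the dechirped signal model: the target's range varies inside each sweep, so the round-trip delay is a function of fast time rather than a constant. The toy simulation below uses assumed radar parameters and a brute-force DFT in place of an FFT; it recovers the expected beat frequency for a stationary target, and setting `v` nonzero exhibits the range-Doppler coupling the paper's 2-D spectrum model captures.

```python
import cmath, math

def fmcw_beat(fc, bw, T, fs, r0, v):
    """Dechirped FMCW echo of a point target that keeps moving during the
    sweep (no stop-and-go): the delay tau depends on fast time t."""
    c = 3e8
    k = bw / T                        # chirp rate (Hz/s)
    sig = []
    for i in range(int(T * fs)):
        t = i / fs
        tau = 2 * (r0 + v * t) / c    # round-trip delay varies inside the sweep
        sig.append(cmath.exp(2j * math.pi * (fc * tau + k * tau * t)))
    return sig

def peak_freq(sig, fs):
    """Brute-force DFT peak search over the positive half-spectrum."""
    n = len(sig)
    best, best_f = -1.0, 0.0
    for b in range(n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * b * i / n) for i, x in enumerate(sig))
        if abs(s) > best:
            best, best_f = abs(s), b * fs / n
    return best_f

# Stationary target at 10 m: the beat frequency should be 2*k*r0/c = 10 kHz.
sig = fmcw_beat(fc=77e9, bw=150e6, T=1e-3, fs=64e3, r0=10.0, v=0.0)
print(peak_freq(sig, 64e3))
```

Because `tau` is re-evaluated at every sample, the same routine generates correct raw data for moving targets simply by passing a nonzero velocity, which is the behaviour a stop-and-go simulator cannot reproduce.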
2017, 39(8): 1879-1886.
doi: 10.11999/JEIT161146
Abstract:
Existing joint designs of the transmit waveform and receive filter for MIMO radar do not take into account the non-linear characteristics of the radio-frequency amplifier or the imprecise target information encountered in practice. To address these problems, a robust joint design of the transmit waveform and receive filter for MIMO radar in the presence of clutter is proposed, subject to a per-element power constraint and a Peak-to-Average-power Ratio (PAR) constraint on each element's transmit waveform. The scheme formulates, via a max-min approach, an optimization model of the MIMO radar's output Signal-to-Interference-plus-Noise Ratio (SINR) over the uncertainty set of the target's steering vector. For the resulting non-convex joint optimization problem, Semi-Definite Relaxation (SDR), the Charnes-Cooper transformation, sequential optimization, and the Lagrange duality theorem are adopted to convert the original problem into two convex Semi-Definite Programming (SDP) sub-problems, concerning the covariance matrix of the transmit space-time code and the receive space-time filter, respectively. The final transmit waveform and receive filter are obtained by a randomization method. Simulation results verify the efficiency and robustness of the proposed algorithm.
2017, 39(8): 1887-1893.
doi: 10.11999/JEIT161033
Abstract:
Wind farms, as large moving obstacles, may degrade the performance of the Secondary Surveillance Radar (SSR) used in Air Traffic Control (ATC). As SSR is being upgraded from Mode A/C to Mode S, it is important and meaningful to study the impact of wind farms on SSR to protect the safety of civil aviation. Based on a comparative analysis of the signal characteristics of Mode S and Mode A/C, the conditions determining the flight regions affected by interrogation and response signals are analyzed, and a method for calculating the flight region impacted by wind farms, together with a quantitative comparison, is given. The method and conclusions can provide a basis for siting radars near wind farms (or siting new wind farms near radars) and for designing flight procedures.
2017, 39(8): 1894-1898.
doi: 10.11999/JEIT161124
Abstract:
This paper presents a new method for evaluating the anti-interception performance of thirteen kinds of radar signals, which can be used to guide radar waveform design. The method assumes that a white-noise signal has the best anti-interception performance; by comparing different radar signals with it, the degree of similarity between a radar signal and white noise can be obtained. First, the characteristic distribution function of white noise and the distribution function of the radar signal are introduced using the Wigner semicircle law. Second, the KL divergence is used to represent the similarity between a radar signal and white noise: a small KL divergence indicates better anti-interception performance, and vice versa. Theoretical derivation and simulation results show that this method can effectively evaluate the anti-interception performance of different radar signals.
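The evaluation metric itself is easy to sketch: discretize the reference (Wigner semicircle) density and a candidate signal density on a common grid, then compute the discrete KL divergence between them. The two candidate densities below are synthetic stand-ins, not any of the thirteen radar waveforms studied in the paper.

```python
import math

def semicircle_pdf(x, R=2.0):
    """Wigner semicircle density with radius R, supported on [-R, R]."""
    return 2.0 / (math.pi * R * R) * math.sqrt(max(R * R - x * x, 0.0))

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(p || q) over matched histogram bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Common grid on [-1.9, 1.9] and the normalized semicircle reference.
xs = [-1.9 + 0.1 * i for i in range(39)]
ref = [semicircle_pdf(x) for x in xs]
s = sum(ref)
ref = [r / s for r in ref]

flat = [1.0 / len(xs)] * len(xs)                      # spread out: noise-like
peaky = [1.0 if abs(x) < 0.3 else 0.01 for x in xs]   # concentrated: structured
sp = sum(peaky)
peaky = [p / sp for p in peaky]

# The more noise-like density sits closer to the semicircle reference.
print(kl_divergence(flat, ref), kl_divergence(peaky, ref))
```

Under the paper's assumption, the waveform whose density yields the smaller divergence from the semicircle reference is the harder one to intercept.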
2017, 39(8): 1899-1905.
doi: 10.11999/JEIT161058
Abstract:
It is not easy to accurately measure the direction angles of calibration-source signals, which limits the precision of active array-calibration methods. On the other hand, passive-calibration methods are difficult to apply in the presence of large array errors, which severely limits their practical application. This paper proposes a rotation-measurement-based method to calibrate array gain-phase errors, which achieves high calibration precision without measuring the direction angles of the calibration-source signals. Using the known array-rotation angles, the maximum-likelihood-based method simultaneously estimates the array gain-phase errors and the direction angles and complex amplitudes of the calibration-source signals without ambiguity. Compared with accurately measuring the direction angles of calibration-source signals, accurately measuring the array-rotation angles is much easier to accomplish with a special test turntable, so the proposed method achieves high calibration precision at low cost. Simulation tests demonstrate the effectiveness and generality of the proposed method.
2017, 39(8): 1906-1912.
doi: 10.11999/JEIT161222
Abstract:
The SynchroSqueezing Transform (SST), based on the wavelet transform, can effectively improve the energy distribution and time-frequency concentration of a signal by compressing the wavelet coefficients over a narrow frequency band. To solve the parameter-estimation problem for Linear Frequency Modulation (LFM) signals, a new SynchroSqueezing Chirplet Transform (SSCT) is proposed within the synchrosqueezing framework. Making full use of the linear relationship between time and frequency in an LFM signal, the SSCT improves the energy density on the time-frequency plane and estimates the signal parameters accurately, while keeping the advantages of the chirplet transform, such as flexible window-function selection and freedom from cross-term interference. A Fractional Lower-Order SSCT (FLOSSCT) is then proposed to estimate the parameters of an LFM signal in complex noise environments. Simulation results show that the SSCT and the FLOSSCT perform well under Gaussian and impulsive noise backgrounds, respectively.
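The linear time-frequency relationship that the SSCT exploits can be seen in a much simpler estimator: dechirping an LFM signal with the correct chirp rate collapses it to a single tone. This is not the SSCT itself, only a minimal sketch of the underlying property; the grid step and signal parameters are illustrative choices.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f0_true, k_true = 50.0, 120.0  # start frequency (Hz), chirp rate (Hz/s)
sig = np.exp(1j * 2 * np.pi * (f0_true * t + 0.5 * k_true * t**2))

# Grid search over chirp rates: the correct rate collapses the LFM
# signal into a pure tone, maximising the FFT peak.
candidates = np.arange(0.0, 200.0, 1.0)
peaks = []
for k in candidates:
    dechirped = sig * np.exp(-1j * np.pi * k * t**2)
    peaks.append(np.abs(np.fft.fft(dechirped)).max())
k_hat = candidates[int(np.argmax(peaks))]

# With the rate fixed, the FFT peak of the dechirped signal gives f0.
dechirped = sig * np.exp(-1j * np.pi * k_hat * t**2)
f0_hat = np.argmax(np.abs(np.fft.fft(dechirped))[: len(t) // 2]) * fs / len(t)
```

The SSCT refines this basic picture by reassigning chirplet coefficients on the time-frequency plane rather than searching a one-dimensional grid.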
2017, 39(8): 1913-1918.
doi: 10.11999/JEIT161157
Abstract:
To address the problem of coherent sources in sparse-reconstruction methods, this paper proposes an improved method that uses the eigenvectors corresponding to the largest eigenvalues after Singular Value Decomposition (SVD) of the received data. The method reconstructs the angles by iterating on the dominant eigenvector, and recovers the angle information accurately without knowing the number of sources. Compared with the classical SVD algorithm, it runs faster and achieves a better sparse-reconstruction result. Theoretical analysis and simulation results verify the good performance of the algorithm.
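The SVD step can be sketched as follows: for coherent sources the signal subspace collapses to (near) rank one, so keeping only the dominant left singular vector of the snapshot matrix preserves the angle information while discarding most of the noise. The array geometry, angles and noise level below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_snap = 8, 200  # sensors, snapshots

# Two coherent sources share one waveform, so the signal subspace is rank-1.
theta = np.deg2rad([10.0, 25.0])
a = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(theta)))  # steering vectors
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
x = a @ np.vstack([s, 0.8 * s]) + 0.05 * (
    rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap))
)

# Keep only the left singular vector of the largest singular value:
# it spans the coherent signal subspace and replaces the full data
# matrix in the subsequent sparse reconstruction.
u, sv, _ = np.linalg.svd(x, full_matrices=False)
y = u[:, 0] * sv[0]  # dominant component, an m-dimensional vector
```

The large gap between the first and second singular values confirms that a single vector carries essentially all the signal energy, which is what makes the subsequent iterative angle reconstruction cheap.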
2017, 39(8): 1919-1926.
doi: 10.11999/JEIT161206
Abstract:
The purpose of single-image blind deconvolution is to estimate the unknown blur kernel from a single observed blurred image and recover the original sharp image. This task is severely ill-posed, and it is even more challenging when the noise in the input image cannot be neglected. This study focuses on how to effectively apply a low-rank prior to blind deconvolution. A blind-deconvolution algorithm for a single noisy and blurry image is proposed, using alternating Maximum A Posteriori (MAP) estimation combined with a low-rank prior. First, when estimating the intermediate latent image, the low-rank prior is used as a constraint to suppress noise in the restored image. The denoised intermediate latent image in turn leads to higher-quality blur-kernel estimation. These two operations are iterated to arrive at a reliable blur-kernel estimate. Finally, a non-blind deconvolution method with a sparse prior is used to restore the final latent image. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art techniques, both qualitatively and quantitatively.
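The role of the low-rank prior as a noise suppressor can be illustrated with the simplest possible instance: projecting a noisy matrix onto its top singular components. The paper's prior operates on patch groups inside MAP iterations; this is only a minimal sketch of why a low-rank projection removes noise.

```python
import numpy as np

def low_rank_denoise(img, rank):
    """Truncated-SVD approximation: keep the top-`rank` singular values."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(2)
# A rank-2 "image" plus additive Gaussian noise.
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64))) \
      + np.outer(np.linspace(0, 1, 64), np.linspace(1, 0, 64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

denoised = low_rank_denoise(noisy, rank=2)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because the underlying structure lives in a low-dimensional subspace while noise spreads over all dimensions, the truncated reconstruction lands closer to the clean image, and in the alternating scheme this cleaner intermediate image yields a better kernel estimate.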
2017, 39(8): 1927-1933.
doi: 10.11999/JEIT161217
Abstract:
Image fusion based on image-transform techniques is widely used in multi-focus image fusion: the images are transformed into a transform domain, the transformed images are fused according to a specific fusion rule, and the fused image is then obtained by the inverse transform. Transform-based fusion methods are robust to noise, and their fused results are widely accepted. This paper proposes a multi-focus image fusion method based on the discrete Tchebichef orthogonal polynomial transform, which is introduced to the field of multi-focus image fusion for the first time. The proposed method combines spatial frequency with the discrete orthogonal polynomial transform coefficients of the image: it obtains the spatial frequency directly from the transform coefficients, avoiding the recalculation needed to convert the coefficients back to the spatial domain. The method reduces fusion time and improves the fusion effect.
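The spatial-frequency criterion behind the fusion rule can be sketched in the spatial domain (the paper's contribution is computing the same quantity directly from Tchebichef coefficients, which this sketch does not do): a block with larger spatial frequency is the better-focused one and is kept.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): row/column first-difference energy."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf**2 + cf**2)

rng = np.random.default_rng(3)
sharp = rng.standard_normal((16, 16))      # high-frequency (in-focus) block
blurred = np.full((16, 16), sharp.mean())  # nearly flat (out-of-focus) block
blurred += 0.01 * rng.standard_normal((16, 16))

# Fusion rule: per block, keep the source with the larger spatial frequency.
fused = sharp if spatial_frequency(sharp) > spatial_frequency(blurred) else blurred
```

Evaluating SF in the transform domain skips the inverse transform per candidate block, which is where the claimed time saving comes from.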
2017, 39(8): 1934-1941.
doi: 10.11999/JEIT161296
Abstract:
Based on Laplacian similarity metrics, corresponding diffusion-based saliency models are proposed according to the different clusterings (sparse or dense) of salient seeds in two-stage detection, leading to a diffusion-based two-stage complementary method for salient-object detection. In particular, owing to the introduction of sink points in the second stage, the saliency maps obtained by the proposed method suppress background regions well and become more robust to changes of the control factor. Experiments show that, once the salient seeds are determined, different diffusion models yield different degrees of saliency diffusion. Moreover, the two-stage Laplacian diffusion model with sink points is more effective and robust than other two-stage diffusion models. The proposed algorithm is also superior to five existing state-of-the-art methods under different evaluation metrics. This shows that similarity-metric methods applied to image retrieval and classification are also applicable to salient-object detection.
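The basic Laplacian diffusion step can be sketched on a toy graph: a seed score is spread through the affinity matrix by solving a regularized Laplacian system, so saliency decays with graph distance from the seed. The graph, seed placement and control factor below are illustrative; the paper's full model additionally uses sink points and superpixel affinities.

```python
import numpy as np

# Tiny 5-node graph: nodes 0-2 form a tight cluster, 3-4 trail off as background.
w = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
d = np.diag(w.sum(axis=1))
laplacian = d - w

alpha = 0.1  # control factor of the diffusion
seed = np.array([1.0, 0, 0, 0, 0])  # salient seed on node 0

# Diffuse the seed through the graph: s = (L + alpha*I)^-1 y.
s = np.linalg.solve(laplacian + alpha * np.eye(5), seed)
```

Nodes well connected to the seed end up with high saliency; in the paper's second stage, sink points additionally pin background nodes to low scores, which is what stabilizes the map against changes of the control factor.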
2017, 39(8): 1942-1949.
doi: 10.11999/JEIT161154
Abstract:
Existing collaborative recommendation algorithms have low robustness against shilling attacks. To solve this problem, a robust collaborative recommendation algorithm is proposed based on Fuzzy Kernel Clustering (FKC) and a Support Vector Machine (SVM). First, exploiting the high correlation among attack profiles, the FKC method clusters user profiles in a high-dimensional feature space, which forms the first stage of attack-profile detection. Then, an SVM classifier classifies the cluster containing attack profiles, which forms the second stage. Finally, an indicator function is constructed from the detection results to reduce the influence of attack profiles on recommendation, and it is combined with matrix-factorization technology to devise the corresponding robust collaborative recommendation algorithm. Experimental results show that the proposed algorithm outperforms existing methods in both recommendation accuracy and robustness.
2017, 39(8): 1950-1955.
doi: 10.11999/JEIT161049
Abstract:
Nonconvex nonsmooth optimization problems arise in many fields of science and engineering and are a research hotspot. To overcome the shortcomings of earlier penalty-function-based neural networks for nonsmooth optimization, a recurrent neural network model using a Lagrange-multiplier penalty function is proposed to solve nonconvex nonsmooth optimization problems with equality and inequality constraints. Since the penalty factor in this model is variable, the network converges to an optimal solution without an initial penalty-factor value having to be computed, which makes the computation more convenient. Compared with the traditional Lagrange method, the model adds an equality-constraint penalty term, which improves the convergence of the network. A detailed analysis proves that the trajectory of the network reaches the feasible region in finite time and finally converges to the critical-point set. Numerical experiments verify the effectiveness of the theoretical results.
2017, 39(8): 1956-1963.
doi: 10.11999/JEIT161290
Abstract:
How to apply machine learning effectively to retinal-vessel segmentation has become a trend, but choosing suitable features for the blood vessels remains a problem. In this paper, vessel extraction is treated as a binary classification of pixels, and a hybrid 5D feature vector per pixel is proposed to extract retinal blood vessels from the background simply and quickly. The 5D feature vector comprises Contrast-Limited Adaptive Histogram Equalization (CLAHE), a Gaussian matched filter, a Hessian-matrix transform, a morphological bottom-hat transform and the Bar-selective Combination Of Shifted Filter Responses (B-COSFIRE). The fused features are input to a Support Vector Machine (SVM) classifier to train the required model. The proposed method is evaluated on two publicly available datasets, DRIVE and STARE, using Se, Sp, Acc, Ppv, Npv and F1-measure; the average classification accuracies are 0.9573 and 0.9575 on DRIVE and STARE, respectively. The results show that the fusion method outperforms state-of-the-art methods, including B-COSFIRE and other recently proposed fused-feature methods.
2017, 39(8): 1964-1971.
doi: 10.11999/JEIT161109
Abstract:
Overlap is one of the most important characteristics of real-world networks. Based on the classic label-propagation algorithm, an overlapping-community-oriented label-propagation algorithm based on a contribution function is proposed. In this algorithm, each node is represented by a set of triples (threshold, label, dependent coefficient). The threshold of each node is used as a metric for label decisions and is calculated automatically by a multiple linear regression equation. The dependent coefficient measures the relevance of the current node to the community marked by the label; a greater dependent coefficient means a stronger association between the node and the community. In each iteration, the dependent coefficients are calculated through the Contribution Function (CF) of each node and new triples are produced; labels are then selected according to the decision rules, and the node's dependent coefficients are normalized. Tests on real-world networks and automatically generated LFR (Lancichinetti-Fortunato-Radicchi) benchmark networks show that the algorithm divides communities with high accuracy and robust results.
2017, 39(8): 1972-1978.
doi: 10.11999/JEIT161216
Abstract:
In Software-Defined Networking (SDN), if a controller suffers an unrecoverable failure, the affected switches migrate to other controllers, which degrades network performance. To address this problem, a strategy of controller placement and switch migration is proposed for controller failure. Unlike existing algorithms that optimize only the switch-migration method, the proposed strategy also considers the influence of controller placement. First, a Label Propagation Algorithm (LPA) is used to construct the set of alternate domains and partition bilayer domains. Then one controller is placed at a properly selected location in each domain. Finally, the switches are assigned to their corresponding master and slave controllers. Experimental results show that controller overloading is resolved better than with existing algorithms, and that network performance before and after a failure can be traded off by adjusting parameters, which decreases the average control-path latency after switch migration.
Virtual Network Mapping Algorithm Based on Node Adjacent-awareness and Path Comprehensive Evaluation
2017, 39(8): 1979-1985.
doi: 10.11999/JEIT161252
Abstract:
To solve the problems of weak coupling between node mapping and link mapping, adjacent virtual nodes being mapped far apart, and imbalanced resource consumption between nodes and their adjacent links, a two-stage Virtual Network Mapping algorithm based on Node Adjacent-awareness and Path comprehensive evaluation (NA-PVNM) is proposed. In the node-mapping stage, virtual nodes are first ranked according to their resource requests using breadth-first search; then a node fitness function, which takes resource richness and topology-connection features into account, selects the best node among the candidates for each virtual node. In the link-mapping stage, a path fitness function, which takes available bandwidth, node resources and path hops into account, selects the best path among the candidates. Simulation results show that NA-PVNM shortens the path distances of virtual links and improves the acceptance ratio and revenue/cost ratio of virtual networks. The influence of location constraints and substrate-topology features on algorithm performance, and the resource occupancy of the substrate network during mapping, are analyzed experimentally. The results show that, under the constraints of physical-resource distribution and virtual-network requests, the critical factor for improving the success rate is reducing resource consumption during mapping.
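The shape of such a path fitness function can be sketched as a weighted score; note that the weights, the linear form and the path names below are entirely hypothetical illustrations of the idea (bandwidth and node resources raise fitness, hops lower it), not the paper's actual function.

```python
# Hypothetical path-fitness sketch: score candidate substrate paths by
# minimum available bandwidth, endpoint node resource and hop count.
def path_fitness(min_bandwidth, end_cpu, hops, w=(0.5, 0.3, 0.2)):
    # Higher bandwidth and CPU raise fitness; each extra hop lowers it.
    return w[0] * min_bandwidth + w[1] * end_cpu - w[2] * hops

candidates = {
    "short-busy": path_fitness(min_bandwidth=10, end_cpu=20, hops=2),
    "long-idle": path_fitness(min_bandwidth=80, end_cpu=60, hops=6),
}
best = max(candidates, key=candidates.get)
```

Scoring all candidate paths with one comparable number is what lets the link-mapping stage trade a few extra hops for substantially better bandwidth and node headroom.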
2017, 39(8): 1986-1992.
doi: 10.11999/JEIT161335
Abstract:
In view of the selfishness of nodes in mobile Peer-to-Peer (P2P) networks, and considering their resource-constrained, self-organizing and open nature, this paper proposes a novel incentive protocol, DAIP, for mobile P2P networks based on a double-auction model with two-sided incomplete information. The incentive mechanism adopts a virtual-currency payment method: a node evaluates a message-forwarding transaction based on its virtual currency, its resource state and the properties of the message, and then quotes a price according to this evaluation and its game strategy. Through game analysis, the linear-strategy Bayesian Nash equilibrium of the DAIP strategy is derived, which enables each node to maximize its own benefit, encourages cooperation in message forwarding, and thereby improves the success rate of message forwarding in the network. Analysis and simulation show that the incentive mechanism effectively reduces the system's energy consumption, improves the network-wide success rate of message forwarding, and improves the overall effectiveness of the system.
2017, 39(8): 1993-1999.
doi: 10.11999/JEIT161398
Abstract:
Based on the fourth-order power-system model, a fractional-order power-system model with an excitation model is presented, and the dynamic properties of the fractional-order system are investigated and controlled. First, the 4D fractional-order power system is given, and the minimum order for the existence of chaotic oscillation with fixed parameters is obtained through the bifurcation diagram and the maximum Lyapunov exponent. Second, the influence of mechanical power, damping coefficient and excitation gain on the system's dynamic behavior is studied, and the corresponding bifurcation diagrams and Lyapunov exponent spectra are plotted through numerical simulation. In addition, the coexistence of attractors under different initial conditions in the same system is investigated. Finally, based on the stability theory of fractional-order systems and nonlinear feedback control theory, a synchronization controller for two power systems with different initial conditions is designed; numerical simulations show the effectiveness of the controller.
2017, 39(8): 2000-2006.
doi: 10.11999/JEIT161213
Abstract:
The impulse-radio Ultra-WideBand (UWB) Tracking, Telemetry, and Command (TTC) system is a new kind of TTC system that greatly improves concealment and anti-interference performance. To solve the acquisition problem of the impulse-radio UWB TTC signal, an acquisition scheme based on Partial Matched Filtering and Fast Fourier Transform (PMF-FFT) is proposed to accomplish the three-dimensional acquisition of pulse phase, pseudorandom-code phase and Doppler frequency simultaneously. To address the problems of an excessive search space, long acquisition time and low Doppler-frequency estimation accuracy, an improved scheme is then proposed: it adopts a two-step procedure for time-delay (phase) acquisition and uses the modified Rife algorithm to refine the Doppler-frequency estimate. Simulation results show that this scheme effectively improves the acquisition speed, reduces the acquisition time, and greatly improves the accuracy of Doppler-frequency estimation.
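The core PMF-FFT idea can be illustrated on a bare complex tone: coherently sum the signal inside short segments (the partial matched filter), take an FFT across the segment outputs to get a coarse Doppler bin, then apply Rife interpolation on the adjacent FFT bins for a fine estimate. This is a generic sketch, not the paper's two-step scheme or its specific modified Rife algorithm; the sample rate and segment length below are arbitrary illustrative choices.

```python
import numpy as np

def pmf_fft_rife(x, fs, seg_len):
    """Coarse-to-fine frequency estimate of a complex tone:
    partial matched filtering (segment sums), FFT over the segment
    outputs, then classic Rife fractional-bin interpolation."""
    n_seg = len(x) // seg_len
    # Partial matched filter: coherent sum within each segment.
    # The segment outputs form a tone at the same frequency,
    # sampled at the reduced rate fs / seg_len.
    y = x[:n_seg * seg_len].reshape(n_seg, seg_len).sum(axis=1)

    X = np.abs(np.fft.fft(y))
    k = int(np.argmax(X[:n_seg // 2]))   # coarse bin (positive frequencies)
    df = fs / seg_len / n_seg            # bin spacing of the segment-rate FFT

    # Rife interpolation: fractional offset from the larger neighbour bin.
    left, right = X[(k - 1) % n_seg], X[(k + 1) % n_seg]
    if right >= left:
        delta = right / (X[k] + right)
    else:
        delta = -left / (X[k] + left)
    return (k + delta) * df
```

For a noiseless tone the amplitude-ratio correction recovers the fractional bin almost exactly, which is why Rife-type refinement is attractive when the coarse FFT grid is much wider than the required Doppler accuracy.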
2017, 39(8): 2007-2013.
doi: 10.11999/JEIT161123
Abstract:
Marine magnetic survey is one of the basic methods for oceanographic observation, seabed resource prospecting, and national defense security. Understanding the mechanism of the wave-induced magnetic field, its prediction model, and methods of noise suppression is of great importance for improving measurement accuracy. Based on an analysis of the first-order Stokes wave motion equations, this paper proposes an improved model of the ocean-wave-induced magnetic field, together with simplified formulas for deep- and shallow-water conditions. To verify the validity of the model, observations were conducted in the South China Sea in 2015. Comparison of the proposed model with the classic Weaver model and with the test results shows that the modified model predicts the ocean-wave-induced magnetic field nearly an order of magnitude more accurately than the Weaver model. This makes the proposed model a more effective tool for noise suppression in marine magnetic field measurement.
2017, 39(8): 2014-2018.
doi: 10.11999/JEIT161101
Abstract:
Field-to-wire coupling in a metal enclosure is an important issue in the field of electromagnetic compatibility. In this paper, an efficient and accurate approach is presented to calculate the ElectroMagnetic Interference (EMI) of a complex cable bundle in an enclosure, combining two methods: mode matching and the BLT equation. The issue is divided into two sub-problems: aperture coupling and field-to-wire coupling. The electromagnetic field in the enclosure is calculated by the mode function and the Method of Moments (MoM), and the EMI of the cables in the enclosure is calculated by Agrawal's field-to-wire coupling theory and the BLT equation. Comparison with measurement data shows that the electromagnetic field in the enclosure can be accurately calculated by the mode matching method. Compared with CST, the proposed method also significantly reduces the simulation time and improves simulation efficiency, and it can be used to calculate the field-to-wire coupling in an enclosure.
2017, 39(8): 2019-2022.
doi: 10.11999/JEIT161332
Abstract:
Based on the wide-angle parabolic equation method, the propagation characteristics of low-frequency radio waves in transition regions over irregular terrain are analyzed. The formula of the direction factor is deduced under certain assumptions. Electric field distributions within 60 km height are simulated for three path models, and the propagation characteristics are compared with those on the ground. With the proposed approach, the calculation schemes of the electric field both on the ground and in transition regions can be unified with high accuracy, which is important for solving sky-wave propagation problems in a complex earth-ionosphere waveguide.
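The wide-angle parabolic equation is usually marched in range with the split-step Fourier method: transform the vertical field profile to the wavenumber domain, apply the free-space propagator, and transform back; terrain and refractivity enter as extra phase screens. The minimal free-space sketch below is not the paper's formulation (no terrain, no direction factor, no ionosphere); the 3 km wavelength, grid, and Gaussian starting field are illustrative assumptions.

```python
import numpy as np

def pe_free_space_step(u, dx, dz, k0):
    """One range step of the wide-angle parabolic equation in free space,
    via split-step Fourier: apply exp(i*dx*(sqrt(k0^2 - kz^2) - k0)) in
    the vertical-wavenumber domain. Evanescent modes (|kz| > k0) decay."""
    kz = 2.0 * np.pi * np.fft.fftfreq(len(u), d=dz)
    kx = np.sqrt((k0**2 - kz**2).astype(complex))  # complex sqrt -> decay
    return np.fft.ifft(np.exp(1j * dx * (kx - k0)) * np.fft.fft(u))

# Illustrative setup: ~100 kHz wave (lambda = 3 km), Gaussian beam.
dz, n = 50.0, 1024
z = (np.arange(n) - n / 2) * dz
k0 = 2.0 * np.pi / 3000.0
u = np.exp(-(z / 3000.0) ** 2).astype(complex)
for _ in range(4):                      # march 4 x 5 km in range
    u = pe_free_space_step(u, 5000.0, dz, k0)
```

In a full model the same step alternates with terrain masking and a refractive-index phase screen, which is what lets one unified marching scheme cover both the ground region and the transition region.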
2017, 39(8): 2023-2027.
doi: 10.11999/JEIT161103
Abstract:
Traditional face recognition is sensitive to lighting conditions and facial expression, and suffers from high intra-group dispersion. A novel method is proposed to overcome these defects by combining Gabor wavelets with a weighted computation based on the cross-covariance. First, Gabor features are extracted from the face image. Then, a weighted cross-covariance matrix is used for dimension reduction and feature extraction. Finally, the nearest neighbor classifier is used for classification. Experimental results on the ORL and AR face databases show that the recognition performance of the proposed method is superior to 2DPCA and its improved variants; the method also reduces the feature dimensionality and effectively improves recognition performance.
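The front end of such a pipeline, Gabor filtering followed by nearest-neighbor classification, can be sketched in a few lines. This omits the paper's weighted cross-covariance dimension-reduction step entirely; the kernel parameters, the two-orientation bank, and the synthetic striped "images" below are illustrative assumptions, not the ORL/AR experimental setup.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine
    carrier of wavelength lam, oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / lam))

def gabor_features(img, thetas, sigma=2.0, lam=4.0, size=9):
    """Mean absolute filter response per orientation (a tiny feature vector)."""
    feats = []
    for th in thetas:
        k = gabor_kernel(size, sigma, th, lam)
        # 'valid' correlation via sliding windows (no SciPy dependency).
        wins = np.lib.stride_tricks.sliding_window_view(img, k.shape)
        feats.append(np.abs((wins * k).sum(axis=(-1, -2))).mean())
    return np.array(feats)

def nearest_neighbor(train_x, train_y, query):
    """1-NN classification by Euclidean distance in feature space."""
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[int(np.argmin(d))]
```

A real system would keep the full response maps (not just their means) and insert the dimension-reduction stage between feature extraction and the classifier.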
2017, 39(8): 2028-2032.
doi: 10.11999/JEIT161112
Abstract:
Global measurement of ocean salinity using a satellite-borne synthetic aperture radiometer is one of the research focuses in the field of microwave remote sensing. To achieve the required accuracy of ocean salinity detection, the radiometer units of the synthetic aperture radiometer need very high sensitivity and very high calibration stability. In this paper, techniques for a radiometer with high sensitivity and high stability are investigated. High stability is realized by a real-time calibration method, and the sensitivity is effectively improved by averaging the calibration data. The optimal averaging time is obtained through frequency-domain analysis for the first time. Long-term stability experiments are completed to demonstrate the radiometer's performance. Experimental results show that the stability of this L-band radiometer reaches 0.12 K (over 3 days) and the sensitivity reaches 0.1 K, which meets the requirement of the synthetic aperture radiometer for ocean salinity detection.
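The quantitative link between averaging and sensitivity is the ideal total-power radiometer equation, ΔT = T_sys / sqrt(B·τ): doubling the effective integration (or averaging) time improves sensitivity by sqrt(2). The sketch below uses illustrative numbers (T_sys = 500 K, B = 20 MHz are assumptions, not values from the paper) and a small Monte Carlo check of the sqrt(N) averaging gain.

```python
import numpy as np

def radiometric_sensitivity(t_sys, bandwidth, tau):
    """Ideal total-power radiometer sensitivity: dT = T_sys / sqrt(B * tau)."""
    return t_sys / np.sqrt(bandwidth * tau)

# With these assumed numbers, a 1 s integration already lands near 0.1 K,
# and averaging N independent calibration samples scales dT by 1/sqrt(N).
dt_1s = radiometric_sensitivity(500.0, 20e6, 1.0)

# Monte Carlo illustration of the sqrt(N) gain: the scatter of the mean
# of N noisy readings shrinks by 1/sqrt(N) relative to a single reading.
rng = np.random.default_rng(0)
readings = 300.0 + rng.normal(0.0, 1.0, size=(2000, 64))
scatter_of_mean = readings.mean(axis=1).std()   # ~ 1 / sqrt(64) = 0.125 K
```

The sqrt(N) law only holds while the fluctuations are independent; slow gain drift eventually dominates, which is why the paper pairs averaging with real-time calibration and looks for an optimal (finite) averaging time in the frequency domain.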