2010 Vol. 32, No. 11
2010, 32(11): 2541-2546.
doi: 10.3724/SP.J.1146.2009.01468
Abstract:
Current video coding standards achieve higher coding efficiency but also increase coding complexity, so improving the encoding speed is important. Based on the distribution of the residual coefficients, a fast encoding algorithm is proposed. The algorithm first applies hypothesis testing to the residual coefficients obtained from Inter16×16 or Inter8×8 prediction, and the test results are used to predict the likely coding mode. Criteria for all-zero blocks and for 1×1, 2×2, and 3×3 sub-blocks are then derived from the distribution of the DCT coefficients, avoiding all or part of the DCT, Q, IQ, and IDCT operations on the 4×4 sub-blocks. The experimental results show that the proposed algorithm encodes significantly faster than other algorithms with negligible coding loss.
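The all-zero-block idea can be sketched as follows. This is a minimal illustration, not the paper's criterion: the threshold form below only assumes the standard H.264 behavior that the quantization step roughly doubles every 6 QP, and `is_all_zero_block` is a hypothetical helper name.

```python
import numpy as np

def is_all_zero_block(residual, qp):
    """Illustrative all-zero-block test: if the sum of absolute residuals
    falls below a QP-dependent threshold, quantization would zero every
    DCT coefficient, so DCT/Q/IQ/IDCT for this 4x4 block can be skipped.
    The threshold form here is an assumption for illustration only."""
    qstep = 0.625 * 2 ** (qp / 6.0)   # H.264 quantization step doubles every 6 QP
    threshold = 2.0 * qstep           # illustrative bound, not the paper's criterion
    return np.abs(residual).sum() < threshold

# a near-zero residual block lets the encoder skip the transform stage
flat = np.zeros((4, 4))
print(is_all_zero_block(flat, qp=28))
```

When the test succeeds, the encoder can write zero coefficients directly instead of running the transform and quantization chain for that sub-block.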
2010, 32(11): 2547-2553.
doi: 10.3724/SP.J.1146.2009.01431
Abstract:
To address the poor performance of intra-coding Rate Control (RC) in H.264, a novel image-complexity-adaptive I-frame RC algorithm is proposed. First, the luma gradient in the I-frame is detected with the Sobel operator and an edge direction histogram is built for each 4×4 block, from which the most probable intra prediction mode and the corresponding reconstructed block are obtained. A residual picture close to the actual coding residual is then derived. The mean absolute value of the residual represents the I-frame coding complexity; an empirical Rate-Quantization (R-Q) model is then proposed, and the optimal intra QP is accurately determined for each GOP from the allocated target bits, considering both buffer status and sequence characteristics. Experimental results show that the proposed scheme produces a more accurate I-frame output bit-rate and steadier video quality. Buffer overflow and frame skipping are effectively prevented, and sequence PSNR fluctuation is reduced by 60%.
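The gradient step can be illustrated with a small sketch: Sobel gradients over one luma block, then a histogram of gradient directions weighted by gradient magnitude, whose dominant bin hints at the most probable intra prediction direction. The binning and the `edge_direction_histogram` helper are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def edge_direction_histogram(block, n_bins=8):
    """Sobel gradients over one luma block, then a magnitude-weighted
    histogram of gradient directions in [0, pi). Illustrative sketch."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = block.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # valid-region correlation, avoiding external dependencies
    for i in range(h - 2):
        for j in range(w - 2):
            patch = block[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # direction folded into [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

# a vertical edge (intensity varies across columns) concentrates the
# gradient energy in the bin for horizontal gradient direction (bin 0)
blk = np.tile([0.0, 0.0, 255.0, 255.0], (4, 1))
hist = edge_direction_histogram(blk)
```

The dominant histogram bin then selects the candidate prediction mode, from which the reconstructed block and the approximate residual are derived without a full mode search.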
2010, 32(11): 2554-2559.
doi: 10.3724/SP.J.1146.2009.01427
Abstract:
The channelization technique is a key element of onboard switching in next-generation broadband satellite communication systems. With this technique, independent communication signals are extracted from the wideband uplink signal and, after frequency down-conversion, baseband processing, and switching, synthesized together to enter the downlink. This paper proposes a Complex-Exponential Modulation Perfect Reconstruction Filter Bank (CEM PRFB) based channelizer for broadband satellite communications, suited to both homogeneous and non-homogeneous onboard switching. Detailed floating-point and fixed-point simulations verify the flexibility and scalability of the proposed channelizer, which overcomes the limitations of existing channelization techniques. Furthermore, significant performance improvement and a reduction in memory size are demonstrated.
2010, 32(11): 2560-2564.
doi: 10.3724/SP.J.1146.2009.01632
Abstract:
A distributed power control algorithm for hybrid cellular and Peer-to-Peer (P2P) networks is proposed. With the objective of maximizing overall system spectrum efficiency, a distributed power control scheme for P2P transmission in the hybrid system model is devised using convex optimization. The algorithm accounts for the harmful interference between the cellular and P2P sub-systems and provides an effective interference avoidance mechanism, thus improving total system throughput. Simulation results confirm the convergence and performance of the proposed scheme.
2010, 32(11): 2565-2570.
doi: 10.3724/SP.J.1146.2009.01532
Abstract:
In this paper, channel estimation is investigated for distributed space-frequency coding systems in cooperative communications. To make the channel identifiable, Cyclic Convolution Filters (CCF) are employed at the relay nodes. In the training stage, a suboptimal design is derived for both the pilot sequence and the CCFs adopted at the relay nodes. Furthermore, a closed-form expression for the power allocation between the source node and the relay nodes is obtained. Finally, computer simulations demonstrate the effectiveness of the proposed approach.
2010, 32(11): 2571-2575.
doi: 10.3724/SP.J.1146.2009.01434
Abstract:
Spectrum sensing is one of the key technologies for cognitive radio systems. After analyzing the correlation matrix of the received signals, the Difference between the Maximum eigenvalue and the Minimum eigenvalue (DMM) is employed as the test statistic to sense the available spectrum for the cognitive users. Both the simulation and the theoretical results show that the proposed method is robust to noise uncertainty, and greatly outperforms the classical energy detection method.
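The DMM statistic can be sketched minimally as follows, assuming the received samples form an m×n matrix (m antennas or smoothing branches, n samples); the rank-1 "primary user" model below is purely illustrative.

```python
import numpy as np

def dmm_statistic(samples):
    """DMM test statistic: difference between the largest and smallest
    eigenvalues of the sample correlation matrix of the received signal.
    Under noise only, the eigenvalues cluster and the difference is small;
    a primary-user signal spreads them apart."""
    r = samples @ samples.conj().T / samples.shape[1]   # sample correlation matrix
    eig = np.linalg.eigvalsh(r)                         # real, ascending order
    return eig[-1] - eig[0]

rng = np.random.default_rng(0)
m, n = 4, 2000
noise = rng.normal(size=(m, n))
signal = np.outer(rng.normal(size=m), rng.normal(size=n))  # illustrative rank-1 signal
t_noise = dmm_statistic(noise)                 # small: eigenvalues cluster near noise power
t_occupied = dmm_statistic(noise + signal)     # large: one eigenvalue dominates
```

Comparing the statistic to a precomputed threshold then declares the band occupied or free; since a common noise-power term cancels in the eigenvalue difference, the decision is robust to noise uncertainty.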
2010, 32(11): 2576-2581.
doi: 10.3724/SP.J.1146.2009.01429
Abstract:
To improve the resource utilization of traditional two-way wireless relay systems, which employ Equal Power Allocation (EPA) between the source and relays, an Optimal Power Allocation (OPA) algorithm for a cooperative network coding scheme is proposed. By minimizing the larger of the two sources' outage probabilities and using a high-order approximation of the outage expression, the optimal power factors are obtained through iterative computation. The proposed algorithm has low implementation complexity and requires only average channel gain information. Simulation results show that, compared with EPA, the proposed OPA algorithm is superior in terms of both outage and Symbol Error Rate (SER) performance.
2010, 32(11): 2582-2587.
doi: 10.3724/SP.J.1146.2009.01581
Abstract:
In cellular relay communications, the channels between the Base Station and the relays (BS-relay) and those between the relays and the MoBile stations (relay-MB) can be modeled as mixed fading channels: the BS-relay channels undergo Rician fading and the relay-MB channels undergo Rayleigh fading. The performance of a dual-hop decode-and-forward relay system is investigated over these mixed fading channels. First, under optimal relay selection, closed-form expressions for the outage probability, the average symbol error probability, and the asymptotic average symbol error probability are derived. Second, a power allocation method is derived by optimizing the asymptotic average symbol error probability. Monte Carlo simulations validate the analysis, and the results show that system performance under optimal power allocation is better than that under equal power allocation.
2010, 32(11): 2588-2592.
doi: 10.3724/SP.J.1146.2009.01585
Abstract:
For synchronization of time-domain nonsinusoidal orthogonal modulation signals based on Prolate Spheroidal Wave Functions (PSWF), a synchronization algorithm based on an auxiliary sequence is proposed by analyzing the characteristics of the pulse sequence. The auxiliary sequence, which has a single peak, is designed using first-order time-limited baseband PSWF pulses modulated by a Barker code. Synchronization is first achieved to within a reasonable phase sector in a short time using serial-search sliding coherent acquisition; fine acquisition is then achieved and its accuracy verified using the MAX/TC algorithm. The performance of the synchronization algorithm under additive white Gaussian noise is derived by theoretical analysis. Simulation results show that the proposed algorithm is feasible and achieves a high acquisition probability at low signal-to-noise ratio.
2010, 32(11): 2593-2598.
doi: 10.3724/SP.J.1146.2009.01513
Abstract:
A topology-transparent link activation Media Access Control (MAC) protocol is proposed for Ad hoc networks with Multiple Input Multiple Output (MIMO) links. The protocol allocates transmission slots to each link based on the theory of orthogonal Latin squares, so that each link can successfully transmit its data streams in at least one slot per frame. The average throughput of the protocol is derived through theoretical analysis, and a method of searching for the optimal protocol parameters that maximize the average throughput is also presented. Numerical results show that, compared with existing topology-transparent link activation and node activation MAC protocols, the proposed protocol increases the throughput of each network node.
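The Latin-square idea can be sketched as follows, assuming a prime frame parameter p and the standard linear-polynomial construction of mutually orthogonal Latin squares over GF(p); the mapping from link IDs to polynomials is an illustrative assumption, not the paper's exact design.

```python
def transmission_slots(link_id, p):
    """Topology-transparent slot assignment sketch over a frame of p*p
    slots: link link_id maps to the linear polynomial f(x) = a*x + b
    over GF(p) with (a, b) = divmod(link_id, p), and transmits in slot
    x*p + f(x) of each subframe x. Two distinct linear polynomials agree
    in at most one x, so any two links collide in at most one subframe,
    leaving each link at least p - 1 interference-free slots per frame."""
    a, b = divmod(link_id, p)
    return [x * p + (a * x + b) % p for x in range(p)]

p = 5                                  # prime parameter chosen per network size
s1 = transmission_slots(7, p)          # polynomial f(x) = 1*x + 2 (mod 5)
s2 = transmission_slots(12, p)         # polynomial f(x) = 2*x + 2 (mod 5)
overlap = set(s1) & set(s2)            # at most one shared slot
```

Guaranteeing at most one collision per frame pair is what makes the schedule independent of the (unknown, changing) topology, at the cost of transmitting p times per frame.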
2010, 32(11): 2599-2605.
doi: 10.3724/SP.J.1146.2009.01504
Abstract:
Strip-based wireless sensor networks are a typical application of Wireless Sensor Networks (WSN). Existing network lifetime models focus primarily on specific distributions and working models, and cannot be applied directly to lifetime estimation in strip-based WSNs. This paper proposes a distributed vertex cut-set computing algorithm to forecast the lifetime of a strip-based WSN. In this algorithm, each node computes only a near-minimum vertex cut-set and its local residual lifetime, using the position and residual lifetime information of neighboring nodes, and then exchanges signaling messages carrying these local estimates to compute the residual lifetime of the whole network. Simulation results show that, compared with a previous gradient-based lifetime estimation algorithm, the proposed algorithm estimates the network lifetime in real time and more accurately.
2010, 32(11): 2606-2611.
doi: 10.3724/SP.J.1146.2009.01595
Abstract:
An algorithm based on an improved particle swarm optimization is proposed for task management in multihop clustered wireless sensor networks whose applications require collaborative processing executed in parallel on sensor nodes. A duplication-based mutation operation is introduced, and a weighted-entropy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to evaluate and objectively select among the algorithm's results. The components of the algorithm are presented explicitly, and simulation results validate that it searches effectively and obtains multi-objective optimized task allocation and scheduling solutions that outperform those of other algorithms in the literature.
2010, 32(11): 2612-2617.
doi: 10.3724/SP.J.1146.2009.01530
Abstract:
Vehicle positioning has been a hot research topic for many years. In this paper, an RFID-based Intelligent Transportation System (ITS) is proposed, including its structure and communication protocol, together with a novel vehicle positioning method based on Time Difference Of Arrival (TDOA). The method is a two-step robust positioning method: a rough position of the vehicle is first estimated by robust estimation; the three base stations closest to the vehicle are then found, and the two TDOAs among these three base stations are used to calculate the vehicle's final position. Simulation results show that the proposed algorithm outperforms the classic Chan algorithm and a two-step algorithm based on averaging, and that its performance meets the requirements of positioning services in ITS.
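The second step can be illustrated with a deliberately simple sketch: a brute-force grid search over candidate positions that best match the two measured range differences. This stands in for the actual estimator (the paper compares against the Chan algorithm); the function name, grid parameters, and station layout are all illustrative assumptions.

```python
import numpy as np

def tdoa_locate(stations, tdoa, grid_step=0.5, extent=100.0):
    """Coarse TDOA solver sketch: given the three nearest base stations
    and the two TDOA measurements (arrival-time differences relative to
    station 0, expressed in distance units), search a grid for the point
    whose range differences best match the measurements. A practical
    implementation would refine this with a closed-form estimator."""
    xs = np.arange(0.0, extent, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            p = np.array([x, y])
            d = np.linalg.norm(stations - p, axis=1)     # ranges to all stations
            err = np.sum((d[1:] - d[0] - tdoa) ** 2)     # hyperbolic residual
            if err < best_err:
                best, best_err = p, err
    return best

# synthetic scenario: three base stations and a vehicle at (30, 40)
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
truth = np.array([30.0, 40.0])
d = np.linalg.norm(stations - truth, axis=1)
est = tdoa_locate(stations, d[1:] - d[0])
```

Each TDOA constrains the vehicle to one branch of a hyperbola with two stations as foci; intersecting the two hyperbolas (here by exhaustive search) yields the position estimate.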
2010, 32(11): 2618-2623.
doi: 10.3724/SP.J.1146.2009.01622
Abstract:
A new and fast level set method is proposed for segmenting high-resolution SAR images into statistically homogeneous areas. The approach is based on the G0 statistical model, which describes high-resolution SAR images as well as low-resolution ones, and the segmentation is obtained by minimizing an energy functional with level set methods. Because the energy functional is designed to have a global stationary minimum, a global and fast segmentation technique is obtained, which enhances the practicality of the approach. The performance of the algorithm is verified with experiments on both synthetic and real SAR images.
2010, 32(11): 2624-2629.
doi: 10.3724/SP.J.1146.2009.01575
Abstract:
The Ultra WideBand Short-Pulse (UWB-SP) radar is a promising technique for target detection in counter-terrorism, disaster rescue, urban warfare, and other fields, owing to its high range resolution, strong penetrating power, and good resolving ability. Through-wall target detection is an important application of UWB-SP radar. Since the SEABED algorithm (a new target localization and identification algorithm) for UWB-SP radar is not applicable to this scenario, this paper proposes a new algorithm that first offsets the influence of the wall and then images the targets behind it using the SEABED algorithm. Simulation results show that the proposed algorithm visibly removes the effect of the wall, and that the imaging results estimate the shape of the targets well.
2010, 32(11): 2630-2635.
doi: 10.3724/SP.J.1146.2009.01348
Abstract:
Micro-motion offers a new way to resolve and recognize targets. To achieve micro-motion target resolution in a high-noise environment, this paper proposes a novel method based on the B-Distribution (BD) and the Viterbi algorithm. An effective time-frequency BD, in conjunction with the Viterbi algorithm for instantaneous frequency estimation, is applied to extract the micro-Doppler signatures of multiple targets and thereby achieve resolution in a high-noise environment. Simulation results on synthetic data show that the method can extract the micro-motion parameters of targets and resolve multiple targets in a high-noise environment.
2010, 32(11): 2636-2641.
doi: 10.3724/SP.J.1146.2009.01570
Abstract:
A novel moving target detection approach is proposed for dual-channel SAR systems. The approach is based on eigen-decomposition of the sample covariance matrix and examines a statistic of the second eigenvalue and the Along-Track Interferometric (ATI) phase for Ground Moving Target Indication (GMTI). Based on this statistic, a new Constant False Alarm Rate (CFAR) detector is designed. To detect slowly moving targets more accurately, pre-thresholds on the second eigenvalue and the ATI phase are applied before the CFAR detector. Experimental results on measured SAR data demonstrate that this detector has a wider range of detectable velocities and a lower false alarm probability.
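The two test quantities can be sketched per image patch as follows, assuming complex dual-channel pixels. The whole-patch averaging and the `gmti_features` helper name are illustrative assumptions, not the paper's windowing or detector.

```python
import numpy as np

def gmti_features(ch1, ch2):
    """Per-patch sketch of the two quantities: the second eigenvalue of
    the 2x2 sample covariance of the dual-channel data, and the
    Along-Track Interferometric (ATI) phase. Stationary clutter gives a
    near-rank-1 covariance (tiny second eigenvalue) and near-zero ATI
    phase; a mover's along-track Doppler perturbs both."""
    z = np.stack([ch1.ravel(), ch2.ravel()])
    c = z @ z.conj().T / z.shape[1]      # 2x2 sample covariance matrix
    eig = np.linalg.eigvalsh(c)          # real eigenvalues, ascending
    second_eig = eig[0].real
    ati_phase = np.angle(np.mean(ch1 * np.conj(ch2)))
    return second_eig, ati_phase

rng = np.random.default_rng(1)
clutter = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
# identical channels: stationary scene, rank-1 covariance, zero ATI phase
e_stat, p_stat = gmti_features(clutter, clutter)
# a uniform phase rotation of channel 2 mimics a mover's along-track Doppler
e_mov, p_mov = gmti_features(clutter, clutter * np.exp(1j * 0.8))
```

Pixels whose second eigenvalue or ATI phase exceeds the pre-thresholds are then passed to the CFAR detector, which controls the false alarm rate against the clutter statistics.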
2010, 32(11): 2642-2647.
doi: 10.3724/SP.J.1146.2010.00345
Abstract:
This paper investigates practical processing approaches for interferogram generation based on a coarse DEM (Digital Elevation Model). The key point of the proposed method is to make full use of the available coarse DEM in SAR image coregistration and interferometric phase filtering. The method first computes the two-dimensional shift of each pixel in the SAR image from the coarse DEM and the system parameters, thereby achieving image coregistration. For phase filtering, interferograms corresponding to the coarse DEM are used to compensate the terrain variations so that independent and identically distributed (i.i.d.) samples are obtained, which improves filtering performance; the compensated phase is then added back to the filtered results. For high-resolution SAR images, the coregistration error caused by the system parameters and the coarse DEM may not meet the requirements of InSAR processing, so joint-pixel subspace projection is adopted to increase coregistration accuracy and achieve good phase filtering performance. Finally, computer simulation verifies the validity of the proposed algorithm.
2010, 32(11): 2648-2654.
doi: 10.3724/SP.J.1146.2009.01461
Abstract:
The asynchronous bistatic SAR imaging problem with a variable-velocity transceiver is studied in this paper. First, the geometric scene of the variable-velocity transceiver model is constructed, and an approximate model of the bistatic SAR echo is obtained from analysis of the instantaneous transceiver range. A variable-velocity transceiver makes the transmitter and receiver positions asynchronous and causes the SAR data to be sampled non-uniformly in azimuth. This paper first analyzes how the asynchronous bistatic model can be converted into an equivalent monostatic variable-velocity model, and then resolves the non-uniform sampling caused by the variable-velocity motion using the non-uniform FFT. Simulation experiments show that the bistatic SAR images are of good quality, which verifies the effectiveness of the proposed method.
2010, 32(11): 2655-2660.
doi: 10.3724/SP.J.1146.2009.01473
Abstract:
Standard sliding-spotlight imaging algorithms are inefficient because of the additional processing needed to overcome azimuth spectrum aliasing. In this paper, a novel processing approach based on the azimuth frequency de-ramping principle is proposed. Compared with other algorithms it has two main advantages: the efficiency of the processor is improved, and the implementation of azimuth antenna pattern correction becomes much easier. The computational complexity is analyzed and compared with that of other algorithms. Simulation results validate the effectiveness of the presented approach.
2010, 32(11): 2661-2667.
doi: 10.3724/SP.J.1146.2009.01423
Abstract:
In this paper, motion compensation for spotlight-mode SAR data processing with the Polar Format Algorithm (PFA) is studied, and a new approach for motion error estimation and compensation in the azimuth wavenumber domain is proposed. The framework applies range matched filtering and azimuth dechirp processing to obtain the two-dimensional wavenumber data support. First, range compression via the fast Fourier transform and PFA interpolation are performed to transform the polar-coordinate support into a rectangular one. Then the error phase is estimated and compensated in each sub-aperture. This framework can accurately compensate second- and higher-order motion errors and also handle spatially variant errors. Compared with traditional time-domain motion compensation, the proposed method achieves better focusing of synthetic aperture radar imagery. Experiments with real data verify its effectiveness and advantages.
2010, 32(11): 2668-2673.
doi: 10.3724/SP.J.1146.2009.01527
Abstract:
A new CFAR detection algorithm for ship wakes in SAR images is proposed. The algorithm first detects all ships and replaces the gray values of the detected ship pixels with the gray mean value. Then, with the ship barycenter as the center, a square image of a certain size is extracted and segmented into four sub-images, and the Normalized Hough Transform is performed on every sub-image, enhancing the gray contrast of wake to clutter. A probability model in the transform domain of each sub-image is constructed to realize CFAR detection. Finally, the detection results of the sub-images are fused to obtain the final detection, which is considerably improved. The simulation experiments prove the effectiveness of the algorithm.
2010, 32(11): 2674-2679.
doi: 10.3724/SP.J.1146.2009.01600
Abstract:
Motion blur due to camera shake during exposure is a common form of image degradation. Based on variational Bayesian estimation theory and the statistical characteristics of natural image gradients, a blind image deconvolution algorithm is proposed to restore camera-shake blurred images. In addition, a deringing method based on sub-region detection and fuzzy filtering is proposed to reduce the ringing effect that is unavoidable in image deconvolution. The experimental results show that the blind deconvolution algorithm can effectively remove the motion blur caused by camera shake and reduce the ringing effect, while preserving image edges and details well and improving the quality of the restored image.
2010, 32(11): 2680-2685.
doi: 10.3724/SP.J.1146.2009.01543
Abstract:
An object tracking algorithm with global kernel density seeking is proposed to avoid convergence to local probability modes in mean shift tracking. First, a monotonically decreasing sequence of bandwidths is generated according to the object scale. At the first bandwidth, a density maximum is found with mean shift, and each subsequent iteration loop starts from the previous convergence location; finally, the best density mode is obtained at the optimal bandwidth. During this process, the smoothing effect of the large bandwidths prevents the tracker from being trapped in local probability modes, and the precise position of the object is found with the optimal bandwidth, which is close to the object scale. To speed up convergence, an over-relaxed strategy is introduced to enlarge the step size, and under the convergence rule the correlation coefficient is used to adapt the learning rate. The experimental results show that the proposed tracker with global kernel density seeking is robust in high-speed object tracking and performs well under occlusion, and that the adaptive over-relaxed strategy effectively reduces the number of convergence iterations by enlarging the step size.
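The bandwidth-annealing idea can be sketched in 1-D (the data, bandwidth schedule, and starting point below are invented for illustration): mean shift is run with a Gaussian kernel, and each run with a smaller bandwidth restarts from the previous convergence point, so the large early bandwidths smooth away the small local mode.

```python
import numpy as np

def mean_shift_1d(x, samples, h, iters=50):
    """Gaussian-kernel mean shift in 1-D from start x with bandwidth h."""
    for _ in range(iters):
        w = np.exp(-0.5 * ((samples - x) / h) ** 2)
        x_new = np.sum(w * samples) / np.sum(w)
        if abs(x_new - x) < 1e-8:
            break
        x = x_new
    return x

# A small mode near 0 and a heavier (global) mode near 5
samples = np.array([-0.1, 0.0, 0.1,
                    4.9, 4.95, 5.0, 5.0, 5.05, 5.1])

# Monotonically decreasing bandwidths; each stage restarts from the
# previous convergence location, as in the paper's global-seeking scheme
x = 2.0
for h in [4.0, 2.0, 1.0, 0.5, 0.25]:
    x = mean_shift_1d(x, samples, h)
print(round(x, 2))
```

Starting between the two modes, plain mean shift at a small bandwidth could stall at the light mode near 0; the annealed sequence instead lands on the heavy mode near 5.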
2010, 32(11): 2686-2690.
doi: 10.3724/SP.J.1146.2009.01549
Abstract:
In multi-target tracking, to address the data association problem caused by indistinguishable measurements in clutter and the curse of dimensionality caused by the enlarged joint state space of multiple targets, a novel algorithm based on the Gaussian Particle Joint Probabilistic Data Association Filter (GP-JPDAF) is proposed, which introduces the Gaussian Particle Filtering (GPF) concept into the JPDA framework. For each target, the marginal association probabilities are approximated with Gaussian particles rather than with Gaussians as in the JPDAF. Moreover, GPF is utilized to approximate the prediction and update distributions. Finally, the proposed method is applied to passive multi-sensor multi-target tracking. Simulation results show that it achieves better tracking performance than the Monte Carlo JPDAF (MC-JPDAF).
2010, 32(11): 2691-2694.
doi: 10.3724/SP.J.1146.2009.01580
Abstract:
The Probability Hypothesis Density (PHD) filter has emerged as a powerful tool for multi-target tracking. In its Sequential Monte Carlo (SMC) implementation, the filter's output is a particle approximation of the PHD, so a dedicated algorithm is needed to extract the target states from the particles. In this paper, an improved extraction algorithm is proposed. First, the particles are clustered by position with the k-means algorithm; then, within each cluster, the position with the maximum particle weight is searched for and taken as a target position estimate. Because the information in both the particle weights and the spatial distribution is utilized, the new algorithm provides more accurate estimates of the target states, as confirmed by simulation results.
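The extraction step can be illustrated with a minimal 1-D sketch (particle positions, weights, and the number of targets below are invented): cluster the particle positions with plain k-means, then within each cluster take the position carrying the maximum particle weight as that target's state estimate.

```python
import numpy as np

def kmeans_1d(points, centers, iters=20):
    """Plain Lloyd's k-means in 1-D with fixed initial centers."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean()
    return labels

# Particles around two assumed targets near 1.0 and 6.0
positions = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 5.8, 5.9, 6.0, 6.1, 6.2])
weights = np.array([0.05, 0.08, 0.20, 0.07, 0.05,
                    0.04, 0.09, 0.25, 0.10, 0.07])

labels = kmeans_1d(positions, centers=[0.0, 5.0])
estimates = []
for j in range(2):
    idx = np.where(labels == j)[0]
    best = idx[np.argmax(weights[idx])]  # max-weight particle in the cluster
    estimates.append(positions[best])
print(sorted(estimates))
```

Using the per-cluster maximum-weight position rather than the cluster mean is the point of the paper's refinement: the weight information, not just the spatial distribution, shapes the estimate.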
2010, 32(11): 2695-2700.
doi: 10.3724/SP.J.1146.2009.01493
Abstract:
The lack of semantic information is a critical problem for the context tree kernel in text representation. A context tree kernel method based on latent topics is therefore proposed. First, words are mapped into a latent topic space through Latent Dirichlet Allocation (LDA). Then, context tree models are built over the latent topics. Finally, a context tree kernel for text is defined through the mutual information between the models. In this approach, document generative models are defined over semantic classes instead of words, which alleviates the data sparsity problem. Clustering experiments on text data sets show that the proposed context tree kernel is a better measure of topic similarity between documents and greatly improves text clustering performance.
2010, 32(11): 2701-2706.
doi: 10.3724/SP.J.1146.2009.01489
Abstract:
To overcome the bottleneck of the current state-space representation and match rule in the negative selection algorithm, a negative selection algorithm based on a matrix representation is presented, which extends the state space from vectors to matrices. By introducing matrices to represent the self and nonself spaces, an elemental match distance is defined and a bi-directional match rule is established. Moreover, a detector generation algorithm based on coverage-rate testing is developed according to the characteristics of the state space. The experimental results show that the proposed algorithm achieves better performance than the real-valued negative selection algorithm and effectively decouples the detection rate from the false alarm rate. Furthermore, it is verified to generate more effective detectors.
2010, 32(11): 2707-2712.
doi: 10.3724/SP.J.1146.2009.01589
Abstract:
High-dimensional clustering algorithms based on equal-width or random-width density grids cannot guarantee high-quality clustering results on complicated data sets. In this paper, a High-dimensional Clustering algorithm based on the Local Significant Unit (HC_LSU) is proposed to deal with this problem, built on kernel density estimation and spatial statistical theory. First, a structure named the Local Significant Unit (LSU) is introduced through local kernel density estimation and a spatial statistical test; second, a greedy algorithm, the Greedy Algorithm for LSU (GA_LSU), is proposed to quickly find the local significant units in the data set; finally, the single-linkage algorithm is run on the local significant units that share the same attribute subset to generate the clustering results. Experimental results on 4 synthetic and 6 real-world data sets show that HC_LSU can effectively produce high-quality clustering results on highly complicated data sets.
2010, 32(11): 2713-2717.
doi: 10.3724/SP.J.1146.2009.01623
Abstract:
Compressive sensing is a novel signal sampling theory for signals that are sparse or compressible: such signals can be reconstructed accurately from a small number of measurements. In this paper, a new Regularized Adaptive Matching Pursuit (RAMP) algorithm is presented that combines regularization with an adaptive strategy. The proposed algorithm controls the reconstruction accuracy through the adaptive process, which chooses the candidate set automatically, and the regularization process, which selects the atoms for the final support set, even though the sparsity of the original signal is unknown. The experimental results show that the proposed algorithm achieves better reconstruction performance and is superior to other algorithms both visually and objectively.
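The flavor of the algorithm can be sketched as follows. This is a simplified greedy pursuit in the spirit of RAMP, not the paper's exact procedure: sparsity is not assumed known, atoms whose correlations are comparable to the strongest candidate are admitted together (the regularization step), and the loop halts adaptively when the residual becomes negligible. The factor-of-2 grouping rule and the toy orthonormal dictionary are my assumptions.

```python
import numpy as np

def ramp_like(A, y, tol=1e-6, max_iter=20):
    """Greedy sparse recovery sketch: regularized candidate grouping plus
    adaptive stopping on the residual, with unknown sparsity."""
    n = A.shape[1]
    support = []
    r = y.copy()
    x_hat = np.zeros(n)
    for _ in range(max_iter):
        corr = np.abs(A.T @ r)
        best = corr.max()
        if best < tol:
            break
        # Regularization: keep atoms comparable to the strongest candidate
        candidates = np.where(corr >= best / 2.0)[0]
        support = sorted(set(support) | set(candidates.tolist()))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = sol
        r = y - A @ x_hat
        if np.linalg.norm(r) < tol:  # adaptive halting condition
            break
    return x_hat

# Toy problem with an orthonormal dictionary so recovery is exact
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x = np.zeros(16)
x[[2, 7, 11]] = [1.0, 0.8, 0.6]
y = A @ x

x_hat = ramp_like(A, y)
print(np.allclose(x_hat, x))
```

With an orthonormal dictionary the correlations equal the true coefficients, so the regularized grouping picks the exact support in one pass; on generic measurement matrices the iterations matter and the stopping rule replaces prior knowledge of the sparsity.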
2010, 32(11): 2718-2723.
doi: 10.3724/SP.J.1146.2009.01438
Abstract:
The LFM signal is widely used in radar applications, and obtaining a high-accuracy estimate of the Doppler frequency rate from observed LFM signals is a key technology. In this paper, a Doppler frequency rate estimation algorithm based on the Fractional Fourier Transform (FrFT) is proposed. The signal energy is concentrated in the fractional Fourier domain, which enhances the signal-to-noise ratio and eliminates the influence of the frequency rate of the LFM signal. The coherent phase information is then utilized to obtain the Doppler frequency rate estimate. Theoretical analysis indicates that the estimation variance approaches the theoretical lower bound, and simulation results validate the presented algorithm.
2010, 32(11): 2724-2729.
doi: 10.3724/SP.J.1146.2009.01430
Abstract:
To effectively improve speech emotion recognition performance, nonlinear dimensionality reduction is needed for speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. Supervised Locally Linear Embedding (SLLE) is a typical supervised manifold learning algorithm for nonlinear dimensionality reduction. Considering the drawbacks of SLLE, this paper proposes an improved version that enhances the discriminating power of the low-dimensional embedded data and possesses better generalization ability. The proposed algorithm is used to perform nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody and voice quality features, and to extract low-dimensional discriminating features for recognizing four emotions: anger, joy, sadness and neutral. Experimental results on a natural emotional speech database demonstrate that the proposed algorithm obtains the highest accuracy of 90.78% with no more than 9 embedded features, a 15.65% improvement over SLLE. The proposed algorithm can therefore significantly improve speech emotion recognition when applied to dimensionality reduction of speech emotional feature data.
2010, 32(11): 2730-2734.
doi: 10.3724/SP.J.1146.2009.01637
Abstract:
The Physical Optics (PO) algorithm is utilized to compute the transient scattering response and wideband Radar Cross Section (RCS) of electrically large targets modeled with NonUniform Rational B-Spline (NURBS) surfaces. The formula for the time-domain scattered field is obtained with an inverse Fourier transform, which contains a convolution product. The time-domain PO integral is also derived with the inverse Fourier transform. In order to avoid the utilization of numerical integrations, the NURBS surfaces are discretized into small triangular facets, and Radon transform is introduced to obtain closed-form expressions for the time-domain and frequency-domain PO integrals. The improved z-buffer technique is used in the judgement and elimination of shadows for the sake of acceleration. The wideband RCS is obtained with the Fast Fourier Transform (FFT). The RCS of several targets is calculated under Gaussian-pulse plane wave incidence. Results show that the proposed method has a high accuracy and is faster than the traditional Time-Domain Physical Optics (TDPO).
2010, 32(11): 2735-2739.
doi: 10.3724/SP.J.1146.2009.01611
Abstract:
An electrochemical bionic sensor for cholesterol detection is developed using a molecularly imprinted self-assembled film deposited on a screen-printed gold electrode. The surface topographies of a planar bare gold electrode and a thick-film bare gold electrode are compared with a Scanning Electron Microscope (SEM), and the electrochemical characteristics of the electrode during modification are studied with the cyclic voltammetry technique. The results show that the thick-film electrode produced by screen-printing technology is suited to modification with the molecularly imprinted self-assembled film and exhibits obvious amplification at the nano level. The response of the sensor to cholesterol concentration is measured by chronoamperometry. Cholesterol concentrations between 0 and 700 nM are detected with this sensor; the linear range is from 50 nM to 700 nM, with a sensitivity of -4.94 A/[lg(nM)] and a linear correlation coefficient of 0.994. The cholesterol sensor also has high accuracy, reaching 99.56%.
2010, 32(11): 2740-2745.
doi: 10.3724/SP.J.1146.2009.01495
Abstract:
The complex filter is one of the major blocks in a low-IF receiver for rejecting the image signal. Considering the low-power requirement of the IEEE 802.15.4 standard, this paper presents a pseudo OTA with a flexibly configured common-mode feedback block, which is suitable for cascaded systems because of its inner common-mode feed-forward and common-mode detection. A third-order Butterworth Gm-C complex filter is designed based on this OTA; its center frequency is located at 1 MHz with a 1.3 MHz bandwidth, the in-band group delay ripple is less than 0.16 s, and its Image Rejection Ratio (IRR) satisfies the requirements of the IEEE 802.15.4 standard. A frequency tuning method is also presented which is simple and power-efficient compared with the traditional PLL tuning method.
2010, 32(11): 2746-2750.
doi: 10.3724/SP.J.1146.2009.01512
Abstract:
Based on the polarization differences between a jamming signal and the target echo, a polarization filter can effectively combat active suppression jamming. However, traditional polarization radars consist of two transmitting and two receiving channels, and face limits such as system complexity and high implementation cost. Therefore, a novel polarized radar model, the dual polarized antenna radar, is proposed, which can realize polarization measurement using only a single transmitting channel and a single receiving channel. Based on the spatial polarization characteristic of the dual polarized antenna, received-signal models of active suppression jamming and target echo are given, and a spatial virtual polarization filtering algorithm to counter active suppression jamming is proposed. Simulations show that, for narrowband noise frequency-modulation jamming, the Signal-to-Interference Ratio (SIR) improvement factor can be greater than 20 dB.
2010, 32(11): 2751-2754.
doi: 10.3724/SP.J.1146.2009.01534
Abstract:
A new algorithm named Uncorrelated Discriminant Neighborhood Preserving Projections (UDNPP) is proposed based on manifold learning. UDNPP combines the advantages of Linear Discriminant Analysis (LDA) and Neighborhood Preserving Projections (NPP): it attempts to preserve the geometry of neighborhoods while maximizing the between-class distance. Moreover, the extracted features are made statistically uncorrelated by introducing an uncorrelatedness constraint, so the interference from redundant information is reduced. Experimental results on millimeter-wave radar target recognition show that UDNPP gives competitive results in comparison with current popular algorithms.
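A projection of this kind can be sketched as follows: LLE-style weights preserve each sample's neighborhood, a between-class scatter term is maximized, and whitening by the total scatter makes the extracted features uncorrelated. The exact objective, the trade-off weight beta, and the regularization constants are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def udnpp(X, y, k_neighbors=3, dim=2, beta=0.1):
    """Sketch of an uncorrelated discriminant neighborhood-preserving
    projection: maximize between-class scatter minus a neighborhood
    reconstruction penalty, under the constraint P^T St P = I
    (which decorrelates the projected features)."""
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    St = Xc.T @ Xc / n + 1e-6 * np.eye(d)      # total scatter (regularized)
    Sb = np.zeros((d, d))                       # between-class scatter
    for c in np.unique(y):
        diff = (X[y == c].mean(axis=0) - mu)[:, None]
        Sb += (y == c).sum() / n * (diff @ diff.T)
    # LLE-style reconstruction weights over k nearest neighbors
    W = np.zeros((n, n))
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k_neighbors + 1]
        G = (X[nbrs] - X[i]) @ (X[nbrs] - X[i]).T
        w = np.linalg.solve(G + 1e-6 * np.eye(k_neighbors),
                            np.ones(k_neighbors))
        W[i, nbrs] = w / w.sum()
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    # Whiten by St, then keep the top eigenvectors of the criterion
    evals, evecs = np.linalg.eigh(St)
    St_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    A = St_inv_sqrt @ (Sb - beta * Xc.T @ M @ Xc / n) @ St_inv_sqrt
    vals, vecs = np.linalg.eigh(A)
    return St_inv_sqrt @ vecs[:, ::-1][:, :dim]  # St-orthonormal columns
```

Because the returned columns satisfy P^T St P = I, the components of the projected data are uncorrelated, which is the role of the uncorrelated constraint described above.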
2010, 32(11): 2755-2759.
doi: 10.3724/SP.J.1146.2009.01439
Abstract:
A new Hough-transform-based detection scheme is proposed for a spatially distributed moving target that exhibits range walk and strong scattering range cells. The proposed scheme, the Hough Detector (HD), consists of two steps. In the first step, a primary threshold is set, any range-time cell of the High Resolution Range Profiles (HRRPs) whose value exceeds this threshold is mapped into the Hough parameter space, and every Hough cell is then refined using its own Cumulative Distribution Function (CDF). In the second step, the sums of the several strongest cells at every angle in the Hough space are calculated and mapped by their own CDFs; the maximum of the mapped sums is used as the test statistic for target detection. Experimental results on measured data of three planes show that the Hough detector achieves at least a 1.3 dB improvement over the existing Scatter-Density-Dependent Generalized Likelihood Ratio Test (SDD-GLRT).
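The first step above, accumulating thresholded range-time cells into the Hough parameter space so that a range-walking target integrates along a line, can be sketched as follows. The CDF refinement is omitted, and the threshold and toy scene are illustrative assumptions.

```python
import numpy as np

def hough_detect(hrrp, primary_thr, num_theta=64):
    """Sketch of the Hough accumulation step: cells above the primary
    threshold vote (with their amplitude) for every line (theta, rho)
    passing through them; the strongest accumulator cell serves as a
    simple test statistic."""
    n_time, n_range = hrrp.shape
    thetas = np.linspace(0, np.pi, num_theta, endpoint=False)
    rho_max = np.hypot(n_time, n_range)
    acc = np.zeros((num_theta, 2 * int(np.ceil(rho_max)) + 1))
    ts, rs = np.nonzero(hrrp > primary_thr)     # exceedance cells
    for t, r in zip(ts, rs):
        rho = t * np.cos(thetas) + r * np.sin(thetas)
        idx = np.round(rho + rho_max).astype(int)
        acc[np.arange(num_theta), idx] += hrrp[t, r]  # energy integration
    return acc.max()

# Toy scene: a target walking one range cell per pulse, in noise
rng = np.random.default_rng(2)
noise = np.abs(rng.standard_normal((32, 64)))
scene = noise.copy()
for t in range(32):
    scene[t, 10 + t] += 5.0   # linearly range-walking scatterer
```

Even though the target energy is spread across 32 different range cells, it collapses into a single Hough cell, which is why the detector tolerates range walk.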
2010, 32(11): 2760-2763.
doi: 10.3724/SP.J.1146.2009.01511
Abstract:
Two parameters are normally employed to evaluate the thin-layer recognition ability of Ground Penetrating Radar (GPR): the least recognizable thickness and the reflectance recognition precision. In this paper, a spectrum inversion method is used to improve the recognition ability of GPR. The spectral transmission function of the multilayer media is first derived, giving a forward model of the reflected-wave spectrum; the two parameters are then inverted using the damped least-squares algorithm. Experiments demonstrate that the method can effectively identify a layer whose thickness is less than one-eighth of a wavelength.
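The damped least-squares inversion step named above can be sketched generically as the Levenberg-Marquardt-style update x ← x + (JᵀJ + λI)⁻¹Jᵀ(y − f(x)). The toy forward model below (an exponential fitted from noise-free data) merely stands in for the reflected-wave spectrum model; it and the damping constant are assumptions for illustration.

```python
import numpy as np

def damped_least_squares(f, jac, x0, y, lam=1e-2, iters=100):
    """Damped least-squares iteration:
    x <- x + (J^T J + lam * I)^{-1} J^T (y - f(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - f(x)          # residual between data and forward model
        J = jac(x)            # Jacobian of the forward model at x
        x = x + np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)
    return x

# Toy stand-in for the reflected-wave spectrum: amplitude and decay
t = np.linspace(0.0, 1.0, 20)
forward = lambda x: x[0] * np.exp(-x[1] * t)
jacobian = lambda x: np.column_stack([np.exp(-x[1] * t),
                                      -x[0] * t * np.exp(-x[1] * t)])
y_obs = forward([2.0, 3.0])                      # "measured" spectrum
x_hat = damped_least_squares(forward, jacobian, [1.5, 2.5], y_obs)
```

The damping term λI keeps the normal equations well-conditioned when JᵀJ is nearly singular, which is what makes this inversion robust for thin layers.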
2010, 32(11): 2764-2767.
doi: 10.3724/SP.J.1146.2009.01183
Abstract:
The design of an electronic cabinet is constrained by structural strength, ventilation, and Shielding Effectiveness (SE). In practice, three fields affect electronic equipment: the electromagnetic, temperature, and elastic-deformation fields, all of which are functions of the enclosure structure parameters. Owing to the coupling among the three fields, a multi-field-coupled model called STEM is established in this paper, and an optimization model based on STEM is proposed. Structure optimization is carried out on a practical enclosure with satisfactory results.
2010, 32(11): 2768-2771.
doi: 10.3724/SP.J.1146.2010.00428
Abstract:
As system frequencies increase, the characteristics of microstrip substrates become a non-negligible factor affecting the crosstalk between transmission lines. This paper analyzes the crosstalk between two parallel microstrip lines based on the transmission-line equations and S-parameters, and simulates the crosstalk for different substrate permittivities and thicknesses with a 3D full-wave electromagnetic tool. The electric field distributions of microstrips on different substrates are obtained, together with the near-end and far-end crosstalk as functions of frequency, substrate permittivity, and thickness. As frequency increases, the far-end crosstalk becomes greater than the near-end crosstalk; moreover, with increasing substrate permittivity and thickness, the microstrip crosstalk exhibits a sinusoidal upward trend.
2010, 32(11): 2772-2775.
doi: 10.3724/SP.J.1146.2009.01418
Abstract:
Multi-standard convergence for high-speed ultra-wideband wireless communication drives the evolution of future RF devices. In this paper a CMOS wideband Low Noise Amplifier (LNA) with a novel gain-control method is presented. In the first stage, shunt-resistor feedback is used for wideband input matching, and a noise-cancelling circuit is adopted for low noise performance. A novel 6-bit digitally programmable gain-control circuit is adopted in the second stage for variable gain. Fabricated in the SMIC 0.13 μm RF CMOS process, the die area is 0.76 mm². Measured results show that the LNA operates in the 1.1-1.8 GHz band with maximum and minimum gains of 21.8 dB and 8.2 dB over 7 gain modes in all, a minimum noise figure of 2.7 dB, and a typical IIP3 of -7 dBm.
2010, 32(11): 2776-2780.
doi: 10.3724/SP.J.1146.2009.01542
Abstract:
This paper proposes a hybrid spectrum sharing model that combines Overlay with Underlay. Based on the proposed model, and considering both perfect and imperfect spectrum sensing, the channel activity is modeled as a discrete-state Markov process. Under the criterion of maximizing the average (ergodic) system capacity, the power allocation of the hybrid scheme is analyzed, and the optimal power allocation as well as the maximum achievable capacity of the system is derived. Analysis and simulation results indicate that hybrid spectrum sharing achieves higher capacity than a pure Overlay or Underlay system.
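The intuition behind the hybrid scheme can be sketched with a two-state Markov model of channel activity: transmit at full power on idle slots (Overlay) and at an interference-limited power on busy slots (Underlay). The transition probabilities and power levels below are illustrative assumptions, and perfect sensing is assumed.

```python
import numpy as np

def avg_capacity(p_idle, p_full, p_under, snr_gain=1.0):
    """Average (ergodic) capacity sketch of a hybrid overlay/underlay
    scheme: full power p_full on idle slots, constrained power
    p_under on busy slots. AWGN with unit noise, Shannon formula."""
    c_idle = np.log2(1 + snr_gain * p_full)
    c_busy = np.log2(1 + snr_gain * p_under)
    return p_idle * c_idle + (1 - p_idle) * c_busy

# Channel activity as a 2-state Markov chain: state 0 = idle, 1 = busy
P = np.array([[0.9, 0.1],    # idle -> idle / busy
              [0.3, 0.7]])   # busy -> idle / busy
# Stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
p_idle = pi[0]

c_hybrid = avg_capacity(p_idle, p_full=10.0, p_under=1.0)
c_overlay = avg_capacity(p_idle, p_full=10.0, p_under=0.0)   # silent when busy
c_underlay = avg_capacity(p_idle, p_full=1.0, p_under=1.0)   # always constrained
```

Because the hybrid scheme never wastes busy slots yet still exploits idle slots at full power, its average capacity dominates both pure schemes, matching the conclusion above.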
2010, 32(11): 2781-2784.
doi: 10.3724/SP.J.1146.2009.01652
Abstract:
The Normalized Min-Sum (NMS) algorithm can be implemented with low complexity and is widely used in LDPC decoders, but there is a significant performance gap between the Belief Propagation (BP) algorithm and the NMS algorithm for low-rate LDPC codes, owing to the inaccurate approximation at check nodes of low weight. In this paper, an improved NMS algorithm is proposed that combines an Oscillation (OSC) correction of the bit-node update with a Multiple-Factor (MF) modification of the check-node update. Although the row weights of low-rate protograph LDPC codes may vary considerably, the approximation error in the check-node update can be effectively reduced by the MF modification. Moreover, the OSC correction reduces positive feedback and further improves the decoding performance of low-rate protograph LDPC codes, whose decoding convergence is slow. Simulation results show that the OSC-MF-NMS algorithm obtains a noticeable performance gain when decoding low-rate protograph LDPC codes. The complexity of the OSC and MF processing is quite low, so the proposed algorithm is a good trade-off between decoding complexity and error performance.
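The standard NMS check-node update that the MF modification refines (by choosing the normalization factor per row weight) and an oscillation damping of the kind described can be sketched as follows. The factor values and the damping rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def nms_check_update(msgs, alpha=0.8):
    """Normalized min-sum check-node update: each outgoing message
    takes the sign product and the minimum magnitude over the OTHER
    incoming edges, scaled by the normalization factor alpha
    (the MF idea varies alpha with the check-node row weight)."""
    msgs = np.asarray(msgs, dtype=float)
    signs, mags = np.sign(msgs), np.abs(msgs)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(np.arange(len(msgs)), i)
        out[i] = alpha * np.prod(signs[others]) * mags[others].min()
    return out

def osc_correct(llr_now, llr_prev):
    """Oscillation-correction sketch: where a bit-node LLR flips sign
    between iterations, replace it with the sum of the current and
    previous values, damping the positive feedback."""
    flip = np.sign(llr_now) != np.sign(llr_prev)
    return np.where(flip, llr_now + llr_prev, llr_now)
```

For the toy messages (2, -3, 4) with alpha = 0.5, the update returns (-1.5, 1.0, -1.0): each output excludes its own edge, which is what makes the update extrinsic.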
2010, 32(11): 2785-2789.
doi: 10.3724/SP.J.1146.2009.01519
Abstract:
In multi-user selection for Multiple Input Multiple Output (MIMO) broadcast systems with limited feedback, the errors of existing methods in estimating the users' SINR are large, which constrains the system capacity. In this paper, by combining derived upper and lower bounds to estimate each user's received signal power, a novel SINR estimation method with smaller error is obtained, and a corresponding new multi-user selection algorithm is proposed. Simulation results show that the proposed algorithm achieves better performance and lower complexity at both low and high SNR.
2010, 32(11): 2790-2794.
doi: 10.3724/SP.J.1146.2010.00388
Abstract:
This paper re-evaluates the security of Zodiac against Square attacks. 8-round Square distinguishers of Zodiac are known. In this paper, four equivalent structures of Zodiac are given, based on which two new 9-round distinguishers are proposed. Using the 9-round Square distinguishers, Square attacks are then applied to 12/13/14/15/16-round Zodiac with time complexities of 2^37.3, 2^62.9, 2^96.1, 2^137.1 and 2^189.5, and data complexities of 2^10.3, 2^11, 2^11.6, 2^12.1 and 2^12.6, respectively. These attacks show that the full 16-round Zodiac-192 is not immune to the Square attack.