2014 Vol. 36, No. 11
2014, 36(11): 2541-2548.
doi: 10.3724/SP.J.1146.2014.00255
Abstract:
This paper proposes an image interpolation method based on bidirectional diffusion of the structure component, which effectively reduces edge diffusion during image magnification and produces sharp edges. In this method, edges are enhanced by an improved coupled bidirectional diffusion filter applied after contour-stencil interpolation. To treat edge contours more precisely, the structure component of the initially interpolated image is separated from it via Morphological Component Analysis (MCA) and then filtered. Furthermore, the coupled bidirectional filter is improved to adaptively adjust the degree of edge diffusion according to the edge gradient and to make pixel values change more gently along the gradient direction. Experimental results show that the proposed method outperforms the compared algorithms, including traditional interpolation, related edge-adaptive interpolation algorithms, and several widely used commercial software packages, in both objective metrics and visual quality of the interpolated image. The method effectively enhances image sharpness and yields smooth edges with natural transitions, while avoiding edge aliasing and overshoot artifacts.
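For intuition, the sketch below implements a generic edge-adaptive (Perona-Malik-style) diffusion step, in which the conduction coefficient shrinks as the local gradient grows. It is only a simplified stand-in for the coupled bidirectional diffusion filter described above; the function name and parameters are illustrative.

```python
import numpy as np

def edge_adaptive_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Generic Perona-Malik-style diffusion: smoothing is damped where
    the local gradient (an edge) is strong. Illustrative only."""
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```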
2014, 36(11): 2549-2555.
doi: 10.3724/SP.J.1146.2014.00446
Abstract:
Traditional Super-Resolution (SR) algorithms are very sensitive to image registration errors, model errors, and noise, which limits their practical utility. To enhance robustness, this paper improves the traditional SR algorithm in two respects: image registration and reconstruction. In the registration phase, a probabilistic motion field is introduced so that the SR algorithm does not depend on registration accuracy; in addition, the Heaviside function is adopted to implement the motion weight mapping, which further improves the algorithm's adaptivity. In the reconstruction phase, a regularized estimator based on the Huber norm is used to reconstruct the SR image, which makes minimization of the cost function more stable while remaining robust against large errors. Experimental results show that the proposed algorithm performs well on image-sequence SR reconstruction compared with several existing SR methods.
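As a minimal illustration of a Huber-norm data term of the kind used in the reconstruction phase, the sketch below gives the Huber penalty and its gradient; the degradation operator A and observation y in the usage comment are hypothetical placeholders, not the paper's model.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: quadratic for small residuals, linear for large
    ones, so gross errors are down-weighted."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def huber_grad(r, delta=1.0):
    """Influence function (derivative) of the Huber penalty."""
    return np.clip(r, -delta, delta)

# Hypothetical gradient step on a high-resolution estimate x with
# degradation operator A and low-resolution observation y:
#   x -= step * A.T @ huber_grad(A @ x - y)
```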
2014, 36(11): 2556-2562.
doi: 10.3724/SP.J.1146.2013.01884
Abstract:
Patch-based image processing methods generally use the Euclidean distance as the criterion for patch similarity, which cannot fully reveal patch structure and produces blocking artifacts in the reconstructed image. Combining the patch-redundancy exploitation of the patch-based Wiener filter with the spatial distribution of the blocking artifacts, this paper proposes an improved image reconstruction algorithm based on patch-based locally optimal Wiener filtering. First, the high-frequency part of the image is sampled sparsely, restricting the blocking artifacts to the borders of adjacent blocks. The image is then divided into two parts, the marginal regions and the central regions of blocks, and photometrically and geometrically similar patches are used to determine the filtering parameters, which smooths the blocking regions. Experimental results show that the proposed algorithm efficiently reduces the blocking artifacts produced in reconstruction and performs markedly better on images with rich textures.
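The following sketch shows the core of a locally optimal Wiener shrinkage on a sliding window, assuming a simple local mean/variance model; the paper's patch grouping and block-region handling are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_wiener(img, win=5, noise_var=None):
    """Pixel-wise Wiener shrinkage: each pixel is pulled toward the
    local mean by a gain set by the locally estimated SNR."""
    img = img.astype(np.float64)
    mu = uniform_filter(img, win)
    var = uniform_filter(img ** 2, win) - mu ** 2
    if noise_var is None:                  # crude global noise estimate
        noise_var = np.mean(var)
    signal_var = np.maximum(var - noise_var, 0.0)
    gain = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    return mu + gain * (img - mu)
```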
2014, 36(11): 2563-2570.
doi: 10.3724/SP.J.1146.2013.01762
Abstract:
Extensive experiments demonstrate that locally dense features can greatly improve image classification performance, and the usual approach is spatially uniform sampling for dense local feature extraction. This paper proposes a new dense-feature extraction method, region-based non-uniform spatial sampling, to further improve classification performance. First, the image is over-segmented, and a saliency detection method estimates the importance of each segmented region. To keep the total number of sampled local features fixed, dense features are extracted by dense sampling along the boundaries of the important salient regions and by random sampling inside each region according to its area and importance. Finally, the Bag-of-Words representation model is used for image classification. Experiments on two widely used datasets (UIUC Sports and Caltech-256) show that the proposed sampling strategy is effective.
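A minimal sketch of the budget-allocation idea, assuming region areas and saliency scores are already computed: the fixed sampling budget is split across regions in proportion to area times importance. The function name and numbers are illustrative.

```python
import numpy as np

def allocate_samples(areas, saliency, total):
    """Split a fixed sampling budget across regions in proportion to
    area x saliency (stand-in for importance-driven sampling)."""
    w = np.asarray(areas, float) * np.asarray(saliency, float)
    w /= w.sum()
    counts = np.floor(w * total).astype(int)
    counts[np.argmax(w)] += total - counts.sum()  # remainder to top region
    return counts

print(allocate_samples([100, 400, 500], [0.9, 0.5, 0.1], total=200))
```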
2014, 36(11): 2571-2577.
doi: 10.3724/SP.J.1146.2013.01960
Abstract:
An optimized Spatial-Constraint-based Speeded Up Robust Feature (SURF) matching algorithm, called SC-SURF, is proposed. First, SURF is used for feature point detection and matching. The matched points are then ranked according to the principle that the lower the nearest-neighbor distance ratio, the higher the matching accuracy. A new coordinate system is built from the best matches, and every pair of matched points is encoded with a relative spatial map. Meanwhile, a representative data set is constructed from a minimal number of best matches to simplify RANdom SAmple Consensus (RANSAC), and the target homography matrix is fitted on this representative set. Finally, spatial verification is performed using the weighted relative spatial maps of the matched points and the simplified RANSAC. Experiments demonstrate that SC-SURF achieves good robustness and high speed while maintaining high matching accuracy.
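The ranking principle (lower nearest-neighbor ratio, more reliable match) can be sketched as a Lowe-style ratio test over descriptor arrays; this generic version omits SURF detection, the relative spatial maps, and the simplified RANSAC.

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.7):
    """Nearest-neighbour ratio test on descriptor arrays (rows are
    descriptors). Returns index pairs with the most reliable matches
    (smallest ratio) first. Illustrative only."""
    # pairwise squared Euclidean distances
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :2]            # two nearest neighbours
    d1 = d[np.arange(len(desc1)), nn[:, 0]]
    d2 = d[np.arange(len(desc1)), nn[:, 1]]
    r = np.sqrt(d1 / np.maximum(d2, 1e-12))
    keep = np.where(r < ratio)[0]
    order = keep[np.argsort(r[keep])]            # lowest ratio first
    return list(zip(order, nn[order, 0]))
```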
2014, 36(11): 2578-2585.
doi: 10.3724/SP.J.1146.2014.00271
Abstract:
Vision-based road detection is an active topic in driving-safety research, but detection in complex road scenes remains challenging. An approach is proposed to detect the drivable road region from monocular images in urban environments. The algorithm is based on multi-scale sparse representation, using local texture at a large scale and context at a medium scale. Experiments show that, by distinguishing the texture of pavement from the similar textures of surrounding buildings and obstacles, the method performs well on structured roads as well as in diverse road environments that lack lanes or clear boundaries and contain complex illumination.
2014, 36(11): 2586-2592.
doi: 10.3724/SP.J.1146.2013.01974
Abstract:
In automatic retinal image analysis, locating the optic disk and the macula fovea is a prerequisite for computer-aided diagnosis or automatic screening of diabetic retinopathy. Unlike existing algorithms, which locate the optic disk before the macula fovea, this paper proposes a novel method that detects the macula first, before optic disk location or vessel detection, using a directional local contrast filter and a local vessel-density feature; this effectively improves the detection accuracy for both the macula and the optic disk. 169 color images with diabetic macular edema from the public HEI-MED database are used to evaluate the method. In the experiments, the detection accuracy for both the macula and the optic disk is 98.2%. The proposed method is simple, unsupervised, and of high practical value.
2014, 36(11): 2593-2599.
doi: 10.3724/SP.J.1146.2013.02029
Abstract:
Detection and tracking of multiple targets is a challenging problem when the number of targets is unknown and time-varying, especially at low Signal-to-Noise Ratio (SNR). An improved Track-Before-Detect (TBD) method for multiple spread targets is proposed using a point-spread observation model. The method is built on the framework of the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) filter. It first adopts an adaptive particle generation strategy that yields rough position estimates of the potential targets. The particle set is then partitioned into multiple subsets according to the particles' positions in the 2D image plane, and the updated particle weights are evaluated efficiently by exploiting the convergence property of the particles. Finally, target tracks are constructed from the extracted multi-target states via a dynamic clustering technique. Simulation results show that the presented method not only greatly improves multi-target TBD performance but also significantly reduces the running time of the SMC-PHD-based implementation.
2014, 36(11): 2600-2606.
doi: 10.3724/SP.J.1146.2013.01814
Abstract:
Electronic Speckle Pattern Interferometry (ESPI) has been widely used in recent years for deformation measurement and nondestructive testing of optically rough surfaces. Removing speckle noise is fundamental to accurate extraction of the phase information. Partial Differential Equation (PDE) filters are well known for their good results; in particular, oriented PDEs can control the filtering direction, which suits ESPI images. Taking the filtering degree of individual pixels into account as well, a new oriented PDE filter model is proposed for ESPI fringes, in which a DisContinuity Measure (DCM) of the image is introduced to control the diffusion speed. The effectiveness of the proposed method is tested on computer simulations and on real ESPI fringe patterns. The results show that noise is effectively suppressed and fringe edges are well preserved, even for very dense fringes.
2014, 36(11): 2607-2613.
doi: 10.3724/SP.J.1146.2014.00068
Abstract:
Based on Hoff and Arbib's minimum-jerk control theory, this paper presents a new control model with a cerebellum-like structure that accounts for the temporal coordination of arm transport and hand preshape during reach-and-grasp tasks. It is also shown how this structure can learn the two key functions required by the Hoff-Arbib theory, namely state look-ahead and Time-To-Go (TTG) estimation. Simulations of two-dimensional arm transport and hand preshape demonstrate that the cerebellar control model reproduces key features of human reach-grasp kinematics obtained with the Hoff-Arbib model, and in some respects performs even better. In short, through learning and training, the model produces more accurate and smoother motor trajectories.
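For background, the classic minimum-jerk position profile underlying the Hoff-Arbib theory has a closed form; the sketch below is that textbook profile, not the cerebellar network itself.

```python
import numpy as np

def min_jerk(x0, xf, T, n=100):
    """Textbook minimum-jerk profile: zero velocity and acceleration at
    both endpoints, bell-shaped speed in between."""
    t = np.linspace(0.0, T, n)
    s = t / T
    pos = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return t, pos

t, x = min_jerk(0.0, 0.3, T=1.0)   # a 30 cm reach in 1 s
```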
2014, 36(11): 2614-2620.
doi: 10.3724/SP.J.1146.2013.01909
Abstract:
In Time Difference Of Arrival (TDOA) measurement systems, the performance of time-difference estimation degrades because of the unavoidable phase noise of the local oscillators at spatially separated receivers. The effect of phase noise on the unbiasedness of the classic cross-correlation time-difference estimator is discussed, and the Cramér-Rao Lower Bound (CRLB) of the estimate in a phase-noise environment is derived. Furthermore, the CRLB of time-difference estimation in additive noise with phase noise taken into account is derived, and the CRLB degradation coefficient with respect to the additive-noise-only CRLB is given. The theoretical analysis is validated by simulation results.
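A minimal version of the classic cross-correlation time-difference estimator discussed above (phase noise omitted); the signals and sampling rate are illustrative.

```python
import numpy as np

def tdoa_xcorr(x1, x2, fs):
    """Lag of the cross-correlation peak between two receiver signals,
    converted to seconds."""
    c = np.correlate(x1, x2, mode="full")
    lag = np.argmax(np.abs(c)) - (len(x2) - 1)
    return lag / fs

# toy check: x2 delayed by 5 samples relative to x1
rng = np.random.default_rng(0)
x1 = rng.standard_normal(1000)
x2 = np.roll(x1, 5)
print(tdoa_xcorr(x2, x1, fs=1.0))   # prints 5.0 (samples, since fs = 1)
```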
2014, 36(11): 2621-2627.
doi: 10.3724/SP.J.1146.2013.01578
Abstract:
A new method for estimating the parameters of quadratic frequency-modulated signals is proposed based on a product kernel function. First, the signal is multiplied by its conjugate time-reversed copy and a phase-matching transformation is performed; the chirp rate is then estimated by a one-dimensional search for the maximum of the accumulated signal. Second, the estimated chirp rate is compensated, and a new product kernel function of the dechirped signal transforms it into the two-dimensional time-lag domain, where the phase-matching transformation and an FFT are applied along the time and lag axes, respectively. The rate of change of the chirp rate and the center frequency are then estimated by a maximum search in the transformed domain; the signal phase can then be compensated and the amplitude estimated from the mean magnitude, allowing the signal to be reconstructed. The proposed method avoids an iterative search over all phase parameters and improves computational efficiency. Finally, simulation results confirm the effectiveness of the method.
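A much-simplified stand-in for the chirp-rate search: dechirp with candidate rates and keep the rate whose FFT is most concentrated. The grid, test signal, and sampling rate below are illustrative, not the paper's product-kernel transform.

```python
import numpy as np

def estimate_chirp_rate(sig, fs, rates):
    """Grid search for the chirp rate of an LFM-like signal: dechirp
    with each candidate rate, keep the one with the sharpest FFT peak."""
    t = np.arange(len(sig)) / fs
    peaks = [np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * k * t ** 2))).max()
             for k in rates]
    return rates[int(np.argmax(peaks))]

fs = 1000.0
t = np.arange(1000) / fs
sig = np.exp(1j * np.pi * 200.0 * t ** 2)            # true rate 200 Hz/s
print(estimate_chirp_rate(sig, fs, np.linspace(0, 400, 81)))  # ~200
```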
2014, 36(11): 2628-2632.
doi: 10.3724/SP.J.1146.2013.01817
Abstract:
Traditional polarization-sensitive array signal processing methods, e.g., parameter estimation and beamforming, cannot achieve good performance with coherent signal sources. An improved decorrelation algorithm is proposed based on polarization smoothing. By choosing optimal weight vectors for the signal covariance matrices of the subarrays and ensuring that the smoothed covariance matrix satisfies a Toeplitz constraint, the correlation between signals is eliminated. The derivation of the optimal weight vector is given, and the rank of the smoothed covariance matrix is analyzed. Simulation results verify the effectiveness of the improved algorithm, which is also suitable for nonuniform and coherent noise.
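The weighted-subarray smoothing idea can be sketched as follows; uniform weights reduce to classic smoothing, while the paper derives optimal weights under a Toeplitz constraint (not reproduced here).

```python
import numpy as np

def smoothed_covariance(X, m, weights=None):
    """Weighted forward smoothing over length-m subarrays of a snapshot
    matrix X (sensors x snapshots): averaging subarray covariances
    restores the rank lost to coherent sources."""
    n, snaps = X.shape
    subs = n - m + 1
    w = np.full(subs, 1.0 / subs) if weights is None else np.asarray(weights)
    R = np.zeros((m, m), dtype=complex)
    for i in range(subs):
        Xi = X[i:i + m]
        R += w[i] * (Xi @ Xi.conj().T) / snaps
    return R
```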
2014, 36(11): 2633-2639.
doi: 10.3724/SP.J.1146.2013.01796
Abstract:
In underwater two-dimensional imaging, targets are easily masked by strong coherent interference, and the high sidelobes of an arc-array beam pattern cause false alarms. To solve these problems, an optimized two-dimensional imaging method based on Second-Order Cone Programming (SOCP) is proposed, which both suppresses the strong coherent interference and controls the sidelobes well. The paper analyzes how the time-domain sliding window makes the steering vector differ from the array manifold, so that the weights computed by SOCP are mismatched with the steering vector and fail to satisfy the sidelobe-control and null-design requirements. To resolve this, the theoretical steering vector computed from the time-domain sliding window is used instead of the array manifold to design the SOCP weights, and the improved weights combine the optimization of the beam pattern with arc-array two-dimensional imaging. The validity of the proposed method is demonstrated by computer simulations and a water-pool experiment.
2014, 36(11): 2640-2645.
doi: 10.3724/SP.J.1146.2014.00234
Abstract:
An adaptive belief difference-map propagation algorithm with low complexity is proposed for short- and middle-length regular LDPC codes by modifying the message update rules and transforming the factor graph. To improve decoding performance, a new selective belief-propagation difference-map update rule is introduced, which borrows the difference-map strategy to handle oscillating variable-node messages and adjusts the normalization factor adaptively. Meanwhile, the computational complexity, exponential in the check-node degree, is reduced to linear in the degree by opening the check nodes. Simulation results show that the proposed algorithm has better performance and lower complexity than other iterative decoding algorithms based on modified factor graphs. Compared with LLR-BP, it performs better at high Eb/N0 and has clearly lower computational complexity at low Eb/N0.
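For context, the standard way to make a check-node update linear in the node degree is the normalized min-sum rule sketched below; the paper's difference-map update itself is not reproduced, and alpha is an illustrative normalization factor.

```python
import numpy as np

def check_node_update(msgs, alpha=0.8):
    """Normalized min-sum check-node update: each outgoing magnitude is
    the minimum of the *other* incoming magnitudes (scaled by alpha),
    with the product of the other signs. Assumes nonzero inputs."""
    msgs = np.asarray(msgs, float)
    sgn = np.sign(msgs)
    mag = np.abs(msgs)
    total_sign = np.prod(sgn)
    idx = np.argsort(mag)
    m1, m2 = mag[idx[0]], mag[idx[1]]     # two smallest magnitudes
    out = np.where(np.arange(len(msgs)) == idx[0], m2, m1)
    return alpha * total_sign * sgn * out  # total_sign*sgn removes own sign

print(check_node_update([1.5, -0.3, 2.0, 0.7]))
```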
2014, 36(11): 2646-2651.
doi: 10.3724/SP.J.1146.2013.01624
Abstract:
A new form of the Chirp Scaling (CS) algorithm for Synthetic Aperture Radar, based on Chebyshev polynomials, is proposed. Instead of a Taylor series expansion, Chebyshev polynomials are used to approximate the two-dimensional frequency spectrum of the received signal, which yields a more accurate spectrum. A mathematical model drawn from the optical system is then used to determine the scaling function for range cell migration correction. Moreover, the approximation error is bounded, which improves the focusing of edge points and increases the depth of focus of the scene. Simulation results confirm the effectiveness of the proposed algorithm.
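The advantage of a Chebyshev over a Taylor approximation on an interval can be seen in a few lines with numpy.polynomial.chebyshev; the function f below is an arbitrary stand-in for a spectrum term, not the paper's signal model.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-4 Chebyshev fit vs degree-4 Taylor expansion at 0 over [-1, 1]:
# the Chebyshev fit keeps the error small over the whole interval.
f = lambda x: 1.0 / np.sqrt(1.0 - 0.5 * x ** 2)     # illustrative function
x = np.linspace(-1, 1, 1001)

cheb = C.chebfit(x, f(x), deg=4)                    # least-squares Chebyshev fit
taylor = np.array([1.0, 0.0, 0.25, 0.0, 0.09375])   # 1 + x^2/4 + 3x^4/32

err_cheb = np.max(np.abs(C.chebval(x, cheb) - f(x)))
err_taylor = np.max(np.abs(np.polyval(taylor[::-1], x) - f(x)))
print(err_cheb, err_taylor)                         # Chebyshev error is smaller
```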
2014, 36(11): 2652-2658.
doi: 10.3724/SP.J.1146.2013.01875
Abstract:
Modern high-resolution radar obtains high down-range resolution from wideband waveforms and high cross-range resolution from a large aperture. In this paper, a beam is formed by Frequency Diversity (FD) coherent processing between two array elements, following the beamforming principle of Single Input Multiple Output (SIMO) linear array antennas. A two-dimensional image of the targets is then obtained from High Resolution Range Profiles (HRRP). A sparse array is further used to reduce the bandwidth required by FD coherent processing, and the large aperture is achieved through array design. Finally, simulations verify the effectiveness of the proposed method.
2014, 36(11): 2659-2665.
doi: 10.3724/SP.J.1146.2013.01803
Abstract:
In multichannel Synthetic Aperture Radar-Ground Moving Target Indication (SAR-GMTI) systems, the conventional Post-Doppler Space-Time Adaptive Processing (PD-STAP) technique suffers from three problems. First, the target Doppler spectrum is wrapped because of the Doppler shift caused by the target's cross-track velocity, so ambiguities appear if matched filtering is applied directly. Second, in the case of signal undersampling, a Pulse Repetition Frequency (PRF) induced shift of the moving target appears in the azimuth direction of the focused image, which makes moving-target detection much more complicated and challenging. Third, traditional PD-STAP has high computational complexity. To overcome these problems, a novel space-time adaptive processing method based on deramp processing is proposed. Simulation results validate the effectiveness of the proposed algorithm.
2014, 36(11): 2666-2671.
doi: 10.3724/SP.J.1146.2013.01925
Abstract:
A joint detection and tracking algorithm with the constant false alarm rate property is proposed. Under the precondition that the average false alarm rate within the gate is fixed, the aim is to improve the target detection probability as well as the tracking performance of the system. First, following Bayesian theory, the likelihood-ratio detector is modified to incorporate feedback from the tracker. Then the averaged detection probability and false alarm rate over the gate are derived; substituting them into the association probabilities of the Probabilistic Data Association (PDA) filter yields the procedure of the proposed algorithm. Finally, the feasibility and validity of the algorithm are verified by simulation results.
2014, 36(11): 2672-2677.
doi: 10.3724/SP.J.1146.2013.01963
Abstract:
Frequency Modulated Continuous Wave (FMCW) array-antenna SAR combines FMCW and array-antenna imaging technology and is of broad interest in both civil and military applications. In practical FMCW array SAR, however, the unavoidable sweep-rate nonlinearity error in the frequency modulation and the multichannel amplitude/phase errors severely degrade the quality of the reconstructed image. This paper proposes a signal model that incorporates the sweep-rate nonlinearity error and the multichannel amplitude/phase errors, and presents an error calibration scheme and data-processing flow based on the echo of a single prominent point. Extensive simulations and real-data processing validate the effectiveness of the proposed calibration scheme.
2014, 36(11): 2678-2683.
doi: 10.3724/SP.J.1146.2013.01895
Abstract:
This study investigates the Cramér-Rao Bound (CRB) for estimating the direction and velocity of a moving target with a moving bistatic MIMO radar in a clutter environment. First, the bistatic MIMO radar signal model with moving transmit and receive arrays is constructed, and the general CRB expression is derived for the direction and velocity of moving multiple targets contaminated by clutter echoes. Closed-form CRB expressions are then obtained for the Direction Of Departure (DOD), Direction Of Arrival (DOA), and velocity of a moving target in a clutter-free environment, and the impact of several parameters on the CRB is analyzed from the closed-form expressions. Theoretical analysis and computer simulations show that direction and velocity estimation performs better in a clutter-free environment than in clutter. The velocities of the arrays and the target have no impact on the estimation performance for the target DOD (DOA), but strongly affect the estimation accuracy of the target velocity, which degrades as the array and target velocities increase.
2014, 36(11): 2684-2690.
doi: 10.3724/SP.J.1146.2013.01769
Abstract:
To address fast-moving target detection and imaging in Frequency Modulated Continuous Wave SAR (FMCW-SAR), a new method for dual-channel FMCW-SAR is presented. Considering the inherent characteristics of FMCW, stationary clutter is cancelled and moving targets are detected by combining Doppler frequency-shift compensation with the Displaced Phase Center Antenna (DPCA) technique. To solve the defocusing and Doppler-spectrum splitting of fast-moving targets, a method combining the Keystone transform with azimuth deramp is presented to refocus them. Finally, simulation results demonstrate the effectiveness and feasibility of the proposed method.
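A bare-bones sketch of DPCA clutter cancellation, assuming the channel-alignment shift (in slow-time samples) is already known: after alignment, subtraction removes stationary clutter while moving targets survive. Names and shapes are illustrative.

```python
import numpy as np

def dpca_cancel(ch1, ch2, shift):
    """Align channel 2 to channel 1 by a whole-sample slow-time shift
    (so their phase centers coincide), then subtract: stationary clutter
    cancels, moving targets remain."""
    aligned = np.roll(ch2, shift, axis=-1)   # align slow-time samples
    return ch1 - aligned
```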
2014, 36(11): 2691-2697.
doi: 10.3724/SP.J.1146.2013.01910
Abstract:
This paper analyzes the generation mechanism of multipath Spread Doppler Clutter (SDC) in skywave Over-The-Horizon Radar (OTHR). Because amplitude and phase errors always exist in the array and the power of the desired signal (sea clutter) is greater than that of the SDC, conventional Adaptive Digital Beam Forming (ADBF) suppresses SDC poorly while weakening the desired signal and severely lowering the Signal-to-Noise Ratio (SNR). To resolve these problems, an adaptive SDC suppression method is proposed. In this method, an improved Noise Subspace Fitting (NSF) method first eliminates the amplitude and phase errors of the OTHR array, so that the directions of arrival of the desired signal and the SDC can be obtained accurately. An orthogonal-projection weight vector is then used in ADBF to overcome the fact that the conventional ADBF weight vector cannot be estimated accurately in the presence of the strong desired signal. Theoretical analysis and simulation results show that the scheme can completely suppress multipath SDC.
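The orthogonal-projection weighting can be sketched directly: project the desired steering vector onto the complement of the interference subspace. The steering vectors here are assumed inputs, not derived from the paper's NSF calibration.

```python
import numpy as np

def orthogonal_projection_weights(a_d, A_int):
    """Project the desired steering vector a_d onto the orthogonal
    complement of the interference subspace spanned by the columns of
    A_int, placing nulls on the interference directions without an
    accurate covariance estimate of the strong desired signal."""
    P = A_int @ np.linalg.pinv(A_int)   # projector onto interference subspace
    w = a_d - P @ a_d                   # (I - P) a_d
    return w / np.linalg.norm(w)
```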
2014, 36(11): 2698-2704.
doi: 10.3724/SP.J.1146.2013.01900
Abstract:
The maximum number of targets that can be uniquely identified by a traditional MIMO radar is limited by the number of virtual sensors. To alleviate this limitation, a novel MIMO radar antenna array based on the concept of nested arrays is designed, and a modified spatial-sparsity-based Direction-Of-Arrival (DOA) estimation method is proposed. First, the effect of applying nested sampling to the virtual array of a traditional MIMO radar on DOA estimation performance is analyzed. Second, a method for designing the antenna array of a nested MIMO radar is proposed, and it is proven that a nested MIMO radar can detect more targets than a traditional MIMO radar with the same number of virtual sensors. Finally, a modified spatial-sparsity-based DOA estimation approach for nested MIMO radar is proposed, based on a noise-subspace-weighted minimization problem, which increases resolution and effectively suppresses spurious peaks. Extensive simulation results demonstrate the effectiveness and superiority of the proposed methods.
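A small sketch of the two-level nested-array geometry and its difference coarray, which is why a nested design can identify more targets than it has physical sensors; positions are in units of the base spacing d.

```python
import numpy as np

def nested_positions(n1, n2, d=1):
    """Two-level nested array: a dense ULA of n1 sensors at spacing d
    plus a sparse ULA of n2 sensors at spacing (n1 + 1) * d."""
    dense = np.arange(1, n1 + 1) * d
    sparse = np.arange(1, n2 + 1) * (n1 + 1) * d
    return np.concatenate([dense, sparse])

pos = nested_positions(3, 3)                        # 6 physical sensors
diff = np.unique((pos[:, None] - pos[None, :]).ravel())
print(len(pos), len(diff))                          # 6 sensors -> 23 coarray lags
```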
2014, 36(11): 2705-2710.
doi: 10.3724/SP.J.1146.2013.02004
Abstract:
The Short-Time Fourier Transform (STFT) is widely used in the study of nonstationary signals. After discussing STFT-based filtering and chirp-rate estimation for Linear Frequency Modulation (LFM) signals, this paper proposes a novel STFT-based autofocus method. The method first uses the STFT to estimate and compensate the Quadratic Phase Error (QPE), the main component of the phase error degrading SAR image quality. When estimating the residual phase error, STFT filtering is used to remove the noise and interference of the time-variant signals and raise the Signal-to-Clutter Ratio (SCR). Experimental results on both simulated and real data demonstrate the validity of the proposed autofocus method.
2014, 36(11): 2711-2716.
doi: 10.3724/SP.J.1146.2013.02002
Abstract:
Digital Beam Forming (DBF) in elevation is regarded as an important candidate for high-resolution wide-swath SAR systems, since it enhances system performance by forming a high-gain, sharp beam pattern to receive the pulse echoes. Based on range DBF for wide-swath SAR, this paper analyzes in detail how DBF in elevation affects SAR system performance. Expressions for the Noise Equivalent Sigma Zero (NESZ) and the Range Ambiguity to Signal Ratio (RASR) of the range DBF-SAR system are derived in detail. To suppress range-ambiguity energy, null-steering technology is introduced. The simulation results validate the advantages of the range DBF-SAR system over a conventional single-channel system.
2014, 36(11): 2717-2722.
doi: 10.3724/SP.J.1146.2013.01979
Abstract:
Inversion and interpretation of underground structure are the ultimate aims of Ground Penetrating Radar (GPR). Most inversion problems are nonlinear, so the study of nonlinear inversion methods is important. In this paper, an improved Particle Swarm Optimization (PSO) is used to solve the GPR inverse problem. Comparisons with other inversion methods, including the genetic algorithm, show that the proposed method is more accurate and simpler; inversion results for a complicated model with multiple parameters and low SNR demonstrate its effectiveness for multi-parameter inversion and its noise robustness; and the inversion of real measurement data further verifies the feasibility of the algorithm.
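A minimal global-best PSO loop, with a toy misfit standing in for the GPR forward model; the paper's specific improvements to PSO are not reproduced, and all names and numbers are illustrative.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm minimization of a vector cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, pcost.min()

# toy inverse problem: recover two layer parameters from a misfit
truth = np.array([4.0, 9.0])
cost = lambda m: np.sum((m - truth) ** 2)   # stand-in forward-model misfit
print(pso(cost, ([1, 1], [12, 12])))
```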
2014, 36(11): 2723-2729.
doi: 10.3724/SP.J.1146.2013.01840
Abstract:
To account for the different noise intensities across bands in the transform domain and the intrinsic structure of the transformed data, a new approach for denoising hyperspectral images is proposed based on Principal Component Analysis (PCA) and dictionary learning. First, a group of principal-component images is obtained by the PCA transform. The noise in the spatial and spectral domains of the low-energy components is then removed by sparse representation with an adaptively learned dictionary and by the dual-tree complex wavelet transform, respectively. Finally, the denoised data are obtained by the inverse PCA transform. By combining the advantages of PCA and dictionary learning, the proposed approach outperforms traditional methods in preserving details and reducing blocking artifacts. Experimental results on synthetic and real hyperspectral remote sensing images demonstrate its effectiveness.
2014, 36(11): 2730-2736.
doi: 10.3724/SP.J.1146.2013.01751
Abstract:
This paper presents a new image segmentation algorithm that combines a Hidden Markov Random Field (HMRF) and a Gaussian Regression Model (GRM) with Fuzzy C-Means (FCM) clustering. The algorithm regularizes the FCM objective function with the Kullback-Leibler (KL) divergence, and then uses the HMRF and the GRM to model the neighborhood relationships of the label field and the feature field, respectively. The HMRF characterizes the neighborhood relationship through its prior probability, while the GRM is built on the assumption that a pixel shares the label of its neighbors. Experiments with the proposed algorithm and other FCM-based algorithms are conducted on simulated images, real SAR images, and texture images, and the segmentation accuracy is evaluated. The comparison shows that the proposed algorithm provides more accurate segmentation results.
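For reference, the plain FCM alternation that the proposed algorithm regularizes and extends looks like this (the KL term and the spatial HMRF/GRM terms are omitted; names and defaults are illustrative):

```python
import numpy as np

def fcm(X, k, m=2.0, n_iter=50, seed=0):
    """Plain Fuzzy C-Means on data X (samples x features): alternate the
    membership update and the centroid update."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), k, replace=False)]           # init centroids
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2) + 1e-12
        # u[n, i] = 1 / sum_j (d_ni / d_nj)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        c = (u.T ** m @ X) / np.sum(u.T ** m, axis=1, keepdims=True)
    return u, c
```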
2014, 36(11): 2737-2743.
doi: 10.3724/SP.J.1146.2013.01511
Abstract:
A simple and effective reconstruction scheme for hyperspectral data under spectral Compressive Sensing (CS) is proposed based on the widely used linear mixing model. Unlike traditional CS reconstruction methods, which reconstruct the hyperspectral data directly, the proposed scheme separates the data into endmembers and abundances, reconstructs each separately, and then regenerates the hyperspectral data from the reconstructed endmembers and abundances. Experimental results show that the reconstruction quality of the proposed scheme is better than that of standard compressive sensing, while the computation is much faster. As a byproduct, the endmembers and abundances are obtained directly.
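The linear mixing model behind the scheme, X ≈ E A with endmembers E and abundances A, in toy numpy form (all shapes assumed); recovering the small E and the structured A separately is the intuition for why this is cheaper than reconstructing X directly.

```python
import numpy as np

# Linear mixing model: each pixel spectrum is a non-negative combination
# of a few endmember spectra. Shapes are illustrative.
bands, pixels, p = 100, 1024, 4
rng = np.random.default_rng(1)
E = rng.random((bands, p))                  # endmember spectra (bands x p)
A = rng.dirichlet(np.ones(p), pixels).T     # abundances, columns sum to 1
X = E @ A                                   # hyperspectral data (bands x pixels)
print(X.shape)
```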
2014, 36(11): 2744-2749.
doi: 10.3724/SP.J.1146.2013.01838
Abstract:
A robust secure communication protocol for wireless networks based on quantum teleportation is proposed in this paper through a further study of the 802.11i wireless security protocol; the nonlocality of quantum entanglement is used to enhance the security of the data link layer. The paper describes the theory of quantum teleportation, analyzes the pairwise key and group key hierarchies of the Temporal Key Integrity Protocol (TKIP) and the Counter-mode/CBC-MAC Protocol (CCMP), and puts forward a design and corresponding algorithm for embedding quantum teleportation into the pairwise key and group key hierarchies, which theoretically strengthens network security. No changes are needed to the users, access points, or authentication servers that make up the network infrastructure; only the equipment required for quantum key authentication has to be added, so the overall network framework changes little.
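To illustrate the textbook teleportation step that underlies the protocol (not the paper's 802.11i embedding), the sketch below shows how the two classical bits from the sender's Bell measurement select the receiver's Pauli correction:

```python
import numpy as np

# Receiver-side Pauli corrections in standard one-qubit teleportation:
# the two classical bits (m1, m2) from the sender's Bell measurement
# select which operator restores the original state.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

CORRECTION = {
    (0, 0): I,        # measured |Phi+>: receiver's qubit already equals psi
    (0, 1): X,        # measured |Psi+>: apply a bit flip
    (1, 0): Z,        # measured |Phi->: apply a phase flip
    (1, 1): Z @ X,    # measured |Psi->: apply both
}

def correct(receiver_state, m1, m2):
    """Apply the Pauli correction selected by the classical bits."""
    return CORRECTION[(m1, m2)] @ receiver_state
```

Only the two classical bits travel over the network, which is why intercepting them reveals nothing about the teleported key state.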
2014, 36(11): 2750-2755.
doi: 10.3724/SP.J.1146.2013.00546
Abstract:
Most existing work on the cross-layer design of dynamic resource allocation in wireless multi-hop networks assumes that every node can obtain perfect Channel State Information (CSI) of the other nodes. However, because of channel fluctuations and feedback delay, the obtained CSI is usually outdated or partly outdated in a dynamic wireless network. In this paper, the impact of outdated channel information on wireless multi-hop networks is first investigated, and a distributed joint congestion control and power control algorithm with outdated CSI is proposed. Simulation results demonstrate that the proposed algorithm significantly improves the network efficiency and energy efficiency of multi-hop networks.
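A common first-order model of CSI outdating in this literature (the paper's exact channel model is not specified here) relates the outdated estimate to the true channel through the Jakes correlation coefficient; the sketch assumes unit-variance Rayleigh fading:

```python
import numpy as np
from scipy.special import j0

def outdated_csi(h, fd, tau, rng=None):
    """First-order Gauss-Markov model of outdated CSI (a standard model in
    the literature; the paper's exact model may differ).

    h   : true complex channel gains, assumed unit-variance Rayleigh
    fd  : maximum Doppler frequency (Hz)
    tau : feedback delay (s)
    """
    rng = rng or np.random.default_rng()
    rho = j0(2 * np.pi * fd * tau)   # Jakes' time-correlation coefficient
    err = (rng.standard_normal(h.shape)
           + 1j * rng.standard_normal(h.shape)) / np.sqrt(2)
    return rho * h + np.sqrt(1 - rho**2) * err
```

As fd*tau grows, rho shrinks and the fed-back CSI carries less information about the current channel, which is the regime a robust joint congestion/power control algorithm must tolerate.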
2014, 36(11): 2756-2761.
doi: 10.3724/SP.J.1146.2013.02019
Abstract:
Introducing the concept of networks into neuroscience has had a positive effect on the study of brain function. In real applications, however, the complex characteristics of brain networks make them hard to interpret. In this paper, based on functional connectivity patterns estimated by the Directed Transfer Function (DTF) method, flow gain is proposed to assess the role a specific brain region plays in the information transmission process. By integrating input and output information simultaneously, flow gain simplifies the identification of complex networks and improves the display scale of the results. Both simulations and spontaneous and evoked ElectroEncephaloGram (EEG) data indicate that flow gain can describe the output intensity of a specific region to the whole brain. The results show that the definition of flow gain can further the understanding of brain cognitive mechanisms.
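One plausible reading of such a node-level quantity, shown below purely as a sketch (the paper's exact definition may differ), is the ratio of a region's total outflow to its total inflow in the DTF connectivity matrix:

```python
import numpy as np

def flow_gain(dtf):
    """A plausible node-level 'flow gain' from a DTF connectivity matrix,
    combining input and output information (illustrative definition only).

    dtf[i, j] : connectivity strength directed from channel j to channel i
    """
    outflow = dtf.sum(axis=0) - np.diag(dtf)   # what each region sends out
    inflow = dtf.sum(axis=1) - np.diag(dtf)    # what each region receives
    return outflow / np.maximum(inflow, 1e-12)
```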
2014, 36(11): 2762-2767.
doi: 10.3724/SP.J.1146.2013.01975
Abstract:
Spectrum sensing is a key technology for enabling dynamic spectrum access in Cognitive Radio (CR). To ensure that primary users are properly protected while the performance of secondary users is maximized, most related work uses the probabilities of missed detection and false alarm as the metrics for determining optimal sensing parameters. However, these metrics measure the impact on the primary user only through the collision probability between the primary and secondary users. Because they ignore the interference intensity, they suit only homogeneous spectrum environments; in heterogeneous environments, where a secondary user's access opportunities differ with its position, they cannot maximize spectrum utilization efficiency. Therefore, this paper first proposes a new metric for spatial-temporal opportunity sensing, throughput loss, defined as the average percentage of throughput the primary user loses when the secondary user accesses the licensed spectrum. It is a comprehensive measure of the secondary user's impact on the primary user, incorporating two factors: the collision probability and the interference intensity. The secondary-user throughput optimization problem based on this metric is then addressed. Finally, theoretical analysis and numerical simulations show that the proposed spectrum sensing technique significantly improves spectrum utilization efficiency compared with traditional sensing techniques.
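As a toy illustration of the idea (an illustrative reading, not the paper's formula), the primary user's average throughput-loss percentage can be computed from the collision probability and the SINR degradation that the interference causes:

```python
import numpy as np

def throughput_loss(p_collide, sinr_clean_db, sinr_interf_db):
    """Toy throughput-loss computation: with probability p_collide the
    secondary user transmits while the primary channel is busy, degrading
    the primary link's SINR. Values and form are illustrative assumptions.
    """
    c_clean = np.log2(1 + 10 ** (sinr_clean_db / 10))    # interference-free rate
    c_interf = np.log2(1 + 10 ** (sinr_interf_db / 10))  # rate under collision
    avg = (1 - p_collide) * c_clean + p_collide * c_interf
    return 100 * (1 - avg / c_clean)

# Example: 5% collision probability, 20 dB clean SINR, 5 dB under interference
print(f"{throughput_loss(0.05, 20.0, 5.0):.2f}% throughput loss")
```

Unlike a bare collision probability, this quantity shrinks when the interference is weak, which is what makes it usable in heterogeneous spectrum environments.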
2014, 36(11): 2768-2774.
doi: 10.3724/SP.J.1146.2013.01879
Abstract:
Thread-Level Speculation (TLS) is an automatic thread-level parallelization technique for accelerating sequential programs on multi-core processors. Loops are usually regular structures in which programs spend a significant amount of execution time, so they are ideal candidates for exploiting program parallelism. However, it is difficult to decide which set of loops should be parallelized to improve overall program performance. To solve this problem, this paper proposes a loop selection approach based on performance prediction. Using a training input set, profiling information is gathered during a pre-execution of the program. Combining this profiling information with various speculative execution factors, a performance prediction model for loops is established. Based on the prediction results, the speedup of each loop can be estimated quantitatively and the loops worth parallelizing can be selected at runtime. Experimental results show that the proposed approach effectively predicts loop parallelism under speculative execution and accurately selects profitable loops for speculative parallelization; the Olden benchmarks achieve an average speedup improvement of 12.34%.
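A toy stand-in for the prediction model (the paper's model is built from profiling data; the cost terms here are illustrative assumptions) shows how a predicted speedup can gate loop selection:

```python
def predict_loop_speedup(seq_time, n_threads, spawn_cost, squash_prob):
    """Toy loop-speedup predictor for speculative parallelization.

    seq_time    : profiled sequential execution time of the loop
    n_threads   : speculative threads the loop would be split across
    spawn_cost  : per-thread spawn/commit overhead from profiling
    squash_prob : probability a speculative thread is squashed and re-run
    """
    # Expected parallel time: an even split plus overheads, with squashed
    # threads re-executing their chunk once on average.
    chunk = seq_time / n_threads
    par_time = chunk * (1 + squash_prob) + n_threads * spawn_cost
    return seq_time / par_time

# Select only loops whose predicted speedup exceeds 1 (profitable to speculate)
loops = {"L1": (100.0, 4, 1.0, 0.1), "L2": (10.0, 4, 2.0, 0.8)}
chosen = [name for name, args in loops.items()
          if predict_loop_speedup(*args) > 1.0]
print(chosen)   # ['L1']: short loops with frequent squashes are rejected
```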
2014, 36(11): 2775-2780.
doi: 10.3724/SP.J.1146.2013.01987
Abstract:
Based on the physical mechanism and the equivalent circuit model of the planar Electromagnetic Band-Gap (EBG) structure, a novel planar EBG structure named CBS-EBG is proposed for Simultaneous Switching Noise (SSN) suppression in high-speed circuits by introducing special C-shaped Bridges and Slits (CBS). Measured results show that a 40 dB suppression stopband is realized from 296 MHz to 15 GHz. Compared with the LBS-EBG structure, the lower cutoff frequency decreases from 432 MHz to 296 MHz while the performance at the upper end of the band remains similar, yielding a lower band-gap center frequency. The transfer characteristics of signals over the localized CBS-EBG are also studied. Simulation and measurement verify the high performance of the proposed CBS-EBG in both SSN suppression and signal integrity when combined with the local topology and an appropriate routing policy.
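For intuition about the lower band edge, here is a first-order estimate from the EBG cell's equivalent LC circuit (a textbook approximation; the component values below are illustrative, not extracted from the paper):

```python
import math

def bandgap_center(L_bridge_nH, C_patch_pF):
    """First-order estimate of a planar EBG cell's band-gap center frequency
    from its equivalent LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    L = L_bridge_nH * 1e-9
    C = C_patch_pF * 1e-12
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# Longer, narrower bridges raise the series inductance L and push the band
# edge down, which is the effect the C-shaped bridges exploit.
print(f"{bandgap_center(10.0, 5.0) / 1e6:.0f} MHz")
```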
2014, 36(11): 2781-2785.
doi: 10.3724/SP.J.1146.2013.02032
Abstract:
In this paper, a novel metric is proposed to evaluate 3-D mesh quality by analyzing curvature, since curvature describes the visual characteristics of a 3-D mesh well. First, the curvature at each vertex is estimated and a curvature matrix is constructed over its neighbourhood; the local distortion at each vertex is then measured by the differences between the singular values of the curvature matrix in the original mesh and those of the corresponding matrix in the distorted mesh. Finally, the global distortion is obtained by a weighted combination of the local distortions. Experimental results reveal that the proposed metric not only achieves superior prediction accuracy over all the competing metrics but also exhibits very good robustness and stability.
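A sketch in the spirit of the proposed metric, with the curvature estimator and the weighting simplified to assumptions:

```python
import numpy as np

def local_distortion(curv_ref, curv_dist, neighbors):
    """Per-vertex distortion from singular values of neighborhood curvature
    matrices (a simplified sketch, not the paper's exact construction).

    curv_ref, curv_dist : (N, d) per-vertex curvature descriptors of the
                          reference and distorted meshes (same connectivity)
    neighbors           : list of index arrays, the 1-ring of each vertex
    """
    dist = np.empty(len(neighbors))
    for v, ring in enumerate(neighbors):
        idx = np.append(ring, v)
        # Curvature matrix over the neighborhood, compared via singular values
        s_ref = np.linalg.svd(curv_ref[idx], compute_uv=False)
        s_dst = np.linalg.svd(curv_dist[idx], compute_uv=False)
        dist[v] = np.linalg.norm(s_ref - s_dst)
    return dist

def global_distortion(local, weights=None):
    """Weighted combination of local distortions into a single score."""
    w = np.ones_like(local) if weights is None else weights
    return float(np.sum(w * local) / np.sum(w))
```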
2014, 36(11): 2786-2790.
doi: 10.3724/SP.J.1146.2014.00176
Abstract:
The Ordered Binary Decision Diagram (OBDD) is commonly used in network reliability calculation. When network reliability is evaluated with an OBDD, the computation time mainly depends on the size of the OBDD being operated on, which in turn relies largely on its variable ordering. This paper proposes an optimized algorithm for computing network reliability, called Boolean Function-OBDD (BF-OBDD), and shows that the efficiency of network reliability evaluation can be improved considerably by using it. Experimental results demonstrate that the improved algorithm produces OBDDs with fewer nodes and therefore takes less time to calculate network reliability.
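The standard OBDD evaluation that such an ordering accelerates is a memoized Shannon decomposition over the diagram; a minimal sketch follows (the BF-OBDD ordering itself is not reproduced here):

```python
class Node:
    """An OBDD node: var is an edge index, low/high are child nodes or the
    Boolean terminals False/True."""
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def reliability(root, p):
    """Network reliability from an OBDD of the connectivity function by
    Shannon decomposition, where edge e works with probability p[e].
    Memoization makes the cost linear in the number of OBDD nodes, which is
    why a good variable ordering directly determines the running time.
    """
    memo = {}
    def go(n):
        if n is True:
            return 1.0
        if n is False:
            return 0.0
        if id(n) not in memo:
            memo[id(n)] = (1 - p[n.var]) * go(n.low) + p[n.var] * go(n.high)
        return memo[id(n)]
    return go(root)

# Two-edge series network: connected only if both edges work
bdd = Node(0, False, Node(1, False, True))
print(reliability(bdd, {0: 0.9, 1: 0.9}))   # 0.81
```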
2014, 36(11): 2791-2794.
doi: 10.3724/SP.J.1146.2013.01665
Abstract:
In this paper, a novel miniaturized Substrate Integrated Waveguide (SIW) dual-mode filter is presented. By adopting orthogonal input and output feed lines and an inclined slot line to perturb the two degenerate modes, two Transmission Zeros (TZs) are created. A 15 GHz SIW dual-mode filter with 350 MHz bandwidth is designed and fabricated using the proposed method. The filter has a simple structure, low cost, and easy fabrication. The measured results agree well with the simulated results, confirming that the proposed design is correct and effective.