2014 Vol. 36, No. 2
2014, 36(2): 255-259.
doi: 10.3724/SP.J.1146.2013.00726
Abstract:
Under the influence of additive white Gaussian noise, classical dictionary learning algorithms, such as K-means Singular Value Decomposition (K-SVD), the Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) and K-means Singular Value Decomposition Denoising (K-SVDD), cannot effectively remove the noise of a Cubic Phase Signal (CPS). A novel dictionary learning algorithm for denoising CPS is proposed. First, the dictionary is learned using the RLS-DLA algorithm. Second, the update stage of the RLS-DLA algorithm is modified using Non-Linear Least Squares (NLLS). Finally, the signal is reconstructed via sparse representation over the learned dictionary. The Signal-to-Noise Ratio (SNR) obtained with the proposed algorithm is markedly higher, and the Mean Squared Error (MSE) markedly lower, than with the other algorithms, so sparsely representing CPS over the learned dictionary yields a clear denoising gain. The experimental results show that the average SNR obtained with the algorithm is 9.55 dB, 13.94 dB and 9.76 dB higher than with K-SVD, RLS-DLA and K-SVDD, respectively.
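The final step above, reconstructing the signal from a sparse representation over the learned dictionary, can be sketched as follows. This is an illustrative example using Orthogonal Matching Pursuit over a random dictionary, not the RLS-DLA/NLLS-trained dictionary of the paper; all sizes and the sparsity level are made-up values.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y with k atoms of dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy example: denoise a signal that is exactly 2-sparse in the dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128); x_true[[5, 40]] = [1.0, -0.7]
clean = D @ x_true
noisy = clean + 0.05 * rng.standard_normal(64)
denoised = D @ omp(D, noisy, k=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Projecting the noisy observation onto a few dictionary atoms discards the part of the noise that the atoms cannot represent, which is the mechanism behind the SNR gains reported above.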
2014, 36(2): 260-265.
doi: 10.3724/SP.J.1146.2013.00601
Abstract:
Most DOA estimation algorithms for distributed sources estimate only the central angle of the arriving signal and its distribution parameters. These methods cannot obtain the real distribution curve, and may require a two-dimensional search that costs a large amount of computation. In this paper, a method to obtain distribution curves is proposed on the assumption that the curves are bell-shaped and symmetrical. Furthermore, sparse signal reconstruction is introduced as the mathematical tool; it works well when the number of array elements is sufficient, and can still obtain an approximate solution when it is not. In addition, its performance in estimating the central angle of the arriving signal is no worse than that of existing algorithms. Without the need for a two-dimensional search, the method has lower computational complexity. Simulations verify these conclusions.
2014, 36(2): 266-270.
doi: 10.3724/SP.J.1146.2013.00593
Abstract:
The adaptive phase-only beamforming technique is very important for conventional phased array radar to suppress interference. For digital array radar, the phase-only technique can be used to enhance the radar power by making full use of the transmitting-module microwave power of each array element. In this paper, a novel phase-only algorithm is proposed. Assuming that the initial phase weight vector has a small phase perturbation, the objective function and constraint function can be replaced by their first-order Taylor expansions, so the original non-convex problem becomes a convex optimization problem that can be solved using Second-Order Cone Programming (SOCP). The new weight vector is obtained by updating the current weight vector with the solved perturbation vector. The current weight vector is then replaced by the new one, and the procedure is repeated until the array pattern meets the termination condition. Computer simulation results demonstrate the correctness and effectiveness of the proposed approach.
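The unit-modulus constraint that makes phase-only weighting non-convex can be illustrated with a small sketch. It does not implement the SOCP iteration; it only forms a phase-only weight vector for a hypothetical 16-element uniform linear array and checks that the resulting pattern still points at the look direction while every element radiates at full amplitude.

```python
import numpy as np

def steering(n, theta_deg, d=0.5):
    """Steering vector of an n-element ULA with spacing d (in wavelengths)."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(np.radians(theta_deg)))

n = 16
theta0 = 20.0                                     # look direction (degrees)
w = np.exp(1j * np.angle(steering(n, theta0)))    # phase-only weight: unit modulus
assert np.allclose(np.abs(w), 1.0)                # every element at full power

angles = np.linspace(-90, 90, 721)
pattern = np.array([abs(w.conj() @ steering(n, a)) for a in angles])
peak = angles[np.argmax(pattern)]
print(round(peak, 1))
```

Because every |w_i| is forced to 1, the feasible set is a torus rather than a convex set, which is exactly why the paper linearizes around the current phases before handing the problem to an SOCP solver.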
2014, 36(2): 271-276.
doi: 10.3724/SP.J.1146.2013.00558
Abstract:
Image colorization uses a computer to color a grayscale image from locally drawn initial colors. This paper proposes an anisotropic nonlinear diffusion colorization method that overcomes the problem of color crossing boundaries. First, a diffusion equation based on partial differential equations is established; by setting an adaptive tensor function, color diffuses quickly and uniformly in smooth regions and anisotropically at color boundaries, which effectively enhances the coherence of edge colors. The diffusion equation is solved numerically with a finite difference method. Compared with currently popular colorization techniques, the method produces clearer and more natural colorized images with higher image quality.
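The scribble-constrained diffusion idea can be sketched with a plain isotropic heat equation (a stand-in for the paper's adaptive anisotropic tensor): user-fixed chroma values are held constant while an explicit finite-difference scheme spreads them over the image. Grid size, time step and scribble positions below are arbitrary.

```python
import numpy as np

def diffuse_chroma(scribble, mask, iters=500, dt=0.2):
    """Spread scribbled chroma values over the image by heat diffusion.
    An anisotropic scheme would modulate the Laplacian with an image-derived
    tensor; here diffusion is isotropic, with periodic borders for brevity."""
    u = scribble.astype(float).copy()
    for _ in range(iters):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)     # 5-point Laplacian
             + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
        u[mask] = scribble[mask]        # keep user strokes fixed (constraints)
    return u

# Toy 8x8 chroma plane: one positive scribble on the left, one negative on the right.
scribble = np.zeros((8, 8)); mask = np.zeros((8, 8), bool)
scribble[4, 0], mask[4, 0] = 1.0, True
scribble[4, 7], mask[4, 7] = -1.0, True
chroma = diffuse_chroma(scribble, mask)
print(chroma[4, 0] > chroma[4, 3] > chroma[4, 4] > chroma[4, 7])
```

In the paper's anisotropic version, the diffusion coefficient collapses across strong luminance edges, so the two chroma values would meet sharply at the boundary instead of blending smoothly as here.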
2014, 36(2): 277-284.
doi: 10.3724/SP.J.1146.2013.00135
Abstract:
Local Tangent Space Alignment (LTSA) is a popular manifold learning algorithm since it is straightforward to implement and globally optimal. However, LTSA may fail when high-dimensional observation data are sparse or non-uniformly distributed. To address this issue, a modified LTSA algorithm is presented. First, a new L1-norm based method is presented to estimate the local tangent space of the data manifold. By considering both distance and structure factors, the proposed method is more accurate than the traditional Principal Component Analysis (PCA) method. To reduce the bias of coordinate alignment, a weighting scheme based on manifold structure is then designed, and the detailed solving method is also presented. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed method when dealing with sparse and non-uniformly distributed manifold data.
2014, 36(2): 285-292.
doi: 10.3724/SP.J.1146.2013.00396
Abstract:
To improve the rate-distortion performance of video Compressed Sensing (CS) reconstruction, the temporal-spatial characteristics of video are used to jointly recover the video signal in this paper. At the collection terminal, each block in a single frame is measured at a fixed sampling rate to avoid excessive complexity. At the reconstruction terminal, two regularization terms are added to the minimum Total Variation (TV) reconstruction model to improve the performance of prediction-residual reconstruction; the terms are constructed from a temporal-spatial Auto-Regressive (AR) model and a Multiple Hypothesis (MH) model. In addition, considering that the statistics of a video source vary dynamically in the spatial and temporal domains, it is discussed how five different inter-prediction modes affect the precision and computational complexity of reconstruction. Simulation results show that the proposed algorithms effectively improve the quality of reconstructed video at the cost of computational complexity, and the improved inter-prediction mode enhances reconstruction quality to some extent.
2014, 36(2): 293-297.
doi: 10.3724/SP.J.1146.2013.00582
Abstract:
Traditional methods based on linear regression models preserve edges to some degree, but hardly work on sharp edges. To solve this problem, an edge-directed interpolation algorithm based on regularization is proposed in this paper, composed of a parameter estimation part and a data estimation part. In the first part, high-resolution structures that have already been estimated are taken as part of the training pixels to estimate the parameters of the linear regression model, so that the structure is described effectively. In the second part, the smooth pixel direction is applied as regularization to reduce the error in the estimated data arising from incorrect parameters. Experimental results show that the proposed method preserves image edges effectively, and both the visual effect and the PSNR are better than bi-cubic interpolation and Regularized Local Linear Regression (RLLR).
2014, 36(2): 298-303.
doi: 10.3724/SP.J.1146.2013.00421
Abstract:
Weighting methods, such as the Hamming window, can suppress the peak sidelobe level of the matched filter output of a Linear Frequency Modulated (LFM) signal, but they also cause obvious mainlobe widening, which worsens range resolution. Considering the requirements of both sidelobe suppression and range resolution, a novel sidelobe suppression method without mainlobe widening is proposed. The amplitude outputs of the matched filter and the weighting window are first normalized and then compared point by point; finally, the minimum value at each point is chosen as the output. This method fuses the merits of matched filtering and weighting processing. The -3 dB mainlobe widening coefficient of the proposed method is only 1 compared with the matched filter, and its sidelobe suppression performance is equivalent to that of the employed weighting window. Simulation results and a lake experiment demonstrate the validity of the proposed method.
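The two steps, normalize both amplitude outputs and then take the point-by-point minimum, can be sketched directly; the LFM parameters below are illustrative, not from the paper.

```python
import numpy as np

# Illustrative LFM pulse (time-bandwidth product 20)
fs, T, B = 1.0e6, 1.0e-4, 2.0e5              # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
lfm = np.exp(1j * np.pi * (B / T) * t ** 2)

mf = np.abs(np.correlate(lfm, lfm, mode="full"))                      # matched filter
wf = np.abs(np.correlate(lfm * np.hamming(t.size), lfm, mode="full"))  # Hamming-weighted
mf, wf = mf / mf.max(), wf / wf.max()        # step 1: normalize both amplitudes

fused = np.minimum(mf, wf)                   # step 2: point-by-point minimum

peak = int(fused.argmax())
side = np.ones(fused.size, bool)
side[peak - 10:peak + 11] = False            # mask out the mainlobe region
print(fused[side].max() < mf[side].max())    # sidelobes lower than matched filter
```

Near the peak the unweighted matched filter output is the smaller of the two (narrow mainlobe), while in the sidelobe region the weighted output is smaller, so the minimum inherits the best of both, which is the fusion the abstract describes.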
2014, 36(2): 304-311.
doi: 10.3724/SP.J.1146.2013.00542
Abstract:
The Improved Non-Subsampled Contourlet Transform (INSCT) algorithm combines a redundancy-ascending transform with multi-direction analysis. The Neville operator is used in the redundancy-ascending part, and direction information is lost in the process of multi-scale decomposition, which hampers follow-up analysis. To solve this problem, a new set of operators that effectively preserves direction information in the frequency-band decomposition part is designed in this paper, and a new image fusion algorithm using these operators is proposed. First, histogram equalization is used to enhance the grey values of the target zone in the infrared image. Second, multi-scale decomposition is performed on the visible image and the enhanced infrared image using the new set of operators rather than the Neville operator. Finally, the low-frequency sub-band is fused using a new method incorporating the activity level based on Weighted Average-Window Based Activity measurement (WA-WBA) rather than simple weighted summation. The use of a neighborhood homogeneity measurement realizes adaptive fusion of each sub-band coefficient, and the proposed method effectively makes up for the disadvantages of pixel-based image fusion. The experimental results show that the proposed method preserves details of the visible image and obtains clear target information from the infrared image.
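The window-based activity idea behind the low-frequency fusion rule can be sketched as a choose-max scheme: each coefficient is taken from whichever source band has the higher summed activity in its neighbourhood. This is a simplified stand-in for the WA-WBA rule, with made-up band data.

```python
import numpy as np

def window_activity(band, r=1):
    """Local activity: sum of |coefficients| in a (2r+1)x(2r+1) window."""
    pad = np.pad(np.abs(band), r, mode="edge")
    act = np.zeros_like(band, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            act += pad[r + dy: r + dy + band.shape[0],
                       r + dx: r + dx + band.shape[1]]
    return act

def fuse_bands(a, b):
    """Pick, per pixel, the coefficient whose neighbourhood is more active."""
    return np.where(window_activity(a) >= window_activity(b), a, b)

vis = np.zeros((6, 6)); vis[1, 1] = 5.0      # strong detail in the visible band
ir  = np.zeros((6, 6)); ir[4, 4] = 3.0       # target energy in the infrared band
fused = fuse_bands(vis, ir)
print(fused[1, 1] == 5.0 and fused[4, 4] == 3.0)
```

Deciding per window rather than per pixel is what makes the rule less noise-sensitive than pixel-based fusion, the shortcoming the abstract says the method compensates for.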
A Method for People Counting in Complex Scenes Based on Normalized Foreground and Corner Information
2014, 36(2): 312-317.
doi: 10.3724/SP.J.1146.2013.00620
Abstract:
For the problem of people counting in intelligent video surveillance, a method for people counting in complex scenes based on normalized foreground and corner information is proposed. First, based on the binary foreground, the area of the normalized foreground after perspective correction is calculated. Second, optimized corner information of the foreground is extracted to compute the occlusion coefficient of the crowd. Finally, the two features are used as inputs of a Back Propagation (BP) neural network to train and test the people-counting model. Experimental results show that the proposed method exhibits good performance in complex scenes.
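The first feature, the perspective-corrected foreground area, can be sketched as a per-row reweighting of the binary foreground. The linear weight map below is hypothetical; a real system calibrates it from the scene geometry, and the corner-based occlusion feature is omitted.

```python
import numpy as np

def corrected_area(fg_mask, weights):
    """Foreground area after perspective correction: each pixel is scaled by a
    per-row weight so that far-away (upper) pixels count more."""
    return float((fg_mask * weights[:, None]).sum())

h, w = 60, 80
fg_far = np.zeros((h, w), bool);  fg_far[10:20, 10:20] = True    # 100-pixel blob
fg_near = np.zeros((h, w), bool); fg_near[40:60, 50:70] = True   # 400-pixel blob

# Hypothetical linear perspective map: weight 1 at the bottom row, 3 at the top
weights = 1.0 + 2.0 * (h - 1 - np.arange(h)) / (h - 1)

raw_ratio = fg_near.sum() / fg_far.sum()
corr_ratio = corrected_area(fg_near, weights) / corrected_area(fg_far, weights)
print(corr_ratio < raw_ratio)    # correction shrinks the near/far size imbalance
```

Without this normalization the same person contributes four times as many pixels near the camera as far from it, so the raw area would be a poor regression input for the BP network.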
2014, 36(2): 318-324.
doi: 10.3724/SP.J.1146.2012.01373
Abstract:
The location estimation accuracy of an Autonomous Underwater Vehicle (AUV) and landmarks decreases because of the degeneracy and impoverishment of samples in the standard Fast Simultaneous Localization And Mapping (FastSLAM) algorithm. An improved FastSLAM algorithm based on an Iterative Extended Kalman Filter (IEKF) proposal distribution and linear optimization resampling is presented to solve this issue. The latest observation is integrated via the IEKF to decrease sample degeneracy, while new samples are produced by a linear combination of copied samples and some abandoned ones to reduce sample impoverishment. The kinematic model of the AUV, the feature model and the measurement models of the sensors are all established, and features are extracted with the Hough transform to build the global map. An experiment with trial data shows that the improved FastSLAM algorithm effectively avoids the degeneracy and impoverishment of samples and enhances the location estimation accuracy of the AUV and landmarks. Moreover, consistency analysis shows that the method possesses long-term consistency.
2014, 36(2): 325-331.
doi: 10.3724/SP.J.1146.2013.00657
Abstract:
The temporal link prediction method is investigated in this paper. The disadvantages of static link prediction methods are analyzed, observing that ignoring the evolving information of networks has a negative impact on link prediction. The concept of link prediction error is proposed to describe the evolving information of networks, and a temporal link prediction method based on prediction error correction is proposed. First, static link predictions are carried out using each graph in the preceding time window, and the prediction errors are recorded and used to calculate a modification value. Finally, the prediction result is acquired by refining the static prediction result with the modification value. Experiments conducted on two real network datasets show that the proposed method achieves better performance than static link prediction methods and a typical temporal link prediction method. In addition, a mirror-symmetry relation is found between the prediction error series and the total link number series, which demonstrates the universality of the proposed method.
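A minimal sketch of the error-correction idea, using common-neighbour counts as the static predictor (the paper does not prescribe this particular score): errors observed over the snapshot window are averaged into a modification value that refines the latest static prediction.

```python
def cn_score(adj, u, v):
    """Static predictor: common-neighbour count for the pair (u, v)."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def temporal_predict(snapshots, u, v):
    """Refine the latest static score with the mean prediction error observed
    over the preceding snapshot window (simplified error correction)."""
    errors = []
    for g_now, g_next in zip(snapshots, snapshots[1:]):
        predicted = cn_score(g_now, u, v)
        actual = 1.0 if v in g_next.get(u, set()) else 0.0
        errors.append(actual - predicted)
    correction = sum(errors) / len(errors) if errors else 0.0
    return cn_score(snapshots[-1], u, v) + correction

# Toy window: nodes 1 and 2 always share neighbour 0 but never link directly,
# so the recorded errors drive the refined score down to zero.
snapshots = [
    {0: {1, 2}, 1: {0}, 2: {0}},
    {0: {1, 2}, 1: {0}, 2: {0}},
    {0: {1, 2}, 1: {0}, 2: {0}},
]
print(cn_score(snapshots[-1], 1, 2), temporal_predict(snapshots, 1, 2))
```

The static score alone keeps predicting this link forever; the correction term is exactly the evolving information that the abstract says static methods throw away.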
2014, 36(2): 332-339.
doi: 10.3724/SP.J.1146.2013.00584
Abstract:
To solve the problem of blind identification of the encoding parameters of primitive BCH codes, a novel identification algorithm based on probability approximation is presented. First, by approximating the root-probability characteristics of random code words with Gaussian and Poisson distributions, thresholds for searching the code length are constructed. Second, by analyzing the checking ability of the primitive element and the impact of isomorphism on the search, the coding field is determined using the method of nearby field pairs, which improves identification performance. Finally, computation is reduced by creating and using a conjugate-roots table in the recognition of the generator polynomial. Simulation results show that the proposed algorithm achieves a significant improvement in identification probability even in high-BER situations.
2014, 36(2): 340-345.
doi: 10.3724/SP.J.1146.2013.00595
Abstract:
Based on orthogonal matrices, a class of polyphase periodic complementary sequence sets with a Zero-Correlation Zone (ZCZ) is proposed. The resultant polyphase ZCZ periodic complementary sets are optimal with respect to the theoretical bound. Moreover, the proposed approach provides flexible choices for parameters such as the ZCZ length and the number of subsequences. Since the number of orthogonal matrices is huge, the proposed method can generate a large number of ZCZ periodic complementary sequence sets.
2014, 36(2): 346-352.
doi: 10.3724/SP.J.1146.2013.00512
Abstract:
An anti-frequency-offset algorithm based on the amplitude distribution feature is proposed for modulation recognition of conventional satellite modulations, such as QPSK and 16QAM, and new modulations such as 16APSK and 32APSK. The algorithm is based on adaptive construction of an amplitude distribution template. After calculating the matching error between the amplitude distribution template and the actual amplitude distribution vector, the algorithm recognizes the modulation type by choosing the one with the minimum matching error. The method needs no prior knowledge of the Carrier-to-Noise ratio (C/N) or a threshold, and it is not sensitive to frequency offset. Because of these advantages, the algorithm is suitable for engineering application. Computer simulations show that the correct recognition probability is more than 98% when the C/N is greater than 9 dB and 4000 symbols are used, which verifies the effectiveness of the algorithm.
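A sketch of the template-matching idea: because a frequency offset only rotates symbol phases, the amplitude histogram is offset-invariant, so the modulation whose template histogram has the minimum matching error is chosen. The simplified 16APSK generator and all parameters below are illustrative, not the paper's exact templates.

```python
import numpy as np

rng = np.random.default_rng(1)

def amp_hist(sym, bins=40, amax=2.0):
    """Normalized amplitude histogram of a symbol sequence (frequency offset
    only rotates phases, so the amplitude distribution is unaffected)."""
    h, _ = np.histogram(np.abs(sym), bins=bins, range=(0, amax), density=True)
    return h

def gen(mod, n=4000, snr_db=15):
    """Draw n noisy symbols of the given modulation (simplified models)."""
    if mod == "QPSK":
        pts = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
    else:  # simplified 16APSK: 4 inner-ring and 12 outer-ring points
        ring = np.where(rng.random(n) < 0.25, 0.5, 1.26)
        pts = ring * np.exp(2j * np.pi * rng.random(n))
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return pts + noise * 10 ** (-snr_db / 20)

mods = ["QPSK", "16APSK"]
templates = {m: amp_hist(gen(m)) for m in mods}          # adaptive templates
probe = gen("16APSK")
errors = {m: np.sum((amp_hist(probe) - templates[m]) ** 2) for m in mods}
print(min(errors, key=errors.get))
```

QPSK concentrates on a single amplitude ring while 16APSK spreads over two, so the squared histogram error separates them cleanly even though a carrier offset would scramble any phase-based feature.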
2014, 36(2): 353-357.
doi: 10.3724/SP.J.1146.2013.00445
Abstract:
The Signal-to-Noise Ratio (SNR) is an important parameter for measuring channel quality. This paper studies SNR estimation based on the Sounding Reference Signal (SRS) in the Long Term Evolution (LTE) system. Since the noise estimation error of the Difference of Adjacent Subcarrier Signal (DASS) algorithm is large in the high-SNR region, this paper presents an improved DASS method applicable to SRS. By redefining the differential mode of the subcarriers, the noise estimation error of the method is reduced; moreover, since every three consecutive SRS frequency points need only one noise estimate, the complexity of the method is only 1/3 of that of the original DASS method. Simulation results show that the estimation performance of the proposed method is superior to that of the other methods; in particular, for low-latency and medium-latency channels, the estimation accuracy is improved by about a factor of 10 in the high-SNR region.
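The baseline DASS principle (not the paper's redefined differential mode) can be sketched as follows: when adjacent pilot subcarriers see nearly the same channel, their difference cancels the signal term and leaves roughly twice the noise power. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def dass_noise_power(y):
    """Estimate noise power from differences of adjacent pilot subcarriers.
    Assumes the channel is nearly flat across neighbouring subcarriers, so the
    difference cancels the common channel term and leaves ~2x the noise."""
    d = y[1:] - y[:-1]
    return float(np.mean(np.abs(d) ** 2) / 2)

n = 1200
pilots = np.ones(n, complex)                    # known all-ones reference symbols
h = np.exp(2j * np.pi * 0.0002 * np.arange(n))  # slowly varying channel response
sigma2 = 0.01
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(sigma2 / 2)
y = h * pilots + noise
est = dass_noise_power(y)
print(abs(est - sigma2) / sigma2 < 0.2)
```

The residual channel difference that does not cancel is exactly the error term that grows relative to the noise at high SNR, which motivates the paper's redefined differencing scheme.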
2014, 36(2): 358-363.
doi: 10.3724/SP.J.1146.2013.00316
Abstract:
In this paper, the Bit Error Rate (BER) performance of OFDM based on Co-time Co-frequency Full Duplex (CCFD) is analyzed under amplitude and phase estimation errors of the self-interference, for a remote multi-path Rayleigh fading channel and an Additive White Gaussian Noise (AWGN) self-interference channel. The results show that the BER generally decreases as the absolute values of the phase error and amplitude error decrease for a fixed Signal-to-Interference Ratio (SIR). A phase error of 6×10^-6 and an amplitude error of 3×10^-5 lead to 0.8 dB degradation under the conditions of a 2.3 GHz carrier frequency, -70 dB SIR, 4096 sub-carriers, 15 kHz sub-carrier spacing and a BER of 10^-2. A phase error of 0.5° and an amplitude error of 1% achieve a 40 dB Interference Cancellation Ratio (ICR) under the same conditions.
In this paper, the Bit Error Rate (BER) performance of OFDM based on Co-time Co-frequency Full Duplex (CCFD) is analyzed with amplitude estimation error of self-interference and phase error, in remote multi-path Rayleigh fading channel and Additive White Gaussian Noise (AWGN) self-interference channel. The results show that BER generally decreases while the absolute value of time error and amplitude error decreases for fixed Signal to Interference Ratio (SIR). Phase error610-6 and amplitude error 310-5 leads to 0.8 dB degradation on condition of 2.3 GHz carrier frequency, -70 dB SIR, sub-carrier number 4096, sub-carrier separation 15 kHz, BER 10-2. Phase error 0.5 and amplitude error 1% achieved 40 dB Interference Cancellation Ratio (ICR) under the same conditions.
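A common first-order model (assumed here; not necessarily the paper's exact derivation) relates cancellation errors to the ICR: if the cancelling replica has amplitude error ε and phase error φ, the residual self-interference power is |1 − (1+ε)e^{jφ}|², so ICR = −10·log₁₀|1 − (1+ε)e^{jφ}|². For ε = 1% and φ = 0.5° this gives roughly 37.5 dB, the same order as the 40 dB figure quoted in the abstract.

```python
import numpy as np

def icr_db(amp_err, phase_err_rad):
    """Interference Cancellation Ratio of a single-tap canceller with
    amplitude error (1 + amp_err) and phase error phase_err_rad.
    A common first-order model; the paper's exact model may differ."""
    residual = abs(1.0 - (1.0 + amp_err) * np.exp(1j * phase_err_rad)) ** 2
    return -10.0 * np.log10(residual)

print(round(icr_db(0.01, np.deg2rad(0.5)), 1))  # ~37.5 dB
```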
2014, 36(2): 364-370.
doi: 10.3724/SP.J.1146.2013.00928
Abstract:
The Energy Detector (ED) is the most common method for idle spectrum sensing in cognitive radio. However, its performance may suffer severely from Noise Power Uncertainty (NPU). In this paper, a low-complexity algorithm is proposed to estimate the NPU interval, and the SNR wall deterioration phenomenon under estimated noise power is analyzed theoretically; SNR wall deterioration theorems are obtained. In addition, a new ED algorithm based on a modified threshold is proposed to eliminate the SNR wall deterioration. Numerical simulation results show that the proposed algorithm estimates the NPU interval accurately and verify the correctness of the SNR wall deterioration theorems. Furthermore, both analytical and simulation results show that the proposed ED under NPU outperforms the ED based on the Robust Statistics Approach (RSA): the SNR wall deterioration is effectively reduced, improving the robustness of detection.
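To make the SNR wall concept concrete: if the noise power is only known to lie in [σ²/ρ, ρσ²] with ρ ≥ 1, the classic SNR wall for an energy detector is (ρ² − 1)/ρ; below that SNR no amount of averaging yields reliable detection. The sketch below uses a worst-case threshold against the largest possible noise power; the threshold margin is a hypothetical stand-in for the paper's modified threshold.

```python
import numpy as np

def snr_wall(rho):
    """Classic SNR wall for an energy detector whose noise power is only
    known to lie in [sigma2/rho, rho*sigma2] (rho >= 1)."""
    return (rho ** 2 - 1.0) / rho

def energy_detector(x, sigma2_nominal, rho, threshold_factor):
    """Declare 'signal present' if the average energy exceeds a threshold
    set against the worst-case (largest) noise power rho*sigma2.
    threshold_factor is an illustrative margin, not the paper's rule."""
    stat = np.mean(np.abs(x) ** 2)
    return stat > threshold_factor * rho * sigma2_nominal

rng = np.random.default_rng(1)
n = 10000
sigma2, rho = 1.0, 1.05                           # 5% noise power uncertainty
noise = rng.standard_normal(n) * np.sqrt(sigma2)
signal = rng.standard_normal(n) * np.sqrt(0.5)    # SNR = -3 dB, above the wall
print(round(snr_wall(rho), 4))                            # 0.0976 (about -10.1 dB)
print(energy_detector(noise, sigma2, rho, 1.01))          # False (noise only)
print(energy_detector(noise + signal, sigma2, rho, 1.01)) # True
```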
2014, 36(2): 371-376.
doi: 10.3724/SP.J.1146.2013.00653
Abstract:
Single Carrier Frequency Domain Equalization (SC-FDE) cognitive systems cannot make decisions by multi-objective optimization over a multipath channel. To solve this issue, a novel cognitive radio decision engine based on channel classification and Adaptive Modulation and Coding (AMC) is proposed. First, the engine classifies the channel to determine the current channel state. Then, the optimal Modulation and Coding Scheme (MCS) is selected from the MCS switching table according to the current channel state, and its Modulation and Coding Scheme Duration (MCSD) is calculated. Once the current MCS has lasted longer than its MCSD, the decision engine updates the optimal MCS. Simulation results show that the proposed decision engine provides the optimal transmission strategy and improves the spectral efficiency of SC-FDE cognitive systems, enabling them to adapt better to complex electromagnetic environments.
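The classify-then-look-up flow with an MCSD dwell can be sketched as below. The table entries and durations are purely illustrative (the paper's switching table is derived from channel statistics); what matters is that the engine keeps the current MCS until its MCSD expires and only then re-decides.

```python
class McsEngine:
    """Minimal sketch of the decision flow: classify channel, look up an
    MCS, and only re-decide after the current MCS has lasted its MCSD.
    Table values are illustrative, not the paper's."""
    TABLE = {"good": ("64QAM-3/4", 5), "fair": ("16QAM-1/2", 3), "poor": ("QPSK-1/3", 2)}

    def __init__(self):
        self.mcs, self.remaining = None, 0

    def step(self, channel_state):
        if self.remaining > 0:
            self.remaining -= 1                      # hold current MCS until MCSD expires
        else:
            self.mcs, self.remaining = self.TABLE[channel_state]
        return self.mcs

engine = McsEngine()
states = ["good", "poor", "poor", "poor", "poor", "poor", "poor"]
print([engine.step(s) for s in states])
# The engine holds 64QAM-3/4 for its MCSD before switching to QPSK-1/3.
```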
2014, 36(2): 377-383.
doi: 10.3724/SP.J.1146.2013.00528
Abstract:
To deal with the loss of entire B-frames in stereoscopic video coding with the Hierarchical B Pictures (HBP) prediction structure, this paper analyzes the motion vector correlations between frames of adjacent views in a two-view sequence and proposes a hierarchical Error Concealment (EC) algorithm. The algorithm has two features that distinguish it from popular methods. First, it applies a hierarchical concealment technique that uses different error concealment methods according to the importance level of the B-frames. Second, the motion vector correlations of macroblocks in the adjacent-view sequence are taken into account. Experiments show that the proposed method outperforms state-of-the-art EC algorithms used in H.264/MVC for entire-frame loss.
2014, 36(2): 384-389.
doi: 10.3724/SP.J.1146.2013.01143
Abstract:
When probabilistic packet marking is used for traceback and localization of malicious nodes in Wireless Sensor Networks (WSNs), the choice of marking probability is the key factor influencing the convergence, the weakest link, and the node burden of the algorithm. First, the disadvantages of the Basic Probabilistic Packet Marking (BPPM) algorithm and the Equal Probabilistic Packet Marking (EPPM) algorithm are analyzed. Then, a Layered Mixed Probabilistic Packet Marking (LMPPM) algorithm is proposed to overcome their defects. In the proposed algorithm the WSN is clustered, and each cluster is regarded as a large cluster node, so that the whole network consists of cluster nodes, each containing a certain number of sensor nodes. The EPPM algorithm is used between cluster nodes, and the BPPM algorithm is used within each cluster node. Experiments show that LMPPM is better than BPPM in convergence and in the weakest link, and that its node storage burden is lower than that of EPPM. The experiments confirm that the proposed algorithm is a global optimization under resource constraints.
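The "weakest link" of BPPM is easy to see in simulation: with overwrite marking at probability p, the mark from a node d hops from the sink survives with probability p(1−p)^(d−1), so distant nodes need far more packets to be observed. The sketch below simulates basic BPPM on a single hypothetical path (node names are illustrative).

```python
import random

def bppm_send(path, p, rng):
    """One packet traversing `path` (source ... last hop); each node
    overwrites the mark field with probability p (basic PPM)."""
    mark = None
    for node in path:
        if rng.random() < p:
            mark = node
    return mark

rng = random.Random(42)
path = ["n5", "n4", "n3", "n2", "n1"]   # n1 is the last hop before the sink
marks = [bppm_send(path, 0.2, rng) for _ in range(20000)]
counts = {n: marks.count(n) for n in path}
print(counts)  # n1 (closest) dominates; n5 (farthest) is the weakest link
```

LMPPM's layering attacks exactly this imbalance: equal-probability marking between cluster nodes keeps distant clusters visible, while basic marking inside a cluster keeps per-node storage low.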
2014, 36(2): 390-395.
doi: 10.3724/SP.J.1146.2012.01677
Abstract:
Buffer resource utilization in opportunistic networks can be improved by an efficient buffer management strategy. The delivery probability of a message is directly related to the necessity of forwarding and buffering it. An adaptive buffer management strategy with a message delivery probability estimation method is proposed. By establishing a node connection status analysis model, the diversity of node service ability is evaluated and the delivery probability of each message is estimated accordingly. Furthermore, the transmission and removal priorities can be determined reasonably to perform buffer management operations. Numerical results show that the overhead ratio is reduced by about 57% by the proposed strategy, while the delivery ratio is improved and the delivery latency is reduced effectively.
2014, 36(2): 396-402.
doi: 10.3724/SP.J.1146.2013.00367
Abstract:
A fault-tolerant virtual network mapping model based on OpenFlow networks is proposed and solved with an ant colony algorithm. For the virtual network fault recovery mechanism, a failure recovery algorithm named Priority_Diff that distinguishes user priorities is proposed, providing users with different network reliability levels: the failed link is replaced by a backup path for advanced users and remapped for low-level users. In addition, a failed Backup Link ReMapping (BLRM) algorithm is proposed, in which the backup resources on a failed link are migrated to adjacent links, improving the availability of the backup link. Finally, performance parameters including the virtual network failure repair ratio, the virtual network successful running ratio, and the working link resource utilization are evaluated by simulation experiments, and the results demonstrate the superiority of the proposed algorithms.
2014, 36(2): 403-408.
doi: 10.3724/SP.J.1146.2013.00731
Abstract:
This paper presents a clustering data gathering algorithm based on multiple cluster heads to enhance the reliability of data gathering and prolong the network lifetime. First, the network is divided into equal grids, and the nodes in the same grid form a cluster. Then, multiple cluster heads are selected in each grid according to the failure probability of the nodes, and the cluster heads in the same grid gather the data of the nodes in that grid cooperatively. In addition, the algorithm adopts several measures to reduce energy consumption. Simulation results show that, compared with related existing algorithms, the proposed algorithm achieves higher data gathering reliability and remarkably prolongs the network lifetime.
2014, 36(2): 409-414.
doi: 10.3724/SP.J.1146.2013.00609
Abstract:
A novel efficient geolocation method for spaceborne Synthetic Aperture Radar (SAR) based on recursion formulae is proposed in this paper. In this method, three groups of recursion formulae are used to compute the three-axis position increments between a position-unknown pixel and an adjacent position-determined pixel (the reference pixel), with the increments of height, slant range, and azimuth time between the two pixels as inputs. These increments are then added to the position of the reference pixel to obtain the position of the position-unknown pixel. Taking the height, slant range, and azimuth time respectively as variables, the recursion formulae are obtained by differentiating the geolocation equations. Consequently, the construction of a three-dimensional grid, coefficient fitting, and interpolation are avoided. Geolocation results on simulated and real data validate the accuracy and effectiveness of the method.
2014, 36(2): 415-421.
doi: 10.3724/SP.J.1146.2013.00479
Abstract:
To quantitatively analyze the effects of the Center-Beam Approximation (CBA) on MOtion COmpensation (MOCO) for airborne Interferometric SAR (InSAR), a mathematical model of the MOCO residual error under squint conditions is first established; the residual error takes a form similar to a slant range error. Then, the effects of the quadratic slant range error on InSAR are derived for a nonzero squint angle, and the accuracy of the theoretical derivation is verified with simulated data. Finally, the effects of the CBA on image quality and the coherence coefficient for airborne InSAR are discussed in detail for different bands, squint angles, trajectory deviations, topography variations, and slant ranges. This research provides technical support for estimating MOCO precision in the signal processing of airborne repeat-pass interferometric SAR.
2014, 36(2): 422-427.
doi: 10.3724/SP.J.1146.2013.00426
Abstract:
Accurate estimation of the clutter covariance matrix is the core issue of Space-Time Adaptive Processing (STAP). The sample covariance matrix estimator based on the maximum likelihood criterion is only applicable to homogeneous environments. To improve the estimation precision of the covariance matrix in heterogeneous environments, an iteratively weighted covariance matrix estimation method is proposed. The method determines the weighting factor of each sample according to the distance of its Generalized Inner Product (GIP) value from the statistical average, and further improves the estimation precision by building a probability histogram of the GIP and iterating. Simulation results show that the proposed method improves covariance matrix estimation performance in nonhomogeneous environments.
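The GIP of a training snapshot x_k is g_k = x_k^H R̂^{-1} x_k; snapshots whose GIP deviates strongly from the average are likely contaminated and should count less in the next covariance estimate. The sketch below uses a simple inverse-deviation weight in place of the paper's GIP probability histogram, so it illustrates the iterate-and-reweight loop rather than the exact method.

```python
import numpy as np

def gip_weighted_covariance(X, n_iter=3):
    """Iteratively down-weight training samples whose Generalized Inner
    Product deviates from the average GIP.  Sketch only: a simple
    inverse-deviation weight replaces the paper's GIP histogram.
    X: (N, K) -- K training snapshots of dimension N."""
    N, K = X.shape
    w = np.ones(K) / K
    for _ in range(n_iter):
        R = (X * w) @ X.conj().T + 1e-6 * np.eye(N)     # weighted covariance
        Rinv = np.linalg.inv(R)
        # g_k = x_k^H Rinv x_k for every snapshot at once
        gip = np.real(np.einsum('ik,ij,jk->k', X.conj(), Rinv, X))
        dev = np.abs(gip - gip.mean())
        w = 1.0 / (1.0 + dev)                           # far from average -> small weight
        w /= w.sum()
    return R, w

rng = np.random.default_rng(2)
N, K = 4, 40
X = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
X[:, 0] *= 10.0                       # one strong outlier (heterogeneous) snapshot
R, w = gip_weighted_covariance(X)
print(bool(w[0] < w[1:].mean()))      # outlier snapshot is down-weighted: True
```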
2014, 36(2): 428-434.
doi: 10.3724/SP.J.1146.2013.00500
Abstract:
The effect of waveform selection on compressive sensing MIMO radar imaging with a sparse model, and an improved calibration method in the presence of angle imperfections, are investigated in this paper. First, the relationship between the ambiguity function and the coherence coefficient of the Compressive Sensing (CS) dictionary is analyzed. Then, for small spatial angle errors, an improved method based on the Sparse Learning via Iterative Minimization (SLIM) algorithm is proposed to calibrate the angle errors. Simulation results illustrate that the imaging quality is enhanced when the selected waveforms have low sidelobes, and prove that the modified method calibrates angle errors effectively.
2014, 36(2): 435-440.
doi: 10.3724/SP.J.1146.2013.00475
Abstract:
In conventional task scheduling methods for phased array radar, where a dwell is indivisible, the waiting time between transmission and reception is not used, so the system scheduling capacity is restrained. Based on an analysis of dwell interleaving rules and rule selection guidelines, a dwell interleaving scheduling algorithm based on sampling period division is proposed, which solves the task conflict issue caused by different sampling periods. The scheduling flow, the interleaving flow, and the tactic for overload are also analyzed. Simulation results show that the proposed algorithm greatly improves scheduling capability and achieves better performance than conventional task scheduling algorithms.
2014, 36(2): 441-444.
doi: 10.3724/SP.J.1146.2013.00465
Abstract:
A Ground Moving Target Indication (GMTI) method for spaceborne multi-channel High Resolution Wide Swath (HRWS) Synthetic Aperture Radar (SAR) systems is presented. First, the method uses beamforming for clutter suppression and estimates the moving target direction by fitting the slant range of the moving target. Second, the clutter-suppressed data are focused to obtain ambiguous images of the moving target, and all ambiguous moving targets are obtained by Constant False Alarm Rate (CFAR) detection. Finally, the method identifies the real targets according to the spatial relationships of the ambiguous images and the estimated motion direction. Spaceborne HRWS simulation data verify the validity of the proposed method.
2014, 36(2): 445-452.
doi: 10.3724/SP.J.1146.2013.00596
Abstract:
To suppress range sidelobes and improve the anti-deception-jamming performance of conventional radar, a novel waveform transmission strategy using nearly orthogonal waveforms is proposed. Given a set of polyphase coded waveforms with good orthogonality and low range sidelobes, a randomly selected waveform is transmitted at each transmission. The receiver knows which waveform was chosen and can therefore match-filter the received signals. Finally, coherent accumulation is performed on the received signals over multiple adjacent transmissions. Both theoretical analysis and numerical results indicate that, because the range mainlobes accumulate coherently while the range sidelobes of different transmissions are approximately white, the peak sidelobe level is suppressed significantly after coherent accumulation.
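The mainlobe-coherent / sidelobe-incoherent effect can be demonstrated with random unit-modulus polyphase codes standing in for the paper's designed waveform set: over M pulses the zero-lag (mainlobe) outputs add to exactly M·L, while sidelobes at different transmissions add with random phases and grow only like √M, so the normalized peak sidelobe level drops by roughly √M.

```python
import numpy as np

rng = np.random.default_rng(3)
L, M = 64, 32          # code length, number of pulses

def matched_filter_output(code):
    """Zero-Doppler matched-filter output (autocorrelation) of one
    unit-modulus polyphase code."""
    return np.correlate(code, code, mode="full")

# Random polyphase codes stand in for the paper's designed waveform set.
codes = [np.exp(1j * rng.uniform(0, 2 * np.pi, L)) for _ in range(M)]
acc = sum(matched_filter_output(c) for c in codes)

mainlobe = np.abs(acc[L - 1])                 # zero lag: adds coherently to M*L
sidelobe = np.max(np.abs(np.delete(acc, L - 1)))
single_psl = np.max(np.abs(np.delete(matched_filter_output(codes[0]), L - 1))) / L
print(round(mainlobe), M * L)                 # 2048 2048
print(bool(sidelobe / mainlobe < single_psl)) # accumulated PSL beats a single pulse: True
```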
2014, 36(2): 453-458.
doi: 10.3724/SP.J.1146.2013.00624
Abstract:
The micro-Doppler effect induced by the rotating parts of a target provides a new approach for accurate automatic radar target recognition and has attracted great research attention in recent years. In this paper, taking a target with rotating parts as an example, a method applying Duffing oscillators to micro-motion feature extraction of radar targets under low Signal-to-Noise Ratio (SNR) is proposed. After multiplication with the conjugate of the reference signal, the echo of the rotating parts consists of several sinusoidal components; therefore, Duffing oscillators are used to detect these sinusoidal components. Then, the power of the range cells on the range-slow-time plane corresponding to the frequencies of the detected sinusoidal components is enhanced. Finally, the micro-motion feature of the radar target is obtained by the Hough transform. A computer simulation verifies the effectiveness of the proposed method.
2014, 36(2): 459-464.
doi: 10.3724/SP.J.1146.2013.00257
Abstract:
A polarization filtering scheme is employed to suppress interference from GSM base stations to radar in practice. Based on an analysis of the existing Auto Polarization Cancellation (APC) algorithm, a new filtering algorithm based on the optimal reception of partially polarized signals is proposed. The polarization Stokes vector of the GSM signal is estimated, and the optimal receive polarization is then calculated on the principle of minimum interference power. The effectiveness of the polarization filter is validated by an interference suppression experiment. Compared with the existing algorithm, the proposed method not only achieves good anti-interference performance but also needs less time for weight convergence and is more stable.
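The principle can be illustrated with the standard Stokes-vector formulation (sign conventions for s2/s3 vary; the ones below are one common choice, not necessarily the paper's): the power received with polarization state h (a unit Poincaré-sphere vector) is P = (s0 + s·h)/2, which is minimized by choosing h antipodal to the interference's polarized part, leaving half the unpolarized power. For fully polarized interference the residual is essentially zero.

```python
import numpy as np

def stokes(eh, ev):
    """Stokes vector of a (possibly partially polarized) field from samples
    of its horizontal/vertical components.  One common sign convention."""
    s0 = np.mean(np.abs(eh) ** 2 + np.abs(ev) ** 2)
    s1 = np.mean(np.abs(eh) ** 2 - np.abs(ev) ** 2)
    s2 = 2 * np.mean(np.real(eh * np.conj(ev)))
    s3 = 2 * np.mean(np.imag(eh * np.conj(ev)))
    return np.array([s0, s1, s2, s3])

def min_interference_power(s):
    """Residual power under optimal reception: the receive polarization is
    antipodal on the Poincare sphere to the interference's polarized part,
    so the residual is half the unpolarized power."""
    return 0.5 * (s[0] - np.linalg.norm(s[1:]))

# Fully polarized interference (fixed Jones vector times a random envelope)
rng = np.random.default_rng(4)
g = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)
eh, ev = g * 0.8, g * 0.6j
s = stokes(eh, ev)
print(round(abs(min_interference_power(s) / s[0]), 3))  # 0.0 -- almost fully cancelled
```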
2014, 36(2): 465-470.
doi: 10.3724/SP.J.1146.2013.01142
Abstract:
At present, the height of a ballistic missile warhead is measured by a radio fuze for burst height control, but the reentry environment places very demanding requirements on the fuze's measurement equipment, and the radio fuze is subject to jamming. For ballistic missiles adopting radar scene-matching terminal guidance, this paper puts forward a method for calculating the burst height based on the radar seeker: by fusing the height measurement data of the scene-matching radar seeker with the inertial navigation data, the height of the warhead is calculated in real time to realize burst height control, which ensures precision and improves anti-jamming capability. The validity of the method and the calculation precision of the burst height are demonstrated by simulation.
2014, 36(2): 471-475.
doi: 10.3724/SP.J.1146.2013.00499
Abstract:
An ultra-wideband planar antenna with a shallow cavity for the life detection radar is designed. The antenna is composed of a pair of semi-elliptical antenna arms and fed by a compact 1:2 unbalanced to balanced transmission line transformer. For the life detection radar working in pulse mode, large bandwidth and good time domain characteristics are key requirements to ensure the signal without distortion during the antenna design. In order to eliminate the reflections from the truncated edge on the antenna arm, one semicircle is digged out from the truncated edge, which improves effectively the absorption ability of the loaded resistors by forming two sharp tips at the end and enhancing the concentration of residual current on the antenna arm end. The simulated and measured results show that the designed antenna has good time domain characteristics and is suitable for the development and requirements of the life detection radar design.
2014, 36(2): 476-481.
doi: 10.3724/SP.J.1146.2013.00526
Abstract:
A dimensionality-reduction method based on a modified genetic algorithm is presented under the constraints of a fixed aperture, a fixed number of elements, and a minimum element spacing. To exploit the freedom of element placement effectively, the proposed method transforms the two-dimensional concentric-ring array positions into a one-dimensional linear array when the sparse array must satisfy multiple optimization constraints, and restores the concentric-ring array when evaluating its performance. The method greatly reduces the computation time and the complexity of the model. Moreover, because all elements are optimized jointly, the optimized design is improved. Simulation results demonstrate the effectiveness of the proposed method.
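The core idea, unrolling the two-dimensional ring layout into a one-dimensional sequence for the genetic operators and restoring coordinates only when the pattern is evaluated, can be sketched as below. The encoding and the ring parameters are hypothetical; the paper's actual chromosome representation is not given in the abstract.

```python
import math

# Hypothetical sketch of the ring-to-line mapping used for
# dimensionality reduction: element slots on concentric rings are
# unrolled into a flat gene list for GA operations, then restored
# to (x, y) coordinates when the array pattern is evaluated.

def rings_to_line(counts_per_ring):
    """Unroll ring occupancy into one flat list of (ring, slot) genes."""
    genes = []
    for ring, n in enumerate(counts_per_ring):
        genes.extend((ring, slot) for slot in range(n))
    return genes

def line_to_rings(genes, ring_radii, counts_per_ring):
    """Restore flat genes to 2-D element coordinates on the rings."""
    coords = []
    for ring, slot in genes:
        r = ring_radii[ring]
        phi = 2.0 * math.pi * slot / counts_per_ring[ring]
        coords.append((r * math.cos(phi), r * math.sin(phi)))
    return coords
```

Crossover and mutation then act on the flat gene list, so the GA never has to handle two-dimensional position constraints directly.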
2014, 36(2): 482-487.
doi: 10.3724/SP.J.1146.2013.00643
Abstract:
A dual band-notched UltraWide Band (UWB) antenna is presented to avoid interference from services that operate within the UWB band, such as WiMAX and WLAN applications. The dual band-notched characteristics are achieved by cutting an arc H-shaped slot in the radiating patch and etching a pair of L-shaped slots in the ground plane. The proposed antenna operates efficiently over the ultra-wide band (3.1~10.6 GHz), except for the 3.3~3.6 GHz WiMAX band and the 5.1~5.9 GHz WLAN band. A parametric study of the arc H-shaped slot shows that this structure allows the center frequency of the notched band to be tuned conveniently. Simulated and measured results show that the proposed antenna provides an excellent band-notched function in the rejection bands and nearly omnidirectional radiation patterns in the passband.
2014, 36(2): 488-492.
doi: 10.3724/SP.J.1146.2013.00634
Abstract:
The combination of business and Service Oriented Architectures (SOA) is attracting increasing attention from enterprise decision makers. The combination employs information technology to improve the business performance of enterprises. SOA possesses the features of reusability and loose coupling, which make business agile. However, when managers face technical problems such as how to select and deploy Web services, the enterprise's strategic objectives are often ignored. To solve this problem, and to make the related service QoS criteria reflect their relative importance more accurately, an improved analytic hierarchy process that balances technical and strategic decisions is proposed, which helps enterprise managers make better use of Web services to achieve the enterprise's strategic objectives.
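The weighting step of a standard analytic hierarchy process can be sketched as follows; the abstract does not detail the paper's improvement, so this shows only the classical geometric-mean approximation of the principal eigenvector, with an illustrative comparison matrix of three hypothetical QoS criteria.

```python
import math

# Hypothetical sketch of the classical AHP weighting step: derive
# priority weights for QoS criteria from a pairwise comparison matrix
# using the geometric-mean approximation of the principal eigenvector.

def ahp_weights(comparison):
    """Return normalized priority weights for an n x n AHP matrix."""
    n = len(comparison)
    geo_means = [math.prod(row) ** (1.0 / n) for row in comparison]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative matrix for three criteria (e.g. response time, cost,
# availability): entry [i][j] is how strongly criterion i is judged
# to outweigh criterion j on Saaty's 1-9 scale.
matrix = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(matrix)
```

Candidate Web services can then be ranked by the weighted sum of their normalized QoS scores; the paper's improvement presumably adjusts how these weights incorporate strategic objectives.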