2011 Vol. 33, No. 2
2011, 33(2): 255-259.
doi: 10.3724/SP.J.1146.2010.00398
Abstract:
Because of the best-effort nature of the Internet and the complexity of the access network structure, it is difficult to guarantee quality of service for long-range-dependent, real-time interactive streaming media in Next Generation Broadcasting (NGB) networks. Within a statistical-multiplexing QoS-guarantee policy framework, the feasibility of finite-timescale bandwidth provisioning for self-similar streaming media traffic is proved. Furthermore, based on the Poisson Pareto Burst Process traffic model, a bandwidth provisioning method is proposed that uses traffic roughness to characterize burstiness. Analysis shows that the self-similarity of the aggregated traffic is dominated by its sub-flows, so burstiness can be reduced by large-scale traffic aggregation, and finite-timescale bandwidth provisioning is critical for the deployment of NGB networks.
2011, 33(2): 260-265.
doi: 10.3724/SP.J.1146.2010.00438
Abstract:
Although network coding is an effective technique for improving the performance of multicast communication, encoding at intermediate nodes introduces additional overhead. To overcome this limitation, this paper proposes a network coding optimization model within the framework of algebraic network coding, together with an algorithm called MCN (Minimizing Coding Nodes) based on an improved genetic algorithm. MCN introduces several new operators into the simple genetic algorithm to avoid local optima and to reduce optimization time. Experimental results show that MCN is effective, runs faster, and produces network coding schemes that require fewer coding nodes. Moreover, when applied to practical networks, it guarantees the same throughput as traditional network coding while achieving much lower average delay and network overhead.
2011, 33(2): 266-271.
doi: 10.3724/SP.J.1146.2010.00348
Abstract:
Churn is one of the main problems restricting the development and deployment of Distributed Hash Table (DHT) networks. To address churn, this paper studies the quick-start bootstrapping mechanism of Kademlia and proposes a heuristic bootstrapping algorithm that overcomes the defects of the original. By changing the way the routing tables are populated, the heuristic algorithm decreases the number of messages sent by joining nodes. Theoretical analysis and simulation results show that the algorithm reduces the cost incurred by node joining and boosts the system's capability of recovering from churn.
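The routing-table mechanics behind this bootstrapping problem can be illustrated with a small sketch (not taken from the paper): Kademlia places each known peer into a k-bucket chosen by the XOR distance between node IDs, and a joining node must populate these buckets.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance metric between two node IDs."""
    return a ^ b

def bucket_index(local_id: int, peer_id: int) -> int:
    """Index of the k-bucket that peer_id falls into, seen from local_id."""
    d = xor_distance(local_id, peer_id)
    if d == 0:
        raise ValueError("a node does not store itself in a bucket")
    return d.bit_length() - 1

# A joining node inserts each learned peer into the bucket its distance selects.
print(bucket_index(0b1000, 0b1001))  # distance 1 -> bucket 0
print(bucket_index(0b1000, 0b0000))  # distance 8 -> bucket 3
```

The fewer lookup messages a joining node needs to fill these buckets, the cheaper the join, which is the cost the heuristic algorithm targets.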
2011, 33(2): 272-277.
doi: 10.3724/SP.J.1146.2010.00162
Abstract:
Hashing is widely adopted when handling a large number of IP flows: high throughput is achieved by minimizing the average number of memory accesses. This paper focuses on the lookup performance of the CAM (Content Addressable Memory) Aided Hash Table (CAHT). By rational approximation, the paper derives the lower bound on the average number of memory accesses per lookup for the CASHT; building on that analysis, it gives the condition under which the lower bound on the average number of memory accesses per lookup is attained for the CAMHT. Finally, simulations on real network data show consistency with the theoretical model, providing essential theoretical support for designing and evaluating hashing schemes in practical applications.
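As a toy illustration of the metric analyzed here (not the paper's CAHT model), the average number of memory accesses per successful lookup can be measured directly on a plain chained hash table, where each chain node touched counts as one access:

```python
import random

def average_accesses(num_keys: int, num_buckets: int) -> float:
    """Mean memory accesses per successful lookup in a chained hash table."""
    random.seed(0)
    buckets = [[] for _ in range(num_buckets)]
    keys = random.sample(range(10 * num_keys), num_keys)
    for k in keys:
        buckets[hash(k) % num_buckets].append(k)
    total = 0
    for k in keys:
        chain = buckets[hash(k) % num_buckets]
        total += chain.index(k) + 1  # nodes walked before k is found
    return total / num_keys

# At load factor 1 the expectation is roughly 1 + (load factor)/2.
avg = average_accesses(1000, 1000)
```

A CAM in front of such a table short-circuits the chain walk for the flows it holds, which is what pushes the average toward the lower bound the paper derives.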
2011, 33(2): 278-283.
doi: 10.3724/SP.J.1146.2010.00375
Abstract:
Text information hiding detection algorithms aim to distinguish normal text from stego-text. How to perform such detection collaboratively and securely remains an open problem. To this end, a privacy-preserving text information hiding detection algorithm based on a homomorphic cryptosystem is proposed. The new algorithm achieves secure two-party collaborative detection: the party holding the private parameters of the detection algorithm and the party holding a private text to be examined can cooperatively distinguish stego-text from normal text while disclosing no private information. It is shown that the privacy-preserving algorithm remains secure even when the two parties cooperate many times. The communication overhead and computational complexity of the algorithm are O(m^2), where m is the number of words in the dictionary used by the detection algorithm. Experimental results show the algorithm is efficient.
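The additive homomorphism this kind of protocol relies on can be shown with a deliberately toy, insecure encoding (not the paper's cryptosystem): multiplying two "ciphertexts" adds the underlying plaintexts, which is the property schemes such as Paillier provide securely.

```python
# Toy parameters chosen for the demo only; no security is claimed.
p = 2**61 - 1  # a Mersenne prime
g = 3

def enc(m: int) -> int:
    """Insecure additively homomorphic encoding: E(m) = g^m mod p."""
    return pow(g, m, p)

a, b = 17, 25
assert enc(a) * enc(b) % p == enc(a + b)  # E(a)*E(b) = E(a+b)
```

In the protocol, this property lets one party operate on the other's encrypted detection parameters without ever seeing them in the clear.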
2011, 33(2): 284-288.
doi: 10.3724/SP.J.1146.2010.00470
Abstract:
The extended algebraic immunity of Boolean functions is investigated in this paper. Firstly, a sufficient and necessary condition is presented for the algebraic immunity of a Boolean function to equal its extended algebraic immunity. Secondly, it is proved that two classes of Boolean functions with maximum algebraic immunity also have optimal extended algebraic immunity. Finally, the structure of the annihilators of Boolean functions under the algebraic complement is analyzed.
2011, 33(2): 289-294.
doi: 10.3724/SP.J.1146.2010.00301
Abstract:
The distortion metrics Sum of Squared Error (SSE) and Sum of Absolute Differences (SAD) adopted for Rate-Distortion Optimization (RDO) in the H.264 reference software have been shown to correlate poorly with the Human Visual System (HVS). Based on JM16.2, this paper proposes combining SSE and Structural SIMilarity (SSIM) as the distortion metric in RDO (CSSRDO) instead of SSE alone. First, the algorithm establishes an appropriate relationship between SSIM and rate. Then, starting from the RDO function in which distortion is measured by SSE and taking the HVS into account, an RDO scheme based on the combined SSE and SSIM distortion metric is derived. Furthermore, CSSRDO is integrated into JM16.2 for intra-mode selection. Results show that the proposed algorithm achieves better coding efficiency and image quality.
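The shape of such a combined cost can be sketched as follows; the mixing weight alpha and the scaling of the SSIM term are hypothetical choices for illustration, not the CSSRDO formulation:

```python
def sse(a, b):
    """Sum of squared errors between two pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def rd_cost(block, recon, bits, lam, ssim, alpha=0.5):
    """J = D + lambda*R, with D mixing SSE and an SSIM-derived term."""
    distortion = alpha * sse(block, recon) + (1 - alpha) * (1 - ssim) * len(block)
    return distortion + lam * bits

# Perfect reconstruction (SSIM = 1) leaves only the rate term.
cost = rd_cost([1, 2], [1, 2], bits=10, lam=0.5, ssim=1.0)
```

Mode selection then picks the candidate with the smallest J, exactly as with a pure-SSE cost, but with distortion that tracks perceived quality more closely.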
2011, 33(2): 295-299.
doi: 10.3724/SP.J.1146.2010.00320
Abstract:
Precise ranging is a core technology of autonomous navigation. Since code measurements yield noisy but unambiguous pseudorange estimates, while pseudoranges obtained from carrier phase measurements are almost noiseless but affected by integer ambiguity, a Fading Memory Gaussian Sum Filter (FMGSF) based on the Bayesian recursive relations is proposed. The posterior probability density is approximated by a finite Gaussian mixture whose means and variances are updated according to the Kalman filter equations. A fading memory factor is introduced to avoid filter divergence due to modeling errors, and resampling is performed to control the growth in computational complexity caused by carrier phase measurement uncertainty. Theoretical analysis and simulation results show that the algorithm can overcome the effect of cycle slips to a certain extent and achieve higher ranging accuracy.
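A single predict/update step with a fading-memory factor can be sketched in the scalar case (identity dynamics assumed; this is a generic fading-memory Kalman step, not the full FMGSF):

```python
def kalman_step(x, p, z, q, r, s=1.05):
    """x, p: prior state/variance; z: measurement; q, r: noise variances."""
    # Predict (identity dynamics assumed), inflating the covariance by the
    # fading-memory factor s >= 1 so older data is progressively discounted.
    p_pred = s * (p + q)
    # Update with the standard Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.0, 1.1, 0.9, 1.0]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
```

In the FMGSF each Gaussian term of the mixture carries its own mean and variance through a step of this form, and the mixture weights are updated by the measurement likelihoods.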
2011, 33(2): 300-303.
doi: 10.3724/SP.J.1146.2010.00346
Abstract:
This paper analyzes the performance of fine time synchronization using the cross-correlation method and gives the relationship between its performance and the carrier frequency offset and signal-to-noise ratio. It then proposes a new fine time synchronization method that is less affected by frequency offset, and gives the upper limit on the synchronization sequence length for a given frequency offset. In the new method, the synchronization sequence is split into several segments, each of which is correlated with the received signal separately; the delayed correlation values of the segments are summed to form the synchronization metric. Simulation results demonstrate that the performance of the new method is robust against frequency offset.
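The segmented correlation idea can be sketched directly (real-valued toy signal; the actual method operates on complex baseband samples, where a frequency offset rotates the phase less over a short segment than over the whole sequence):

```python
def segment_metric(rx, seq, offset, num_segments):
    """Sum of per-segment correlation magnitudes at a candidate offset."""
    seg_len = len(seq) // num_segments
    total = 0.0
    for s in range(num_segments):
        lo = s * seg_len
        c = sum(rx[offset + lo + i] * seq[lo + i] for i in range(seg_len))
        total += abs(c)  # magnitudes add even if segment phases differ
    return total

seq = [1, -1, 1, 1]
rx = [0, 0] + seq + [0]          # sequence embedded at offset 2
on_peak = segment_metric(rx, seq, offset=2, num_segments=2)
off_peak = segment_metric(rx, seq, offset=0, num_segments=2)
```

The timing estimate is the offset that maximizes this metric; summing magnitudes per segment is what removes most of the sensitivity to the offset-induced phase drift.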
2011, 33(2): 304-308.
doi: 10.3724/SP.J.1146.2010.00028
Abstract:
The insufficient efficiency and accuracy of existing methods for estimating the characteristic polynomial of an m-sequence under high-error conditions are studied. An equivalence between m-sequences and BCH codes is derived by studying their generation principles, and a new estimation algorithm for the characteristic polynomial of an m-sequence is then proposed. By constructing equivalent BCH codes, the characteristic polynomial is estimated by exploiting their good error-correction performance under high-error conditions. Simulation results show that the algorithm solves the estimation problem under error conditions, and its running speed is generally acceptable for the analysis of m-sequences of order lower than 20 in signal processing.
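An m-sequence itself is easy to generate for experimentation; the sketch below uses a degree-4 primitive polynomial as a standard textbook example (not a polynomial from the paper), giving the maximal period 2^4 - 1 = 15:

```python
def lfsr_sequence(taps, state, length):
    """Fibonacci LFSR: taps are fed-back bit positions, state a nonzero seed."""
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Primitive degree-4 feedback (taps at positions 0 and 3) -> period 15.
seq = lfsr_sequence(taps=[0, 3], state=[1, 0, 0, 0], length=30)
```

One period of such a maximal-length sequence contains 2^(n-1) ones, a balance property that also makes it a useful sanity check on generated sequences.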
2011, 33(2): 309-314.
doi: 10.3724/SP.J.1146.2010.00257
Abstract:
To reduce the decoding complexity of nonbinary Low-Density Parity-Check (LDPC) codes, a weighted symbol-flipping decoding algorithm based on a new criterion is proposed. In this algorithm, the symbol to flip is chosen according to a symbol-flipping function and the reliabilities of the received bits. The decoding procedure can be stopped early by analyzing the trend of the number of unsatisfied checks. Simulation results show that, compared with the symbol-flipping decoding algorithm, the new algorithm substantially reduces the average number of required iterations with negligible performance degradation, achieving an appealing tradeoff between performance and complexity.
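For intuition, a Gallager-style bit-flipping decoder for a binary toy code (an analogy only; the paper's algorithm is nonbinary and weighted) shows both the flipping criterion and the early stop once all checks are satisfied:

```python
H = [  # parity-check matrix of an assumed toy length-6 code
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    word = list(word)
    n = len(word)
    for _ in range(max_iters):
        syndrome = [sum(h[j] * word[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            break  # all checks satisfied: stop early
        # count unsatisfied checks touching each bit, flip the worst offender
        votes = [sum(s for h, s in zip(H, syndrome) if h[j]) for j in range(n)]
        word[votes.index(max(votes))] ^= 1
    return word
```

The weighted nonbinary variant replaces the vote count with a flipping function that also folds in received-bit reliabilities, and stops early when the unsatisfied-check count stops improving.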
2011, 33(2): 315-320.
doi: 10.3724/SP.J.1146.2010.00921
Abstract:
A novel algorithm based on a Complex Discrete Hopfield Neural Network (CDHNN) is proposed for blind detection of multi-valued QAM signals. A multi-valued discrete activation function is constructed for both the real and imaginary parts of the CDHNN. The limitation of the energy function of the classic binary-valued discrete Hopfield network is analyzed, and a new energy function for the CDHNN is constructed; the stability of the multi-valued CDHNN is also proved. When the weight matrix of the CDHNN is constructed from the complementary projection operator of the received signals, the quadratic optimization problem with integer constraints can be successfully solved by the CDHNN, and the QAM signals are blindly detected. Simulation results show that the algorithm reaches the true equilibrium points with shorter received-signal records and detects multi-valued signals blindly at high speed.
2011, 33(2): 321-325.
doi: 10.3724/SP.J.1146.2010.00418
Abstract:
For the problem of estimating the source number and Directions Of Arrival (DOAs) of weak signals in the presence of strong signals, a new method based on eigen-beamforming is proposed. First, the eigenvectors of the spatial covariance matrix are sorted in descending order of their eigenvalues. Second, the spatial spectrum of each eigen-beam is estimated in turn by the proposed equation. Finally, the presence of a signal source is judged by the ratio of the maximum value of the spatial spectrum to the mean value of the side-lobe peaks, and the DOA of the signal is given by the angle corresponding to that maximum. Requiring neither precise prior DOA knowledge nor iterative steps, the proposed method offers simpler computation and higher accuracy than RELAX and JJM. Its effectiveness and superiority are demonstrated by simulation results and measured data.
2011, 33(2): 326-331.
doi: 10.3724/SP.J.1146.2010.00305
Abstract:
Compressed Sensing (CS) has attracted a great deal of attention worldwide in recent years. A basic requirement of CS is that a signal be sparse or sparsely representable in some orthogonal basis. Based on the Peak Transform (PT), a new algorithm called PTCS is proposed for signals (such as the Linear Frequency Modulated signal) that are neither sparse themselves nor sparsely representable by the wavelet transform. For the peak sequence produced by the Peak Transform, the value-expansion approach of reversible watermarking is exploited so that the peak sequence can be embedded into the measurements of the signal, avoiding the transmission of additional points. The Peak Transform converts non-sparse wavelet coefficients into sparse ones, which greatly improves CS reconstruction. Simulation results show that, compared with the original CS algorithm, the proposed PTCS algorithm significantly improves the reconstruction quality of signals.
2011, 33(2): 332-336.
doi: 10.3724/SP.J.1146.2010.00472
Abstract:
A blind modulation recognition algorithm for MQAM signals is proposed. The method first estimates the carrier frequency and bandwidth of the signal from its spectrum to complete the down-conversion and low-pass filtering. The baud rate is then estimated from the spectrum of the squared envelope, and baud-rate sampling is performed according to the symbol timing. Finally, the modulation type is identified by comparing the variance of the sequences in the minimum ring of the vector diagram. The method requires no prior information about the carrier frequency or baud rate, is insensitive to carrier offset, and involves no complex iterative process, making it suitable for practical signal recognition. Computer simulations show that the correct recognition probability exceeds 99% when the SNR is greater than 16 dB and 2400 symbols are used, verifying the validity of the algorithm.
2011, 33(2): 337-341.
doi: 10.3724/SP.J.1146.2010.00212
Abstract:
NMF (Non-negative Matrix Factorization)-based image hashing is robust to common image operations (such as lossy compression, low-pass filtering, and resolution scaling) but sensitive to rotation. After carefully investigating the blocking strategy of the original NMF-based scheme, a rotation-resilient image hashing algorithm is proposed. The proposed algorithm reduces the undesirable effect of image rotation by constraining the blocking range and adopting an appropriate block size, and thus provides better robustness to rotation. Experimental results demonstrate that the proposed hashing algorithm is satisfactorily robust to image rotation while retaining its performance against common image processing operations.
2011, 33(2): 342-346.
doi: 10.3724/SP.J.1146.2010.00335
Abstract:
An accelerated Landweber iterative thresholding algorithm in the wavelet domain is proposed by using a quadratic approximation of the fidelity term. In this algorithm, each iterate depends on a linear combination of the previous two iterates. Compared with the standard iterative thresholding algorithm, the proposed algorithm is more effective and more flexible owing to the choice of parameters. Numerical experiments show that the algorithm effectively improves the quality of the restored image.
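The two-iterate combination can be sketched on a toy 1-D problem, min (1/2)||x - y||^2 + lam*||x||_1, whose exact minimizer is the soft-threshold of y; the momentum schedule below is the common FISTA-style choice, used here as an assumed stand-in for the paper's parameters:

```python
def soft(v, t):
    """Component-wise soft thresholding."""
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

def accelerated_ist(y, lam, step=0.5, iters=200):
    n = len(y)
    x_prev = [0.0] * n
    x = [0.0] * n
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        beta = (t - 1.0) / t_next
        # each iterate extrapolates from the previous two iterates
        v = [x[i] + beta * (x[i] - x_prev[i]) for i in range(n)]
        # gradient step on the quadratic fidelity term, then shrink
        g = [v[i] - step * (v[i] - y[i]) for i in range(n)]
        x_prev, x = x, soft(g, step * lam)
        t = t_next
    return x

y = [3.0, -0.2, 1.5]
x = accelerated_ist(y, lam=1.0)  # minimizer is soft(y, 1.0) = [2.0, 0.0, 0.5]
```

In the image-restoration setting the gradient step involves the blurring operator and its adjoint rather than this trivial identity, but the extrapolation over the previous two iterates is the same acceleration mechanism.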
Spatial Semantic Objects-based Hybrid Learning Method for Automatic Complicated Scene Classification
2011, 33(2): 347-354.
doi: 10.3724/SP.J.1146.2010.00361
Abstract:
Scene image classification refers to the task of grouping images into semantic categories. A new hybrid learning method based on spatial semantic objects is proposed to overcome the disadvantages of most related methods. The method applies a generative model to the objects obtained by multi-scale segmentation rather than to the whole image, and computes various visual features to mine the category information of each object. An intermediate vector is then generated using a spatial-pyramid matching algorithm to describe both the layered data and the semantic information, narrowing the semantic gap. The method also incorporates a discriminative learning procedure to train a more confident classifier. Experimental results demonstrate that the proposed method achieves high training efficiency and classification accuracy in interpreting diverse and complicated images.
2011, 33(2): 355-362.
doi: 10.3724/SP.J.1146.2010.00171
Abstract:
This paper focuses on generating wide-swath, high azimuth-resolution images with a constellation of micro-satellites while accounting for the effects of elliptical satellite orbits and earth rotation. A coordinate equivalent-transformation method is proposed to overcome the geometric complexity arising from the elliptical orbit and earth rotation, comprising two parts. The first is the transformation from the earth inertial coordinate frame to the earth-fixed frame, in which the illuminated swath is stationary and the satellite location vectors rotate equivalently. The second is the separation of the whole aperture into several sub-apertures and the construction of relative coordinate frames in which parallel tracks with constant baselines are obtained; the approximation error involved is analyzed numerically. Doppler ambiguity is suppressed in each sub-aperture frame. By assembling the sub-apertures, a conventional algorithm can be applied to focus the wide-swath, high-resolution image. Taking CARTWHEEL as an example, numerical simulation results confirm the validity of the method.
2011, 33(2): 363-368.
doi: 10.3724/SP.J.1146.2010.00342
Abstract:
A novel segmentation method for SAR images based on image diffusion equations and Markov Random Fields (MRF) is proposed. The method adds image diffusion to the traditional MRF method to smooth the noise components in the SAR image, protect the desired object boundaries, and reduce the number of iterations. The input SAR image is initially smoothed, estimates of the posterior probabilities are then obtained using MRF, and the posterior probabilities are also smoothed by diffusion. Compared with the traditional MRF method, this method removes inaccurately classified blocks and reduces the time cost.
2011, 33(2): 369-374.
doi: 10.3724/SP.J.1146.2010.00440
Abstract:
A two-dimensional phase unwrapping algorithm combining quality guidance and minimum discontinuity is proposed for InSAR. The wrapped phase is partitioned into high and low quality areas according to its quality map, and the quality-guided algorithm is used to retrieve the phase of the high quality areas. The unwrapped high quality areas are then treated as abstract phase points, and the minimum discontinuity algorithm is performed on the interior of the low quality areas and the abstract phase points to retrieve the final unwrapped phase. Experimental results on real InSAR data show that the proposed approach overcomes the error-spreading drawbacks of both the quality-guided and minimum discontinuity algorithms, keeps the accuracy of the unwrapped phase in the high quality areas, and improves efficiency.
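As a minimal, hypothetical illustration of the wrapping problem this abstract addresses (one-dimensional only; the paper's quality-guided/minimum-discontinuity method is two-dimensional and is not reproduced here), the sketch below wraps a phase ramp into (-pi, pi] and recovers it by integrating re-wrapped increments:

```python
import numpy as np

# Hypothetical 1-D example: a linear phase ramp wrapped into (-pi, pi].
true_phase = np.linspace(0.0, 12.0, 200)          # monotonically increasing phase
wrapped = np.angle(np.exp(1j * true_phase))       # wrap into (-pi, pi]

# Itoh's condition: when neighboring samples differ by less than pi,
# re-wrapping the first differences and integrating recovers the phase.
diffs = np.diff(wrapped)
diffs = (diffs + np.pi) % (2 * np.pi) - np.pi     # re-wrap the increments
unwrapped = np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(diffs)))
```

In two dimensions this simple integration fails wherever noise makes the path-dependence of the integral nonzero, which is exactly why quality maps and discontinuity minimization are needed.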
2011, 33(2): 375-380.
doi: 10.3724/SP.J.1146.2010.00430
Abstract:
Because it is small, lightweight, inexpensive, and offers high resolution, FMCW-SAR has recently developed rapidly and has great prospects in cost-effective civil applications. The system transfer function of stripmap FMCW-SAR differs from that of pulsed SAR, so efficient simulation approaches for pulsed SAR cannot be applied directly to FMCW-SAR. Based on an analysis of the stripmap FMCW-SAR signal characteristics, a 2-D Fourier domain algorithm for raw signal simulation of stripmap FMCW-SAR is proposed. The proposed algorithm uses the FFT to compute efficiently the range and azimuth integration used in the time-domain simulation method. Compared with time-domain simulation, the computational load is reduced by a factor on the order of O(NaNr/log2(NaNr)). The rationality and effectiveness of the presented algorithm are verified by simulation.
2011, 33(2): 381-387.
doi: 10.3724/SP.J.1146.2010.00185
Abstract:
PS (Permanent Scatterer) selection methods mostly use the statistical characteristics of points as criteria, which ignores the temporal information of the data sets, so some good permanent scatterers may be discarded. This paper focuses on quasi permanent scatterers, which remain stable during the whole monitoring period except for a short-time disturbance. After introducing the concept of quasi permanent scatterers, their behavior is simulated and a method for selecting them, called the PS temporal selection method, is formed. The method is then applied to Envisat ASAR data of the Tianjin area. With the help of quasi permanent scatterers, the number of PS points increases and their distribution becomes more even, while high coherence is maintained.
2011, 33(2): 388-394.
doi: 10.3724/SP.J.1146.2010.00236
Abstract:
The chaotic FM signal has ideal auto-correlation performance and, like random signals, good Electronic Counter-Counter Measure (ECCM) capabilities. In this paper, a chaotic FM signal generated by the Bernoulli map is used for ultra-wideband through-the-wall imaging, and the signal model is built. The detection capability, resolution, and anti-multipath interference performance of the chaotic FM radar system are analyzed and compared with those of an LFM radar system. Simulation results show that the chaotic FM signal outperforms the LFM signal in target detection, resolution, and anti-multipath interference when applied to a through-the-wall radar system.
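A hedged sketch of the idea, not the paper's waveform: the Bernoulli map drives the instantaneous frequency of an FM carrier, and the resulting waveform shows the sharp zero-lag autocorrelation peak the abstract refers to. The sample rate, carrier, deviation, and the map parameter a = 1.9999 (slightly below 2, to avoid the finite-precision collapse of the exact doubling map) are illustrative assumptions:

```python
import numpy as np

def bernoulli_map(x0, n, a=1.9999):
    """Iterate x_{k+1} = (a * x_k) mod 1.  Using a slightly below 2 keeps
    the orbit chaotic in floating point (an assumption of this sketch)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = (a * x[k - 1]) % 1.0
    return x

fs, fc, df = 1000.0, 100.0, 40.0          # illustrative sample rate, carrier, deviation (Hz)
m = bernoulli_map(0.37, 2048)
inst_freq = fc + df * (2.0 * m - 1.0)     # map [0,1) chaos onto the frequency deviation
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
s = np.cos(phase)

# Normalized autocorrelation: chaotic FM gives a sharp peak at lag 0.
r = np.correlate(s, s, mode="full") / np.dot(s, s)
lag0 = len(s) - 1
```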
2011, 33(2): 395-400.
doi: 10.3724/SP.J.1146.2010.00023
Abstract:
A new CFAR target detection method for polarimetric SAR images is proposed. Firstly, the statistical distribution of the PMF (Polarimetric Matched Filter) metric, denoted the P-G0 distribution, is derived from the product model combined with the inverse Gamma distribution, which is effective for modeling clutter of diverse homogeneity. Further, a parameter estimator for this distribution is derived using logarithmic cumulants based on the Mellin transform, which ensures exact modeling of the PMF metric. Finally, the formula for the CFAR detection threshold is deduced and a new CFAR detection method is designed. Experimental results using RADARSAT-2 polarimetric SAR data demonstrate that the P-G0 distribution fits terrain data effectively and that the proposed method achieves accurate, automatic target detection in clutter of diverse homogeneity.
2011, 33(2): 401-406.
doi: 10.3724/SP.J.1146.2009.01409
Abstract:
The multiple-aperture sub-band synthesis technique can be realized for Multiple-Input Multiple-Output (MIMO) SAR at the current state of the art. The frequency sub-band synthesis technique is studied and a mathematical model is proposed. Through mathematical derivation, the principle and implementation steps are presented in detail, and the method for obtaining the filter and the imaging algorithm are proposed. Computer simulations show the effectiveness and feasibility of the method.
2011, 33(2): 407-411.
doi: 10.3724/SP.J.1146.2010.00414
Abstract:
Increasing the integration time is the main approach to improving the performance of digital-TV-based passive radar, but the range and Doppler migration caused by target velocity and acceleration restrict the coherent integration time. This paper presents the echo signal model of uniformly accelerating targets for passive radar, analyzes the influence of velocity and acceleration, and then proposes a new migration compensation algorithm based on envelope interpolation and the fractional Fourier transform for digital-TV-based passive radar. Simulation results show that the proposed algorithm efficiently compensates the range and Doppler migration caused by long integration times and moving targets, and the coherent integration gain is improved, so the constraint on integration time imposed by velocity and acceleration is removed.
2011, 33(2): 412-417.
doi: 10.3724/SP.J.1146.2010.00331
Abstract:
Polarimetric errors cause errors in optimal interferometric phase estimation. In this paper, the propagation of polarimetric errors in Polarimetric Interferometric SAR (PolInSAR) measurement is studied, and a new polarimetric error transfer model is proposed based on polarimetric interferometric coherence optimization. The model is validated through simulation experiments, and the impact of polarimetric errors on the precision of optimal interferometric phase estimation is analyzed based on the model. The conclusions provide theoretical support and a reference for precision analysis and calibration requirements in applications.
2011, 33(2): 418-423.
doi: 10.3724/SP.J.1146.2010.00380
Abstract:
A novel pseudo-random multi-phase code Continuous Wave (CW) radar using Compressive Sensing (CS) is presented, exploiting the sparsity of the radar target space. This paper establishes a target information sensing model. Compressive sensing is employed to sample the target echo below the Nyquist sampling rate, and the information of the target scene is then effectively extracted from a few samples in the presence of noise. To improve the effectiveness of target information extraction, the waveform is optimized using a Simulated Annealing (SA) algorithm. Simulation results demonstrate the merits of the proposed approach.
2011, 33(2): 424-428.
doi: 10.3724/SP.J.1146.2010.00356
Abstract:
Sounding depth is one of the most important parameters of GPR (Ground Penetrating Radar), and how to extend the detectable range at a given resolution is a major concern in the GPR field worldwide. The application of Pseudo-Random Binary Sequences (PRBS) to GPR is studied in this paper. The advantages of PRBS and the demands of GPR are analyzed, and a GPR experimental system adopting an m-sequence is described. Experimental data from this system and the theoretical analysis are presented, and the experimental results coincide well with the theory. Analysis and experiment prove that this PRBS GPR achieves a deeper detectable range than impulse GPR under the same parameters, and that it can meet the deep-sounding demands of archaeology and geology.
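The appeal of m-sequences for GPR is their two-valued periodic autocorrelation, which concentrates energy at zero lag without a high peak transmit power. A small self-contained sketch (the degree-5 polynomial and tap convention are illustrative choices, not the paper's system parameters):

```python
import numpy as np

def m_sequence(taps, state):
    """Generate one period of an m-sequence from a Fibonacci LFSR.
    `taps` are 1-indexed feedback positions of a primitive polynomial;
    the period is 2**len(state) - 1."""
    n = len(state)
    state = list(state)
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

# Degree-5 primitive polynomial x^5 + x^2 + 1 -> taps (5, 2), period 31.
seq = m_sequence((5, 2), [1, 0, 0, 0, 0])
chips = 1 - 2 * seq                       # map {0,1} -> {+1,-1}

# Periodic autocorrelation of an m-sequence: N at zero lag, -1 elsewhere.
N = len(chips)
acf = np.array([np.dot(chips, np.roll(chips, k)) for k in range(N)])
```

The flat -1 sidelobe level is what lets a PRBS radar trade peak power for integration time while keeping range resolution.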
2011, 33(2): 429-434.
doi: 10.3724/SP.J.1146.2010.00328
Abstract:
An improved Shuffled Frog Leaping Algorithm (SFLA) based on real-coded patterns is proposed to solve the Capacitated Vehicle Routing Problem (CVRP). It is then combined with power-law Extremal Optimization (τ-EO) to further improve the local search ability. The fitness of the components of an individual is carefully designed, and the neighborhood for τ-EO mutation is established according to a power-law probability distribution. Experimental results show that the proposed algorithm outperforms other heuristic algorithms based on PSO and GA.
2011, 33(2): 435-441.
doi: 10.3724/SP.J.1146.2010.00217
Abstract:
A new incremental knowledge acquisition method based on granular computing theory is proposed. First, an original knowledge granule tree is established according to the decision-making information system. Then, for any new additional data, the matched knowledge granule in the original knowledge granule tree is found, and the tree is updated according to the corresponding decision-making value. The new method is an efficient tool for processing dynamic data. Both algorithm analysis and experimental results show that the new method is superior to RGAGC and ID4, respectively, for processing dynamic information systems and acquiring the corresponding rules.
2011, 33(2): 442-447.
doi: 10.3724/SP.J.1146.2010.00166
Abstract:
Watermark robustness to geometric attacks is still a challenging issue. A blind watermarking scheme resistant to geometric attacks is proposed that constructs watermark synchronization information using the Directionlet transform. Firstly, an edge detection operator is used to extract image edges; then, the slopes of these edges are computed according to the Lagrange theorem; finally, the direction vectors of two chosen edges constitute the generator matrix of the Directionlet transform. The watermark is adaptively embedded into the Directionlet coefficients of the selected cosets. It is the Directionlet matrix formed from image edges that eliminates the effect of geometric attacks. Experimental results show that the proposed watermarking algorithm is robust, especially against geometric attacks such as rotation.
2011, 33(2): 448-454.
doi: 10.3724/SP.J.1146.2010.00294
Abstract:
Current implementations of color-based particle filters face problems of tracking accuracy and real-time processing speed. This paper presents a modified color-based particle filter which, building on the traditional SR (systematic resampling) algorithm, scatters around the target the leftover particles caused by the hardware circuits. Results show that the proposed particle filter improves the accuracy and robustness of object tracking. The architecture of its full hardware implementation is also described. An experimental study on FPGA indicates that the proposed color-based particle filter performs robust tracking at 72 FPS with a hardware cost of 7387 LEs (Logic Elements). In addition, a framework for scalable distributed particle filters and its resampling scheme are presented to adapt to more complex scenarios, carrying out multi-feature and multi-target tracking.
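For reference, a minimal software sketch of systematic resampling, the SR baseline the modified filter builds on (this is the generic textbook algorithm, not the paper's hardware-adapted variant; the example weights are arbitrary):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform draw generates N evenly
    spaced pointers into the CDF of the normalized weights, so each
    particle i is copied either floor(N*w_i) or ceil(N*w_i) times."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cdf = np.cumsum(weights) / np.sum(weights)
    cdf[-1] = 1.0                      # guard against floating-point round-off
    return np.searchsorted(cdf, positions)

rng = np.random.default_rng(0)
w = np.array([0.05, 0.05, 0.8, 0.05, 0.05])
idx = systematic_resample(w, rng)      # indices of the surviving particles
```

Its single random draw and sequential CDF scan are what make SR attractive for FPGA pipelines in the first place.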
2011, 33(2): 455-460.
doi: 10.3724/SP.J.1146.2010.00249
Abstract:
Conductivity and dielectric attenuation in a millimeter-wave TWT helical Slow-Wave Structure (SWS) are analyzed with a stratified-dielectric tape helix model and a 3D electromagnetic model. In the tape model, the imaginary part of the complex propagation constant is treated as the dielectric attenuation, and the conductivity losses are obtained from the discontinuous surface current on the metal helix and envelope. In the 3D electromagnetic model, the RF losses of the SWS are derived from the quality factor and the stored energy in a periodic structure with finite conductivity of the helix and envelope and the loss tangent of the support rods. An analysis of a Ka-band helical SWS shows that the conductivity loss of the helix and the dielectric attenuation of the support rods are greater than the conductivity loss of the envelope; the dielectric attenuation is linear in the ceramic loss tangent and cannot be neglected in the millimeter wave band.
2011, 33(2): 461-465.
doi: 10.3724/SP.J.1146.2010.00400
Abstract:
A new Multiple Description Coding method based on Interleaving Extraction and Block Compressive Sensing (IEBCS-MDC), which can be performed in real time during the imaging process, is presented. The method first partitions an image into several sub-images by interleaving extraction, then measures each sub-image with block compressive sensing to form multiple descriptions. At the decoder, the method reconstructs the original image by solving an optimization problem. The block strategy ensures that the complexity of the measurement process does not grow with image size, so the method is simple, easy to implement, and suitable for high-resolution images, and its self-recovery capability enhances robustness against packet loss. Experimental results show that, compared with CS-MDC, the proposed method can handle much larger images in the same hardware environment, and its reconstruction quality is better than that of CS-MDC at the same packet loss probability.
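The interleaving-extraction step alone is easy to picture: each description keeps a polyphase subsample of the image, so any surviving description is a coarse version of the whole scene. A sketch of just that partition (the CS measurement and the optimization-based decoder are omitted; function name and sizes are illustrative):

```python
import numpy as np

def interleave_extract(img, r, c):
    """Partition an image into r*c sub-images by interleaved (polyphase)
    extraction: sub-image (i, j) keeps every r-th row and every c-th
    column starting at offset (i, j)."""
    return [img[i::r, j::c] for i in range(r) for j in range(c)]

img = np.arange(64).reshape(8, 8)
subs = interleave_extract(img, 2, 2)      # four descriptions, each 4x4
```

Because neighboring pixels are highly correlated, a lost description can be approximated from the surviving ones, which is the self-recovery property the abstract mentions.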
2011, 33(2): 466-469.
doi: 10.3724/SP.J.1146.2010.00349
Abstract:
An optimized algorithm based on adaptive wavelet de-noising is proposed to effectively improve the performance of digital modulation recognition at low Signal to Noise Ratio (SNR). An adaptive wavelet threshold de-noising filter is used to eliminate the noise in the instantaneous information and improve the SNR, and two improved parameters derived from the existing parameters σap and σdf are introduced to reduce the sensitivity to the noise threshold. Simulation results show that the recognition probability for 7 types of digital modulation signals is above 96% even when the SNR is as low as 1 dB. Compared with existing algorithms, the proposed algorithm is easy to implement, has lower computational cost, and achieves a higher recognition probability.
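To make the wavelet-threshold de-noising idea concrete, here is a generic single-level Haar soft-thresholding sketch, assuming the universal threshold with a median-based noise estimate; the paper's adaptive filter is not reproduced, and the test signal and noise level are arbitrary:

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: approximation and detail halves."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(v, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n = 512
clean = np.sin(2 * np.pi * np.arange(n) / 64)
noisy = clean + 0.3 * rng.standard_normal(n)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745          # robust noise-level estimate
t = sigma * np.sqrt(2 * np.log(n))             # universal threshold
denoised = haar_idwt(a, soft(d, t))
```

Raising the SNR of the instantaneous amplitude/phase signals this way is what keeps the recognition features usable at low SNR.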
2011, 33(2): 470-474.
doi: 10.3724/SP.J.1146.2010.00352
Abstract:
Considering that code-aided and data-aided carrier recovery algorithms have a small synchronization range and low accuracy in low Signal to Noise Ratio (SNR) deep space communication systems, a joint carrier synchronization algorithm aided by both code and pilots is proposed. First, a coarse synchronization algorithm with optimal pilot placement is constructed based on the sum of the autocorrelation functions of the pilots, whose frequency estimator approaches the Cramer-Rao Bound; the mechanism by which the pilot pattern trades off estimation range against accuracy is analyzed. Then, a modified expectation-maximization fine carrier synchronization is obtained by inserting pilot symbols and adding an integrator. Finally, simulations with a rate-1/12 LDPC-Hadamard code verify that the new algorithm increases the synchronization range, improves accuracy, and achieves perfect synchronization with a certain number of pilots at very low SNR.
2011, 33(2): 475-478.
doi: 10.3724/SP.J.1146.2010.00314
Abstract:
To reduce the computational complexity of spectrum sensing and improve its performance, a spectrum sensing method based on the fractal box dimension is proposed. Since the box dimensions of noise and signal differ, the fractal box dimension is used as the test statistic. Simulation results demonstrate that the proposed method has good detection performance under Gaussian white noise and is not sensitive to the noise level. Furthermore, the method has low computational complexity and is easy to implement.
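The test statistic can be illustrated with a simple box-counting sketch: white noise traces a rougher curve (higher box dimension) than a structured signal. The grid scales, normalization, and decision threshold below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def box_counting_dimension(x, scales=(2, 4, 8, 16, 32)):
    """Estimate the fractal box dimension of a 1-D signal.

    The waveform is normalized into the unit square; for each grid
    scale s we count the s-by-s boxes the curve touches and fit
    log(count) against log(s). The slope is the box dimension."""
    x = np.asarray(x, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    t = np.linspace(0.0, 1.0, len(x))
    counts = []
    for s in scales:
        boxes = set()
        for ti, xi in zip(t, x):
            col = min(int(ti * s), s - 1)
            row = min(int(xi * s), s - 1)
            boxes.add((col, row))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

def detect(x, threshold):
    """Declare 'signal present' when the box dimension drops below a
    threshold, since a structured signal is smoother than noise."""
    return box_counting_dimension(x) < threshold
```

A smooth sinusoid has a dimension near 1 while white noise sits closer to 2, so a threshold between the two separates the hypotheses.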
2011, 33(2): 479-483.
doi: 10.3724/SP.J.1146.2010.00210
Abstract:
A self-tuning Active Queue Management (AQM) algorithm with an acceleration factor, called SABlue (Self-tune Accelerate Blue), is presented by analyzing the Blue algorithm and its variants. To keep the queue length within the target region, the algorithm uses the instantaneous queue length as the indicator of incipient congestion and computes the step size of the packet drop probability from the load factor. Furthermore, to speed up the response, an acceleration factor is applied in an alert region when the traffic changes suddenly. Experiments demonstrate that SABlue is more robust, with lower packet loss and shorter convergence time under dynamic traffic and varying RTT, and that its overall performance exceeds that of other AQM algorithms.
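The update logic just described (a load-scaled step plus acceleration in an alert region) can be sketched roughly as follows. All thresholds and gains here are invented for illustration and are not the paper's values:

```python
class SABlueSketch:
    """Illustrative Blue-style AQM update: the drop probability rises
    when the instantaneous queue exceeds a target, falls when the link
    goes idle, with the step scaled by a load factor and multiplied by
    an acceleration factor while the queue sits in the alert region."""

    def __init__(self, target=50, alert=80, base_step=0.005, accel=4.0):
        self.target = target        # desired queue length (packets)
        self.alert = alert          # alert-region threshold (packets)
        self.base_step = base_step  # base probability step
        self.accel = accel          # acceleration multiplier
        self.p = 0.0                # packet drop probability

    def update(self, queue_len, arrival_rate, service_rate):
        load = arrival_rate / max(service_rate, 1e-9)  # load factor
        step = self.base_step * load
        if queue_len >= self.alert:   # sudden burst: accelerate
            step *= self.accel
        if queue_len > self.target:
            self.p = min(1.0, self.p + step)
        elif queue_len == 0:
            self.p = max(0.0, self.p - self.base_step)
        return self.p
```

Scaling the step by load makes the controller self-tuning (faster reaction under heavy load), while the acceleration factor shortens convergence after a traffic surge.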
2011, 33(2): 484-488.
doi: 10.3724/SP.J.1146.2010.00435
Abstract:
This paper investigates SAR systems built on Single-Input Multiple-Output (SIMO) and Multiple-Input Multiple-Output (MIMO) configurations, and studies operating modes in which multiple transmitters fire sequentially or simultaneously with multiple receivers. Through virtual-aperture phase correction, SIMO signal processing and imaging algorithms with multiple receivers are derived. Through sub-band synthesis combined with virtual-aperture phase correction, MIMO signal processing and imaging algorithms are handled in a similar way, and simulations of virtual-aperture phase correction with three point targets are provided. The algorithms are promising for new MIMO-SAR systems.
2011, 33(2): 489-493.
doi: 10.3724/SP.J.1146.2010.00315
Abstract:
Typical pilot-aided and blind estimation methods for MIMO-OFDM channels achieve good performance when the number of multipath components is constant. In practical wireless environments, however, both the number of channel taps and their amplitudes are unknown and time-varying, so typical estimation methods are unsuitable. In this paper, the tap-variation condition and a new channel model are established using RST. Based on this model, a resampling method that Concentrates the particle Resample Space (CRS) is proposed: by discarding low-probability samples and retaining high-probability ones, a more accurate approximation is obtained at each iteration. A channel estimation method using Rao-Blackwellised Particle Filtering with CRS (RBPFC) is then proposed. Simulation results show that RBPFC performs best, followed by the standard Rao-Blackwellised particle filtering scheme, which in turn outperforms the basic particle filtering scheme, while the Kalman-filter-based scheme performs worst.
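The CRS idea of discarding low-probability particles before resampling can be sketched as follows; the keep fraction and the resampling scheme are assumptions for illustration, not the paper's exact CRS rule:

```python
import numpy as np

def concentrate_resample(particles, weights, keep_frac=0.5, rng=None):
    """Illustrative 'concentrated' resampling: drop the lowest-weight
    particles, renormalize the survivors, and resample the full
    population from that concentrated set with uniform new weights."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    order = np.argsort(weights)[::-1]          # highest weight first
    keep = order[: max(1, int(keep_frac * n))] # survivors only
    w = np.asarray(weights, dtype=float)[keep]
    w = w / w.sum()                            # renormalize survivors
    idx = rng.choice(keep, size=n, p=w)        # resample population
    return np.asarray(particles)[idx], np.full(n, 1.0 / n)
```

Concentrating the resample space spends all particles on high-probability channel hypotheses, which is what yields the more accurate per-iteration approximation the abstract describes.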
2011, 33(2): 494-498.
doi: 10.3724/SP.J.1146.2010.00309
Abstract:
Correntropy is a localized similarity measure between two scalar random variables. This paper presents a parametric representation of the correntropy of symmetric α-stable (SαS) distributions. From this representation, the equivalence of the maximum correntropy criterion and the minimum dispersion criterion is derived for zero-location SαS distributions. This result is used to propose an adaptive time-delay estimator for SαS noise. Simulations show that the correntropy-based algorithm outperforms the least-mean-square and least-mean-p-norm approaches.
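Sample correntropy with a Gaussian kernel, and its use for delay estimation by maximizing over candidate lags, can be sketched as follows; the kernel width and the exhaustive lag search are illustrative choices, not the paper's adaptive algorithm:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy with a Gaussian kernel:
    V(x, y) = mean(exp(-(x - y)^2 / (2 * sigma^2)))."""
    d = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-d * d / (2.0 * sigma * sigma)))

def estimate_delay(x, y, max_lag, sigma=1.0):
    """Pick the lag that maximizes correntropy between the reference
    x and the delayed, noisy observation y."""
    best_lag, best_v = 0, -np.inf
    for lag in range(max_lag + 1):
        n = len(x) - lag
        v = correntropy(x[:n], y[lag:lag + n], sigma)
        if v > best_v:
            best_lag, best_v = lag, v
    return best_lag
```

Because the Gaussian kernel saturates for large errors, occasional impulsive (heavy-tailed) noise samples barely move the statistic, which is why the correntropy criterion is robust where mean-square criteria fail.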
2011, 33(2): 499-503.
doi: 10.3724/SP.J.1146.2010.00230
Abstract:
This paper studies the laser-printing mechanism and achieves intelligent printed-document identification and retrieval by measuring the print-shape deviation that reflects the intrinsic tolerances of the printer. A novel polar Hausdorff distance is proposed for effective print-shape matching. A print character set containing all samples is used to compare and improve identification and retrieval accuracy. Experimental results show the approach is suitable for this application, with 90% retrieval accuracy and a minimum identification error rate of 17.80%.
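For reference, the classical Hausdorff distance between two point sets, which the proposed polar variant builds on, can be computed as below; the polar formulation itself is the paper's contribution and is not reproduced here:

```python
import numpy as np

def directed_hausdorff(a, b):
    """h(a, b): the largest distance from a point of a to its
    nearest neighbour in b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # pairwise distance matrix of shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance: max of the two directed terms."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Matching glyph contours with a Hausdorff-type distance tolerates small point-set differences, which suits the subtle shape deviations that fingerprint an individual printer.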
2011, 33(2): 504-508.
doi: 10.3724/SP.J.1146.2010.00502
Abstract:
As an important part of aircraft navigation and communication systems, long-range communication depends mainly on the airborne High-Frequency (HF) antenna. With the use of new materials and the larger size of modern aircraft, HF antennas have become problematic for a number of reasons. In this paper, a novel HF antenna suitable for a large aircraft is designed. It can be mounted on the leading edge of the vertical stabilizer and conforms well to the aircraft's shape without degrading its aerodynamic form. The impedance and radiation pattern of the antenna are calculated by simulation in Ansoft HFSS, and from the simulation results the impedance matching and efficiency are analyzed. A scaled test piece was built to validate the simulation results; it can be tuned satisfactorily across the band with transceivers such as the KHF950. All results indicate that this antenna can be used for long-range communication on aircraft.