2008 Vol. 30, No. 3
2008, 30(3): 509-513.
doi: 10.3724/SP.J.1146.2007.00511
Abstract:
In this paper, an image denoising model is proposed that embeds Intrinsic Mode Functions (IMFs) into the Perona-Malik model. First, the image is decomposed into IMFs using the Empirical Mode Decomposition (EMD) technique; each IMF captures feature information at a different scale. Second, the first and second IMFs are embedded into the Perona-Malik model. Experimental results indicate that this method removes Gaussian noise more effectively than the Perona-Malik model, and that it can also remove salt-and-pepper noise.
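The Perona-Malik model mentioned in the abstract is classical anisotropic diffusion with an edge-stopping function. A minimal sketch, assuming the common exponential diffusivity g(|∇u|) = exp(-(|∇u|/κ)²) and a simple four-neighbour discretization (the paper's EMD embedding is not reproduced here):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion.

    img:   2-D float array (grayscale image)
    kappa: edge-stopping parameter; gradients much larger than kappa are preserved
    dt:    time step (<= 0.25 for stability on a 4-neighbour grid)
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic border via np.roll,
        # a simplification; replicated borders are more common in practice).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(d) = exp(-(d/kappa)^2): diffusion is
        # strong in flat regions, weak across large gradients (edges).
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a pure-noise input the scheme behaves like heat diffusion and steadily reduces the noise variance, which is the baseline behaviour the IMF embedding improves on.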
2008, 30(3): 514-517.
doi: 10.3724/SP.J.1146.2006.01051
Abstract:
Colorization of grayscale images is an active and challenging area of research in image processing. This paper first surveys current colorization algorithms, then proposes a colorization algorithm based on unevenness. The new algorithm produces better colorized results and is less sensitive to the initial color distribution.
2008, 30(3): 518-523.
doi: 10.3724/SP.J.1146.2006.01303
Abstract:
Based on scaled real Digital Elevation Map (DEM) data, sets of Single Look Complex (SLC) SAR images are simulated for interferometric missions, using the SAR imaging mechanism and a two-scale rough-surface scattering model. A novel phase unwrapping method based on Ant Colony Optimization (ACO) is then developed and applied to the simulated SAR image sets and to real repeat-pass ENVISAT ASAR images. Experimental results show improved unwrapping precision and speed compared with several existing phase unwrapping algorithms.
2008, 30(3): 524-528.
doi: 10.3724/SP.J.1146.2007.00888
Abstract:
Image denoising is an important technology in image processing. In Wavelet Shrinkage (WS), a denoised image is obtained by shrinking the wavelet coefficients, exploiting the fact that noise coefficients are smaller than signal coefficients. Anisotropic Diffusion (AD) denoises according to the direction and amplitude of the gradient while preserving image features as far as possible. In this paper, the equivalence between two-dimensional wavelet shrinkage and anisotropic diffusion is demonstrated experimentally. Anisotropic Wavelet Shrinkage (AWS) is then proposed, which combines the merits of both methods based on this equivalence. Comparative experiments show that AWS performs better for image denoising.
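The wavelet-shrinkage step the abstract describes can be illustrated with a single-level 2-D Haar transform and soft thresholding of the detail sub-bands. This is a minimal sketch, not the paper's AWS method, and real systems use deeper decompositions and smoother wavelets:

```python
import numpy as np

def haar_shrink(img, thr):
    """Single-level 2-D Haar wavelet shrinkage with soft thresholding."""
    a = img.astype(float)

    def analyze(x):  # one Haar level along rows (axis 1)
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
        hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
        return lo, hi

    def synth(lo, hi):  # inverse of analyze
        x = np.empty((lo.shape[0], 2 * lo.shape[1]))
        x[:, 0::2] = (lo + hi) / np.sqrt(2)
        x[:, 1::2] = (lo - hi) / np.sqrt(2)
        return x

    # Analysis: rows, then columns (via transpose) -> LL, LH, HL, HH.
    lo, hi = analyze(a)
    ll, lh = (m.T for m in analyze(lo.T))
    hl, hh = (m.T for m in analyze(hi.T))

    # Soft-threshold only the detail sub-bands; the LL approximation
    # carries most of the signal energy and is left untouched.
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)

    # Synthesis: columns, then rows.
    lo = synth(ll.T, lh.T).T
    hi = synth(hl.T, hh.T).T
    return synth(lo, hi)
```

With `thr = 0` the transform is perfectly invertible, which is a convenient sanity check for the analysis/synthesis pair.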
2008, 30(3): 529-533.
doi: 10.3724/SP.J.1146.2006.01344
Abstract:
A Chaos Particle Swarm Optimization (CPSO) algorithm is presented. The initial particle locations are generated by a chaotic map, and during the run chaotic location updates are performed adaptively according to the variance of the population's fitness. Experimental results on test functions show that CPSO can find the global optimum and avoid premature convergence in multidimensional search spaces. Applying the algorithm to image correlation matching yields a new CPSO-based matching method. Experimental results show that this method is very effective for matching noisy images.
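The chaotic initialization step can be sketched with the logistic map, the map most commonly used for this purpose; the function name and parameters below are illustrative, and the paper's exact adaptive update rule is not reproduced:

```python
import numpy as np

def logistic_init(n_particles, dim, lo, hi, x0=0.7, mu=4.0):
    """Chaotic particle initialization via the logistic map x <- mu*x*(1-x).

    With mu = 4 the map is fully chaotic on (0, 1), so successive iterates
    spread over the search box without clustering, which is the property
    CPSO exploits for its initial swarm.
    """
    x = x0
    pts = np.empty((n_particles, dim))
    for i in range(n_particles):
        for j in range(dim):
            x = mu * x * (1.0 - x)          # one logistic-map step
            pts[i, j] = lo + (hi - lo) * x  # map (0, 1) onto [lo, hi]
    return pts
```

The resulting array can seed any standard PSO loop in place of uniform random initialization.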
2008, 30(3): 534-538.
doi: 10.3724/SP.J.1146.2006.01199
Abstract:
Using a statistical approach, data fusion is performed in image feature detection, integrating information about the same feature obtained by multiple methods. The validity of the described fusion scheme and the properties of the fused data are discussed. A confidence measure is defined and applied to evaluate the credibility of the results. Taking the data fusion and confidence measure into account, a Mahalanobis distance is derived. The technique is applied to face retrieval and canthus detection. Experimental results show that the proposed approach reduces the adverse effects of feature detection errors and improves the pattern recognition rate.
2008, 30(3): 539-541.
doi: 10.3724/SP.J.1146.2006.01258
Abstract:
Illumination and pose variations degrade the face recognition performance of Locality Preserving Projections (LPP). To address this problem, a supervised LPP using discriminant information is presented. The method constructs a feature subspace in which intra-subject variation is minimized while inter-subject variation is maximized; face recognition is then performed in this subspace. Experimental results on the Harvard and UMIST databases indicate that this approach is robust to illumination and pose and achieves a higher recognition rate than LPP and other subspace methods.
2008, 30(3): 542-545.
doi: 10.3724/SP.J.1146.2006.00982
Abstract:
Considering target tracking in high-density clutter, a Modified PDA (MPDA) algorithm is first presented; its performance is then estimated using the HYbrid Conditional Averaging (HYCA) approach, which yields a series of off-line recursive algorithms for performance measurement. Simulation results show that the HYCA-based performance prediction for the modified algorithm is effective, and that MPDA improves tracking precision compared with PDA.
2008, 30(3): 546-549.
doi: 10.3724/SP.J.1146.2006.01047
Abstract:
An independent Amplitude-Phase (AP) algorithm is applied to raw data compression, motivated by the bulky raw data of Interferometric Synthetic Aperture Radar (InSAR) systems. The new algorithm exploits the statistical properties of the data and the high correlation between the two InSAR channels: the amplitude and phase of the raw data are compressed separately. A detailed theoretical analysis is given and experiments on real data are carried out. Comparisons with the BAQ algorithm are made in terms of correlation, fringe similarity index, and residues. The results show that the AP algorithm is superior to BAQ at the same compression ratio, and thus preserves more amplitude and phase information.
2008, 30(3): 550-553.
doi: 10.3724/SP.J.1146.2006.01173
Abstract:
For Very High Frequency/Ultra High Frequency (VHF/UHF) Ultra-WideBand Synthetic Aperture Radar (UWB SAR), Radio Frequency Interference (RFI) can degrade the SAR image. In this paper, a method based on channel equalization is proposed to remove the RFI while preserving SAR image quality. The effectiveness of the method is verified with real data from an airborne VHF/UHF UWB SAR.
2008, 30(3): 554-558.
doi: 10.3724/SP.J.1146.2006.01198
Abstract:
This paper presents a new method for SAR image feature extraction and target recognition based on principal component analysis in the wavelet domain and a support vector machine. After wavelet decomposition of a SAR image, features are extracted by taking the principal components of the low-frequency sub-band image; a support vector machine then performs target recognition. The results verify that recognition accuracy is clearly improved and that the presented method is effective for SAR image feature extraction and target recognition.
2008, 30(3): 559-563.
doi: 10.3724/SP.J.1146.2006.01025
Abstract:
In FMCW SAR, sawtooth signals are transmitted and received continuously, which makes the echo characteristics of FMCW SAR differ from those of pulsed SAR. Combined with the FMCW SAR imaging principle, the influence of the Doppler effect on the FMCW SAR imaging system is analyzed in this paper. It is shown that the Doppler effect may introduce secondary coupling between the range and azimuth signals, which defocuses the image. An approach to compensate for the Doppler effect in the azimuth Doppler domain is presented, and a flow chart of the Range-Doppler (RD) algorithm for the FMCW SAR imaging system is given. Experimental results show that the proposed approach compensates the Doppler effect and the secondary range-azimuth coupling, yielding better-focused images.
2008, 30(3): 564-568.
doi: 10.3724/SP.J.1146.2006.01287
Abstract:
The coherence time of the ocean-surface echo differs greatly from that of land because of the ocean's stochastic motion, and it is closely related to system parameter design. This paper studies the coherence time of the ocean echo signal in bistatic SAR. The echo phase due to wave motion is analyzed, and the effect of stochastic ocean wave motion on bistatic SAR imaging is derived. A formula for the coherence time is given, the relationship between azimuth resolution and coherence time is established, and the relationships between the coherence time and various factors are simulated and analyzed.
2008, 30(3): 569-572.
doi: 10.3724/SP.J.1146.2006.01285
Abstract:
This paper proposes a compression method for Synthetic Aperture Radar (SAR) raw data based on time-frequency analysis and the linear frequency modulation properties of SAR raw data. In this method, the raw data of the I and Q channels are first divided into blocks; each block is then transformed into the time-frequency domain by the Two-Dimensional Real-valued Discrete Gabor Transform (2D-RDGT), and the desired bits are allocated to each frequency plane; finally, each frequency plane is quantized with Block Adaptive Quantization (BAQ). The same raw data are processed with this method and with existing methods; the experimental results show that this method outperforms the existing methods in both the raw data domain and the SAR image domain.
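The BAQ step used in the final stage above normalizes each block by its own statistics before quantizing. A minimal sketch, assuming a uniform mid-rise quantizer over roughly ±2σ (operational BAQ uses Lloyd-Max levels matched to a Gaussian, so this is a simplification):

```python
import numpy as np

def baq(block, n_bits=2):
    """Block Adaptive Quantization sketch.

    Normalizes one block of raw samples by its own standard deviation,
    then applies a uniform mid-rise quantizer with 2**n_bits levels.
    The decoder reconstructs with the transmitted per-block sigma.
    """
    sigma = block.std() + 1e-12          # per-block scale (sent as side info)
    levels = 2 ** n_bits
    step = 4.0 / levels                  # quantizer spans roughly +/- 2 sigma
    q = np.clip(np.round(block / sigma / step - 0.5),
                -levels // 2, levels // 2 - 1)
    return (q + 0.5) * step * sigma      # mid-rise reconstruction levels
```

For `n_bits = 2` the reconstruction levels are (-1.5, -0.5, 0.5, 1.5)·σ, the familiar 2-bit BAQ shape.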
2008, 30(3): 573-575.
doi: 10.3724/SP.J.1146.2006.01252
Abstract:
Medium PRF operation with an N-out-of-M detection rule is the typical working mode of airborne pulse Doppler radar. The medium Pulse Repetition Frequency (PRF) set must resolve range and Doppler ambiguities, and in addition the range and Doppler blind zones produced by the PRF set should be minimized. Medium PRF set selection can therefore be modeled as a large-scale combinatorial optimization problem, which can be solved by a simulated annealing algorithm. This paper proposes a method for medium PRF set selection for airborne pulse Doppler radar based on the characteristics of the radar. Experimental results show that the method is very effective.
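The combinatorial search described above has the shape "pick an m-element subset of candidate PRFs minimizing a cost". A generic simulated-annealing sketch follows; the `cost` function is a user-supplied stand-in (in the paper it would score ambiguity resolution and blind-zone coverage, which is not reproduced here):

```python
import math
import random

def anneal(candidates, m, cost, t0=1.0, alpha=0.95, n_iter=2000, seed=0):
    """Simulated annealing over m-element subsets of `candidates`."""
    rng = random.Random(seed)
    cur = rng.sample(candidates, m)
    cur_c = cost(cur)
    best, best_c = list(cur), cur_c
    t = t0
    for _ in range(n_iter):
        # Neighbour move: swap one chosen element for an unused candidate.
        nxt = list(cur)
        pool = [c for c in candidates if c not in nxt]
        if not pool:
            break
        nxt[rng.randrange(m)] = rng.choice(pool)
        c = cost(nxt)
        # Metropolis acceptance: always downhill; uphill with prob e^(-d/t).
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / max(t, 1e-12)):
            cur, cur_c = nxt, c
            if c < best_c:
                best, best_c = list(nxt), c
        t *= alpha  # geometric cooling schedule
    return best, best_c
```

For example, `anneal(list(range(1, 21)), 3, sum)` searches for the three smallest candidates; any blind-zone scoring function can be dropped in as `cost`.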
2008, 30(3): 576-580.
doi: 10.3724/SP.J.1146.2006.01340
Abstract:
To achieve fast location of a moving emitter by a single stationary observer, a hybrid particle filter algorithm based on bearing-constrained sampling is presented. The algorithm obtains the proposal importance density from an Extended Kalman Filter (EKF) and generates particles via the constraint between the bearing measurements and the state variables; thus the number of particles and the computational cost decrease in high-dimensional filtering, and filtering performance improves. Applying the algorithm to a location method using the Doppler rate and bearing measurements, simulation comparisons with the EKF, the Unscented Kalman Filter (UKF), and the general hybrid particle filter show that the proposed algorithm is superior in convergence speed, tracking precision, and filtering stability, and that its estimation error is closer to the Cramer-Rao lower bound.
2008, 30(3): 581-584.
doi: 10.3724/SP.J.1146.2006.01309
Abstract:
To address the real-time and reliability problems of underwater maneuvering target tracking, and in view of the slow speed and weak maneuvering capability of underwater targets, a mixed filter between pattern space and measurement space is constructed via the wavelet transform, and a multirate interacting multiple model algorithm is proposed for underwater maneuvering target tracking. Uniform expressions of the multirate interacting multiple model are also presented. Simulation results show that the algorithm has low computational complexity and improves the real-time performance and reliability of underwater maneuvering target tracking.
2008, 30(3): 585-588.
doi: 10.3724/SP.J.1146.2006.01263
Abstract:
Sample Matrix Inversion (SMI) beamforming performs poorly with small snapshot numbers, high Signal-to-Noise Ratio (SNR), and coherent sources. Diagonal loading can mitigate beamforming distortion, but the loading value is hard to determine. An oblique-projection-based beamforming algorithm is proposed in this paper. The algorithm applies an oblique projection to the received signal to eliminate interference and enhance robustness. Simulation results reveal that it achieves better beamforming performance than conventional SMI beamforming and projection algorithms at low, medium, and high SNR, as well as with small snapshot numbers and coherent sources. The algorithm is robust, adds little complexity, can be implemented easily, and is widely applicable.
2008, 30(3): 589-592.
doi: 10.3724/SP.J.1146.2006.01273
Abstract:
In this paper, a time-domain blind source separation algorithm for non-stationary convolutive mixtures is proposed by rearranging the vectors of the convolutive mixture model and generalizing the joint approximate diagonalization method. First, the sampled convolutive mixture signals are rearranged to match an instantaneous mixture model; then, exploiting the non-stationarity of the sources, spatial whitening and joint block-diagonalization are used to recover the original signals. This algorithm reduces the convolutive mixture problem to the instantaneous mixture problem from a new point of view, thereby avoiding domain transformation and convolution operations and decreasing complexity. Computer simulations verify its effectiveness and analyze the effect of its parameters on the signal-to-interference ratio.
2008, 30(3): 593-595.
doi: 10.3724/SP.J.1146.2006.01278
Abstract:
Starting from the observation that the directed diffusion equation describes a diffusion process with direction, the relation between directed diffusion and the wavelet transform is studied in this paper. First, the current low-frequency image after wavelet decomposition can serve as an initial approximation of the next low-frequency image. It is verified that the current low-frequency image can diffuse and converge to the next low-frequency image, and conversely that the next low-frequency image can diffuse and converge back to the current one. This process exhibits the gradual variation of wavelet decomposition and reconstruction, so iterative diffusion of the directed diffusion equation can realize wavelet decomposition and reconstruction between two contiguous layers. As the time interval is reduced, the gradual variation of wavelet decomposition and reconstruction can be observed at finer and finer scales.
2008, 30(3): 596-599.
doi: 10.3724/SP.J.1146.2006.01203
Abstract:
In this paper, a new algorithm based on Second-Order Statistics (SOS) is proposed for jointly estimating the ranges, frequencies, and Directions-Of-Arrival (DOA) of multiple near-field narrowband sources. The method constructs three matrices from the second-order statistics of properly chosen sensor outputs, forms two new matrices from them, and estimates the source parameters from the eigenvalues and corresponding eigenvectors of the new matrices. Compared with other SOS-based methods, the proposed method saves one sensor and uses only three SOS matrices; in addition, the estimated parameters are paired automatically. Finally, simulation results are presented to validate the performance of the proposed method.
2008, 30(3): 600-603.
doi: 10.3724/SP.J.1146.2006.01182
Abstract:
The main factors affecting the performance of finite-length Low-Density Parity-Check (LDPC) codes are analyzed. Based on an optimal choice of degree distribution, a check matrix is constructed with an improved Progressive-Edge-Growth (PEG) algorithm, and a practical efficient encoding algorithm is proposed to optimize the check matrix. A finite-length irregular LDPC code with a low error floor and approximately linear encoding complexity is obtained. This optimization method can easily be extended to general communication channels.
2008, 30(3): 604-606.
doi: 10.3724/SP.J.1146.2007.00409
Abstract:
A general method is presented for the synthesis of quasi-elliptic filters with source-load coupling. The equivalent circuit of the low-pass prototype of a lossless coupled-resonator filter is proposed, together with its transfer function t(s). Expressions for M_SL, M_Si, and M_iL in the coupling matrix M are obtained from the two-port admittance matrix [Y_N]. Finally, a novel quasi-elliptic two-pole microstrip filter composed of two hexagonal open-loop resonators is designed. The numerical results are verified by simulation, and the measured and simulated data are in good agreement.
2008, 30(3): 607-611.
doi: 10.3724/SP.J.1146.2006.01286
Abstract:
This paper provides an overview of broadband satellite communication networks based on the Digital Video Broadcasting-Return Channel System (DVB-RCS) standard, with an emphasis on dynamic allocation of the multi-access channel. A novel combined Demand Assignment Multiple Access (DAMA) scheme is proposed, in which a chaotic prediction method is employed for self-similar traffic. To compare the performance of different schemes, OPNET software is used to build a simulation system. Simulation results indicate that the novel scheme performs better under heavy channel load and when the traffic has a high degree of self-similarity.
2008, 30(3): 612-615.
doi: 10.3724/SP.J.1146.2006.01294
Abstract:
This paper studies the power allocation problem in a cooperative diversity system with multiple decode-and-forward relays in fading channels. The system model assumes that both the source and the destination have complete Channel State Information (CSI). A relay selection and power allocation scheme is proposed that minimizes the system outage probability by maximizing the system capacity on each channel realization under a power constraint. Simulation results verify the improved performance of the proposed method.
2008, 30(3): 616-620.
doi: 10.3724/SP.J.1146.2006.01206
Abstract:
This paper proposes an intuitive model of a priori information with Signal-to-Noise-Ratio (SNR) mismatch for Bit-Interleaved Coded Modulation with Iterative Decoding (BICM-ID) systems. The extrinsic information transfer chart is applied to analyze the impact of non-ideal SNR estimation on BICM-ID for different constellations, labeling methods and channel types; the results are found to be similar across these cases. The performance of BICM-ID systems with different frame lengths is compared as the SNR mismatch varies. For long frames, the system performance degrades sharply when the SNR mismatch is below -3 dB and holds steady when it is above 0 dB, while moderate and short frames are more sensitive to SNR underestimation than to overestimation. In addition, SNR estimation methods are presented for 8PSK in the Gaussian channel and for 16QAM in Gaussian and Rayleigh channels. Finally, simulation results show these methods are effective.
2008, 30(3): 621-624.
doi: 10.3724/SP.J.1146.2006.01189
Abstract:
In HF communication, signals suffer from delay spread, time-varying interference and additive Gaussian noise, and the performance of the conventional differential demodulator is very poor over bad HF channels. A novel frequency-domain differential demodulator is proposed that employs blind phase estimation to cancel the phase error caused by delay spread. Its performance is studied through simulation over a bad HF channel and compared with that of the conventional differential demodulator. Simulation results show that the proposed technique outperforms conventional differential demodulators in transmission over fast-fading HF channels.
2008, 30(3): 625-629.
doi: 10.3724/SP.J.1146.2006.01315
Abstract:
To further reduce the redundancy of existing non-exhaustive list MAP detection algorithms, which is induced by a fixed and large list size, an Adaptive Size List Sphere Decoding (ASLSD) algorithm is proposed. By updating the radius and setting a stop criterion, the resulting detection list has a variable length that adapts to the SNR and the iteration. Moreover, by combining List Sphere Decoding (LSD) with a candidate list, repeated detection with different radii is avoided. Simulation shows that, with a slight loss in performance, the proposed algorithm leads to a much shorter detection list and hence a simpler receiver.
2008, 30(3): 630-633.
doi: 10.3724/SP.J.1146.2006.01411
Abstract:
A novel algorithm is proposed in this paper to estimate the transmit and receive antenna correlation matrices, RT and RR, in a MIMO system. Theoretical analysis indicates that the estimation performance improves as more training symbols are used. Interestingly, the estimation performance for the antenna correlation matrix at one side is highly dependent on the correlation degree of the antennas at the other side, rather than on its own correlation degree. Moreover, the relationship between estimation performance and the number of antennas is also presented. Simulation results show that the mean error of this algorithm is generally below 0.05 when 200 training symbols are used.
2008, 30(3): 634-637.
doi: 10.3724/SP.J.1146.2006.01387
Abstract:
MIMO technology can exploit spatial multiplexing at the link level or multiuser scheduling at the system level to increase system capacity. However, optimizing link-level performance alone or scheduling users at the system level alone is not sufficient; jointly optimizing both yields higher system capacity. A novel Cross Optimization Scheduling Algorithm (COSA) is proposed, which combines system-level scheduling with link-level physical optimization and uses the water-filling algorithm to adjust the power allocation of each antenna. Simulation results show that COSA not only provides users with fair channel access at the system level, but also exploits the spatial multiplexing of the MIMO system and allocates power at the link level to increase system capacity.
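The water-filling power allocation invoked above can be sketched generically. This is a textbook illustration of water-filling over parallel channels under a total power constraint, not the paper's COSA implementation; the gain values and power budget below are hypothetical.

```python
def water_filling(gains, total_power):
    """Allocate total_power across parallel channels with the given
    effective gains so as to maximize sum capacity (water-filling)."""
    # Sort channel indices from strongest to weakest gain.
    order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    g = [gains[i] for i in order]
    # Try using the k strongest channels; shrink k until all powers are >= 0.
    for k in range(len(g), 0, -1):
        # Candidate water level from the power constraint over k channels.
        mu = (total_power + sum(1.0 / gi for gi in g[:k])) / k
        powers = [mu - 1.0 / gi for gi in g[:k]]
        if powers[-1] >= 0:  # weakest used channel still above the water line
            alloc = [0.0] * len(gains)
            for idx, p in zip(order[:k], powers):
                alloc[idx] = p
            return alloc
    return [0.0] * len(gains)
```

With equal gains the budget splits evenly; a very weak channel is dropped entirely and its power goes to the stronger ones.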
2008, 30(3): 638-642.
doi: 10.3724/SP.J.1146.2006.01162
Abstract:
A new fairness criterion based on joint optimization of the physical layer and the MAC layer is proposed in this paper. It is suitable for both real-time and non-real-time traffic. Based on this criterion, a new cross-layer resource allocation scheme is presented. Simulation results confirm the significant performance improvement and the fairness of user QoS satisfaction achieved by the proposed scheme.
2008, 30(3): 643-647.
doi: 10.3724/SP.J.1146.2006.01244
Abstract:
The Joint Probability Density Function (JPDF) of a signal envelope subject to composite fading and its time derivative is analyzed by means of conditional probability densities. New uniform expressions for the average Level Crossing Rate (LCR) and Average Fade Duration (AFD) of the signal envelope are derived using this JPDF; then results for the PDF, Cumulative Distribution Function (CDF), average LCR and AFD of signal envelopes characterized by Rayleigh-Lognormal, Ricean-Lognormal and Nakagami-Lognormal fading models are derived via Gauss-Hermite quadrature. Simulation results validate the correctness of these expressions.
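As a generic illustration of the Gauss-Hermite quadrature step used in such derivations, the sketch below averages an arbitrary function of the local mean power over lognormal shadowing. The function `f` and the parameters `mu` and `sigma` are placeholders, not the paper's exact LCR/AFD expressions.

```python
import math
import numpy as np

def lognormal_average(f, mu, sigma, n=20):
    """E[f(Omega)] for lognormal Omega (ln Omega ~ N(mu, sigma^2)),
    computed with n-point Gauss-Hermite quadrature."""
    # Nodes x_i and weights w_i for the weight function exp(-x^2).
    x, w = np.polynomial.hermite.hermgauss(n)
    # Substitution Omega = exp(sqrt(2)*sigma*x + mu) maps the lognormal
    # expectation onto the Gauss-Hermite form.
    vals = [f(math.exp(math.sqrt(2.0) * sigma * xi + mu)) for xi in x]
    return sum(wi * v for wi, v in zip(w, vals)) / math.sqrt(math.pi)
```

As a sanity check, averaging Omega itself reproduces the lognormal mean exp(mu + sigma^2/2).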
2008, 30(3): 648-651.
doi: 10.3724/SP.J.1146.2006.01229
Abstract:
The Multi-Channel Decision Feedback Equalizer (MC-DFE) is the main method for overcoming multipath fading and eliminating Inter-Symbol Interference (ISI) in coherent underwater acoustic communications. To reduce computational complexity and improve data-processing precision, an adaptive, self-optimized pre-combination multi-channel decision feedback equalizer is proposed, consisting of a Fast self-Optimized LMS Diversity Combiner (FOLMSDC), Fast self-Optimized LMS (FOLMS) and Fast self-Optimized LMS Phase Compensation (FOLMSPC). The proposed algorithm uses a single error signal to adjust the coefficients according to the Minimum Mean Square Error (MMSE) criterion. The algorithm is analyzed through simulation and lake experiments. The experimental results show that the proposed algorithm further reduces computational complexity and outperforms existing algorithms.
2008, 30(3): 652-655.
doi: 10.3724/SP.J.1146.2006.01194
Abstract:
Based on the relationship between extension Galois fields and prime Galois fields, this paper presents a new construction of frequency-hopping sequences, designated general quadratic prime codes, by extending the construction idea of prime codes to extension Galois fields. Taking a general quadratic irreducible polynomial as the modulus and using multiplication over extension Galois fields, the resulting quadratic prime codes offer more sequences and longer periods, with ideal Hamming autocorrelation and nearly ideal Hamming cross-correlation of no greater than two. Furthermore, general quadratic prime codes can be further partitioned into frequency-hopping sequence groups in which the maximum Hamming cross-correlation between any two FH sequences in the same group is at most one.
2008, 30(3): 656-659.
doi: 10.3724/SP.J.1146.2006.01236
Abstract:
Array calibration for spread-spectrum communication systems is discussed in this paper. An array calibration approach based on both PARAFAC analysis and array rotation is proposed, employing an auxiliary spreading-code signal. Calibration can be performed without knowledge of the DOA of the auxiliary signal, which is convenient in practice. Simulation results show that the calibration approach is feasible and efficient.
2008, 30(3): 660-664.
doi: 10.3724/SP.J.1146.2007.00117
Abstract:
Plateaued functions include Bent functions and partially bent functions, but form a wider class. They have good cryptographic properties and are important in the design of nonlinear combining functions. This paper proves some properties of Plateaued functions in terms of the Walsh spectrum and autocorrelation coefficients, and presents some further properties of Plateaued functions.
2008, 30(3): 665-667.
doi: 10.3724/SP.J.1146.2006.01269
Abstract:
Decimation attack is one attack method against stream ciphers. In this paper, the decimation attack on prime Linear Feedback Shift Register (LFSR) sequences is investigated. The connection between the decimation distance and the linear complexities of the original and decimated sequences is presented. The minimum decimation distance that makes the linear complexity of the decimated sequence less than that of the original sequence is obtained. The minimum amount of known plaintext required for the decimation attack is given, and the practical feasibility of the decimation attack on prime LFSRs is analyzed. It is proved that the decimation attack on a prime LFSR can be useful only when the degree of the LFSR is very small.
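The linear complexity central to this analysis is conventionally computed with the Berlekamp-Massey algorithm. The following is a minimal GF(2) sketch for illustration, not the paper's own procedure.

```python
def berlekamp_massey(bits):
    """Return the linear complexity of a binary sequence
    (Berlekamp-Massey over GF(2))."""
    c = [1]          # current connection polynomial C(x)
    b = [1]          # previous connection polynomial before last length change
    L, m = 0, -1     # current complexity, index of last length change
    for n in range(len(bits)):
        # Discrepancy: next output of current LFSR vs. actual bit.
        d = bits[n]
        for i in range(1, L + 1):
            d ^= c[i] & bits[n - i]
        if d:
            t = c[:]
            shift = n - m
            # C(x) <- C(x) + x^shift * B(x)
            if len(c) < len(b) + shift:
                c += [0] * (len(b) + shift - len(c))
            for i, bi in enumerate(b):
                c[i + shift] ^= bi
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L
```

Running it on an m-sequence from a degree-3 LFSR (feedback s[n] = s[n-1] XOR s[n-3]) recovers linear complexity 3, as expected.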
2008, 30(3): 668-671.
doi: 10.3724/SP.J.1146.2006.01661
Abstract:
In a designated verifier proxy signature scheme, the original signer delegates his signing capability to a proxy signer, who can then sign messages on behalf of the original signer, but only the designated verifier can be convinced of the validity of the signatures. The security of known designated verifier proxy signature schemes is proven in the random oracle model. In this paper, based on the Waters signature scheme, the first designated verifier proxy signature scheme provably secure without random oracles is presented. The proposed scheme is proven secure against existential forgery under adaptively chosen message attack, under the weak Gap Bilinear Diffie-Hellman assumption.
2008, 30(3): 672-675.
doi: 10.3724/SP.J.1146.2006.01396
Abstract:
Proxy signcryption schemes allow an original signcrypter to delegate his signcryption rights to a proxy signcrypter. However, existing proxy signcryption schemes cannot solve the proxy revocation problem, that is, how to revoke the delegated signcryption rights of a proxy signcrypter. Based on bilinear pairings, a new identity-based proxy signcryption scheme is proposed in this paper. A SEcurity Mediator (SEM) is introduced to help the proxy signcrypter generate valid proxy signcryptions, to examine whether the proxy signcrypter signcrypts messages according to the warrant, and to check whether the proxy signcrypter has been revoked. It is shown that the proposed scheme satisfies all the security requirements of a secure proxy signcryption scheme. Moreover, a proxy signcrypter must cooperate with the SEM to generate a valid proxy signcryption, which gives the new scheme effective and fast proxy revocation.
2008, 30(3): 676-680.
doi: 10.3724/SP.J.1146.2006.01357
Abstract:
A Q-learning based Joint Radio Resource Management (JRRM) algorithm is proposed for autonomic resource optimization in a B3G system with heterogeneous Radio Access Technologies (RATs). Through trial-and-error interactions with the radio environment, the JRRM controller learns to allocate the proper RAT and service bandwidth for each session. A backpropagation neural network is adopted to generalize over the large input state space and reduce memory requirements. Simulation results show that the proposed algorithm not only realizes the autonomy of JRRM through online learning, but also achieves a good trade-off between spectrum utility and blocking probability.
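The trial-and-error learning described above rests on the standard Q-learning update rule; a minimal tabular sketch follows. The paper generalizes the state space with a neural network, which this illustration omits, and all names (states, actions, parameters) here are hypothetical.

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy JRRM-flavored usage: states are load levels, actions are RAT choices.
Q = {}
q_update(Q, "low_load", "RAT_A", reward=1.0, next_state="low_load",
         actions=["RAT_A", "RAT_B"])
```

Repeated updates of this form drive Q toward the expected discounted reward of each (state, action) pair, from which the controller picks the best RAT per session.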
2008, 30(3): 681-684.
doi: 10.3724/SP.J.1146.2006.01068
Abstract:
Fairness is an important issue when accessing a shared wireless channel. After reviewing fairness solutions recently proposed for IEEE WLANs, this paper presents a new fair queuing method, PFQ, and then a new IEEE 802.11e MAC layer protocol, P-EDCF, obtained by introducing PFQ into the Enhanced Distributed Coordination Function (EDCF). By modifying the priority control method of EDCF, the new protocol is fairer and more efficient in channel access. Simulation results prove that P-EDCF performs much more fairly than EDCF without reducing system efficiency.
2008, 30(3): 685-689.
doi: 10.3724/SP.J.1146.2006.01168
Abstract:
In the Universal Mobile Telecommunication System (UMTS) core network, an optional element, the Gateway Location Register (GLR), is introduced in visited networks to reduce location management signaling cost. In the traditional scheme the GLR is usually centralized, which may become a bottleneck as the number of roaming subscribers increases; moreover, a failure of the centralized GLR is fatal to services for roaming subscribers. In this paper, a novel distributed GLR scheme is presented, in which the first VLR visited in the visited network becomes the GLR for a roaming user, increasing resilience to GLR failure and easing the potential bottleneck. Analytic results show that the distributed scheme surpasses the traditional centralized GLR scheme in robustness, bottleneck resistance, database tracking cost and tracking delay. In addition, the proposed scheme is easy to deploy, since it only involves software upgrades in the corresponding network elements.
2008, 30(3): 690-694.
doi: 10.3724/SP.J.1146.2006.01246
Abstract:
Dependences form between different functional entities due to service requests. Based on these dependences and the flatter architecture of NGN compared with traditional telecommunication networks, the concept of adjacent resources is proposed. A distributed overload control algorithm based on this concept is presented, which throttles requests at the logically adjacent resources. The algorithm can effectively handle overload control arising from SIP forking or load-balancing scenarios. By throttling service requests as early as possible, the mechanism improves effective network throughput and satisfies the overload control requirements presented in the ETSI TISPAN draft.
2008, 30(3): 695-698.
doi: 10.3724/SP.J.1146.2007.00065
Abstract:
Nodes in cognitive radio networks can actively change their communication frequency, which affects network topology and routing. In this paper, a joint routing and spectrum assignment scheme for cognitive radio networks is proposed that assigns spectrum bands during on-demand routing. Simulation results show that, in cognitive radio network scenarios with multiple data flows, this scheme provides better adaptability and incurs much lower cumulative delay compared with other routing approaches.
2008, 30(3): 699-702.
doi: 10.3724/SP.J.1146.2007.00120
Abstract:
Clustering is a key technique for reducing energy consumption, which can increase network scalability and lifetime. Considering the spatial data correlation of sensor nodes, a novel Event Driven Clustering Algorithm (EDCA) based on spatial correlation in wireless sensor networks is proposed. According to a user-provided error-tolerance threshold and a Markovian model of spatial data correlation, the algorithm divides the event sensing range into equal layers in a virtual polar coordinate system. In each layer, only the node with the currently largest residual energy is selected as cluster head, and a mobile agent collects the sensing information of the cluster heads. This mechanism reduces the number of transmissions and saves energy efficiently.
2008, 30(3): 703-706.
doi: 10.3724/SP.J.1146.2006.01441
Abstract:
An approach for QoS prediction of Web service composition with a transaction mechanism is proposed. By analyzing the exception-handling policy of the transaction and its effect on the execution of composite services, a specification model of composite services is defined. On the basis of this model, an algorithm to predict the QoS of composite services is proposed. Experiments show that the approach achieves a lower error rate than the existing workflow-based prediction approach when estimating composite services with a transaction mechanism, demonstrating the feasibility of the algorithm.
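The baseline workflow-style QoS aggregation that such predictors build on can be sketched as a recursion over the composition tree. This shows only the standard sequence/parallel rules; the paper's contribution is extending this with the transaction's exception-handling behaviour, which is not modelled here.

```python
import math

def predict_qos(node):
    """Aggregate (delay, success_probability) over a composite-service
    tree given as nested tuples:
      ("task", delay, reliability)
      ("seq", child, ...)   - children invoked one after another
      ("par", child, ...)   - children invoked concurrently
    Standard workflow aggregation rules, for illustration only."""
    kind = node[0]
    if kind == "task":
        _, delay, reliability = node
        return delay, reliability
    children = [predict_qos(c) for c in node[1:]]
    if kind == "seq":
        return (sum(d for d, _ in children),
                math.prod(r for _, r in children))
    if kind == "par":
        return (max(d for d, _ in children),
                math.prod(r for _, r in children))
    raise ValueError(f"unknown composition operator: {kind}")

composite = ("seq",
             ("task", 2.0, 0.99),
             ("par", ("task", 3.0, 0.95), ("task", 1.0, 0.9)))
print(predict_qos(composite))
```

A transaction-aware model would additionally branch on compensation or retry paths when a task fails, changing both aggregated delay and success probability.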
2008, 30(3): 707-711.
doi: 10.3724/SP.J.1146.2006.01324
Abstract:
The coverage problem is expanded by presenting its reverse problem, which has not yet been thoroughly studied: given an intrusion-detecting WSN, how to design an object locomotion track with high security and high speed. To this end, a heuristic algorithm, the SS (Security and Speed) algorithm, is introduced to build such tracks by adjusting the security and speed parameters under varying application demands, without requiring global topology information. A corresponding Integrated Gain (IG) metric is proposed to measure the SS ability. Simulations show that the SS algorithm is not sensitive to the density and distribution of sensors, and that it overcomes the defects of working blind spots and track flooding. Compared with the traditional Voronoi algorithm, the proposed SS algorithm matches the optimal results more closely and has lower complexity than the optimal approach.
2008, 30(3): 712-716.
doi: 10.3724/SP.J.1146.2006.01316
Abstract:
Designing an optimal monitoring infrastructure is a key step in network monitoring. In this paper, the problem of optimizing a hierarchical monitoring system is formulated as reducing the deployment cost of the monitoring infrastructure by identifying a minimum aggregating set, subject to bandwidth constraints on individual links and a delay constraint on the aggregating path. The problem is NP-hard, and an approximation algorithm with performance guarantee ln d + 1 under a unique aggregating route is proposed, where d is the number of monitored objects.
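The ln d + 1 guarantee is characteristic of the classic greedy set-cover heuristic: repeatedly pick the aggregation point that covers the most still-unmonitored objects. This sketch shows that core greedy step only; the bandwidth and delay constraints of the paper's formulation are omitted for brevity, and the names are illustrative.

```python
def greedy_aggregating_set(universe, candidates):
    """Greedy set cover: `candidates` maps a candidate aggregation
    point to the set of objects it can monitor. Returns a small
    set of points covering all of `universe`; the greedy choice
    gives the classic ln(d)+1 approximation, d = |universe|."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some objects cannot be monitored")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

candidates = {"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {4, 5, 6}}
print(greedy_aggregating_set({1, 2, 3, 4, 5, 6}, candidates))
```

Adding the link-bandwidth and path-delay constraints amounts to filtering which candidates are feasible at each greedy step.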
2008, 30(3): 717-720.
doi: 10.3724/SP.J.1146.2006.01200
Abstract:
A locally regressive algorithm for color calibration is proposed. Starting from the principle of structural risk minimization, the algorithm regards the residual of total least squares as the empirical risk and chooses the K-nearest neighborhood of a calibration color point to perform local regression for color calibration. Experimental results indicate that the proposed algorithm is superior, in both precision and robustness, to multiple regression and to subspace-based multiple regression: its average error, maximum error, and error standard deviation decrease by 46% (27%), 57% (21%), and 42% (20%), respectively.
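The K-nearest-neighborhood local regression idea can be sketched in one dimension: for each point to calibrate, fit a line only to the K closest calibration samples. Note the sketch uses ordinary least squares on a single channel for brevity, whereas the paper uses total least squares over full color vectors; all names here are illustrative.

```python
def local_calibrate(x, samples, k=3):
    """Calibrate one color-channel value: take the k calibration
    pairs (measured, reference) nearest to x and fit y = a*x + b
    on them by ordinary least squares."""
    nearest = sorted(samples, key=lambda s: abs(s[0] - x))[:k]
    n = len(nearest)
    sx = sum(s[0] for s in nearest)
    sy = sum(s[1] for s in nearest)
    sxx = sum(s[0] ** 2 for s in nearest)
    sxy = sum(s[0] * s[1] for s in nearest)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a * x + b

# Calibration pairs: locally y = 2x + 1, with one distant outlier
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (10.0, 8.0)]
print(local_calibrate(1.5, samples, k=3))
```

Restricting the fit to the K nearest samples is what gives the method its robustness: the distant pair (10.0, 8.0) never influences calibration near x = 1.5.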
2008, 30(3): 721-724.
doi: 10.3724/SP.J.1146.2006.01328
Abstract:
In this paper, a gender and age classification method based on shape-free texture and boosting learning is introduced, in which age is classified into four classes: child, youth, middle age, and old age. After a face is detected, face alignment extracts 88 facial landmarks, by which the face image is normalized to a shape-free texture. Furthermore, three kinds of local features, Haar-like features, LBP histograms, and Gabor jets, are extracted from the shape-free texture, and a boosting learning method is used to train the classifiers. Experimental results show that the LBP histogram enables robust recognition of children and old people, the Haar-like feature is more efficient for discriminating young and middle-aged people, and the Gabor jet fits gender classification best.
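Of the three feature types, the LBP histogram is the simplest to sketch: each interior pixel is encoded by thresholding its 8 neighbours against the centre, and the 256-bin histogram of these codes serves as the texture feature fed to the boosted classifier. A minimal version, assuming a plain list-of-lists grayscale image:

```python
def lbp_histogram(img):
    """8-neighbour Local Binary Pattern histogram of a grayscale
    image (list of rows). Each interior pixel yields an 8-bit code:
    bit k is set when the k-th neighbour >= the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    hist = [0] * 256
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            c = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# A dark centre surrounded by brighter pixels gives the code 255
img = [[9, 9, 9], [9, 5, 9], [9, 9, 9]]
print(lbp_histogram(img)[255])
```

Because the codes depend only on local intensity ordering, the histogram is robust to monotonic illumination changes, which helps explain its robustness across age groups.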
2008, 30(3): 725-729.
doi: 10.3724/SP.J.1146.2006.01382
Abstract:
In practical applications of information retrieval, such as search engines, the query a user submits usually contains only a few keywords. This causes a vocabulary mismatch between relevant documents and the user's query and seriously degrades retrieval performance. Based on an analysis of the query-generation process, this paper puts forward a new query-expansion method based on a statistical machine translation model. The approach extracts terms related to the query from documents through the statistical machine translation model and then adds them to the query. Experimental results on TREC data collections show that the proposed SMT-based query expansion consistently achieves a 12%-17% improvement over the language-model method without expansion. Compared with pseudo feedback, a popular query-expansion approach, the proposed method achieves competitive average precision.
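Once a word translation model has been learned from the collection, the expansion step itself is simple: for each query term, add its most probable "translations" (related document terms). A minimal sketch, assuming a toy translation table in place of one trained on TREC data:

```python
def expand_query(query_terms, translation_table, per_term=2):
    """Expand each query term with its top related terms under a
    word translation model. `translation_table[q]` maps related
    terms w to P(w|q); the table below is an illustrative toy."""
    expanded = list(query_terms)
    for q in query_terms:
        related = translation_table.get(q, {})
        best = sorted(related.items(), key=lambda kv: -kv[1])[:per_term]
        for w, _p in best:
            if w not in expanded:
                expanded.append(w)
    return expanded

table = {"car": {"automobile": 0.4, "vehicle": 0.3, "engine": 0.1}}
print(expand_query(["car", "repair"], table))
```

Unlike pseudo feedback, which draws expansion terms from the top-ranked documents of an initial retrieval, this expansion needs no first retrieval pass at query time.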
2008, 30(3): 730-733.
doi: 10.3724/SP.J.1146.2006.01271
Abstract:
Wire antennas mounted on complex platforms composed of conducting and dielectric objects are analyzed in this paper. EFIE-PMCHW boundary-coupled integral equations are constructed using the equivalence principle. Surface, wire, and junction basis functions are defined to model the current distribution on the complex structure, and the selection of basis functions on the boundary of the conducting/dielectric interface is analyzed. The Multi-Level Fast Multipole Algorithm (MLFMA) is employed to accelerate the matrix-vector multiplication and to address the loss problem; its application increases the ability to solve large-scale problems. Numerical examples validate the method and demonstrate its accuracy and high efficiency.
2008, 30(3): 734-737.
doi: 10.3724/SP.J.1146.2007.00394
Abstract:
The integration paths and expansion functions in the two-level DCIM are investigated. Accurate expansion functions are proposed, and a novel integration path, which can be used to calculate the spatial Green's functions of microstrip structures, is presented. The spatial Green's functions of a five-layer microstrip structure are calculated, and the results show the effectiveness of the proposed method.
2008, 30(3): 738-741.
doi: 10.3724/SP.J.1146.2006.01910
Abstract:
A novel pattern synthesis algorithm based on optimization theory is presented. The algorithm is initialized with a set of weights that maintains the desired mainlobe under phase-independent derivative constraints; the residual weight vector is then optimized to obtain the desired sidelobe, while its influence on the mainlobe is confined to a certain bound. The objective function can be cast as a Second-Order Cone Program (SOCP). Compared with existing pattern synthesis algorithms, which synthesize array patterns with both a desired magnitude and a desired phase response, the proposed algorithm synthesizes array patterns with only a desired magnitude response, works efficiently, and is independent of the reference point. Computer simulations on arbitrary arrays demonstrate the effectiveness and validity of the proposed algorithm.
2008, 30(3): 742-745.
doi: 10.3724/SP.J.1146.2006.01292
Abstract:
To eliminate late-time oscillation in time-domain integral equations, EFIE, MFIE, and CFIE formulations based on the Marching-On-in-Degree (MOD) method, with Laguerre polynomials as temporal basis functions, are introduced. The induced current distribution, the time-domain far-field backward scattering, and the monostatic RCS of a conducting sphere and a cylinder are given. The results show that these formulations eliminate late-time oscillation effectively and that the CFIE has higher precision than the other two.
2008, 30(3): 752-755.
doi: 10.3724/SP.J.1146.2006.01255
Abstract:
This paper models the video compression process, DCT quantization noise, and motion estimation noise by exploiting the quantization step size and motion information embedded in the bit-stream. Together with the additive imaging noise term, the proposed total noise term adapts to different quantizers. With a Huber-Markov Random Field (HMRF) as the prior model, a gradient descent algorithm for MAP super-resolution reconstruction is presented and its performance analyzed. Simulation results show that the proposed algorithm obtains better objective and subjective performance.
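The core of MAP estimation with a Huber-MRF prior is gradient descent on a data-fidelity term plus a Huber penalty on neighbouring differences. A 1-D toy stand-in for the paper's 2-D objective (which additionally contains the compression-noise model), with all names and parameter values illustrative:

```python
def huber_grad(t, T=1.0):
    """Derivative of the Huber function: quadratic near zero,
    linear in the tails, so large jumps (edges) are penalized
    less than a pure quadratic prior would."""
    return 2 * t if abs(t) <= T else (2 * T if t > 0 else -2 * T)

def map_restore(obs, lam=0.5, step=0.1, iters=200):
    """Minimize sum (x_i - y_i)^2 + lam * sum huber(x_i - x_{i-1})
    by plain gradient descent, starting from the observation."""
    x = list(obs)
    n = len(x)
    for _ in range(iters):
        g = [2 * (x[i] - obs[i]) for i in range(n)]   # data term
        for i in range(1, n):                          # HMRF prior term
            d = huber_grad(x[i] - x[i - 1])
            g[i] += lam * d
            g[i - 1] -= lam * d
        x = [x[i] - step * g[i] for i in range(n)]
    return x

obs = [0.0, 1.0, 0.0, 1.0]
restored = map_restore(obs)
```

The restored signal stays close to the observation (data term) while its neighbouring differences shrink (prior term), which is the trade-off the MAP objective encodes.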
2008, 30(3): 756-758.
doi: 10.3724/SP.J.1146.2006.01351
Abstract:
A new antenna array consisting of U-slot microstrip patches is introduced. The array is fed by a novel wideband matching network, and the energy is coupled to the U-slot patches through a feed slot in the ground plane. The operating frequency range of the antenna array is 10.4-16.7 GHz, corresponding to an impedance bandwidth of 46.5%. The gain is above 13 dBi from 11 to 15 GHz.
2008, 30(3): 759-762.
doi: 10.3724/SP.J.1146.2006.01211
Abstract:
It is difficult to estimate channel knowledge in a multiple-input multiple-output system based on a distributed antenna structure, because the transmitted signals are received asynchronously. To address this, a differential encoding and detection scheme that requires no channel estimation is proposed. The proposed scheme has the same spectral efficiency as the V-BLAST scheme and can be applied to any number of transmit and receive antennas; moreover, the number of receive antennas can be smaller than the number of transmit antennas. Simulation results show that the BER performance varies with the channel propagation delay.
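The reason differential schemes avoid channel estimation is visible already in single-antenna differential BPSK: information rides on the phase change between consecutive symbols, so an unknown constant channel cancels in the detector. A minimal single-link sketch (the paper's scheme extends this idea to the multi-antenna, asynchronous case):

```python
import cmath

def dpsk_encode(bits):
    """Differential BPSK: bit 0 keeps the phase, bit 1 flips it.
    The first symbol is an uninformative reference."""
    syms = [1 + 0j]
    for b in bits:
        syms.append(syms[-1] * (-1 if b else 1))
    return syms

def dpsk_detect(received):
    """Decide each bit from the phase difference of consecutive
    samples: an unknown constant channel h cancels out in
    conj(r[k-1]) * r[k] = |h|^2 * conj(s[k-1]) * s[k]."""
    bits = []
    for prev, cur in zip(received, received[1:]):
        bits.append(1 if (prev.conjugate() * cur).real < 0 else 0)
    return bits

bits = [1, 0, 1, 1, 0]
h = 0.8 * cmath.exp(1j * 0.7)        # unknown channel: gain and rotation
rx = [h * s for s in dpsk_encode(bits)]
print(dpsk_detect(rx))
```

The receiver recovers the bits without ever knowing h; in the distributed-antenna setting, per-path propagation delays make h effectively different per link, which is what drives the delay-dependent BER the abstract reports.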
2008, 30(3): 746-751.
doi: 10.3724/SP.J.1146.2006.01680
Abstract:
To gain a thorough understanding of the technical architecture of the OMA DRM specification, and to actively promote the creation of a Chinese DRM standard and applied research on digital content protection technology, a systematic analysis of OMA DRM is presented. The following features of the latest release, OMA DRM 2.0, are analyzed: the rights object acquisition protocol, security model, architecture, content format, and rights expression language. These features are then synthesized to present the complete working mechanism of OMA DRM: its workflow and operating principles. Finally, the capabilities of the latest release are assessed by presenting the main differences between OMA DRM 2.0 and OMA DRM 1.0, together with the limitations of OMA DRM 2.0.