2005 Vol. 27, No. 4
2005, 27(4): 505-508.
Abstract:
A Spread Spectrum CI (SSCI) synthesized narrow pulse technique is put forward based on the UWB CI pulse, a narrow pulse signal synthesized from several coherent carriers. The power spectral density is decreased, the spectrum usage efficiency is improved and multiple-access operation is allowed. Good results are obtained by spreading each coherent carrier with a CI spreading code and then synthesizing the carriers together. The narrow pulse synthesized from several spread-spectrum subcarriers retains the good waveform and the anti-multipath, high-resolution performance of a UWB signal. Meanwhile, the spread coherent subcarriers can be received by correlation, so the correlation receiving gain is improved and the intersymbol interference is decreased. In this study, the ultra-wideband SSCI signal is designed, by a signal optimization method, into an optimal signal conforming to the FCC and ETSI regulations, which reduces the interference to other wireless systems. The paper gives the theoretical analysis, the design method and computer simulation results. The design method is significant for improving the performance of UWB communication systems.
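For readers unfamiliar with CI pulse synthesis, the following minimal sketch (illustrative parameters only, and without the spread-spectrum step described above) shows how summing several coherent, equally spaced carriers produces a narrow pulse:

```python
import numpy as np

# Minimal sketch of a Carrier Interferometry (CI) style pulse: N coherent,
# equally spaced subcarriers summed in phase produce a narrow pulse.
# The CI spreading of each subcarrier discussed in the abstract is omitted;
# all parameters below are illustrative assumptions.
N = 16            # number of coherent subcarriers
df = 50e6         # subcarrier spacing in Hz
f0 = 3.1e9        # lowest subcarrier frequency
fs = 20e9         # simulation sample rate
t = np.arange(0, 2 / df, 1 / fs)

pulse = sum(np.cos(2 * np.pi * (f0 + k * df) * t) for k in range(N))

# The envelope follows a Dirichlet kernel: main lobe width ~ 1/(N*df),
# pulses repeat every 1/df, and the peak amplitude equals N.
print("peak amplitude:", pulse.max(), "expected ~", N)
```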
2005, 27(4): 509-513.
Abstract:
Low-Density Parity-Check (LDPC) coded Bit-Interleaved Coded Modulation (BICM) schemes are analyzed in this paper. Schemes that decode without iteration between the demodulator and the decoder are simulated first. Then a novel method using pre-hard-decision messages is proposed, and its performance over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels is analyzed; the decoding complexity is also considered. The new method shows that the conventional bit-metric generation method is not optimal. The simulation results lead to the conclusion that the proposed schemes outperform the scheme without pre-decision messages over both AWGN and Rayleigh fading channels at nearly the same complexity.
2005, 27(4): 514-518.
Abstract:
The upper and lower bounds of the average codeword length of Golomb codes for arbitrary probability distributions, as well as an optimal rule for choosing the parameter, are given in terms of the mean of the source. Furthermore, a class of extended gamma codes, which generalize the Elias gamma code, is constructed based on Golomb codes; their performance bounds and an optimal rule for choosing parameters are also given. Extended gamma codes are universal and can achieve asymptotically optimal performance under some conditions. Finally, a low-complexity universal data compression framework based on Golomb codes and extended gamma codes is presented, and a sample system is constructed to indicate the significance of the data compression framework in practice.
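As background for the coding framework described above, here is a minimal Golomb encoder with a heuristic parameter choice based on the source mean; the paper's exact bounds and optimal parameter rule are not reproduced:

```python
import math

def golomb_encode(n: int, m: int) -> str:
    """Golomb code of a non-negative integer n with parameter m, as a bit string."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                      # quotient in unary
    if m == 1:
        return bits                           # remainder carries no information
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                     # truncated binary: shorter codes first
    if r < cutoff:
        bits += format(r, "b").zfill(b - 1)
    else:
        bits += format(r + cutoff, "b").zfill(b)
    return bits

def choose_m(mean: float) -> int:
    # Heuristic (an assumption, not the paper's optimal rule): for a geometric
    # source the best m grows roughly in proportion to the source mean.
    return max(1, round(mean))

data = [0, 3, 1, 7, 2, 0, 5]
m = choose_m(sum(data) / len(data))
print("m =", m, [golomb_encode(n, m) for n in data])
```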
An Improved Belief-Propagation Algorithm with Channel Estimation for LDPC Codes under Block Fading Channels
2005, 27(4): 519-522.
Abstract:
First, it is shown by simulation that Low-Density Parity-Check (LDPC) codes perform well over the block fading channel when decoded with the general belief-propagation algorithm. Then, based on its iterative nature, an improved belief-propagation algorithm is proposed for the block fading channel that performs channel estimation in every iteration. The simulation demonstrates that the proposed algorithm can effectively reduce the number of decoding iterations.
2005, 27(4): 523-526.
Abstract:
In this paper, the performance of the polynomial phase coefficient estimation algorithm based on the High-order Ambiguity Function (HAF) for non-polynomial phase signals with short sequences is discussed in detail. Furthermore, an instantaneous phase estimation method is developed on the basis of this algorithm. The main idea of the discussed algorithm is to divide the data sequence into several segments, approximate the instantaneous phase of each short segment by a low-order polynomial, estimate the parameters of the modeling polynomial-phase signal by the HAF and product HAF methods, and finally integrate the whole phase from the estimated instantaneous phase of each segment. The estimation performance depends largely on the achievable accuracy of the segmented phase. The disadvantage of the HAF/PHAF-based polynomial-phase estimation method with short and non-polynomial phase sequences is analyzed in this paper, and some general conclusions are drawn from simulations.
2005, 27(4): 527-531.
Abstract:
The Data-Aided (DA) Signal-to-Noise Ratio (SNR) estimation algorithm, the Decision-Directed (DD) SNR estimation algorithm and a new blind SNR estimation algorithm for MPSK signals are presented in this paper based on the maximum-likelihood principle. Detailed performance analysis, computer simulation and performance comparison with other SNR estimation algorithms are carried out. The analysis and simulation results show that the DA algorithm performs very well, reaching the performance lower bound. The performance of the DD algorithm depends directly on the accuracy of the decisions, so it is good when the SNR is high; but when the SNR is low, especially below 0 dB, the estimate has a relatively large bias. The new blind SNR estimation algorithm has a broad estimation range, good performance and low computational complexity for M=2 (BPSK), but the performance degrades as M increases.
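A minimal sketch of the data-aided idea, using a standard textbook maximum-likelihood form (not necessarily the paper's exact derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def da_snr_estimate(r, s):
    """Data-aided SNR estimate from received samples r and known unit-modulus
    pilot symbols s (a common ML-based form, shown only as a sketch of the
    DA approach)."""
    a_hat = np.mean(np.real(r * np.conj(s)))      # coherent amplitude estimate
    p_tot = np.mean(np.abs(r) ** 2)               # total received power
    noise = p_tot - a_hat ** 2                    # remaining power is noise
    return a_hat ** 2 / noise

# Simulate QPSK pilots over an AWGN channel at a known SNR and check the estimate.
N, snr_db = 1000, 5.0
snr = 10 ** (snr_db / 10)
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))   # unit-energy QPSK
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(1 / (2 * snr))
r = s + noise
print("estimated SNR [dB]:", 10 * np.log10(da_snr_estimate(r, s)))
```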
2005, 27(4): 532-535.
Abstract:
A new blind signal separation method, where only one frequency bin is used, is presented to separate the underwater signals. Using this method, the indeterminacy in amplitude and permutation can be eliminated, and the validity of this method is confirmed by simulation experiment. Compared with the separation algorithm based on two frequency bins, this new method has better separation performance and less operation time. Therefore, the separation algorithm based on single frequency bin is more suitable for real-time signal separation.
A new blind signal separation method, where only one frequency bin is used, is presented to separate the underwater signals. Using this method, the indeterminacy in amplitude and permutation can be eliminated, and the validity of this method is confirmed by simulation experiment. Compared with the separation algorithm based on two frequency bins, this new method has better separation performance and less operation time. Therefore, the separation algorithm based on single frequency bin is more suitable for real-time signal separation.
2005, 27(4): 536-539.
Abstract:
In this paper, the uniform threshold function of WaveShrink is built. Computationally efficient formulas for computing the bias, variance and risk of the uniform threshold function are derived. These formulas provide a new way of understanding how WaveShrink works. On this basis, the bias, variance and risk of the uniform threshold function (u = 1, 2, ...) are compared as functions of the threshold value and the wavelet coefficients. These comparisons characterize the performance of WaveShrink in finite-sample situations.
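For context, the sketch below uses the familiar soft-threshold function (a simple member of the threshold-function family; the paper's uniform threshold function with index u is not reproduced) and checks bias, variance and risk numerically:

```python
import numpy as np

def soft_threshold(x, lam):
    """Classical WaveShrink soft-threshold function, shown as a familiar member
    of the threshold-function family; the paper's uniform threshold function
    generalizes this and is not reproduced here."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mc_bias_var_risk(theta, lam, sigma=1.0, trials=200000, seed=1):
    """Monte Carlo bias, variance and risk E[(delta(X)-theta)^2] for X ~ N(theta, sigma^2).
    The paper derives closed-form formulas; this sketch only checks such quantities numerically."""
    rng = np.random.default_rng(seed)
    x = theta + sigma * rng.standard_normal(trials)
    d = soft_threshold(x, lam)
    bias = d.mean() - theta
    var = d.var()
    risk = np.mean((d - theta) ** 2)
    return bias, var, risk

for theta in (0.0, 1.0, 3.0):
    print(theta, mc_bias_var_risk(theta, lam=2.0))
```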
2005, 27(4): 540-543.
Abstract:
A novel method based on the geometric active contour model for face tracking is presented. After chromatic histogram backprojection, the facial region appears as a homogeneous region in the backprojected image. An improved narrow band algorithm is developed for implementing the curve evolution. The propagating curve is represented as a set of nodes; the update of the level set function is only calculated on these nodes, while the evolution on other grid points is performed using interpolation and a lookup table. The motion direction as well as the time step of each node is adapted to the local image properties. The experimental results indicate that the proposed algorithm is fast and accurate enough for tracking faces in color image sequences.
2005, 27(4): 544-547.
Abstract:
In this paper, a new face recognition method based on fractal coding is proposed. A fractal singular value neighbor distance is put forward based on the fractal neighbor distance. Fractal coding and local singular value decomposition are used to improve the recognition rate. The experimental results show that, compared with fractal neighbor distances, the method remains robust to variations in illumination, pose and expression. Furthermore, the method has a short training time and a high recognition rate.
2005, 27(4): 548-551.
Abstract:
This paper presents a text fuzzy clustering algorithm that fully combines rough sets and a genetic algorithm. In the clustering process, the weight parameters are also determined by the genetic algorithm, which makes the parameters more reasonable and easier to handle and avoids the subjectivity and unreliability of setting weight parameters in similar algorithms proposed by other researchers. An example demonstrates the feasibility of the algorithm.
2005, 27(4): 552-555.
Abstract:
The concepts of the Subspace Information Quantity (SIQ) and the Function Set Information Quantity (FSIQ) are presented. Then the problem of model selection based on the FSIQ is discussed explicitly, and an approximate method of model selection based on limited samples with white noise is proposed, which resolves the underfitting and overfitting problems of model selection and improves the generalization of the prediction model. A new suboptimal algorithm for model selection is given, and its reliability and advantages are illustrated through concrete tests.
2005, 27(4): 556-560.
Abstract:
DNA computing is a new computation method that simulates the molecular structure of DNA and operates by means of molecular biology technology. It has been widely applied in many fields. After briefly reviewing the progress of DNA computing, the paper introduces a model of molecular computation called the sticker model. Finally, a solution of the minimal covering problem on surfaces using fluorescence marking technology is proposed based on the principle of the sticker model.
2005, 27(4): 561-565.
Abstract:
Peak sequence matching is an important approach to SAR ATR. A general Gaussian model for peaks is given in this paper and the extraction method is designed based on this model. Using an actual target database, the variability of target peaks with target orientation, configuration and depression angle is analyzed. The results show that the peaks of a target in a SAR image have some stability with respect to variations in target orientation, configuration and depression angle, so it is feasible to recognize targets in SAR images using the peak feature.
2005, 27(4): 566-569.
Abstract:
According to the characteristics of coherent pulse compression in a passive location system using FM broadcast signals, a desampling FIR filter is necessary and feasible. The maximization of the SNR after coherent pulse compression is taken as the criterion for the design of the desampling filter. The relation between the SNR and the coefficients of the adopted filter is derived, and the steps of the Rayleigh quotient method for obtaining the optimal desampling FIR filter are presented. The simulation results show that the proposed method achieves good SNR despite its low computational complexity.
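The Rayleigh-quotient step can be sketched as a generalized eigenvalue problem; the matrices below are random stand-ins for the signal- and noise-related quadratic forms of the paper:

```python
import numpy as np
from scipy.linalg import eigh

# Sketch: the output SNR of a length-L FIR filter w can be written as the
# Rayleigh quotient (w^T A w)/(w^T B w), with A a signal-related matrix and
# B a noise-related matrix. Here A and B are random illustrative stand-ins,
# not the pulse-compression model of the paper.
rng = np.random.default_rng(2)
L = 16
S = rng.standard_normal((L, 40))
A = S @ S.T                                   # "signal" quadratic form
Nm = rng.standard_normal((L, 200))
B = Nm @ Nm.T + np.eye(L)                     # "noise" quadratic form, kept positive definite

# The maximizer of the Rayleigh quotient is the generalized eigenvector of
# (A, B) with the largest eigenvalue; eigh returns eigenvalues in ascending order.
vals, vecs = eigh(A, B)
w_opt = vecs[:, -1]
snr_opt = (w_opt @ A @ w_opt) / (w_opt @ B @ w_opt)
print("max generalized eigenvalue:", vals[-1], "achieved SNR:", snr_opt)
```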
2005, 27(4): 570-573.
Abstract:
This paper investigates the time-domain design method of two-channel signal-adapted FIR alias-free filterbanks without Perfect Reconstruction (PR) constraints. For subband coders based on filterbanks without PR constraints, the total distortion is composed of two parts: the systemic distortion and quantization distortion. For a given total bit budget, the minimization of the total distortion is an unconstrained nonlinear programming. Due to its high nonlinearity, the design results greatly depend on selection of initial filterbanks. Two approaches to select the initial filterbanks, the associated algorithm and designs of several examples are given in this paper. The obtained signal-adapted filterbanks achieve larger subband coding gains than the existing methods, which verifies our method is effective.
2005, 27(4): 574-576.
Abstract:
Oil spill detection is one of the important application fields of Synthetic Aperture Radar (SAR) images. The wavelet method is an edge detection method broadly researched in recent years. Conventional oil spill detection uses the first and second derivatives of a Gaussian function as the basic wavelet function, which has slow computing speed. This paper utilizes a zero-antisymmetrical dyadic wavelet, using multi-resolution analysis and multi-scale integration to obtain oil spill edge images. The results demonstrate that it is a useful and promising method for oil spill detection in SAR images.
2005, 27(4): 577-579.
Abstract:
Phased array radar combined with wideband techniques is a development trend in radar, but the problem of suppressing wideband interference must be solved in wideband phased array radar. A new wideband transmit beamforming method using the LFM signal is presented in this paper. The algorithm produces the optimal weight by combining wideband and narrowband weights on the premise of accurate DOA estimation. The method can ensure high range resolution of the signal and form pattern nulls in the directions of the interference. Simulation results demonstrate the effectiveness of the method.
2005, 27(4): 580-583.
Abstract:
An analysis of the exponentially forgetting transform (EFT) is made in this paper. The application of a sliding single-sided exponential window makes a recursive implementation of the EFT possible, which greatly raises computational efficiency. Compared with other time-frequency distributions, the EFT is especially useful in the case of large data lengths and is more suitable for hardware implementation. The relationships of the bias and mean square error of the EFT with the SNR and the forgetting coefficient are studied. In order to overcome the drawbacks of the single-sided window, a new time-frequency distribution using a double-sided exponential window, which is computationally efficient with much reduced bias, is introduced.
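A minimal sketch of one natural recursive form of such a transform (the paper's exact definition and normalization may differ):

```python
import numpy as np

def eft(x, freqs, lam=0.95):
    """One natural recursive form of an exponentially forgetting spectral estimate
    (an assumption for illustration): X(n, w) = sum_{m<=n} lam^(n-m) x(m) e^{-j w m},
    updated per sample as X(n, w) = lam * X(n-1, w) + x(n) e^{-j w n}."""
    X = np.zeros(len(freqs), dtype=complex)
    history = []
    for n, xn in enumerate(x):
        X = lam * X + xn * np.exp(-1j * freqs * n)   # O(K) work per incoming sample
        history.append(np.abs(X))
    return np.array(history)                         # time-frequency magnitude map

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
freqs = 2 * np.pi * np.arange(0, 200, 5) / fs        # analysis grid in rad/sample
tf = eft(x, freqs, lam=0.98)
print(tf.shape, "strongest bins (Hz):", np.argsort(tf[-1])[-2:] * 5)
```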
2005, 27(4): 584-587.
Abstract:
A combined power control and data rate adjustment scheme with a Minimum Mean Square Error (MMSE) receiver for the 3GPP Wideband Code Division Multiple Access (WCDMA) reverse link is proposed. It can guarantee the target SIR while keeping the transmit power at the minimum level and converging faster, resulting in optimal overall system performance. Computer simulation shows that the proposed scheme is appropriate and the optimization algorithm is effective and correct. Simulation results show that, using the proposed algorithm, the system capacity is increased by about 10% and the convergence speed is increased by about 30% compared with the MMSE power control algorithm.
2005, 27(4): 588-591.
Abstract:
The paper studies the capacity of a Multiple Input Multiple Output (MIMO) system with correlated fading under the constraint of fixed space for the receiver UCA antennas. The fading correlation model is established and the impacts of the number of antennas and the scattering angle on the channel capacity are investigated. Based on random matrix theory, a closed-form expression for the channel capacity of an M by N MIMO system is derived. Analysis shows that the channel capacity of a MIMO system is mainly determined by the eigenvalues of the fading correlation matrix. Simulation shows that the channel capacity saturates when the number of antennas increases beyond a certain point.
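The capacity expression can be illustrated with a Kronecker-style receive-correlation model (an assumption made here for illustration, not the paper's UCA-specific model):

```python
import numpy as np

rng = np.random.default_rng(3)

def mimo_capacity(M, N, snr_db, Rr, trials=2000):
    """Ergodic capacity (bits/s/Hz) of an M-transmit, N-receive MIMO channel with
    receive-side correlation Rr: C = E[ log2 det(I_N + (snr/M) H H^H) ],
    H = Rr^{1/2} Hw with i.i.d. CN(0,1) entries in Hw."""
    snr = 10 ** (snr_db / 10)
    # matrix square root of the Hermitian PSD correlation matrix
    vals, vecs = np.linalg.eigh(Rr)
    Rr_half = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T
    caps = []
    for _ in range(trials):
        Hw = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        H = Rr_half @ Hw
        A = np.eye(N) + (snr / M) * H @ H.conj().T
        caps.append(np.log2(np.linalg.det(A).real))
    return float(np.mean(caps))

# Exponential correlation models a compact receive array with tight element spacing.
N = M = 4
rho = 0.7
Rr = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
print("correlated:  ", mimo_capacity(M, N, 10, Rr))
print("uncorrelated:", mimo_capacity(M, N, 10, np.eye(N)))
```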
2005, 27(4): 592-594.
Abstract:
The sampling times of the receiver might miss the best decision point when the sampling clock is fixed and the sampling rate is limited. Under these circumstances, symbol decisions are badly affected by ISI. In this paper, a method of enhancing the precision of symbol synchronization using an interpolator is introduced based on the theory of digital interpolation, and the features of the interpolation filter and its efficient implementation structure based on polyphase decomposition are discussed.
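A minimal sketch of the interpolation idea using a cubic Lagrange interpolator (a common choice; the paper's specific filter and polyphase structure are not reproduced):

```python
import numpy as np

def cubic_interpolate(x, k, mu):
    """Cubic Lagrange interpolation between samples x[k] and x[k+1] at fractional
    offset mu in [0, 1). A polyphase realization would precompute these basis
    weights for a grid of mu values instead of evaluating them per sample."""
    xm1, x0, x1, x2 = x[k - 1], x[k], x[k + 1], x[k + 2]
    lm1 = -mu * (mu - 1) * (mu - 2) / 6
    l0 = (mu + 1) * (mu - 1) * (mu - 2) / 2
    l1 = -(mu + 1) * mu * (mu - 2) / 2
    l2 = (mu + 1) * mu * (mu - 1) / 6
    return xm1 * lm1 + x0 * l0 + x1 * l1 + x2 * l2

# Recover a value between two fixed-clock samples: sample a sine at 8 samples per
# period and interpolate an instant that the fixed sampling grid missed.
fs, f = 8.0, 1.0
n = np.arange(32)
x = np.sin(2 * np.pi * f * n / fs)
k, mu = 10, 0.37                            # desired instant t = (k + mu)/fs
est = cubic_interpolate(x, k, mu)
true = np.sin(2 * np.pi * f * (k + mu) / fs)
print(round(est, 5), round(true, 5))
```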
Adaptive Non-linear LTJ Filter Techniques for Interference Suppression in CDMA Communication Systems
2005, 27(4): 595-598.
Abstract:
In this paper, an LTJ-based adaptive nonlinear filter and its application to narrowband interference suppression in satellite CDMA communication systems are investigated. The filter, built on the LTJ structure, combines the lattice and LMS adaptive structures and inherits the merits of both. As a result, it increases the convergence rate while reducing its own complexity.
2005, 27(4): 599-602.
Abstract:
A non-uniform adjacent subcarrier partition method is proposed to reduce the Peak-to-Average power Ratio (PAR) of the signal in MultiCarrier-CDMA (MC-CDMA) systems using the Partial Transmit Sequence (PTS) scheme. Regarding all subcarriers as one frequency band, the method reduces the number of high- and low-frequency subcarriers within a data block and increases the number of intermediate-frequency subcarriers. When the number of intermediate-frequency subcarriers within a data block is double the number of high- (or low-) frequency subcarriers, the PAR is reduced by 0.2 dB compared with that of the uniform partition. A Genetic Algorithm (GA) is proposed to reduce the number of iterations when the space of rotation matrices is very large; the optimum value can be obtained within 60 generations of evolution. The simulation results validate the proposed method.
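A minimal PTS sketch with a uniform adjacent partition and exhaustive phase search (the paper replaces these with a non-uniform partition and a genetic search):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def pts(X, partitions, phases=(1, -1, 1j, -1j)):
    """Partial Transmit Sequence sketch: split the subcarrier block X into disjoint
    partitions, rotate each partition's time-domain signal by a phase factor and
    keep the combination with the lowest PAPR. Exhaustive phase search is used
    here only because the example is small."""
    subs = [np.fft.ifft(np.where(np.isin(np.arange(len(X)), p), X, 0)) for p in partitions]
    best, best_papr = None, np.inf
    for rot in product(phases, repeat=len(subs)):
        x = sum(r * s for r, s in zip(rot, subs))
        p = papr_db(x)
        if p < best_papr:
            best, best_papr = x, p
    return best, best_papr

Nc, V = 64, 4                                              # subcarriers, partitions
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], Nc)     # QPSK block
partitions = np.array_split(np.arange(Nc), V)              # uniform adjacent partition
x_orig = np.fft.ifft(X)
x_pts, papr_pts = pts(X, partitions)
print("original PAPR:", round(papr_db(x_orig), 2), "dB  PTS PAPR:", round(papr_pts, 2), "dB")
```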
2005, 27(4): 603-607.
Abstract:
This paper proposes a novel channel estimation method for wireless Orthogonal Frequency Division Multiplexing (OFDM) systems. The method can significantly reduce the Inter-Carrier Interference (ICI) and additive white Gaussian noise by processing the received pilots in both the multipath spread domain and the Doppler spread domain. Furthermore, the method is adaptive in the sense that the cutoff frequency of the filter in the Doppler spread domain is designed dynamically to match the Signal-to-Noise Ratio (SNR). Computer simulation results demonstrate that the proposed method performs well at various Doppler frequency shifts.
2005, 27(4): 608-611.
Abstract:
A publicly verifiable encryption scheme allows any entity to verify that a ciphertext hides the same message as committed before, without revealing it. It is important for constructing fair exchange schemes, publicly verifiable secret sharing and cheater-resistant secure multi-party computation. In this paper, publicly verifiable encryption schemes are presented for the ElGamal and RSA cryptosystems. The ElGamal case is an improved version of Stadler's publicly verifiable encryption scheme; the improved scheme is semantically secure while Stadler's scheme is not. The scheme is also extended to the context of multi-recipient ElGamal encryption, and an efficient publicly verifiable RSA scheme is proposed.
2005, 27(4): 612-616.
Abstract:
The definition and related conclusions of chosen-ciphertext security IND-CCA (INDistinguishability against adaptive-Chosen Ciphertext Attack) for hybrid encryption combining symmetric and asymmetric encryption are discussed. By studying two kinds of hybrid encryption with different uses and their security definitions, it is found that there is a difference in their oracles. The definition of IND-CCA is then unified as security against adversaries that can only access the whole decryption oracle of the hybrid scheme, which makes the unification of the security conclusions of hybrid schemes possible and supplies the ground for proper use of hybrid schemes. A hybrid scheme called REACT+ is proposed together with its security proof.
2005, 27(4): 617-620.
Abstract:
In this paper, the principle of density evolution combined with the decoding process is explained first. Two algorithms for carrying out the evolution procedure, discretized density evolution and Gaussian approximation, are discussed. Simulation results for some good distribution pairs that have been found are presented as well.
2005, 27(4): 621-624.
Abstract:
The efficiency of a public-key encryption scheme proposed by P. Paillier et al. is improved. The equivalence of security of the improved encryption scheme and that of the original encryption scheme is proved. Without increasing the size of the ciphertext, the scheme is further modified into an efficient signcryption scheme. If necessary, the receiver can verify the integrity and source of the message at any time.
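For background only, a textbook Paillier encryption/decryption sketch with toy primes; the paper's efficiency improvement and the derived signcryption scheme are not reproduced here:

```python
from math import gcd
import random

# Textbook Paillier key generation, encryption and decryption (Python 3.8+ for
# pow(x, -1, n)). Tiny primes are used for readability; real deployments need
# large moduli. This is background for the scheme family, not the paper's scheme.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1789, q=2003):
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                                   # common simplified choice of g
    mu = pow(lam, -1, n)                        # works since L(g^lam mod n^2) = lam
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n           # L(u) = (u - 1) / n
    return (L * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 123), encrypt(pk, 456)
# Decrypting c1 gives 123; decrypting c1*c2 gives 579 (additive homomorphism).
print(decrypt(pk, sk, c1), decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)))
```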
2005, 27(4): 625-628.
Abstract:
The correlation function is an important parameter for studying the security of stream ciphers. This paper discusses the autocorrelation of two classes of generalized Jacobi sequences and gives their autocorrelation values. The results show that the two classes of generalized Jacobi sequences have good autocorrelation properties.
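As background, the sketch below generates a Legendre sequence, the classical building block of Jacobi-type constructions, and verifies its two-level periodic autocorrelation numerically (the paper's two generalized classes are not reproduced):

```python
import numpy as np

def legendre_sequence(p):
    """±1 Legendre sequence of odd prime period p, a building block of Jacobi-type
    sequences; the value at index 0 is set to +1 by convention here."""
    residues = {(x * x) % p for x in range(1, p)}
    return np.array([1] + [1 if i in residues else -1 for i in range(1, p)])

def periodic_autocorrelation(s):
    n = len(s)
    return np.array([np.sum(s * np.roll(s, -tau)) for tau in range(n)])

p = 19                          # 19 = 3 (mod 4), so the sequence is ideal two-level
s = legendre_sequence(p)
acf = periodic_autocorrelation(s)
print("in-phase value:", acf[0], "out-of-phase values:", sorted(set(acf[1:])))
```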
2005, 27(4): 629-633.
Abstract:
This paper presents an uncontested Distributed Wireless Token Ring Protocol (DWTRP) based on the Wireless Token Ring Protocol (WTRP). The simulation results show that the average delay and average queue length in the DWTRP system are much lower than those in the WTRP system, and the stability is further enhanced.
2005, 27(4): 634-637.
Abstract:
A novel distributed QoS routing algorithm, whose metric is bandwidth, is presented in this paper. It not only keeps the merits of simplicity and low link overhead of traditional distributed QoS routing algorithms, but also reduces resource fragments and admits more services into a heavily loaded network. The high performance of this algorithm comes with some extra delay during path setup, but the performance is maintained while this delay is greatly reduced when the start threshold of the algorithm is taken into account. Extensive simulation results also indicate its correctness and efficiency.
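The bandwidth metric can be illustrated with a centralized widest-path (maximum bottleneck bandwidth) search; the paper's algorithm is distributed and adds a start threshold, which this sketch omits:

```python
import heapq

def widest_path(graph, src, dst):
    """Centralized widest-path search with a Dijkstra-like priority queue:
    among all paths from src to dst, find one maximizing the bottleneck
    (minimum link) bandwidth. graph: {u: {v: bandwidth}}."""
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]                    # max-heap on bottleneck bandwidth
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if u == dst:                                 # first pop of dst is optimal
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return bw, path[::-1]
        if bw < best.get(u, 0):                      # stale heap entry
            continue
        for v, cap in graph[u].items():
            cand = min(bw, cap)                      # bottleneck along the new path
            if cand > best.get(v, 0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    return 0, []

graph = {
    "A": {"B": 100, "C": 30},
    "B": {"A": 100, "D": 20, "C": 60},
    "C": {"A": 30, "B": 60, "D": 50},
    "D": {"B": 20, "C": 50},
}
print(widest_path(graph, "A", "D"))     # expect bottleneck 50 via A-B-C-D
```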
2005, 27(4): 638-641.
Abstract:
The low-cost shortest path tree is a commonly used multicast tree type, which minimizes the end-to-end delay while reducing bandwidth consumption as much as possible. This article presents an algorithm for building a low-cost shortest path tree. The algorithm dynamically adjusts each node's minimum cost to the current shortest path tree, and gradually builds a shortest path tree with low total cost by repeatedly selecting the node with the minimum cost to the current tree. Analysis and simulation show that the algorithm has better performance and lower complexity than the Destination-Driven Shortest Path (DDSP) algorithm, making it a very good shortest path tree algorithm.
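As a rough illustration of cost-aware tree construction (a simple stand-in, not the paper's algorithm), the sketch below runs Dijkstra on delay and breaks equal-delay ties in favor of the cheaper attaching link:

```python
import heapq

def low_cost_spt(graph, src):
    """Shortest-path-tree sketch: Dijkstra on delay, breaking equal-delay ties in
    favor of the cheaper attaching link. This is only a simple stand-in for the
    cost-aware node-selection rule described in the abstract.
    graph: {u: {v: (delay, cost)}}."""
    dist = {src: 0}
    attach_cost = {src: 0}
    parent = {src: None}
    heap = [(0, 0, src)]                        # (path delay, attaching link cost, node)
    while heap:
        d, c, u = heapq.heappop(heap)
        if (d, c) > (dist.get(u, float("inf")), attach_cost.get(u, float("inf"))):
            continue                            # stale heap entry
        for v, (delay, cost) in graph[u].items():
            nd = d + delay
            if nd < dist.get(v, float("inf")) or (
                nd == dist.get(v, float("inf")) and cost < attach_cost.get(v, float("inf"))
            ):
                dist[v], attach_cost[v], parent[v] = nd, cost, u
                heapq.heappush(heap, (nd, cost, v))
    return dist, parent

graph = {
    "S": {"A": (1, 2), "B": (1, 2)},
    "A": {"S": (1, 2), "T": (1, 3)},
    "B": {"S": (1, 2), "T": (1, 1)},
    "T": {"A": (1, 3), "B": (1, 1)},
}
dist, parent = low_cost_spt(graph, "S")
print(dist, parent)   # T has two equal-delay routes; the tie-break attaches it via the cheaper link through B
```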
2005, 27(4): 642-646.
Abstract:
The radiation characteristics of a disk radiator are analyzed with a transfer function in the frequency domain. The near and far fields and the law of energy propagation along the axis of the disk are presented. Time-domain electromagnetic waves are analyzed in detail by defining an energy radiation parameter. Their nature is the same as that of sinusoidal electromagnetic waves, but they can decay much more slowly because high-frequency components radiate better than low-frequency ones; thus a consistent understanding of the two can be reached.
2005, 27(4): 647-650.
Abstract:
EMC analysis of a complicated electromagnetic environment requires prohibitive computational resources and time. To overcome this drawback, a parallel algorithm that combines the MoM with MPI functions is studied in this paper. A tessellation scheme is employed to fill the impedance matrix in parallel, and a parallel conjugate gradient method is used to solve the matrix equation. The performance of the parallel code on PC clusters is analyzed, and numerical results show its efficiency.
2005, 27(4): 651-654.
Abstract:
In this paper, a new combinational equivalence checking approach using Boolean satisfiability is proposed. The algorithm first uses several methods to reduce the search space of the SAT reasoning, namely AND/INVERTER graph transformation, BDD propagation and implication learning; the CNF-based SAT solver zChaff is then used to solve the verification task. The algorithm combines the advantages of both BDDs and SAT: the BDD size is limited to avoid the memory explosion problem, and structural reduction is applied to reduce the search space of SAT. The efficiency of the proposed approach is shown through its application to the ISCAS85 benchmark circuits.
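The miter formulation at the heart of equivalence checking can be illustrated on tiny circuits by exhaustive enumeration (the paper instead converts the miter to CNF for zChaff after the reductions listed above):

```python
from itertools import product

def miter_equivalent(circuit_a, circuit_b, n_inputs):
    """Brute-force check of a 'miter': the two circuits are equivalent iff the XOR
    of their outputs is 0 for every input assignment. A SAT-based flow instead asks
    a solver whether the miter output can ever be 1; exhaustive enumeration is used
    here only because the example circuits are tiny."""
    for bits in product([0, 1], repeat=n_inputs):
        if circuit_a(*bits) ^ circuit_b(*bits):      # miter output = 1 -> counterexample
            return False, bits
    return True, None

# Two structurally different but functionally identical 2-input circuits:
# a XOR b written directly, and via (a OR b) AND NOT(a AND b).
c1 = lambda a, b: a ^ b
c2 = lambda a, b: (a | b) & (1 - (a & b))
print(miter_equivalent(c1, c2, 2))

# A buggy variant is caught with a counterexample input.
c3 = lambda a, b: a | b
print(miter_equivalent(c1, c3, 2))
```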
2005, 27(4): 663-665.
Abstract:
In this paper, a new blind equalization algorithm suited to QAM communication systems is proposed. It overcomes the large residual error after convergence of conventional algorithms such as the General Sato Algorithm (GSA) and the Constant Modulus Algorithm (CMA) while still keeping a high convergence rate, and it can recover the channel phase error automatically at the same time. The algorithm's better equalization performance is confirmed by computer simulation results.
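For reference, a sketch of the baseline CMA that the proposed algorithm improves on (QPSK example; the new algorithm itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)

def cma_equalizer(x, n_taps=11, mu=1e-3):
    """Classical Constant Modulus Algorithm: adapt FIR taps w so that |y|^2 is
    driven toward the constant R2 = E|s|^4 / E|s|^2 (R2 = 1 for unit-modulus QPSK).
    Shown as the baseline; the paper's algorithm adds mechanisms to cut the
    residual error and recover the carrier phase."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                          # center-spike initialization
    R2 = 1.0
    n_iter = len(x) - n_taps
    y_hist = np.zeros(n_iter, dtype=complex)
    for n in range(n_iter):
        xn = x[n:n + n_taps][::-1]                # regressor, most recent sample first
        y = w.conj() @ xn
        e = y * (np.abs(y) ** 2 - R2)             # CMA error term
        w -= mu * np.conj(e) * xn                 # stochastic gradient update
        y_hist[n] = y
    return w, y_hist

# QPSK through a mild complex ISI channel plus noise.
N = 20000
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
h = np.array([1.0, 0.35 + 0.2j, -0.15j])
x = np.convolve(s, h)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
w, y = cma_equalizer(x)
print("dispersion before/after:", np.mean((np.abs(x) ** 2 - 1) ** 2).round(3),
      np.mean((np.abs(y[-2000:]) ** 2 - 1) ** 2).round(3))
```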
2005, 27(4): 666-669.
Abstract:
The protection of bidder privacy and the prevention of bidder default are the keys to designing a secure auction protocol, but up to now the research on both of them is still very weak. Aiming at this drawback, this paper designs a new secure and efficient auction protocol. Using digital signature technology and a bit commitment protocol, it not only guarantees the non-repudiation and anonymity of bidders, but also ensures that nobody can manipulate others in the whole auction. The protocol also achieves the valuable properties of bid secrecy and verifiability; even when malicious bidders collude with auctioneers, it is still secure and valid. More importantly, this protocol supports the second-price principle and the optimal distribution of goods. Compared with previous works, this protocol provides better extensibility and higher efficiency, and is suitable for large-scale distributed auctions.
2005, 27(4): 670-672.
Abstract:
By the modern time series analysis method, based on the AutoRegressive Moving Average (ARMA) innovation model and the Lyapunov equation, a multisensor information fusion Wiener deconvolution filter is presented for single-channel ARMA signals. It avoids the Riccati equation and can be applied to design self-tuning information fusion filters for systems with unknown model parameters and unknown variances. A simulation example shows its effectiveness.
2005, 27(4): 655-662.
Abstract:
Data mining is used to extract interesting information from Very Large DataBases (VLDB), and clustering plays an outstanding role in data mining applications. Clustering is the division of a database into groups of similar objects based on similarity; from a machine learning perspective, clusters correspond to hidden patterns and the search for clusters is unsupervised learning. There are tens of clustering algorithms used in various fields such as statistics, pattern recognition and machine learning. This paper surveys the clustering algorithms used in data mining and sorts them into seven classes; the seven types of algorithms are summarized and their performance is analyzed.
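As a concrete example of one of the surveyed families, here is a plain k-means sketch (partitioning-type clustering); the hierarchical, density-based, grid-based and other classes are not illustrated:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=6):
    """Plain k-means, the canonical partitioning-type clustering algorithm:
    alternate nearest-center assignment and center recomputation until the
    centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ((0, 0), (3, 3), (0, 4))])
labels, centers = kmeans(X, 3)
print(np.round(centers, 2))
```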