2009 Vol. 31, No. 12
2009, 31(12): 2795-2800.
doi: 10.3724/SP.J.1146.2008.01793
Abstract:
In this paper, the issue of robust protection in WDM networks is investigated under the hose uncertain traffic model. Based on Valiant Load Balancing (VLB) and shared protection, a segment protection algorithm called VLB-SSP (VLB-based Shared Segment Protection) is proposed. The algorithm provisions wavelengths according to the shared-protection principle and splits the protection loops so as to meet the recovery-time requirement. Simulation results indicate that VLB-SSP not only achieves a lower cost budget but also recovers faster than the dedicated-path-protection VLB algorithm and the uniform load-balancing protection scheme.
2009, 31(12): 2801-2806.
doi: 10.3724/SP.J.1146.2008.01766
Abstract:
An adaptive dynamic spectrum allocation scheme among multiple cells in a cognitive radio system is proposed. Firstly, a scheme is presented for predicting the idle period of each licensed spectrum unit that can be used by the cognitive radio. Based on this, a dynamic spectrum allocation scheme is proposed to reduce the spectrum handoff ratio and minimize the discontinuity of the spectrum used by the cognitive radio, which avoids extra system overhead and spectrum-management complexity. The proposed scheme also introduces graph coloring to avoid interference among the cells. Simulation and analysis results verify this conclusion.
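As a generic illustration of the graph-coloring idea mentioned in the abstract (not the paper's specific allocation scheme), the sketch below assigns channels to cells so that interfering neighbors never share one; the adjacency matrix, cell count, and channel count are made-up assumptions.

```python
import numpy as np

def greedy_channel_coloring(adjacency, n_channels):
    """Assign one channel (color) per cell so that interfering (adjacent)
    cells never share a channel, if possible.

    adjacency : (N, N) boolean matrix, True where two cells interfere.
    Returns a list of channel indices; None marks a cell that could not
    be colored with the available channels."""
    n_cells = adjacency.shape[0]
    assignment = [None] * n_cells
    # Color higher-degree cells first (a common greedy heuristic).
    order = np.argsort(-adjacency.sum(axis=1))
    for cell in order:
        used = {assignment[nb] for nb in range(n_cells)
                if adjacency[cell, nb] and assignment[nb] is not None}
        free = [c for c in range(n_channels) if c not in used]
        assignment[cell] = free[0] if free else None
    return assignment

# Toy example: 4 cells on a ring, 2 channels suffice.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=bool)
print(greedy_channel_coloring(adj, n_channels=2))   # [0, 1, 0, 1]
```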
2009, 31(12): 2807-2812.
doi: 10.3724/SP.J.1146.2008.01596
Abstract:
In wireless sensor networks, the energy of a sensor node is very limited, and there are many redundant nodes when the network is densely deployed. Therefore, a 3D coverage scheme based on hibernation of redundant nodes and a phased waking-up strategy is proposed. A large number of sensor nodes are randomly deployed in the monitoring region, and the redundant nodes hibernate. The hibernated nodes are woken up in phases after the on-duty nodes are exhausted. The hibernating/waking-up process lasts until all nodes in the entire sensor network are exhausted. Simulation results show that this method improves the network performance. Moreover, with the same number of nodes deployed in the 3D monitoring region, the phased waking-up strategy outperforms the non-phased one, and waking up nodes after hibernation achieves higher network performance than directly waking them up in turn.
2009, 31(12): 2813-2818.
doi: 10.3724/SP.J.1146.2008.01556
Abstract:
Broadcast is a common operation in multi-hop wireless networks. However, previous schemes either introduce considerable transmission redundancy or incur excessive overhead. In this paper, the minimum number of forwarding nodes needed to cover a network is analyzed. On this basis, a simple broadcast method is proposed that significantly increases the transmission efficiency. In this method, each forwarding node needs to select no more than three further forwarding nodes. The set of forwarding nodes covers nearly the entire network area twice, thus providing a high delivery ratio. In addition, the method scales well in large networks and highly dynamic environments. Simulation results show that it performs much better than existing methods under a variety of network conditions.
2009, 31(12): 2819-2823.
doi: 10.3724/SP.J.1146.2008.00112
Abstract:
In this paper, a distributed moving horizon state estimation approach based on multi-bit quantized data is presented. Each sensor node keeps a list of thresholds that are used to quantize its observations into multiple bits. After receiving these bits, the Fusion Center (FC) produces the final estimate of the system states. Simulation results show that the more thresholds are used, the better the estimation results, which is consistent with intuition. Compared with single-bit distributed moving horizon state estimation, this method avoids the FC sending estimates back to the sensor nodes and provides higher state-estimation precision.
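The following minimal sketch shows one way a sensor node could map an observation to multiple bits with a threshold list, as the abstract describes; the threshold values, bit ordering, and interface are illustrative assumptions rather than the paper's scheme.

```python
import numpy as np

def quantize_observation(y, thresholds):
    """Quantize a scalar observation into ceil(log2(len(thresholds)+1)) bits
    by comparing it against a sorted threshold list.
    Returns the quantization level and its bit representation (MSB first)."""
    thresholds = np.sort(np.asarray(thresholds))
    level = int(np.searchsorted(thresholds, y))   # index of the interval containing y
    n_bits = int(np.ceil(np.log2(len(thresholds) + 1)))
    bits = [(level >> k) & 1 for k in reversed(range(n_bits))]
    return level, bits

# Example: three thresholds -> four levels -> two bits per observation.
level, bits = quantize_observation(0.7, thresholds=[-1.0, 0.0, 1.0])
print(level, bits)   # 2 [1, 0]
```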
2009, 31(12): 2824-2828.
doi: 10.3724/SP.J.1146.2008.01741
Abstract:
Inter-Symbol Interference (ISI) increases as the transmission rate of an Impulse Radio-Ultra Wide Band (IR-UWB) system grows, which worsens the Bit Error Rate (BER) and limits the highest achievable transmission rate. In order to suppress ISI and achieve a high transmission rate, a Fractionally Spaced-Decision Feedback Middle Equalization (FS-DFME) receiver is proposed based on the cause of ISI. The receiver jointly realizes matched filtering (MF) and channel equalization so as to collect multipath signal energy and suppress ISI. Simulation results show that the observation window length is an important parameter in mitigating ISI. Compared with the Linear Equalization (LE) and Fractionally Spaced-Decision Feedback Non-Middle Equalization (FS-DFNME) receivers, the FS-DFME receiver mitigates ISI more effectively and improves the BER performance markedly.
2009, 31(12): 2829-2833.
doi: 10.3724/SP.J.1146.2008.01654
Abstract:
The performance of cooperative communication depends strongly on efficient resource allocation such as relay selection and power control. In this paper, a game-theoretic power control algorithm with relay selection is proposed. A payoff function is defined according to each node's SNR, and different pricing schemes are applied to the source node and the relay nodes because they play different roles in the network. On the premise that these nodes are rational, each node maximizes its utility function by adjusting its power, and a choice among the relay nodes is then made according to the power criterion. Furthermore, the properties of the Nash equilibrium are analyzed. Simulation results show that the algorithm encourages the nodes to use their power efficiently and improves the system performance.
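As a hedged illustration of pricing-based power control in a game setting (the paper's payoff function and relay-selection rule are not given in the abstract), the sketch below maximizes a logarithmic SNR utility minus a linear price; the channel gains, prices, and the "least required power" selection rule are invented assumptions.

```python
import numpy as np

def best_power(gain, price, noise=1.0):
    """Utility-maximizing power for u(p) = log(1 + gain*p/noise) - price*p.
    Setting du/dp = gain/(noise + gain*p) - price = 0 gives the closed form
    p* = max(0, 1/price - noise/gain)."""
    return max(0.0, 1.0 / price - noise / gain)

# Source and two candidate relays with different channel gains and prices.
gains  = {"source": 2.0, "relay1": 1.2, "relay2": 0.8}
prices = {"source": 0.2, "relay1": 0.4, "relay2": 0.4}
powers = {node: best_power(gains[node], prices[node]) for node in gains}
print(powers)

# One plausible "power criterion": pick the relay that needs the least power.
relays = {node: p for node, p in powers.items() if node.startswith("relay")}
print("selected relay:", min(relays, key=relays.get))
```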
2009, 31(12): 2834-2837.
doi: 10.3724/SP.J.1146.2008.01565
Abstract:
Residual frequency and phase offsets cause obvious degradation of Turbo decoding performance at low SNR, so a code-aided carrier synchronization algorithm must be introduced for phase estimation. In this paper, an improved residual carrier frequency offset estimation algorithm based on APPA (A Priori Probability Aided) phase estimation is proposed. It uses the extrinsic information obtained from the Turbo decoder to aid an iterative phase estimation process, and a loop filter converts the phase error signal into a control word that drives the numerically controlled oscillator. Simulation results show that the algorithm works well with small frequency and phase offsets at very low SNR (for example, an SNR of -7.8 dB), and its performance is very close to that of the perfectly synchronized system.
2009, 31(12): 2838-2842.
doi: 10.3724/SP.J.1146.2008.01650
Abstract:
The design of transceivers in relaying systems is studied with full or partial Channel State Information (CSI) at the relay. Based on the acquired CSI, the relay filters and retransmits its received signals, and the destination recovers the original signals with a linear Minimum Mean Square Error (MMSE) receiver. Simulation results show that the proposed relaying scheme outperforms traditional Amplify-and-Forward (AF) relaying. At high Signal-to-Noise Ratio (SNR), the scheme based on partial CSI feedback performs close to that based on full CSI feedback.
2009, 31(12): 2843-2847.
doi: 10.3724/SP.J.1146.2008.01041
Abstract:
The traditional power spectrum detector is based on the assumption that the mean and variance of the background noise spectrum do not vary with frequency. However, actual noise spectra generally do not meet this hypothesis in a non-cooperative satellite communication environment, so the detector's performance is limited. To address this problem, the received signals are preprocessed by sliding-window least squares to make the noise spectra satisfy the assumption. A blind detection algorithm for satellite communication signals is then presented, and the improvement factor of its detection performance relative to the power spectrum detector is derived. Simulation results indicate that, under the same conditions, the algorithm generally performs better than the power spectrum detector, a classical blind detection method. The proposed algorithm is also easy to implement and has low computational complexity.
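One possible reading of the sliding-window least-squares preprocessing is a local linear fit that flattens a frequency-dependent noise floor; the sketch below follows that interpretation on synthetic data and is not the authors' exact procedure, and the window length is an assumption.

```python
import numpy as np

def sliding_ls_flatten(power_spectrum, window=65):
    """Remove a slowly varying noise floor from a power spectrum by fitting
    a local straight line (least squares) in a sliding window around every
    bin and subtracting the fitted value."""
    x = np.asarray(power_spectrum, float)
    half = window // 2
    n = len(x)
    flattened = np.empty(n)
    for k in range(n):
        lo, hi = max(0, k - half), min(n, k + half + 1)
        idx = np.arange(lo, hi)
        a, b = np.polyfit(idx, x[lo:hi], deg=1)   # local LS line a*idx + b
        flattened[k] = x[k] - (a * k + b)
    return flattened

# Synthetic example: sloped noise floor plus a narrowband signal at bin 300.
rng = np.random.default_rng(0)
spec = np.linspace(1.0, 3.0, 1024) + 0.1 * rng.standard_normal(1024)
spec[300] += 2.0
print(np.argmax(sliding_ls_flatten(spec)))   # expected: 300
```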
2009, 31(12): 2848-2852.
doi: 10.3724/SP.J.1146.2008.00224
Abstract:
The Stepped Frequency Chirp Signal (SFCS) is one of the commonly used signals in high-resolution radar. It combines the chirp signal with the stepped-frequency waveform and has the advantages of both. In the design of a real radar system, allowing the frequency step (Δf) to be larger than the bandwidth of each sub-chirp (Bm) is very helpful for obtaining a large total bandwidth with fewer sub-chirps and for reducing the influence of target motion on the quality of the synthesized signal. However, high grating lobes appear in the range profile when Δf > Bm unless further processing is applied. Here an algorithm is proposed that uses Super-SVA to extrapolate the bandwidth of each sub-chirp so as to fill the bandwidth gaps between sub-chirps and efficiently eliminate the grating lobes. Super-SVA is also applied to the synthesized range profile so that the sidelobes can be further suppressed. The simulation results verify the effectiveness of the proposed algorithm.
2009, 31(12): 2853-2857.
doi: 10.3724/SP.J.1146.2008.01236
Abstract:
In this paper, the frequency-domain and time-varying characteristics of the RCS (Radar Cross Section) of aircraft wake vortices in clear air are analyzed. An LMP (Locally Most Powerful) detector is introduced for detecting aircraft wake vortices with coherent Doppler radar, and analytical expressions for its detection probability and false alarm probability are derived. Radar equations for the detection of wake vortices are then deduced. Simulation results indicate that the detection performance at normal incidence is better than at oblique incidence, and that the detection performance improves with increasing radar range resolution provided the radar observation time is long. For a wake vortex whose RCS per unit length is between -80 dBm²/m and -60 dBm²/m, the radar detection range can be between 30 km and 100 km.
2009, 31(12): 2858-2863.
doi: 10.3724/SP.J.1146.2008.01407
Abstract:
Considering the issue of target-aspect sensitivity in waveform design for broadband radar target recognition, a novel method termed Multi Eigen-Subspace (MES) is proposed for the case of additive colored noise. The optimization selects a set of eigenvectors that adequately represent the differences between the echoes of different target classes over all aspects and uses them to span several eigen-subspaces; the optimized waveform is then obtained by mapping the desired waveform onto these eigen-subspaces. The experimental results prove the efficiency of the proposed method. Compared with the available approaches, the MES waveform helps to increase the class separability and obtains better performance.
2009, 31(12): 2864-2868.
doi: 10.3724/SP.J.1146.2008.01612
Abstract:
Midcourse missile imaging is a key technology for missile-defense systems. Conventional ISAR imaging methods require a uniform function to compensate the phase of the echo. This function is difficult to acquire in the wideband case, so the ISAR imaging result is often smeared. This paper presents a new signal processing algorithm called the time-frequency trunk line extraction method. By enhancing the amplitude of the time-frequency Doppler information extracted from the narrowband radar return signal, the method effectively improves the Doppler imaging resolution. Simulation results and anechoic chamber data for a representative cone-shaped object show that an image of the missile target can be obtained from narrowband radar Doppler information using this method.
2009, 31(12): 2869-2875.
doi: 10.3724/SP.J.1146.2008.01798
Abstract:
A novel criterion for thinned-array optimization based on optimal polarization (minimization of the cross-polarization level) is proposed to suppress the cross-polarization level and solve the grating-lobe problem arising from array thinning in a millimeter-wave conical phased array. A mathematical model based on the polarization radiation pattern of the radar seeker is developed first. The optimal polarization mode is then chosen by comparing the cross-polarization level of circular polarization with that of linear polarization, and two groups of basic parameters are determined accordingly. Finally, the Modified Particle Swarm Optimization (MPSO) algorithm is applied to optimize the element configurations under the two groups of parameters, and the criterion for thinned-array optimization is obtained by comparing the array patterns of the two configurations; the grating lobes are thus effectively suppressed. Simulation results verify the rationality of the proposed criterion.
2009, 31(12): 2876-2880.
doi: 10.3724/SP.J.1146.2008.01435
Abstract:
Mode S signals are exposed to heavy Mode A/C FRUIT interference when used in multilateration and ADS-B systems, and current decoding methods lead to many decoding errors and low-confidence declarations. Mode S decoding techniques are introduced in this paper, including the center-amplitude method, the baseline multi-sample technique, the multi-sample technique with table lookup, and the reduced table-lookup technique. The overlapping probability at 40,000 FRUIT per second is analyzed, and a method is proposed to simulate a realistic jamming environment. Finally, the algorithms are evaluated by a large number of decoding experiments. The high decoding accuracy in the experiments shows that these methods are very effective.
2009, 31(12): 2881-2885.
doi: 10.3724/SP.J.1146.2008.01707
Abstract:
Among ship-detection algorithms for SAR images, the two-parameter CFAR detector uses three moving windows: a target window, a protect (guard) window, and a background window. The sizes of the three windows and the moving step need to be trained, so the detector is quite inefficient and may miss targets that are too close to each other. The improved two-parameter CFAR detector presented in this paper uses only a target window and a background window. Special methods are used to remove leaked ship pixels from the background window and to estimate the remaining sea-clutter pixels, from which the local clutter gray-level mean and variance are obtained. The moving step equals the length of the target window. Compared with the two-parameter CFAR detector, the structure is simplified, fewer false alarms appear in the detection results, closely spaced targets can also be detected, and the computing efficiency is improved. The simulation results prove the algorithm's effectiveness.
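The sketch below is a simplified two-window, cell-averaging style detector in the spirit of the improved scheme (target window plus background window, moving step equal to the target-window length). The leakage-removal step is replaced by simply excluding the target cells, and all window sizes and the threshold factor are illustrative assumptions.

```python
import numpy as np

def two_window_cfar(img, t_size=8, b_size=32, k=3.0):
    """Simplified two-window detector: a pixel block is declared a detection
    when the target-window mean exceeds the background mean by k background
    standard deviations."""
    rows, cols = img.shape
    detections = np.zeros_like(img, dtype=bool)
    step = t_size                       # moving step equals target-window length
    m = (b_size - t_size) // 2
    for r in range(m, rows - b_size, step):
        for c in range(m, cols - b_size, step):
            target = img[r:r + t_size, c:c + t_size]
            bg = img[r - m:r - m + b_size, c - m:c - m + b_size].copy()
            bg[m:m + t_size, m:m + t_size] = np.nan        # exclude target cells
            mu, sigma = np.nanmean(bg), np.nanstd(bg)
            if target.mean() > mu + k * sigma:
                detections[r:r + t_size, c:c + t_size] = True
    return detections

# Toy scene: Rayleigh-like sea clutter with one bright "ship".
rng = np.random.default_rng(1)
scene = rng.rayleigh(1.0, size=(128, 128))
scene[60:66, 60:66] += 8.0
print(two_window_cfar(scene).any())   # expected: True
```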
2009, 31(12): 2886-2891.
doi: 10.3724/SP.J.1146.2008.01684
Abstract:
In this paper, a new image fusion approach based on correspondence analysis is presented to sharpen multispectral images with a panchromatic image. First, the original multispectral images are transformed into component space by correspondence analysis. Then, the panchromatic spatial detail information extracted by the redundant wavelet transform is injected into the component space. Finally, the fused results are obtained through the inverse transformation. Different IKONOS and QuickBird images have been fused with this new approach. Visual and statistical analyses show that the proposed method significantly reduces color distortion and improves fusion quality compared with some existing techniques.
2009, 31(12): 2892-2896.
doi: 10.3724/SP.J.1146.2008.01675
Abstract:
This paper proposes a denoising method for high-dimensional hyperspectral data based on the Contourlet transform and principal component analysis. First, a sparse representation of the images is obtained with the Contourlet transform. Then the Contourlet coefficients are processed with principal component analysis. Experimental results on OMIS images show that the proposed method can simultaneously remove noise from multi-band hyperspectral images, improve the quality of the whole hyperspectral data set, and outperform methods based on PCA or the Contourlet transform alone.
2009, 31(12): 2897-2900.
doi: 10.3724/SP.J.1146.2008.01701
Abstract:
Linear complexity is an important security parameter of sequences. In this paper, the linear complexity properties of primitive -LFSR sequences are studied. Firstly, bounds on the linear complexity of an n-stage primitive -LFSR sequence are given and proved to be tight; then, using the tool of root representation, a method to obtain the linear complexity of a primitive LFSR sequence is derived.
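For readers who want to compute the linear complexity of a concrete binary sequence, the standard Berlekamp-Massey algorithm over GF(2) (not the root-representation method of the paper) can be sketched as follows.

```python
def linear_complexity_gf2(seq):
    """Berlekamp-Massey over GF(2): returns the linear complexity of a binary
    sequence, i.e. the length of the shortest LFSR that generates it."""
    n = len(seq)
    c = [1] + [0] * n          # current connection polynomial
    b = [1] + [0] * n          # previous connection polynomial
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the sequence and the current LFSR prediction.
        d = seq[i]
        for j in range(1, L + 1):
            d ^= c[j] & seq[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(0, n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The period-7 m-sequence 1110100... has linear complexity 3.
print(linear_complexity_gf2([1, 1, 1, 0, 1, 0, 0] * 2))   # expected: 3
```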
2009, 31(12): 2901-2906.
doi: 10.3724/SP.J.1146.2008.01498
Abstract:
In this paper, the Chrestenson spectra and autocorrelation functions of Rotation Symmetric (RotS) functions are studied. The polynomials of RotS functions have special properties; by constructing matrices, the relationship among the truth table, the short algebraic normal form, and the Chrestenson spectrum is established. By studying these matrices, necessary and sufficient conditions for RotS functions to satisfy the cryptographic properties of balancedness, correlation immunity, and stability are given.
2009, 31(12): 2907-2911.
doi: 10.3724/SP.J.1146.2008.01599
Abstract:
Arithmetic coding with parallelized MPS (Most Probable Symbol) not only avoids the complex operations of classical parallelized arithmetic coding but also, by exploiting the statistical law of multidimensional binary coding, does not affect the basic probability-estimation rule. The relation among parallel degree, speedup ratio, and coding efficiency is analyzed theoretically based on the theorem of total probability and statistical averaging. It is pointed out that the algorithm with parallel degree 2 is superior to the others in coding efficiency and speed, and that the algorithm with parallel degree 3 equals the one with parallel degree 4 in coding efficiency. The result is verified by experiment.
2009, 31(12): 2912-2916.
doi: 10.3724/SP.J.1146.2008.01791
Abstract:
To address the issues of transmitting nonsinusoidal signals at radio frequency and improving the energy and bandwidth efficiency of nonsinusoidal communication systems, a time-domain design method for an orthogonal pulse set based on Prolate Spheroidal Wave Functions (PSWF) is proposed. The pulse set is constructed through parameter setting, frequency division, equation solving, and Schmidt orthogonalization. The spectral shift and the characteristics of the pulse set are controlled by changing the pulse parameters, and the pulses are band-limited signals with controllable spectra. Simulation results show that the pulse set has good energy concentration, which is important for improving the energy efficiency of nonsinusoidal communication systems, and that the bandwidth efficiency can rapidly approach the Nyquist rate.
2009, 31(12): 2917-2921.
doi: 10.3724/SP.J.1146.2008.01572
Abstract:
A concatenated coding scheme is proposed in this paper, which uses a Reed-Solomon (RS) product code as the outer code and a convolutional code as the inner code. An interleaving pattern generated from a congruential sequence is used to rearrange the symbols of the RS product code. The iterative decoding of the concatenated scheme is based on soft decoding of the component codes. When a given maximum number of iterations has been performed, a method is proposed to correct residual errors by computing the syndromes of the RS codes. Simulation results show a coding gain of up to 0.4 dB at a BER (Bit Error Rate) of 10^-5 on the Gaussian channel compared with the concatenated RS/convolutional code.
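A congruential interleaving pattern of the kind mentioned above can be sketched as pi(i) = (a*i + b) mod n; the parameters a and b below are illustrative, since the abstract does not specify the congruential sequence actually used.

```python
from math import gcd

def congruential_interleaver(n, a, b):
    """Permutation pi(i) = (a*i + b) mod n; it is a valid interleaver
    (a bijection) whenever gcd(a, n) == 1."""
    assert gcd(a, n) == 1, "a and n must be coprime for a permutation"
    return [(a * i + b) % n for i in range(n)]

def interleave(symbols, pattern):
    out = [None] * len(symbols)
    for i, p in enumerate(pattern):
        out[p] = symbols[i]          # write symbol i to its permuted position
    return out

pattern = congruential_interleaver(n=8, a=3, b=1)
print(pattern)                                   # [1, 4, 7, 2, 5, 0, 3, 6]
print(interleave(list("ABCDEFGH"), pattern))     # rearranged RS symbols
```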
2009, 31(12): 2922-2925.
doi: 10.3724/SP.J.1146.2008.01717
Abstract:
Quantum error-correcting codes play an important role not only in quantum communication but also in quantum computation. Previous work on constructing quantum error-correcting codes focuses on symmetric quantum channels, i.e., channels in which qubit-flip and phase-shift errors have equal probabilities. This paper focuses on asymmetric quantum channels, in which qubit-flip and phase-shift errors have different probabilities. Several families of asymmetric quantum codes are constructed from classical quadratic residue codes and Reed-Muller codes. Compared with previously known methods, the construction is simple. Furthermore, using the trace map, more asymmetric quantum error-correcting codes are obtained.
2009, 31(12): 2926-2930.
doi: 10.3724/SP.J.1146.2008.01677
Abstract:
Effective feature extraction is very important when building a smart DOA estimation model. Based on an analysis of the correlation function of the array signal, this paper proposes using the phase angles of the correlation functions of contiguous array signals for DOA estimation, instead of the commonly used upper triangular half of the covariance matrix; this eliminates irrelevant magnitude information and redundant direction characteristics, so the feature dimension is greatly reduced without losing any DOA information. Experimental results show that an RBF neural network using the proposed phase feature is superior to one using the upper triangular half of the covariance matrix in terms of network size, generalization, estimation precision, and real-time performance, and therefore has broad application value.
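Below is a minimal sketch of the phase feature itself (the angles of correlations between contiguous sensors), without the RBF network stage; the array size, source angle, and noise level are assumptions for the synthetic example.

```python
import numpy as np

def phase_features(snapshots):
    """Phase of the correlation between adjacent sensors.

    snapshots : (n_sensors, n_snapshots) complex array of ULA data.
    Returns n_sensors-1 angles, which for a single far-field source
    cluster around 2*pi*d*sin(theta)/lambda."""
    r = np.mean(snapshots[1:, :] * np.conj(snapshots[:-1, :]), axis=1)
    return np.angle(r)

# Single source at 20 degrees on an 8-element half-wavelength ULA.
theta = np.deg2rad(20.0)
k = np.arange(8)
steering = np.exp(1j * np.pi * k * np.sin(theta))
rng = np.random.default_rng(0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
x = steering[:, None] * s + 0.1 * (rng.standard_normal((8, 200))
                                   + 1j * rng.standard_normal((8, 200)))
feats = phase_features(x)
print(np.rad2deg(np.arcsin(np.mean(feats) / np.pi)))   # roughly 20
```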
2009, 31(12): 2931-2936.
doi: 10.3724/SP.J.1146.2008.01633
Abstract:
Since the standard Capon beamformer is susceptible to steering-vector mismatches of the Signal Of Interest (SOI), robust Capon beamforming based on a steering-vector error uncertainty set is investigated. For the case where the actual steering vector belongs to a spherical uncertainty set, an approximate closed-form solution for the weight vector is derived through a series of approximations. With this solution, approximate expressions for the target's power estimate and the output signal-to-interference-plus-noise ratio are presented for analytical evaluation, which provides insight into the factors that influence performance. Simulation results demonstrate the reasonableness of the analysis.
2009, 31(12): 2937-2940.
doi: 10.3724/SP.J.1146.2008.01716
Abstract:
An adaptive method for computing the fractional Fourier transform based on the Least Mean Square (LMS) algorithm is proposed and used to detect and estimate the parameters of multicomponent chirp signals. Through discrete sampling of the continuous inverse fractional Fourier transform, a discrete form suitable for numerical calculation is obtained, and an adaptive filter is then constructed with appropriate choices of the input vector and the desired sequence. The weight vector of the adaptive filter is trained with the LMS algorithm, and the converged weight vector is exactly the fractional Fourier transform result. Simulation results show that the proposed algorithm can be used to compute the fractional Fourier transform and to detect and estimate the parameters of chirp signals, with relatively small computational delay.
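The core LMS weight update used by such an adaptive filter can be sketched as follows; this is the textbook real-valued LMS applied to a system-identification toy problem, not the paper's specific construction of the input vector and desired sequence from the inverse fractional Fourier transform.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.02):
    """Textbook real-valued LMS adaptive filter.
    x : input signal, d : desired sequence.
    Returns the final tap-weight vector and the filter output."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-n_taps+1]]
        y[n] = w @ u                        # filter output
        e = d[n] - y[n]                     # instantaneous error
        w = w + mu * e * u                  # LMS weight update
    return w, y

# Toy system identification: recover a known 3-tap FIR filter.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_h = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, true_h)[:len(x)]
w, _ = lms_filter(x, d, n_taps=3, mu=0.05)
print(np.round(w, 2))    # expected close to [0.5, -0.3, 0.2]
```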
2009, 31(12): 2941-2947.
doi: 10.3724/SP.J.1146.2008.01553
Abstract:
A novel method based on the Least Squares Support Vector Machine (LSSVM) is proposed to extract the Fetal Electrocardiogram (FECG) signal from the abdominal composite signal of a pregnant woman. The Maternal Electrocardiogram (MECG) component in the abdominal composite signal is a nonlinear transformation of the MECG signal, and this nonlinear transformation is identified by the LSSVM from limited samples. An optimal estimate of the MECG component in the abdominal composite signal is obtained by passing the MECG through the identified nonlinear transformation, and the FECG is extracted by removing this estimate from the abdominal composite signal. The baseline shift and noise in the extracted FECG are suppressed by Empirical Mode Decomposition (EMD). Experimental results show that a clear FECG can be extracted even when the fetal QRS wave is entirely overlapped with the maternal QRS wave in the abdominal composite signal, which verifies the effectiveness of the proposed method.
2009, 31(12): 2948-2952.
doi: 10.3724/SP.J.1146.2008.01704
Abstract:
Compressed sensing is a research focus that has risen in recent years. On the basis of the signal's sparse representation in the KLT domain, this paper proposes an approximate KLT method using template matching and studies the corresponding compressed sensing of speech signals. First, the sparsity of the speech signal in the approximate KLT domain is verified. Second, measurements of fixed or adaptive length, determined by frame energy, are taken from the speech signal with a measurement matrix. Third, the speech signal's sparsest coefficient vector is found from the measurements through an L1 optimization algorithm, and the speech signal is recovered. Simulation results demonstrate that compressed sensing of speech in the approximate KLT domain using template matching performs well.
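The L1 recovery step can be illustrated with a generic iterative shrinkage-thresholding (ISTA) solver on synthetic sparse data; this sketch does not include the approximate-KLT template-matching stage, and the problem sizes and regularization weight are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding (ISTA) for the L1-regularized
    problem  min_x 0.5*||A x - y||^2 + lam*||x||_1 ."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Recover a sparse coefficient vector from m < n random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 5
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix
y = A @ coeffs                                   # compressed measurements
x_hat = ista(A, y, lam=0.01, n_iter=2000)
print(np.round(np.max(np.abs(x_hat - coeffs)), 3))   # small reconstruction error
```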
2009, 31(12): 2953-2957.
doi: 10.3724/SP.J.1146.2008.01663
Abstract:
In this paper, the characteristics of the approximate backbone are analyzed and an Approximate Backbone guided Reduction Algorithm for Clustering (ABRAC) is proposed. ABRAC works as follows: first, multiple local optimal solutions are obtained by an existing heuristic clustering algorithm; then, the approximate backbone is generated as the intersection of these local optimal solutions; afterwards, the search space is dramatically reduced by fixing the approximate backbone; finally, the reduced search space is searched efficiently to find high-quality solutions. Extensive experiments on 26 synthetic and 3 real-life data sets demonstrate that the approximate backbone significantly improves clustering quality, reduces the impact of the initial solution, and speeds up convergence.
2009, 31(12): 2958-2962.
doi: 10.3724/SP.J.1146.2008.01656
Abstract:
2-Dimensional Kernel Discriminant Analysis (2DKDA) cannot be performed directly because its scatter matrices are too large. This paper combines image sampling and regrouping with 2DKDA and gives three kinds of Sampling and Regrouping 2-Dimensional Kernel Discriminant Analysis (SR2DKDA). These algorithms not only overcome the drawback of 2DKDA but also achieve higher recognition accuracy than 2-Dimensional Linear Discriminant Analysis (2DLDA). Experiments on the ORL and UMIST databases verify the efficiency of SR2DKDA.
2009, 31(12): 2963-2968.
doi: 10.3724/SP.J.1146.2008.01578
Abstract:
Blind watermarking schemes are generally more secure in various applications because the cover meshes are not needed in the watermark extraction stage, even though they are less robust than non-blind ones; higher robustness of blind schemes is therefore pursued. A blind watermarking algorithm for 3D meshes is proposed in this paper. First, a group of lines through the model center is generated according to a pseudorandom number. The intersection points of these lines with the model's surface are then chosen as embedding objects. Neighbor balls are centered on these intersection points, and all vertices within each ball are adjusted to new positions according to the watermark bits. Most lines have two or more intersections with the model and most balls contain several vertices, so distributing each watermark bit over multiple balls and multiple vertices makes the algorithm robust against cropping and random noise. Because the algorithm does not choose embedding objects according to the vertex indices, it can resist vertex-reordering attacks; it is also robust against translation, rotation, and scaling. Finally, the robustness is verified by a set of experimental results.
2009, 31(12): 2969-2974.
doi: 10.3724/SP.J.1146.2008.01624
Abstract:
This paper describes and compares a web-based unsupervised translation disambiguation word model and N-gram model. To acquire disambiguation knowledge, both models submit different queries to a search engine and collect statistics on the page counts it returns. The word model defines the Web Bilingual Relatedness (WBR) between Chinese words and English words and disambiguates word sense by maximizing the WBR between the context and the translations of the target word. Based on the hypothesis that a polysemous word exhibits different patterns when different senses are used, the N-gram model performs disambiguation by collecting and analyzing statistics on the N-grams of words in the different semantic classes of the polysemous word. Both models are evaluated on SemEval-2007 task #5 and achieve top performance against comparable state-of-the-art unsupervised systems. Furthermore, the N-gram model outperforms the word model, and the performance can be further improved by combining the results of the two models.
Information Discriminant Feature Extraction Based on Mutual Information Gradient Optimal Computation
2009, 31(12): 2975-2979.
doi: 10.3724/SP.J.1146.2009.00078
Abstract:
A linear feature extraction method based on information discriminant analysis is presented; it relies on a computationally feasible feature-extraction matrix obtained through mutual-information gradient optimization. First, this paper analyzes the limitations of current linear discriminant analysis and constructs an information discriminant analysis model that maximizes the mutual information under parametric class-conditional PDFs. It is then proved that the mutual information is invariant under linear transformations and optimal in the Bayes sense, and an algorithm is presented for computing the feature-extraction matrix with the mutual-information gradient. Finally, the good performance of the method is demonstrated on real-world data sets.
2009, 31(12): 2980-2983.
doi: 10.3724/SP.J.1146.2009.00132
Abstract:
As the size of VLSI circuits keeps growing, the quality of circuit partitioning for parallel simulation becomes increasingly crucial. Since existing algorithms cannot simultaneously guarantee size balance and minimize the cut signals among partitions, a novel algorithm for circuit partitioning at the transistor level is presented. The proposed algorithm first performs a clustering procedure to obtain a good initial partition, and then applies an adjustment procedure to achieve well-balanced partitions with fewer cut signals. The excellent performance of the new algorithm is demonstrated on several industrial circuits. Compared with the widely used COPART algorithm, the size discrepancy among partitions and the number of cut signals obtained with the new algorithm decrease by 25% and 18% on average, respectively.
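A minimal sketch of the adjustment stage only, simplified to a two-way partition (the paper targets multi-way partitioning); the balance tolerance and the greedy move rule are assumptions for illustration.

```python
def refine_partition(adj, part, tol=0.1, max_passes=20):
    """Greedily move the node whose move removes the most cut weight, as long as
    the size balance between the two parts stays within tol of the node count."""
    nodes = list(adj)
    half = len(nodes) / 2.0
    for _ in range(max_passes):
        sizes = [sum(1 for n in nodes if part[n] == p) for p in (0, 1)]
        best_gain, best_node = 0.0, None
        for n in nodes:
            src, dst = part[n], 1 - part[n]
            if abs(sizes[dst] + 1 - half) > tol * len(nodes):
                continue                               # move would break the balance constraint
            gain = sum(w for m, w in adj[n].items() if part[m] == dst) \
                 - sum(w for m, w in adj[n].items() if part[m] == src)
            if gain > best_gain:
                best_gain, best_node = gain, n
        if best_node is None:
            break                                      # no improving, balance-preserving move left
        part[best_node] = 1 - part[best_node]
    return part

# toy usage: a 4-node circuit graph given as weighted adjacency dictionaries
adj = {"a": {"b": 2, "c": 1}, "b": {"a": 2, "d": 1},
       "c": {"a": 1, "d": 2}, "d": {"b": 1, "c": 2}}
print(refine_partition(adj, {"a": 0, "b": 1, "c": 0, "d": 1}))
```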
2009, 31(12): 2984-2987.
doi: 10.3724/SP.J.1146.2008.01784
Abstract:
Data processing in the complex cepstrum domain can suppress echo-like multiple-reflection waves. In this paper, TDR (Time Domain Reflectometry) waveforms are analyzed in the complex cepstrum domain, and the multiple-reflection waves are eliminated by filtering in that domain. Using cepstrum analysis, the travel time along the probe is determined and the apparent permittivity of the soil is inverted. The results inverted with the complex-cepstrum filtering method are more accurate than those obtained with the tangent-line method.
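A hedged sketch of the signal-processing idea: a complex cepstrum computed via the FFT, a low-quefrency lifter that zeroes the echo-related coefficients, and the inverse transform. The cutoff value and the toy echo signal are illustrative assumptions; in practice the phase unwrapping needs more care than shown here.

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum via FFT; phase unwrapping is the delicate step in practice."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

def inverse_complex_cepstrum(ceps):
    return np.fft.ifft(np.exp(np.fft.fft(ceps))).real

def suppress_multireflections(waveform, cutoff):
    """Low-pass lifter: zero the cepstral coefficients at and above the echo quefrency."""
    c = complex_cepstrum(waveform)
    c[cutoff:len(c) - cutoff] = 0.0        # keep only low (positive and negative) quefrencies
    return inverse_complex_cepstrum(c)

# toy usage: a decaying pulse plus a delayed echo; the lifter attenuates the echo
n = 256
pulse = np.exp(-0.05 * np.arange(n)) * np.sin(0.3 * np.arange(n))
signal = pulse + 0.5 * np.roll(pulse, 60)
cleaned = suppress_multireflections(signal, cutoff=40)
```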
2009, 31(12): 2988-2992.
doi: 10.3724/SP.J.1146.2008.01532
Abstract:
Current mobile beacon-assisted localization algorithms do not make full use of the actual node distribution and let the mobile beacon traverse the entire network, which leads to long path lengths and a low beacon utilization ratio. A novel mobile beacon-assisted node localization algorithm using network-density-based clustering (MBL(ndc)) for wireless sensor networks is presented, which combines node clustering, incremental localization, and mobile beacon assistance. It first selects the cluster heads with the highest core density, then employs a density-reachability method to cluster the network into several branches of equal density, and finally obtains the optimal trajectory of the mobile beacon by combining cluster-head path planning based on a genetic algorithm with in-cluster path planning based on a hexagonal trajectory. After the cluster heads and nearby nodes have been localized, they become beacons and cooperate to localize the remaining unknown nodes incrementally. Simulation results demonstrate that the proposed MBL(ndc) algorithm offers localization accuracy comparable to the mobile beacon-assisted localization algorithm with a HILBERT trajectory, with less than 50% of the latter's path length.
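A sketch of the clustering and head-selection stage only, under stated substitutions: DBSCAN stands in for the density-reachability clustering, and a greedy nearest-neighbour tour stands in for the genetic-algorithm path planning; eps and min_samples are assumed parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def plan_beacon_visits(positions, eps=15.0, min_samples=4):
    """Cluster node positions by density, pick each cluster's densest node as head,
    and order the heads with a simple nearest-neighbour tour for the mobile beacon."""
    X = np.asarray(positions, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    heads = []
    for c in sorted(set(labels) - {-1}):                 # -1 marks noise points
        idx = np.where(labels == c)[0]
        # "core density": members of the cluster within eps of each candidate head
        density = [(np.linalg.norm(X[idx] - X[i], axis=1) < eps).sum() for i in idx]
        heads.append(int(idx[int(np.argmax(density))]))
    if not heads:
        return labels, []
    order, remaining = [heads[0]], heads[1:]
    while remaining:                                      # greedy nearest-neighbour tour
        nxt = min(remaining, key=lambda i: np.linalg.norm(X[i] - X[order[-1]]))
        remaining.remove(nxt)
        order.append(nxt)
    return labels, order

# toy usage: 200 nodes scattered in a 100 m x 100 m field
rng = np.random.default_rng(0)
labels, visit_order = plan_beacon_visits(rng.uniform(0, 100, size=(200, 2)))
```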
2009, 31(12): 2993-2996.
doi: 10.3724/SP.J.1146.2008.01669
Abstract:
A method based on chaos theory is presented to evaluate large-scale network performance from massive traffic measurements. Given the periodicity of long-term link utilization measurements and the chaotic nature of short-term ones, the largest Lyapunov exponent can be selected as a parameter to represent network performance. Analysis results show that the largest Lyapunov exponent achieves better results than commonly used statistics such as the mathematical expectation and variance.
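For readers unfamiliar with the quantity, here is a standard Rosenstein-style estimator of the largest Lyapunov exponent of a scalar time series (a generic sketch, not the paper's implementation); the embedding dimension, delay, and separation window are assumed parameters to be tuned to the measured link-utilization data.

```python
import numpy as np
from scipy.spatial.distance import cdist

def largest_lyapunov_exponent(x, dim=5, tau=1, min_sep=20, t_max=25):
    """Rosenstein-style estimate: slope of the average log divergence of nearest
    neighbours in the delay-embedded phase space."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])  # delay embedding
    d = cdist(emb, emb)
    for i in range(n):                                  # forbid temporally close neighbours
        d[i, max(0, i - min_sep): i + min_sep + 1] = np.inf
    nn = np.argmin(d, axis=1)                           # nearest neighbour of each point
    k = np.arange(1, t_max + 1)
    mean_log_div = []
    for step in k:                                      # average log divergence after `step` steps
        logs = [np.log(np.linalg.norm(emb[i + step] - emb[nn[i] + step]))
                for i in range(n - step)
                if nn[i] + step < n and np.linalg.norm(emb[i + step] - emb[nn[i] + step]) > 0]
        mean_log_div.append(np.mean(logs))
    return np.polyfit(k, mean_log_div, 1)[0]            # slope ~ largest Lyapunov exponent

# toy usage: the chaotic logistic map should give a clearly positive exponent
series = [0.4]
for _ in range(1200):
    series.append(3.9 * series[-1] * (1.0 - series[-1]))
print(largest_lyapunov_exponent(series[200:]))
```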
2009, 31(12): 2997-3000.
doi: 10.3724/SP.J.1146.2008.01652
Abstract:
Flow-based attacks can bring tremendous damage to complex networks. Existing work is mainly concerned with attacks on nodes, while little work addresses attacks on edges. In this paper, the fragility of complex networks is discussed for the case in which some edges are deleted; the effects of the timing strategy and network size are also examined. By analyzing the load and degree of complex networks, it is demonstrated that complex networks possess a highly heterogeneous load distribution caused by the power-law degree distribution, and this heterogeneity makes the networks particularly vulnerable to attacks. The analytic results show that complex networks exhibit strong tolerance to random edge failures, but a large-scale cascade of failures can be triggered by disabling a few key edges, which may result in the collapse of the network.
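A hedged sketch of a standard load-capacity cascade of this kind (edge betweenness as the load proxy and a tolerance parameter alpha are assumptions, not necessarily the paper's exact model):

```python
import networkx as nx

def edge_attack_cascade(G, alpha=0.2):
    """Delete the most loaded edge, then let overloaded edges fail in cascade.
    Edge load is approximated by edge betweenness; capacity = (1 + alpha) * initial load."""
    G = G.copy()
    key = lambda e: tuple(sorted(e))
    load = {key(e): l for e, l in nx.edge_betweenness_centrality(G).items()}
    capacity = {e: (1 + alpha) * l for e, l in load.items()}
    G.remove_edge(*max(load, key=load.get))              # the attack: remove one key edge
    while True:
        load = {key(e): l for e, l in nx.edge_betweenness_centrality(G).items()}
        overloaded = [e for e, l in load.items() if l > capacity.get(e, float("inf"))]
        if not overloaded:
            break
        G.remove_edges_from(overloaded)                   # cascading edge failures
    return max((len(c) for c in nx.connected_components(G)), default=0)

# toy usage on a scale-free graph (power-law degree distribution)
print(edge_attack_cascade(nx.barabasi_albert_graph(200, 2)))
```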
2009, 31(12): 3001-3005.
doi: 10.3724/SP.J.1146.2008.01489
Abstract:
Provable security is an important criterion for analyzing the security of cryptographic protocols. However, writing and verifying proofs by hand is error-prone. This paper introduces the game-based approach to writing security proofs and its automation techniques. It advocates an automatic security proof approach based on process calculus, studies the automatic security proof of OAEP+, and presents its initial game and observational equivalences for the first time.
2009, 31(12): 3006-3009.
doi: 10.3724/SP.J.1146.2008.01843
Abstract:
Considering the defects of the adaptive Chase algorithm, a novel decoding algorithm for block Turbo codes based on adaptively quantized test sequences is proposed. The algorithm analyzes the number of test sequences, selects the test patterns according to the probabilities of the least reliable bits, and uses a quantization function to adjust the test sequences according to the SNR, so that the decoding complexity can be adapted accordingly. Simulation results show that, compared with traditional algorithms, the proposed algorithm reduces the decoding complexity while maintaining the same BER performance.
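A minimal sketch of the Chase-style test-pattern construction with an SNR-dependent pattern count; the particular SNR thresholds and quantization rule below are illustrative assumptions, not the paper's exact function.

```python
import itertools
import numpy as np

def chase_test_patterns(llr, snr_db, max_p=4):
    """Build Chase test sequences over the least reliable bits; the number of flipped
    positions p is quantized from the SNR (the thresholds here are assumed)."""
    p = max_p if snr_db < 1.0 else (max_p - 1 if snr_db < 3.0 else 2)
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)                         # hard decision from the soft values
    lrb = np.argsort(np.abs(llr))[:p]                    # the p least reliable positions
    patterns = []
    for flips in itertools.product((0, 1), repeat=p):    # 2**p candidate test sequences
        cand = hard.copy()
        cand[lrb] ^= np.array(flips)
        patterns.append(cand)
    return patterns

# toy usage with 8 received LLRs at a moderate SNR: yields 2**3 = 8 patterns
print(len(chase_test_patterns([2.1, -0.3, 0.8, -1.7, 0.1, 3.0, -0.05, 1.2], snr_db=2.0)))
```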
2009, 31(12): 3010-3014.
doi: 10.3724/SP.J.1146.2008.01634
Abstract:
To synthesize realistic video sequences, a visual speech synthesis algorithm based on Chinese visual triphones is proposed. According to Chinese pronunciation principles and the relationship between phonemes and visemes, the concept of the visual triphone is introduced, and Hidden Markov Models (HMMs) are established for visual triphones. In the training stage, combined features consisting of visual and audio features are used. In the synthesis stage, a sentence HMM is constructed by concatenating triphone HMMs, from which the feature parameters are extracted. Subjective and objective evaluations show that the synthesized video is realistic and satisfactory.
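As a small illustration of the concatenation step (a generic HTK-style triphone labeling, assumed rather than taken from the paper): each label indexes one visual-triphone HMM, and the sentence model is built by chaining those HMMs in order.

```python
def visual_triphones(phones, pad="sil"):
    """Expand a phoneme sequence into context-dependent triphone labels (left-center+right)."""
    seq = [pad] + list(phones) + [pad]
    return [f"{seq[i - 1]}-{seq[i]}+{seq[i + 1]}" for i in range(1, len(seq) - 1)]

# toy usage with a pinyin-like phoneme sequence
print(visual_triphones(["n", "i", "h", "ao"]))
# ['sil-n+i', 'n-i+h', 'i-h+ao', 'h-ao+sil']
```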
2009, 31(12): 3015-3018.
doi: 10.3724/SP.J.1146.2008.01506
Abstract:
A novel, effective, and comprehensive analysis method is developed for studying the heat dissipation capability of slow-wave structures. Based on both theoretical and experimental research, the method analyzes thermal conduction accurately and realistically, while reducing material costs and saving time. Its consistency and feasibility are verified by experimental tests on slow-wave structures with BeO support rods, BN support rods, and a copper-plated helix.