2009 Vol. 31, No. 11
2009, 31(11): 2541-2545.
doi: 10.3724/SP.J.1146.2008.01671
Abstract:
This paper presents sampling criteria for Three-Dimensional (3-D) microwave imaging of the surface of the human body. An algorithm based on wavenumber-domain integration along the wave propagation direction at given ground-range bins is then proposed, which avoids the complex 3-D Stolt interpolation. Next, a focused 3-D image of a human body model consisting of point scatterers is obtained by simulation. Finally, an experiment on the model at Ka band in an anechoic chamber is designed, and the processed result is presented to demonstrate the validity of the sampling criteria and the feasibility of the algorithm.
2009, 31(11): 2546-2551.
doi: 10.3724/SP.J.1146.2008.01406
Abstract:
Specific emitter identification is a key research field in modern electronic intelligence and electronic support measures systems. Based on an analysis of the individual features of radar emitters, a specific emitter identification algorithm based on the ambiguity function is proposed. Considering the redundancy of the ambiguity function, slices of the ambiguity function and of the localized ambiguity function are used to represent individual features such as pulse envelope, phase noise, and radiated emission. A fast algorithm for the localized ambiguity function slices is then derived. Finally, simulation experiments verify the feasibility and validity of the proposed methods.
2009, 31(11): 2552-2556.
doi: 10.3724/SP.J.1146.2008.01505
Abstract:
Current radar emitter sorting methods achieve low sorting rates and are sensitive to the Signal-to-Noise Ratio (SNR). In this paper, complexity characteristics are applied to sort unknown, complicated radar signals, achieving a high sorting rate. The received signal is first preprocessed; then the box dimension and sparseness are extracted and used as sorting features. Finally, sorting is completed with the Kernel Fuzzy C-Means (KFCM) algorithm. Simulation results show that the box dimension and sparseness of the preprocessed signal sequence are distinguishable and insensitive to SNR, and that the lowest sorting rate among the different signals is 87% at SNR = 5 dB.
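The box (box-counting) dimension used above as a sorting feature can be sketched as follows. This is a generic estimator for the graph of a 1-D signal; the scale set and the amplitude normalization are illustrative assumptions, not the paper's exact preprocessing chain.

```python
import numpy as np

def box_dimension(x, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 1-D signal graph.
    Illustrative sketch: scale set and normalization are assumptions."""
    x = (x - x.min()) / (np.ptp(x) + 1e-12)        # normalize amplitude to [0, 1]
    counts, inv_eps = [], []
    for s in scales:
        eps = 1.0 / s                              # box side length
        # split the time axis into s columns and count eps-boxes per column
        cols = np.array_split(x, s)
        n_boxes = sum(int(np.ceil(np.ptp(seg) / eps)) + 1 for seg in cols)
        counts.append(n_boxes)
        inv_eps.append(s)
    # dimension = slope of log N(eps) versus log(1/eps)
    slope, _ = np.polyfit(np.log(inv_eps), np.log(counts), 1)
    return slope
```

As a rough sanity check, a smooth sinusoid comes out near dimension 1, while white noise fills the plane more densely and yields a larger estimate, which is the separability the abstract exploits.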
2009, 31(11): 2556-2562.
doi: 10.3724/SP.J.1146.2008.01543
Abstract:
A method of SAR image segmentation based on Multiscale AutoRegressive and Markov Random Field (MAR-MRF) models is presented. The MAR model is used to establish a mathematical relationship among the image layers, and is combined with a Markov Random Field (MRF) segmentation model. The method accounts for both the dependence between neighboring layers and the Markov property within a layer, and uses the prediction of the MAR model to guide the fine-layer segmentation. Experimental results on SAR images show that the method reduces the number of segmentation iterations and misclassified blocks, and yields clear, smooth object borders.
2009, 31(11): 2563-2568.
doi: 10.3724/SP.J.1146.2008.01449
Abstract:
A general relative range model for non-linear trajectories is built first, followed by an analysis of the maneuver-induced phase errors. Quantitative conditions under which these errors can be ignored are then deduced. The paper then addresses missile-borne SAR on a trajectory with a diving maneuver: the signal characteristics are analyzed and a range-Doppler based algorithm is presented accordingly. Compared with the range-Doppler algorithm for a linear aperture, the new algorithm only modifies some phase-correction factors and incurs no additional complexity due to the nonlinear aperture.
2009, 31(11): 2569-2574.
doi: 10.3724/SP.J.1146.2008.01412
Abstract:
Regularization techniques can achieve super-resolution and noise suppression by imposing prior information, providing high-quality images for target recognition. The iterative process of SAR complex-image-domain regularization based on the l_k norm is analyzed to reveal the underlying principles of super-resolution. To address the inconsistent resolution of the original algorithm for scattering centers of different amplitudes, an improved method based on varying parameters is proposed. Simulation experiments and results on measured MSTAR data prove the effectiveness of the proposed method.
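The core of such l_p-regularized imaging, minimizing ||y - Ax||² + λ·Σ|x_i|^p, can be sketched with iteratively reweighted least squares for real-valued data. The operator A, the parameters λ, p, and the damping ε here are illustrative assumptions; the paper works on complex SAR imagery with its own iteration.

```python
import numpy as np

def irls_lp(A, y, lam=0.01, p=1.0, n_iter=50, eps=1e-6):
    """Iteratively reweighted least squares for the l_p-regularized problem
    min ||y - A x||^2 + lam * sum |x_i|^p  (real-valued illustrative sketch)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # initialize at least squares
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        # diagonal weights approximating the gradient of the l_p penalty
        w = (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(AtA + lam * (p / 2) * np.diag(w), Aty)
    return x
```

With p < 2 the reweighting shrinks small coefficients more strongly than large ones, which is what sharpens dominant scattering centers while suppressing noise.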
2009, 31(11): 2575-2580.
doi: 10.3724/SP.J.1146.2008.01447
Abstract:
In this paper, a new approach based on the Taper Scale Transform (TST) is presented for weak ship target detection in strong clutter and noise. By scaling the coordinates of the signal's instantaneous correlation function, the coupling between slow time and lag time is removed, so that fully coherent integration can be achieved for target detection. The determination of the scaling factor and the interference of cross terms on detection are analyzed. Results on real data verify the effectiveness of the proposed method.
2009, 31(11): 2581-2584.
doi: 10.3724/SP.J.1146.2008.01510
Abstract:
To identify target images obtained by Synthetic Aperture Radar (SAR), subspace-based methods for Automatic Target Recognition (ATR) are usually built on the range subspace of the samples. When similar targets need to be distinguished, the corresponding templates are poorly separable because their range subspaces have a large intersection. A method for SAR ATR is proposed in this paper that takes the orthogonal complement subspace of the samples as the projection subspace. Consequently, the difference between the projections of different target types on the projection subspace is enlarged, which significantly improves identification performance. Experimental results show that the proposed method is superior to other similar methods.
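A minimal sketch of a subspace classifier in this spirit: each class is modeled by a low-rank basis taken from its training samples, and a test vector goes to the class with the smallest residual energy in the orthogonal complement of that basis. The rank and the data shapes are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def train_subspaces(class_samples, rank):
    """class_samples: {label: array of shape (dim, n_samples)}."""
    bases = {}
    for label, X in class_samples.items():
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        bases[label] = U[:, :rank]                 # dominant range-space basis
    return bases

def classify(x, bases):
    # energy left in the orthogonal complement (I - U U^T) x; smallest wins
    residual = {c: np.linalg.norm(x - U @ (U.T @ x)) for c, U in bases.items()}
    return min(residual, key=residual.get)
```

The residual in the orthogonal complement is exactly where the projections of different target types differ most when their range subspaces overlap heavily.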
2009, 31(11): 2585-2590.
doi: 10.3724/SP.J.1146.2008.01400
Abstract:
To address target-aspect sensitivity, time-shift sensitivity, and initial-phase uncertainty in waveform design for broadband radar target recognition, a novel method termed GASC (Genetic Algorithm and Slide Correlation) is proposed, based on a genetic algorithm and a slide correlation classifier in the presence of additive colored Gaussian noise. The method introduces a new optimization measure called the matched distance, defined as the matched coefficient between the echoes and the templates of the same target class minus the matched coefficient between the echoes and the templates of a different class. The optimization maximizes the minimal matched distance over all target classes under the constraint that the magnitude of the transmit pulse is constant, and the optimized waveform is obtained with the genetic algorithm. Experimental results prove the efficiency of the proposed method: compared with available approaches, GASC increases class separability and achieves better performance.
2009, 31(11): 2591-2595.
doi: 10.3724/SP.J.1146.2008.01598
Abstract:
To meet the needs of traffic information collection in Intelligent Transportation Systems (ITS), this paper introduces the design of a K-band dual-mode traffic information collection radar that can work in both Frequency Modulated Continuous Wave (FMCW) and Continuous Wave (CW) modes. As research on using a Voltage Controlled Oscillator (VCO) in a CW radar for velocity measurement is seldom reported, the effect of phase noise and detection distance on the velocity error in CW mode is analyzed. Short-range velocity measurement is also introduced. Experimental results show that the velocity error of the dual-mode radar with an MMIC VCO is acceptable, although not as good as that of a CW radar with a low-phase-noise PLL. The analysis shows that the effect of phase noise on the velocity error can be reduced by shortening the detection distance, making the dual-mode radar acceptable for velocity measurement in civilian traffic radar.
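The CW-mode velocity measurement rests on the Doppler relation v = f_d·c / (2·f_0). A minimal sketch (the 24.125 GHz carrier is an assumed K-band value, not necessarily this system's frequency):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(f_doppler_hz, f_carrier_hz=24.125e9):
    """Radial velocity of a target from the measured CW Doppler shift."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

def doppler_shift(v_mps, f_carrier_hz=24.125e9):
    """Inverse relation: Doppler shift produced by radial velocity v."""
    return 2.0 * v_mps * f_carrier_hz / C
```

A 30 m/s target at 24.125 GHz produces a shift of roughly 4.8 kHz; oscillator phase noise at such offsets corrupts the velocity estimate directly, which is why the abstract's phase-noise and detection-distance analysis matters.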
2009, 31(11): 2596-2599.
doi: 10.3724/SP.J.1146.2008.01540
Abstract:
Phase-coded signals suffer from Doppler mismatch in pulse compression due to their nearly ideal thumbtack ambiguity function. In this paper, a novel model for joint velocity-and-range measurement and the corresponding Doppler compensation algorithm, viewed as a 2-D image reconstruction problem, are described with intra-pulse Doppler taken into account. Simulation shows that the proposed method solves the Doppler-mismatch problem and obtains satisfactory compression results even when velocity ambiguity and range migration are considered.
2009, 31(11): 2600-2605.
doi: 10.3724/SP.J.1146.2008.01476
Abstract:
In ballistic missile penetration, modern active jamming systems can generate decoys that resemble true target echoes in energy, waveform, and even phase modulation. These decoys can share the coherent processing gain of the radar signal processor, invalidating conventional signal-based discrimination methods and even forming multiple false tracks at the data processor. In this paper, motion models of active decoys in typical Coordinate Systems (CS) are derived theoretically, and the motion characteristics of active decoys are analyzed in detail. These models reveal the intrinsic dynamical differences between physical targets and active decoys. First, a uniform mathematical model is built to derive the motion models in the East-North-Up (ENU) CS, the radar spherical CS, and the Earth-Centered Fixed (ECF) CS. Second, the orbit, velocity, and acceleration characteristics of active decoys are analyzed on the basis of these motion models. Third, the influence of the related parameters on performance is also analyzed. The contribution of this study is to provide a theoretical basis for motion-based discrimination algorithms.
2009, 31(11): 2606-2609.
doi: 10.3724/SP.J.1146.2008.01708
Abstract:
Fixed step-size Subband Adaptive Filters (SAFs) must trade off fast convergence against low steady-state misadjustment. Based on the functional relationship between the Mean-Square Deviation (MSD) of the adaptive filter's coefficient vector and the step-size, this paper proposes a step-size control algorithm that addresses this problem: the step-size is derived by maximizing the decrease of the MSD at each iterative update. The algorithm achieves both fast convergence and low steady-state misadjustment. Experimental results verify the validity of the proposed method.
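The MSD-driven step-size control is derived in the paper for subband filters; as a hedged fullband analogue, a variable step-size NLMS whose step shrinks with the smoothed error power illustrates the same convergence/misadjustment trade-off. The control rule and constants below are illustrative, not the paper's.

```python
import numpy as np

def vss_nlms(x, d, order=8, mu_max=1.0, alpha=0.99, delta=1e-2):
    """Variable step-size NLMS: large steps while the error is large
    (fast convergence), small steps near the noise floor (low misadjustment)."""
    w = np.zeros(order)
    p = 1.0                                    # smoothed error power estimate
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]               # regression vector
        e = d[n] - w @ u
        p = alpha * p + (1 - alpha) * e * e
        mu = mu_max * p / (p + delta)          # step-size control
        w += mu * e * u / (u @ u + 1e-8)       # normalized update
    return w
```

In a system-identification setting the step stays near mu_max while the error is dominated by misalignment, then collapses once the error reaches the noise floor.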
2009, 31(11): 2610-2613.
doi: 10.3724/SP.J.1146.2008.01585
Abstract:
In image mosaicking, image registration is transformed into an unconstrained optimization problem. A parameter optimization algorithm, an orthogonal approximation algorithm based on quadratic form theory, is proposed. Exploiting the properties of the objective function, a direct method is adopted that requires only function evaluations, without computing derivatives or gradients. The algorithm avoids large memory consumption, using only a single matrix to store information. Simulation experiments show that the algorithm converges quickly, produces precise results, and is highly practical.
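The derivative-free "direct method" idea, optimizing registration parameters from function values alone, can be illustrated with a basic coordinate pattern search. This is a generic sketch of the class of method, not the paper's orthogonal approximation algorithm.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Minimize f using only function evaluations: probe each coordinate
    in +/- step, keep improvements, halve the step when none help."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:               # accept any improving probe
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink                # refine the search scale
    return x, fx
```

For a registration cost this would be called with the translation/rotation parameters as `x0`; no gradient of the similarity measure is ever needed.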
2009, 31(11): 2614-2619.
doi: 10.3724/SP.J.1146.2008.01469
Abstract:
By representing 3-dimensional color vectors as quaternions, a color image sequence is mapped to a quaternion-valued function in 3-dimensional spatio-temporal space. Under the hypothesis of the Color Constant Model (CCM), a quaternion Optical Flow Equation (OFE) describing motion in color image sequences is developed. To solve this quaternion OFE efficiently, the concept of the quaternion homogeneity differential is introduced, yielding a new Quaternion Homogeneity CCM (QHCCM) OFE. The mathematical form of the QHCCM OFE is similar to that of the Brightness Constant Model (BCM) OFE. Because it contains not only luminance but also chrominance information, the QHCCM OFE can capture motion in color video more accurately than the BCM-based method. Finally, comparative experiments demonstrate the proposed method's effectiveness.
2009, 31(11): 2620-2625.
doi: 10.3724/SP.J.1146.2008.01440
Abstract:
This paper presents an improved SIFT (Scale Invariant Feature Transform) descriptor for local feature detection and matching in object tracking. Only the local maxima in the DoG scale space are detected as candidate interest points, improving stability. To avoid rotating the image, the main orientations and descriptors are determined statistically from oriented-gradient histograms over a circular neighborhood around each interest point. Finally, the ratio between the first and second closest distances is used to match the 96-dimensional vectors. The method performs very well in high-reliability applications thanks to its effectiveness and reduced complexity.
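The first-to-second closest distance ratio test used for matching (Lowe's criterion) can be sketched as follows; the toy 3-D descriptors stand in for the paper's 96-dimensional vectors, and the 0.8 threshold is a conventional assumption.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to desc_b, accepting a match only
    when the nearest neighbor is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        j1, j2 = order[0], order[1]
        if dists[j1] < ratio * dists[j2]:      # unambiguous nearest neighbor
            matches.append((i, j1))
    return matches
```

Ambiguous descriptors, whose two nearest candidates lie at similar distances, are rejected rather than matched, which is what makes the test attractive for high-reliability tracking.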
2009, 31(11): 2626-2631.
doi: 10.3724/SP.J.1146.2008.01422
Abstract:
Contour features are powerful cues for the human visual system in analyzing and identifying objects. A new method for automatic multi-category object recognition using statistical shape models is proposed to overcome the disadvantages of most related methods. The method first defines shape base pairs as feature descriptors and extracts typical shape base pairs from sample images to build a feature codebook. Unsupervised learning is then performed to estimate the feature distribution and design class-specific shape models. After quickly detecting the regions and determining the categories, segmentation can be applied to obtain precise outlines. Experimental results demonstrate that the proposed method achieves high efficiency and accuracy in extracting manifold and complicated objects, and is robust to noise and rotation to a certain extent.
2009, 31(11): 2632-2636.
doi: 10.3724/SP.J.1146.2008.01629
Abstract:
Direct LDA (DLDA) is an extension of Linear Discriminant Analysis (LDA) for the small sample size problem, previously claimed to exploit all the information both within and outside the null space of the within-class scatter. However, many counter-examples show that this is not the case. To better understand the characteristics of DLDA, this paper presents a theoretical analysis and concludes that DLDA based on the traditional Fisher criterion makes almost no use of the information inside the null space, so some discriminative information may be lost, whereas DLDA based on other variants of the Fisher criterion is equivalent to null-space LDA and orthogonal LDA under orthogonality constraints among the discriminant vectors and a mild condition that holds in many applications involving high-dimensional data. Comparative results on the ORL and YALE face databases are consistent with the theoretical analysis.
2009, 31(11): 2637-2642.
doi: 10.3724/SP.J.1146.2008.01560
Abstract:
The frequency-shaping pulse, determined by the pulse shape and pulse length, is one of the essential modulation parameters for the demodulation of Continuous Phase Modulation (CPM). Based on the features of the auto-correlation of the CPM signal, this paper studies in depth the internal relationship between the modulation parameters and the auto-correlation function of the CPM signal, and develops a blind estimation algorithm for the frequency-shaping pulse. First, the pulse shape is identified; second, the modulation index of the CPM signal is adjusted to an integer; finally, the pulse length is estimated from the number of non-zero values of the auto-correlation function. Simulations show that the proposed algorithm provides a good estimate of the frequency-shaping pulse for CPM signals with arbitrary modulation order and modulation index. In addition, the algorithm is computationally efficient and thus well suited to hardware implementation.
2009, 31(11): 2643-2648.
doi: 10.3724/SP.J.1146.2008.01546
Abstract:
This paper proposes a Multi-Stage FastICA Algorithm (MSFICA) to solve the error propagation problem in the traditional successive FastICA algorithm. MSFICA removes the error propagation effect through a two-stage structure. To reduce computational complexity, a dimension-reduction method is used to obtain the initial values of the separating vectors in the first stage. In the second stage, the algorithm uses these initial values and the whitened observed signals to separate the original signals, without requiring orthogonal projection. Simulation results indicate that the proposed algorithm eliminates error propagation and achieves better performance than the existing parallel FastICA algorithm at the expense of slightly increased complexity.
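For context, the classical successive (deflationary) FastICA that MSFICA improves on looks like the sketch below: each separating vector is extracted in turn and orthogonalized against the previous ones, which is exactly where estimation errors propagate. The kurtosis nonlinearity and the toy setup are illustrative choices.

```python
import numpy as np

def fastica_deflation(X, n_iter=200, seed=0):
    """Successive FastICA with the kurtosis contrast. X: (n_sources, n_samples).
    Later vectors are orthogonalized against earlier ones, so errors propagate."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # whitening via eigendecomposition of the sample covariance
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    W = []
    for _ in range(Z.shape[0]):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w   # fixed-point step
            for v in W:                                      # deflation step
                w -= (w @ v) * v
            w /= np.linalg.norm(w)
        W.append(w)
    return np.array(W) @ Z    # estimated sources (up to sign/permutation)
```

Any error in the first vector is baked into the Gram-Schmidt deflation of every later vector; MSFICA's two-stage structure is designed to break that chain.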
2009, 31(11): 2649-2652.
doi: 10.3724/SP.J.1146.2008.01604
Abstract:
Based on the model of sampling in a shift-invariant subspace with multiplicity r, this paper proposes a least-squares reconstruction method and derives the frequency-domain expression of the reconstruction filters. The reconstruction error is then analyzed using the projection theory of Hilbert spaces. Finally, taking an amplitude-modulated signal as an example, the least-squares reconstruction method is verified by simulation; the results show that the reconstruction algorithm is effective.
2009, 31(11): 2653-2658.
doi: 10.3724/SP.J.1146.2008.01550
Abstract:
A derivation mistake in Cao's Fuzzy Fisher Criterion (FFC) based Semi-Fuzzy Clustering Algorithm (FFC-SFCA) is pointed out. Combining it with the Fuzzy Compactness and Separation (FCS) clustering algorithm, a new clustering algorithm, FFC-FCS, is proposed in this paper. FFC-FCS makes full use of the feature-extraction and dimension-reduction characteristics of FFC, alternately running FFC in the original data space and FCS in the projection space, so that clustering of the original data is accomplished by clustering the dimension-reduced data. FFC-FCS not only classifies low-dimensional data well but also retains a classification advantage for high-dimensional data. Experimental results show that FFC-FCS outperforms the original FCS, FFC-SFCA and the classical Fuzzy C-Means (FCM) algorithm.
2009, 31(11): 2659-2664.
doi: 10.3724/SP.J.1146.2008.01413
Abstract:
Ranging is one of the most important processes in the mobile WiMAX standard, resolving the uplink synchronization and near/far problems. In this paper, within the WiMAX OFDMA framework, two Carrier-Frequency Offset (CFO) estimation methods for the multiuser case are proposed for the initial ranging process. In Method 1, the CFO of each ranging user is estimated from the phase of the time-domain correlation between the ranging signal and the received signal samples. Method 2 uses the phase difference of the frequency-domain correlations between the ranging signal and two consecutive received OFDMA symbols. Simulation results show that the proposed frequency-domain correlation method outperforms the time-domain correlation method even when multiple users occupy one ranging time-slot simultaneously, and it is more robust to timing offsets resulting from Symbol Timing Offset (STO) estimation errors.
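The core idea behind Method 2, estimating a CFO from the phase it induces between two repetitions of the same symbol, can be sketched in a few lines. Everything below (the symbol length, the offset value, the estimate_cfo helper) is an illustrative assumption, not the paper's multiuser ranging algorithm:

```python
# Toy sketch: a CFO rotates the second copy of a repeated symbol by a
# constant phase 2*pi*f*L relative to the first, so correlating the two
# copies and taking the angle recovers f (in cycles per sample).
import numpy as np

def estimate_cfo(rx, sym_len):
    """CFO in cycles/sample from the correlation of two consecutive symbols."""
    s1, s2 = rx[:sym_len], rx[sym_len:2 * sym_len]
    return np.angle(np.vdot(s1, s2)) / (2 * np.pi * sym_len)

rng = np.random.default_rng(42)
sym = rng.standard_normal(64) + 1j * rng.standard_normal(64)
tx = np.concatenate([sym, sym])             # the same symbol sent twice
cfo_true = 0.002                            # cycles per sample (assumed)
n = np.arange(2 * 64)
rx = tx * np.exp(2j * np.pi * cfo_true * n) # apply the frequency offset
print(round(estimate_cfo(rx, 64), 4))       # 0.002
```

The estimate is unambiguous only while the accumulated phase stays within (-pi, pi), i.e. for offsets below 1/(2·sym_len) cycles per sample.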
2009, 31(11): 2665-2670.
doi: 10.3724/SP.J.1146.2008.01416
Abstract:
The unbiased Minimum Mean-Square Error Iterative Tree Search (MMSE-ITS) detector, known to be one of the most efficient Multi-Input Multi-Output (MIMO) detectors available, is improved by selectively augmenting partial-length paths and by adding one-bit complement vectors. Simulation and analysis show that the improved detector provides better detection performance with lower complexity than the unbiased MMSE-ITS detector. In addition, the improved detector avoids the clipping operation completely and is robust to arbitrary MIMO channels.
2009, 31(11): 2671-2676.
doi: 10.3724/SP.J.1146.2008.01564
Abstract:
To ensure reliable data transmission for Unmanned Aerial Vehicles (UAVs) in dynamic environments, a fast acquisition algorithm based on Partial Matched Filters and a Two-Level FFT (PMF-TLFFT) is proposed, which searches for the code synchronization point and estimates the carrier-frequency offset for correction. In dynamic environments, however, a UAV usually possesses a larger axial acceleration than a satellite repeater, so the Doppler frequency shifts dramatically and drives the tracking loop of a conventional receiver out of lock. Therefore, an FPLL based on Look-Up Tables (LUT-FPLL) is presented, which employs a list of empirical values in the FLL and adaptively adjusts the bandwidth of the FLL loop filter, ensuring high-precision tracking. Simulation results demonstrate that with an input SNR above -35 dB, an initial carrier-frequency offset between -12.8 kHz and +12.8 kHz, and an axial acceleration of 5 m/s², the receiver on the UAV works efficiently and reliably.
2009, 31(11): 2677-2681.
doi: 10.3724/SP.J.1146.2008.01421
Abstract:
A modified Vertical Bell Labs Layered Space-Time (V-BLAST) system was proposed by Shao (2007). However, the ZF detection algorithm proposed by Shao is not optimal under the ZF criterion. In this paper, the optimal ZF detector is derived directly from the original continuous, unsampled signals at the receive antennas. Both analysis and simulation demonstrate that the optimal ZF detector is superior to Shao's ZF detection algorithm, especially when the number of transmit antennas is large.
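For background, generic textbook zero-forcing detection can be sketched as follows. The zf_detect helper and all parameters are illustrative assumptions, not Shao's modified V-BLAST nor the optimal detector derived in the paper:

```python
# Minimal sketch of zero-forcing (ZF) MIMO detection: equalise with the
# channel pseudo-inverse, then slice each coordinate to the nearest
# constellation point.
import numpy as np

def zf_detect(H, y, constellation):
    """Estimate the transmitted symbols from y = H x + n under the ZF criterion."""
    x_hat = np.linalg.pinv(H) @ y          # ZF equalisation: (H^H H)^{-1} H^H y
    constellation = np.asarray(constellation)
    # nearest-point slicing, coordinate by coordinate
    idx = np.argmin(np.abs(x_hat[:, None] - constellation[None, :]), axis=1)
    return constellation[idx]

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2))            # 4 receive, 2 transmit antennas
x = np.array([1.0, -1.0])                  # BPSK symbols
y = H @ x + 0.01 * rng.standard_normal(4)  # low-noise received vector
print(zf_detect(H, y, [-1.0, 1.0]))        # [ 1. -1.]
```

ZF removes inter-stream interference exactly but amplifies noise when H is ill-conditioned, which is why detection quality degrades as the number of transmit antennas grows.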
2009, 31(11): 2682-2686.
doi: 10.3724/SP.J.1146.2008.01562
Abstract:
This paper first investigates how detection performance and detection sensitivity vary with detection duration and with short-term fluctuations of the average noise power. Detection accuracy and sensitivity drop quickly as the noise-power fluctuation increases, and the degradation is worse at low signal-to-noise ratio. In view of this characteristic, a new cooperative energy detection algorithm is presented. Simulations show that the proposed scheme improves robustness against short-term fluctuations of the average noise power and achieves good detection performance simply by increasing the number of cooperating detection users.
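The cooperative gain can be illustrated with a toy Monte-Carlo sketch. All parameters below (threshold, sample count, OR-rule fusion at the fusion centre) are assumptions for illustration, not the paper's algorithm:

```python
# Toy Monte-Carlo sketch of cooperative energy detection with OR-rule
# fusion: each user averages the energy of N samples and compares it to a
# threshold; "signal present" is declared if any user exceeds it.
import math
import random

def detect_prob(num_users, snr_db, n_samples=50, threshold=1.3,
                trials=2000, seed=1):
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    hits = 0
    for _ in range(trials):
        decided = False
        for _ in range(num_users):
            # unit-power Gaussian noise plus a Gaussian signal at the given SNR
            energy = sum((rng.gauss(0, 1) + rng.gauss(0, math.sqrt(snr))) ** 2
                         for _ in range(n_samples)) / n_samples
            if energy > threshold:
                decided = True
                break           # OR rule: one positive vote is enough
        if decided:
            hits += 1
    return hits / trials

# More cooperating users -> higher detection probability at the same SNR.
print(detect_prob(1, -5) < detect_prob(8, -5))  # True
```

Note that the OR rule raises the false-alarm probability as well, so in practice the per-user threshold must be re-tuned as the number of cooperating users grows.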
2009, 31(11): 2687-2691.
doi: 10.3724/SP.J.1146.2008.00594
Abstract:
This paper proposes a Utility-Based and Channel-resource Borrowing (UBCB) call admission control strategy for mobile satellite systems. To improve resource utilization and the integrated performance of the satellite system, a borrowing weight is introduced as the criterion for resource borrowing, improving the call blocking and traffic dropping probabilities, and the net-utility threshold for traffic admission is adjusted to relieve congestion in the satellite system. Simulation results show that this strategy significantly improves the grade of service and service value of the satellite system compared with the common Channel-Borrowing Scheme (CBS).
2009, 31(11): 2692-2696.
doi: 10.3724/SP.J.1146.2008.00362
Abstract:
The channel feedback requirement of MISO-SDMA with Zero-Forcing (ZF) precoding in multi-antenna broadcast channels can be considerably reduced by employing more antennas at the user terminals together with an antenna-combining technique. In this paper, a combining scheme based on SINR maximization is proposed to jointly design the optimal combining vector and the quantization of the effective channel vector. Unlike existing combining schemes, the proposed scheme accounts for both the effective received SNR and the inter-user interference caused by channel quantization. Moreover, the existing Maximum Ratio Combining (MRC) and Quantization-Based Combining (QBC) schemes are shown to be special cases of the proposed scheme as the SNR tends to zero and to infinity, respectively. Simulation results show that the proposed scheme outperforms both algorithms in sum capacity for the same number of feedback bits.
2009, 31(11): 2697-2702.
doi: 10.3724/SP.J.1146.2008.01481
Abstract:
In cellular communications, the criteria for Fast Cell Selection (FCS) usually depend on a single factor, which lacks flexibility and fairness. To improve FCS performance, a novel algorithm is proposed. Based on extension theory, a flexible mapping table is constructed for Quality of Experience (QoE) evaluation parameters, actual performance parameters are mapped into corresponding calibration intervals, and the mapped values are calculated. On this basis, a judgment matrix is constructed by means of Fuzzy Analytic Hierarchy Process (FAHP) analysis, and its consistency is tested. Finally, the total utility of each cell is calculated from the weight vectors, and the cell with the best value yields the optimal FCS scheme. Simulation analysis shows that the weight factors of each cell have a direct effect on the utility function. Compared with existing algorithms, the proposed algorithm makes the judgment of performance parameters comprehensive at a modest increase in computational complexity, reducing the blocking probability and raising the throughput.
2009, 31(11): 2703-2707.
doi: 10.3724/SP.J.1146.2008.01511
Abstract:
Considering the uplink inter-cell interference in OFDMA cellular systems, an inter-cell power control method based on the Interference-over-Thermal (IoT) ratio is proposed. The method amends traditional power control through the exchange of interference information among neighboring cells; according to the adjustment principle and adjustment step, four power control schemes are put forward that keep the IoT more stable and achieve higher throughput. Simulation results show that all four schemes are superior to the traditional power control method, among which adaptive-step power control combined with a fixed IoT target is optimal.
2009, 31(11): 2708-2712.
doi: 10.3724/SP.J.1146.2008.01448
Abstract:
Key addition modulo 2^n, Y = (X + K) mod 2^n, is a building block often used in cipher algorithms such as SAFER++, RC6 and Phelix. In this paper, Y = (X + K) mod 2^n is analyzed with differential cryptanalysis, and, for the first time, the structural characteristics and counting formulas of the input and output differences and of the keys are given for the cases where the differential probability is 1, 1 - 1/2^(n-2), 1/2^(n-2) and 1/2.
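For small n, the differential behaviour of modular key addition can be checked by exhaustive enumeration. The dp_add helper below is an illustrative sketch, not the paper's counting formulas:

```python
# Illustrative sketch: exhaustively measure the XOR-differential
# probability of Y = (X + K) mod 2^n, i.e. the fraction of (X, K) pairs
# for which input difference a yields output difference b.
def dp_add(n, a, b):
    """Fraction of (X, K) pairs with ((X^a) + K) ^ (X + K) == b (mod 2^n)."""
    mask = (1 << n) - 1
    hits = 0
    for k in range(1 << n):
        for x in range(1 << n):
            y1 = (x + k) & mask
            y2 = ((x ^ a) + k) & mask
            if (y1 ^ y2) == b:
                hits += 1
    return hits / (1 << (2 * n))

# A zero input difference always gives a zero output difference.
print(dp_add(4, 0, 0))             # 1.0
# Flipping only the most significant bit also propagates with probability 1,
# since no carry can leave the top bit position modulo 2^n.
print(dp_add(4, 0b1000, 0b1000))   # 1.0
```

By contrast, a difference in the least significant bit survives only when the key is even (no carry is created), giving probability 1/2.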
2009, 31(11): 2713-2715.
doi: 10.3724/SP.J.1146.2008.00699
Abstract:
To solve the incomplete-reduction problem encountered in implementations of the R-ate pairing and to compute it efficiently, a new technique named m-R-ate is proposed, which extends R-ate from Fq to Fq^m. Furthermore, a very efficient R-ate algorithm is obtained in m-R-ate by replacing q^m with the field characteristic q in the formula. Overcoming incomplete reduction improves the efficiency of R-ate by at least 7.8%, and the Miller loop is shortened by selecting a smaller granularity of (A,B), which is much better than Ate_i.
2009, 31(11): 2716-2719.
doi: 10.3724/SP.J.1146.2008.01548
Abstract:
The goal of a password-based authenticated key exchange protocol is to establish a secure key using a pre-shared, human-memorable password. Most existing schemes either carry a heavy computational burden or rely on the random oracle model. A new scheme without random oracles is proposed, which requires only one generator. Because it uses no CPA- or CCA2-secure public-key encryption scheme, the proposed protocol is computationally efficient and simple to describe compared with other solutions without random oracles. Specifically, it reduces the exponentiations by 64% relative to the protocol proposed by Yin Yin et al. in the paper Provably secure encrypted key exchange protocol under the standard model. The security of the proposed scheme is proven in the standard model under the DDH assumption.
2009, 31(11): 2720-2724.
doi: 10.3724/SP.J.1146.2008.01533
Abstract:
In this paper, a new authenticated key exchange protocol is proposed and its security is proved in the stronger version of the well-known CK model revised by Krawczyk. The analysis is carried out in the standard model instead of the Random Oracle Model (ROM). The proposal rests on two basic computational assumptions: the DDH assumption and the existence of a Pseudo-Random Function (PRF) family. Compared with Boyd et al.'s protocol, the proposed protocol has lower communication and computation complexity.
2009, 31(11): 2725-2730.
doi: 10.3724/SP.J.1146.2008.01503
Abstract:
Information hiding is a technique that hides secret messages in a carrier during transmission or storage. Carriers in common use include images, audio, video, and text documents. Since there is little redundancy in text documents, particularly plain-text documents, information hiding based on plain text is much more challenging. Previous plain-text algorithms all operate on a single text segment and therefore have inherent security limitations. In this paper, a novel information hiding algorithm based on double text segments is proposed. The algorithm greatly enhances the concealment and security of information hiding by choosing a proper hiding form from many candidates and scattering the information across multiple places. In addition, the algorithm is flexible enough to be modified or adjusted for a specific application scenario to better fit practical requirements.
2009, 31(11): 2731-2737.
doi: 10.3724/SP.J.1146.2008.01012
Abstract:
In this paper, an adaptive optimization scheme for IEEE 802.11 DCF is proposed to enhance throughput and fairness. The scheme relies on channel-sensing results for network-state information and is thus called CSCC (Channel Sensing Contention Control). Its key idea for dynamically approaching optimal performance is that each transmission attempt from the DCF is filtered by an adjustable probability P_T. CSCC needs no complex on-line estimation of the number of active stations in the network, and it adaptively tunes toward the chosen optimization objective under varying network states. Detailed simulation results show that the scheme adapts effectively to networks that differ in station number and packet size, and consequently improves system throughput, collision probability, delay, delay jitter, and fairness.
2009, 31(11): 2738-2743.
doi: 10.3724/SP.J.1146.2008.01586
Abstract:
This paper presents a probabilistic logging scheme based on Bloom filters for source tracing. The scheme probabilistically samples the packets passing through each router and stores the samples compactly in a Bloom filter; the sampled information can be kept in memory, making lookups fast. The paper first introduces the concept of a source-locating server: besides forwarding packets, routers in the core network only need to sample packets probabilistically. The choice of the relevant parameters is analyzed theoretically, as are the storage cost of probabilistic logging and the validity of source location. The proposed scheme has small storage cost and high efficiency, providing a theoretical basis for practical deployment.
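The core mechanism, probabilistic sampling into a Bloom filter, can be sketched as follows. The class, the hash construction and all parameters are illustrative assumptions, not the paper's design:

```python
# Illustrative sketch: a router samples packets with some probability and
# records a digest of each sampled packet in a Bloom filter, so a
# source-locating server can later ask "did this router see this packet?"
# without per-packet storage (at the cost of a small false-positive rate).
import hashlib
import random

class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)      # one byte per bit, for clarity

    def _positions(self, item):
        for i in range(self.k):            # k independent hash positions
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def log_packet(bf, packet_digest, sample_prob=0.1, rng=random.random):
    """Record the packet with probability sample_prob (probabilistic logging)."""
    if rng() < sample_prob:
        bf.add(packet_digest)
        return True
    return False

bf = BloomFilter(m_bits=8192, k_hashes=4)
bf.add("pkt:198.51.100.7->203.0.113.9:seq=42")
print("pkt:198.51.100.7->203.0.113.9:seq=42" in bf)  # True
```

A Bloom filter never yields false negatives, so a logged packet is always found; the false-positive rate is controlled by the ratio of filter bits to logged packets and the number of hashes.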
2009, 31(11): 2744-2750.
doi: 10.3724/SP.J.1146.2008.01525
Abstract:
In this paper, we consider the impact of node failure on the monitoring reliability of wireless sensor networks. Quantitative criteria for two basic monitoring performance metrics, coverage and connectivity, are proposed from the perspective of node failure, focusing on stochastic, uniform node deployment, and a performance-criterion model is given. Two node failure modes, stochastic failure and malicious failure, are discussed and simulated with the designed node-failure algorithms. The impact of node failures on network monitoring reliability is analyzed based on the simulation results, and empirical formulas helpful for choosing network parameters are summarized.
2009, 31(11): 2751-2756.
doi: 10.3724/SP.J.1146.2008.01419
Abstract:
To address wireless-link instability in sensor networks, this paper puts forward a robust, adjustable topology control algorithm with stable links, named RAWSL, based on the r-neighborhood graph model. RAWSL uses a received-signal-strength threshold as a topology constraint, which effectively avoids unstable links, and adjusts the parameter r to meet a variety of network robustness requirements. Experimental results show that RAWSL not only preserves full connectivity but also offers higher robustness and lower delay.
2009, 31(11): 2757-2761.
doi: 10.3724/SP.J.1146.2008.01144
Abstract:
The cache shortage and service delay in media streaming systems are considered, and PCSPC (Proxy-Caching Scheduler based on P2P Cooperation) is proposed. First, following the principle that more popular data deserve more cache, a corresponding cache share is allocated to each media file's prefix. Then the prefix sequence and the proxy sequence are sorted in ascending and descending order of transmission cost, respectively; this pairing reduces transmission cost effectively, which is proved mathematically. PCSPC considers both cache efficiency and transmission cost. Simulation results show the effectiveness of the strategy.
2009, 31(11): 2762-2766.
doi: 10.3724/SP.J.1146.2008.01544
Abstract:
A selective registering method is proposed to solve the readout data loss that occurs when the read and write ports of a synchronous dual-port memory IP access the same address simultaneously. Using this method to design an embedded programmable memory around the synchronous dual-port memory IP reduces implementation complexity and further improves the design's migration capability, so research and development time can be shortened dramatically. Measurement results show that such an embedded programmable memory, fabricated in the SMIC 0.18 μm 1P6M CMOS process, achieves performance comparable, for the compatible functions, to full-custom embedded programmable memories built on similar processes.
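The same-address collision it resolves can be shown with a toy behavioural model. This is only a software sketch of the forwarding idea, not the paper's circuit: when a read and a write hit the same address in the same cycle, the write data is captured and returned on the read port instead of a lost or stale value.

```python
# Behavioural sketch of write-data forwarding on a same-address
# read/write collision (a simplification of selective registering).
class DualPortRAM:
    def __init__(self, depth):
        self.mem = [0] * depth

    def cycle(self, waddr, wdata, raddr):
        """One clock cycle: write wdata to waddr, read raddr."""
        # On a collision, forward the incoming write instead of the array.
        rdata = wdata if raddr == waddr else self.mem[raddr]
        self.mem[waddr] = wdata
        return rdata
```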
2009, 31(11): 2767-2771.
doi: 10.3724/SP.J.1146.2009.00031
Abstract:
An analysis of statistical RLC delay under process fluctuation is presented in this paper. The construction of parasitic parameters and moments under process variation is given first, and a statistical delay model based on the Weibull distribution is then derived. The method is also applied to other available delay metrics such as Elmore, equivalent Elmore, and D2M. For the Weibull-based statistical delay model, comparison with HSPICE shows a maximum 50% delay error of 0.11%, and the maximum error of the mean relative to Monte Carlo analysis is 2.02%.
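For readers unfamiliar with the deterministic metrics the abstract names, the closed forms commonly quoted for them can be written out from the first two circuit moments (taken positive here). This is background on Elmore and D2M only, under the usual single-pole 50% approximation, not the paper's Weibull model.

```python
import math

# Common closed-form delay metrics from circuit moments m1, m2 (>0).
def elmore_50_delay(m1):
    """Single-pole 50% delay approximation from the Elmore metric: ln(2) * m1."""
    return math.log(2) * m1

def d2m_50_delay(m1, m2):
    """D2M 50% delay metric, commonly stated as ln(2) * m1^2 / sqrt(m2)."""
    return math.log(2) * m1 * m1 / math.sqrt(m2)
```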
2009, 31(11): 2772-2775.
doi: 10.3724/SP.J.1146.2008.01631
Abstract:
The MultiLevel Fast Multipole Algorithm (MLFMA), in conjunction with the best uniform approximation, is applied in this paper to wide-band scattering analysis of arbitrarily shaped perfect electric conductors. Chebyshev nodes within a given frequency range are found first, and the surface electric currents at these nodes are computed with MLFMA. The surface current on the perfect electric conductor is then expanded as a polynomial via the best uniform approximation, so the current distribution can be obtained at any frequency within the range and used to compute the scattered fields and the wide-band Radar Cross Section (RCS). The numerical results are compared with those obtained by running MLFMA at each frequency, showing that computational efficiency is improved drastically without sacrificing much accuracy.
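The interpolation step above can be sketched generically. This toy version samples a quantity at Chebyshev nodes mapped onto the band and evaluates it anywhere in between by Lagrange interpolation; the actual paper interpolates MLFMA-computed surface currents, which we stand in for with an arbitrary function.

```python
import math

# Hedged sketch of band-limited interpolation at Chebyshev nodes
# (stand-in for interpolating MLFMA currents over frequency).
def chebyshev_nodes(f_lo, f_hi, n):
    """n Chebyshev nodes mapped onto the band [f_lo, f_hi]."""
    return [0.5 * (f_lo + f_hi)
            + 0.5 * (f_hi - f_lo) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def lagrange_eval(nodes, values, f):
    """Evaluate the interpolating polynomial through (nodes, values) at f."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (f - xj) / (xi - xj)
        total += yi * w
    return total
```

Because a degree-(n-1) polynomial is reproduced exactly by n nodes, sampling a smooth response at enough Chebyshev nodes keeps the worst-case interpolation error small over the whole band.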
2009, 31(11): 2776-2780.
doi: 10.3724/SP.J.1146.2008.01527
Abstract:
A new micromechanical electric field sensor system with a closed-loop self-driving circuit is designed and simulated. The closed-loop driving circuit, based on the principle of automatic gain control, keeps the micro sensor operating at resonance and maintains a stable resonance amplitude. Simulation results show that, compared with the open-loop driving mode, the sensor can lock onto the new resonance frequency when the resonance frequency shifts by 0.5%: the attenuation of the vibration amplitude is reduced from 30% to 0.1%, and the attenuation of the sensor's sensitivity is reduced from 50% to 0.1%.
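The automatic-gain-control principle it relies on is simple to illustrate. The loop below is a toy discrete-time model of our own (the plant "amplitude proportional to drive gain" and the update constant are assumptions, not the authors' circuit): the gain is nudged each step until the measured amplitude sits on the set-point.

```python
# Toy AGC loop: raise drive gain when amplitude is low, lower it when high.
def agc_step(gain, measured_amp, target_amp, k=0.1):
    """One AGC update toward the amplitude set-point."""
    return gain + k * (target_amp - measured_amp)

gain = 1.0
amp = 0.0
for _ in range(200):
    amp = 0.5 * gain                      # assumed plant: amp proportional to gain
    gain = agc_step(gain, amp, target_amp=1.0)
# the loop settles with amp at the set-point regardless of the plant's scale
```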
2009, 31(11): 2781-2785.
doi: 10.3724/SP.J.1146.2008.01528
Abstract:
A detection method for multicomponent LFM signals based on multi-scale chirplet sparse signal decomposition is proposed to overcome the cross-interference and cross-decomposition problems that arise in multicomponent LFM processing with traditional quadratic time-frequency analysis and atomic matching pursuit. The method projects the multicomponent LFM signal onto multi-scale chirplet basis functions; within the time support regions where the projection coefficient is largest, it separates the LFM component with the largest energy from the mixture. The instantaneous frequency of that component is estimated by connecting the chirplet basis functions used in the current separation, and its center frequency and FM slope are then obtained from the initial point and the slope of the instantaneous frequency curve. Simulation experiments show that the proposed method accurately extracts the instantaneous frequencies of multicomponent LFM signals with strong robustness to noise.
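The core "largest projection" selection can be reduced to a few lines. This sketch drops the multi-scale dictionary and simply projects the signal onto chirp atoms over a grid of candidate FM slopes, keeping the slope with the largest coefficient magnitude; the grid and test signal below are our own.

```python
import cmath

# Simplified chirp-projection step: pick the FM slope whose atom has the
# largest projection magnitude (no multi-scale dictionary here).
def estimate_slope(signal, times, slope_grid):
    def projection(k):
        return abs(sum(s * cmath.exp(-1j * cmath.pi * k * t * t)
                       for s, t in zip(signal, times)))
    return max(slope_grid, key=projection)

# Example: a unit-amplitude chirp with FM slope 100 over one second.
times = [i / 256 for i in range(256)]
chirp = [cmath.exp(1j * cmath.pi * 100 * t * t) for t in times]
```

When the candidate slope matches, the chirp atom dechirps the signal exactly and all samples add coherently, which is why the matching slope dominates the grid.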
2009, 31(11): 2786-2790.
doi: 10.3724/SP.J.1146.2008.01563
Abstract:
This paper presents a novel decentralized reputation-based trust management model. In a P2P (peer-to-peer) network there is no trusted authority, so trust relations between peers must be established from peers' behaviors. The paper makes three main contributions: a time self-decay function resolves the time-related problem; comparing recent scores against overall scores addresses the problem of servents' subjective expectations; and the use of DHTs reduces bandwidth cost and provides scalability. The model efficiently improves the transaction success rate in P2P networks.
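The time self-decay idea can be made concrete with a toy scoring function. This is our own exponential-decay version (the half-life parameter is an assumed tuning knob, and the paper's actual decay function may differ): older transaction scores are down-weighted so recent behaviour dominates a peer's reputation.

```python
import math

# Illustrative time-decayed reputation: exponentially down-weight old scores.
def decayed_reputation(scores, now, half_life):
    """scores: list of (timestamp, score); returns the decay-weighted mean."""
    lam = math.log(2) / half_life
    weights = [math.exp(-lam * (now - t)) for t, _ in scores]
    total_w = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, scores)) / total_w
```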
2009, 31(11): 2791-2794.
doi: 10.3724/SP.J.1146.2008.00493
Abstract:
To track a maneuvering target with an acoustic signature in a wireless sensor network, an acoustic-energy-based target tracking algorithm built on a dynamic greedy group management scheme is proposed, using the attenuation model of acoustic energy with distance. Simulation results show that the proposed algorithm tracks the target effectively, and that the tracking error can be further reduced with a Kalman filter.
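Both ingredients of the abstract above are easy to sketch in isolation. This toy version (not the paper's algorithm) inverts an assumed inverse-square acoustic attenuation model to turn a received energy reading into a distance, and runs a scalar constant-position Kalman filter to smooth a noisy sequence of estimates.

```python
import math

# Toy inverse-square attenuation model: e_received = e_source / d^2.
def distance_from_energy(e_received, e_source):
    """Invert the attenuation model to recover the sensor-target distance."""
    return math.sqrt(e_source / e_received)

# Scalar constant-position Kalman filter over a measurement sequence.
def kalman_1d(measurements, q=0.01, r=0.5):
    """q: process noise, r: measurement noise; returns smoothed estimates."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p += q                  # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out
```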