2013 Vol. 35, No. 9
2013, 35(9): 2033-2039.
doi: 10.3724/SP.J.1146.2013.00412
Abstract:
Traditional spectral algorithms have limited effectiveness in face recognition. To address the characteristics of this problem, a novel multi-angle face recognition method based on a modified Gaussian Process Latent Variable Model (GP-LVM) is proposed. First, a probabilistic model of the face manifold is established with a Gaussian Process (GP), from which the GP-LVM is obtained. Second, shared information and private information are extracted by analyzing the GP-LVM. The reference matrices and reference values are then calculated via maximum probability and the Lagrange method. Finally, multi-angle face recognition is achieved. Four data sets are selected as experimental data: Yale, JAFFE, FERET and CMU-PIE. The experimental results show that the proposed method is not only highly effective for multi-angle face recognition, but can also be applied to face recognition without angle variation.
2013, 35(9): 2040-2046.
doi: 10.3724/SP.J.1146.2012.01584
Abstract:
To address pedestrian detection in complex backgrounds, this paper presents an Improved Center-Symmetric CENTRIST (ICS_CENTRIST) feature built from pedestrian edge information. The feature is simple to compute yet highly descriptive: with only 32 dimensions it captures the pedestrian's edge-contour information well. Three cascaded classifiers are used for detection. In the first stage, a linear SVM based on an auxiliary integral image quickly excludes most non-pedestrian regions. In the second and third stages, the ICS_CENTRIST features of the 12 and 21 most discriminative blocks, selected by the Partial Least Squares (PLS) method, are used respectively, and a Histogram Intersection Kernel SVM (HIK-SVM) performs accurate detection. Experimental results show that the algorithm obtains better detection results in complex backgrounds, with an average detection speed of 50 ms for 447358 images, improvements of 50% and 90% over fast CENTRIST detection and the Histograms of Oriented Gradients (HOG) algorithm respectively, meeting real-time requirements.
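As an illustration, the center-symmetric census idea that CENTRIST-style features build on can be sketched in a few lines of numpy. The sketch below is a plain 16-bin center-symmetric census histogram, not the authors' 32-dimensional ICS_CENTRIST descriptor; the pair layout and the >= comparison convention are assumptions.

    import numpy as np

    def cs_census_hist(block):
        # Compare the four opposing neighbor pairs around each interior
        # pixel (N/S, E/W, NE/SW, NW/SE); each comparison yields one bit,
        # so every pixel gets a 4-bit code and the block a 16-bin histogram.
        a = block.astype(np.int32)
        bits = [
            a[:-2, 1:-1] >= a[2:, 1:-1],   # N  vs S
            a[1:-1, 2:]  >= a[1:-1, :-2],  # E  vs W
            a[:-2, 2:]   >= a[2:, :-2],    # NE vs SW
            a[:-2, :-2]  >= a[2:, 2:],     # NW vs SE
        ]
        code = np.zeros_like(bits[0], dtype=np.int32)
        for k, b in enumerate(bits):
            code |= b.astype(np.int32) << k
        hist = np.bincount(code.ravel(), minlength=16).astype(float)
        return hist / hist.sum()

    # per-block histograms of this kind would feed the PLS block selection
    # and the cascaded SVMs described above
    block = np.random.default_rng(0).integers(0, 256, size=(16, 16))
    print(cs_census_hist(block))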
2013, 35(9): 2047-2053.
doi: 10.3724/SP.J.1146.2012.01552
Abstract:
Research on current Dimensionality Reduction (DR) methods mainly follows two lines. One attempts to preserve the global features of high-dimensional samples; the other tries to keep the local manifold structure of the data as invariant as possible before and after dimension reduction. Because current DR methods do not fully utilize the available information, their efficiency is limited. Based on this analysis, Manifold-based Discriminant Analysis (MDA) is proposed, built on the Fisher criterion and manifold preservation. MDA takes both global features and local structure into consideration. It defines two scatters: the Manifold-based Within-Class Scatter (MWCS) and the Manifold-based Between-Class Scatter (MBCS). According to the Fisher criterion, the optimal projection maximizes the ratio of MBCS to MWCS. MDA not only inherits the advantages of current DR methods but further improves DR efficiency. Experiments on standard datasets verify the effectiveness of the proposed MDA method.
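Whatever the exact scatter definitions, a Fisher criterion of this form reduces to a generalized eigenproblem. A minimal sketch follows, with ordinary within/between-class scatters standing in for the paper's manifold-based MWCS/MBCS, whose exact definitions are not reproduced here.

    import numpy as np
    from scipy.linalg import eigh

    def fisher_projection(X, y, d):
        # Build within-class (Sw) and between-class (Sb) scatters, then
        # maximize the Fisher ratio via Sb w = lambda Sw w.
        mu = X.mean(axis=0)
        Sw = np.zeros((X.shape[1], X.shape[1]))
        Sb = np.zeros_like(Sw)
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mu)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        Sw += 1e-6 * np.eye(Sw.shape[0])   # ridge for numerical stability
        vals, vecs = eigh(Sb, Sw)          # ascending generalized eigenvalues
        return vecs[:, ::-1][:, :d]        # top-d projection directions

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
    y = np.repeat([0, 1], 50)
    print((X @ fisher_projection(X, y, 1)).shape)   # (100, 1)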
2013, 35(9): 2054-2058.
doi: 10.3724/SP.J.1146.2012.01325
Abstract:
A new method, the ball-averaged Lyapunov exponent method, is presented for calculating the Lyapunov exponents of nonlinear time series, and it can be used for feature extraction and classification of electromyography (EMG) signals. First, the Lyapunov exponents of the EMG signal are calculated and combined with the correlation dimension to form the input feature vector. Then a multi-class classifier is constructed from Twin Support Vector Machines (TSVM) arranged in a binary tree. Finally, four hand gestures (radial flexion, ulnar flexion, hand opening and hand closing) are classified. The experimental results show that the method has stronger noise immunity than the Rosenstein method, and the recognition rate is above 96.0% in EMG feature extraction and classification. The proposed method is suitable for analyzing chaotic signals with low signal-to-noise ratio.
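For reference, a compact Rosenstein-style estimate of the largest Lyapunov exponent — the baseline the paper compares against — can be sketched as follows; the ball-averaged variant would replace the single nearest neighbor with an average over a neighborhood ball, which is not reproduced here.

    import numpy as np

    def largest_lyapunov(x, dim=3, tau=1, t_max=5, theiler=10):
        # Delay-embed the series, find each point's nearest neighbor
        # outside a Theiler window, then fit the slope of the mean log
        # divergence of neighbor pairs over time.
        n = len(x) - (dim - 1) * tau
        Y = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
        m = n - t_max
        D = np.linalg.norm(Y[:m, None, :] - Y[None, :m, :], axis=2)
        idx = np.arange(m)
        D[np.abs(idx[:, None] - idx[None, :]) < theiler] = np.inf
        nn = D.argmin(axis=1)
        div = np.empty(t_max)
        for t in range(t_max):
            dist = np.linalg.norm(Y[idx + t] - Y[nn + t], axis=1)
            div[t] = np.log(dist[dist > 0]).mean()
        return np.polyfit(np.arange(t_max), div, 1)[0]   # nats per step

    # logistic map at r = 4: the true largest exponent is ln 2 ~ 0.693
    x = np.empty(1200)
    x[0] = 0.3
    for i in range(1199):
        x[i + 1] = 4 * x[i] * (1 - x[i])
    print(largest_lyapunov(x))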
2013, 35(9): 2059-2065.
doi: 10.3724/SP.J.1146.2012.01647
Abstract:
Based on the concept of Difference Of Densities (DOD), the L2 Kernel Classifier (L2KC) exhibits good performance. However, the assumption that the training and testing domains are independent and identically distributed severely constrains its usefulness. To overcome this shortcoming, a novel classifier named Transfer Learning L2 Kernel Classification (TL-L2KC) is proposed for changing environments. The proposed classifier not only inherits the advantages of L2KC, but also handles the distribution inconsistency between training and testing sets caused by slow drift of the datasets or by training sets obtained under specific constraints. As demonstrated by extensive experiments on simulated datasets and UCI benchmark datasets, TL-L2KC shows performance comparable to or better than that of classical algorithms on transfer learning classification problems.
2013, 35(9): 2066-2072.
doi: 10.3724/SP.J.1146.2012.01652
Abstract:
To address the existing methods' lack of adaptability in objective image quality assessment and their poor correlation with subjective human perception, a no-reference image quality assessment method based on parameter estimation is proposed. The method extracts structural, color and visual information metrics by analyzing qualitative characteristics of the images themselves; the relevant parameters are then estimated by regression analysis. Experimental results show that the proposed method is more consistent with, and more robust against, subjective human assessment than other objective assessment methods, and can therefore describe the visual perception of an image effectively.
2013, 35(9): 2073-2080.
doi: 10.3724/SP.J.1146.2013.00041
Abstract:
Existing algorithms cannot efficiently solve threshold segmentation of images corrupted by mixed noise, so a 3D minimum error thresholding algorithm is proposed. Using the gray-level distribution of each pixel together with information from its neighborhood, it combines image gray level, neighborhood mean and neighborhood median to construct a three-dimensional observation space, and then defines a 3D optimal-threshold discriminant based on relative entropy. Furthermore, fast recursive formulas are given to improve processing speed; the resulting time complexity is O(L^3). Experimental results show that the proposed algorithm outperforms 2D thresholding methods not only for different types of noisy images, but also for non-uniformly illuminated images; for mixed-noise images its advantage is even more obvious.
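The minimum-error criterion being lifted into three dimensions here is the classical Kittler-Illingworth one; a 1D sketch conveys the discriminant, while the paper's contribution is the (gray, mean, median) observation space and the O(L^3) recursive evaluation.

    import numpy as np

    def min_error_threshold(hist):
        # Kittler-Illingworth: model each side of the threshold as a
        # Gaussian and minimize the classification-error criterion J(t).
        p = hist.astype(float) / hist.sum()
        g = np.arange(len(p))
        best_t, best_j = 0, np.inf
        for t in range(1, len(p) - 1):
            q1, q2 = p[:t + 1].sum(), p[t + 1:].sum()
            if min(q1, q2) < 1e-12:
                continue
            m1 = (g[:t + 1] * p[:t + 1]).sum() / q1
            m2 = (g[t + 1:] * p[t + 1:]).sum() / q2
            v1 = ((g[:t + 1] - m1) ** 2 * p[:t + 1]).sum() / q1
            v2 = ((g[t + 1:] - m2) ** 2 * p[t + 1:]).sum() / q2
            if min(v1, v2) < 1e-12:
                continue
            j = 1 + 2 * (q1 * np.log(np.sqrt(v1) / q1) +
                         q2 * np.log(np.sqrt(v2) / q2))
            if j < best_j:
                best_t, best_j = t, j
        return best_t

    # bimodal test histogram: two Gaussian-ish modes
    rng = np.random.default_rng(2)
    samples = np.concatenate([rng.normal(60, 10, 4000), rng.normal(170, 20, 6000)])
    hist, _ = np.histogram(samples, bins=256, range=(0, 256))
    print(min_error_threshold(hist))   # lies between the two modes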
2013, 35(9): 2081-2087.
doi: 10.3724/SP.J.1146.2012.01598
Abstract:
The Otsu adaptive threshold algorithm is a classic image segmentation method. The two-dimensional Otsu algorithm and its improvements, which build on the original Otsu algorithm, are limited by their computational (or space) complexity, their lack of noise immunity, and the difficulty of extending them to multilevel thresholding. To remedy these shortcomings, noise points are regarded as small-probability events: each noise point is converted to an object (or background) pixel by replacing its gray level with the average gray level of its neighborhood. The processed image is then segmented with one-dimensional Otsu, so the method achieves good performance at low cost. Experimental results show significant improvements in complexity, noise immunity, and extensibility to multilevel thresholding.
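The pipeline is compact enough to sketch end to end: replace suspected noise pixels by their neighborhood mean, then run plain 1D Otsu. The deviation test with factor k below is an assumption, not the authors' exact small-probability rule.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def denoise_then_otsu(img, k=1.5):
        # Flag pixels far from their 3x3 neighborhood mean as noise and
        # replace them by that mean, then threshold with 1D Otsu.
        img = img.astype(np.float64)
        nb = uniform_filter(img, size=3)
        noisy = np.abs(img - nb) > k * img.std()
        clean = np.where(noisy, nb, img)
        hist, _ = np.histogram(clean, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        w = np.cumsum(p)
        mu = np.cumsum(p * np.arange(256))
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1 - w))
        t = int(np.nanargmax(sigma_b))   # maximize between-class variance
        return clean > t, t

    rng = np.random.default_rng(3)
    img = np.where(rng.random((64, 64)) < 0.5, 60, 180).astype(float)
    img[rng.random((64, 64)) < 0.02] = 255        # salt noise
    seg, t = denoise_then_otsu(img)
    print(t)   # threshold falls between the two modes despite the noise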
2013, 35(9): 2088-2093.
doi: 10.3724/SP.J.1146.2013.00059
Abstract:
Because the exponential reproducing kernel has finite support in the time domain, it is widely used as the sampling kernel in the Finite Rate of Innovation (FRI) sampling framework. However, this process turns white noise in the signal into colored noise, which severely degrades reconstruction performance. For this reason, exploiting the property that the exponential reproduction formula is preserved under convolution, a modified exponential reproducing kernel is proposed. Its coefficient matrix preserves the statistical properties of the noise, which ensures reconstruction performance. Simulation results verify the validity of the proposed method.
2013, 35(9): 2094-2099.
doi: 10.3724/SP.J.1146.2012.01545
Abstract:
Sampling and reconstruction is a basic issue in signal processing. The known sampling theory for the FRactional Fourier Transform (FRFT) domain is based on the analysis of an ideal sampling system, whereas actual analog-to-digital conversion is implemented with sample-and-hold circuits. This paper analyzes the actual sample-and-hold system in the FRFT domain and then constructs a feasible sampling and reconstruction model, realizable by adding two multipliers to the traditional sample-and-hold system. The result completes the sampling theorem for the FRFT domain and provides a theoretical foundation for its practical implementation.
2013, 35(9): 2100-2107.
doi: 10.3724/SP.J.1146.2012.01155
Abstract:
The adaptive monopulse technique is widely used in surveillance and tracking radar for its interference cancellation and angle estimation ability. Classical adaptive monopulse adopts a first-order linear approximation and ignores higher-order terms, which causes angle estimation error. To solve this problem, constraints that force the higher-order terms toward zero are proposed and combined with the interference subspace to calculate the weights of the sum and difference beams. The obtained weights are then applied in the classical adaptive monopulse method to estimate the angle, while making the equations more concise. Rules for the placement of the constraints are also given. With the proposed method, adaptive monopulse estimation can be applied over a larger region and the angle estimation error is further reduced. Simulation results show the effectiveness and correctness of the proposed algorithm.
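The constrained weight computation can be illustrated with a generic LCMV solve. The constraint points and responses below are placeholders for illustration, not the paper's derived higher-order-nulling constraints.

    import numpy as np

    def constrained_weights(R, C, f):
        # LCMV: minimize w^H R w subject to C^H w = f, giving
        # w = R^{-1} C (C^H R^{-1} C)^{-1} f.
        RiC = np.linalg.solve(R, C)
        return RiC @ np.linalg.solve(C.conj().T @ RiC, f)

    N = 8
    n = np.arange(N)
    sv = lambda u: np.exp(1j * np.pi * n * u)        # ULA steering, u = sin(angle)
    R = np.eye(N) + 100 * np.outer(sv(0.5), sv(0.5).conj())   # noise + interferer
    C = np.column_stack([sv(0.0), sv(0.05), sv(-0.05)])       # boresight + two offsets
    w_sum = constrained_weights(R, C, np.array([1., 1., 1.]))   # even (sum) response
    w_dif = constrained_weights(R, C, np.array([0., 1., -1.]))  # odd (difference) response
    print(abs(w_sum.conj() @ sv(0.5)))   # response at the interferer is small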
2013, 35(9): 2108-2113.
doi: 10.3724/SP.J.1146.2013.00068
Abstract:
The Wide Area Surveillance Ground Moving Target Indication (WAS-GMTI) mode is important in airborne radar systems because it can monitor an extensive area in a short time. However, few systems possess a WAS-GMTI mode, so real WAS-GMTI data are difficult to obtain, which greatly hampers validation of the corresponding algorithms. In this paper, a target trajectory simulation method combined with an electronic map is proposed. Since the moving-target information is derived from platform information and electronic maps, simulating scene echoes can be avoided. Furthermore, the moving targets are combined with the geographic environment in a reasonable way, and repeated observations of the same target can be used in tracking. Simulation results demonstrate the effectiveness of the proposed method.
2013, 35(9): 2114-2120.
doi: 10.3724/SP.J.1146.2012.01609
Abstract:
Micro-Doppler is a unique characteristic of targets with micro-motion. As an important signature of a truck, the micro-Doppler generated by wheel rotation offers evidence for truck recognition. First, on the basis of a scatterer model and taking the relative position between radar and truck into consideration, an occlusion model of the truck is established by applying the painter's algorithm from computer graphics. The micro-motion of the truck is then modeled, and the variation of Doppler and micro-Doppler with azimuth, depression angle, velocity and acceleration is analyzed for non-rotating and rotating scatterers respectively, which provides the foundation for micro-Doppler-based signature extraction and recognition of trucks. Simulation results prove the effectiveness of the method and the correctness of the analysis.
2013, 35(9): 2121-2125.
doi: 10.3724/SP.J.1146.2012.01161
Abstract:
The RCS of an aircraft varies irregularly with time and with the radar-wave incident angle during flight, so RCS variation must be analyzed with statistical measures or models. Existing studies mostly take conventional aircraft as their object; research on stealth aircraft is scarce. In this paper a typical stealth aircraft is selected as the research object. Its dynamic RCS data under different flight attitudes are evaluated by a combination of physical optics and the physical theory of diffraction, and the Swerling type I and type III distributions, the chi-square distribution and the lognormal distribution are applied to analyze the statistical characteristics of the RCS datasets. Because the ratio of mean to median for a stealth target can be less than 1, a complete form of the lognormal distribution model is proposed. Considering model errors and goodness-of-fit test results, the lognormal distribution model approximates the RCS datasets better than the other models.
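The distribution-fitting step can be reproduced on synthetic data with scipy.stats; the samples below are stand-ins, not the paper's computed RCS data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    rcs = rng.lognormal(mean=-1.0, sigma=1.2, size=5000)   # stand-in RCS samples

    # fit two candidate models and compare Kolmogorov-Smirnov statistics
    s, loc, scale = stats.lognorm.fit(rcs, floc=0)
    ks_ln = stats.kstest(rcs, "lognorm", args=(s, loc, scale)).statistic
    df, loc2, scale2 = stats.chi2.fit(rcs, floc=0)
    ks_x2 = stats.kstest(rcs, "chi2", args=(df, loc2, scale2)).statistic

    # an ordinary lognormal forces mean/median = exp(sigma^2/2) > 1;
    # a ratio below 1, as observed for stealth targets, is what motivates
    # the paper's completed form of the model
    print(f"mean/median = {rcs.mean() / np.median(rcs):.2f}")
    print(f"KS lognormal = {ks_ln:.3f}   KS chi-square = {ks_x2:.3f}")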
2013, 35(9): 2126-2132.
doi: 10.3724/SP.J.1146.2012.01550
Abstract:
The micro-Doppler effect induced by the rotating parts of a target provides a new approach to accurate automatic radar target recognition and has attracted great research attention in recent years. Taking a target with rotating parts as an example, this paper proposes a micro-motion signature extraction method based on high-order moment functions that is suitable for wideband radar. First, the echo induced by the rotating parts is multiplied by the conjugate of the reference signal, and its high-order moments with respect to fast time and slow time are calculated. The micro-motion signature is then quickly obtained by detecting the peaks of the product of the Fourier transforms of the imaginary parts of the moments at different time delays. A computer simulation illustrates the effectiveness of the proposed method.
2013, 35(9): 2133-2140.
doi: 10.3724/SP.J.1146.2012.01537
Abstract:
Three-dimensional (3D) radar imaging of spinning targets is one of the most significant problems in space target recognition. To obtain the true 3D characteristics of a target, which cannot be acquired from a single monostatic radar, a broadband 3D radar imaging algorithm is proposed based on correlating the micro-motion parameters observed by several radars viewing the same spinning target. Based on the differences in scattering-center micro-motion parameters seen by different radars, the scattering centers in the echoes of each radar are associated, and a 3D radar image consistent with the true size of the target is obtained. Simulations confirm the effectiveness and robustness of the algorithm.
2013, 35(9): 2141-2146.
doi: 10.3724/SP.J.1146.2012.01581
Abstract:
For SAR images of two adjacent strips with a small overlapping area, severe geometric distortion makes it difficult to extract Tie Points (TPs) directly. Based on imaging information and the coherence information of InSAR images, this paper puts forward a new TP extraction method. First, the raw image is transformed by an affine transformation derived from the imaging information; a feature matching method from optical imagery is then used to obtain TPs; finally, the TPs are screened according to the quality map. Experiments on real InSAR data (overlap region 15%) demonstrate that the method not only extracts TPs automatically, but the extracted TPs also meet the requirements of interferometric mapping at a given scale. At the same time, it reduces the number of strips to be joined for an image of a given scale from five to three, greatly decreasing the mapping workload and cost.
2013, 35(9): 2147-2153.
doi: 10.3724/SP.J.1146.2012.01270
Abstract:
Because of the limited quantization dynamic range of spaceborne SAR, echo saturation occurs from time to time during data acquisition. In this paper, saturation correction methods based on dynamic decoding are proposed for uniform quantization and Block Adaptive Quantization (BAQ), two of the most commonly used quantization methods in spaceborne SAR. The dynamic decoding methods alleviate the image power loss caused by saturation, improve radiometric accuracy, and enhance the image signal-to-noise ratio. Simulation results validate the effectiveness of the proposed methods.
2013, 35(9): 2154-2160.
doi: 10.3724/SP.J.1146.2012.01111
Abstract:
Bistatic SAR (BiSAR) acquires the radar cross section of a target area from different directions, which is helpful for SAR image classification and target recognition. The spotlight mode offers higher resolution and thus advantages for target recognition. This paper introduces an advanced hyperbolic approximation method to fit accurately the cubic term of the range history. The wavenumber-domain signal is derived and the image formation process is given. The residual phase term induced by range variance is then identified, and a compensation step is incorporated into the modified algorithm, which enhances imaging performance at the edges of the scene. Simulation results verify the accuracy and validity of the proposed method.
2013, 35(9): 2161-2167.
doi: 10.3724/SP.J.1146.2012.01530
Abstract:
When deriving a Digital Elevation Model (DEM) of the Earth's surface with InSAR, multichannel (multi-frequency or multi-baseline) InSAR can improve the mapping of complex areas with steep slopes or strong height discontinuities and resolve the ambiguity problem of the single-baseline case. This paper compares Maximum Likelihood (ML) estimation with Maximum A Posteriori (MAP) estimation, and adds two steps after the ML estimation: bad-pixel identification and weighted filtering. Bad-pixel identification is performed through cluster analysis and the relationship between adjacent pixels, and a special weighted mean filter removes the bad pixels. In this way the good efficiency of the ML method is kept while the DEM accuracy is improved. Simulation results indicate that, under the same conditions, the method maintains good accuracy while greatly improving computational efficiency, which is advantageous for processing large datasets.
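The post-ML cleanup can be sketched as flag-then-filter. The z-score outlier test below is a simple stand-in for the paper's cluster-analysis judgment, and the weight mask is an assumption.

    import numpy as np
    from scipy.ndimage import generic_filter

    def repair_bad_pixels(dem, z=3.0):
        # Flag pixels whose height deviates strongly from their eight
        # neighbors, then replace each flagged pixel by a weighted mean
        # of its valid (unflagged) neighbors.
        def is_bad(v):
            nb = np.delete(v, 4)                 # drop center of 3x3 window
            return np.abs(v[4] - nb.mean()) > z * (nb.std() + 1e-9)
        bad = generic_filter(dem, is_bad, size=3, mode="nearest").astype(bool)
        out = dem.copy()
        w = np.array([[1., 2., 1.], [2., 0., 2.], [1., 2., 1.]])  # center excluded
        padded = np.pad(dem, 1, mode="edge")
        valid = np.pad(~bad, 1, mode="edge")
        for yy, xx in zip(*np.nonzero(bad)):
            win = padded[yy:yy + 3, xx:xx + 3]
            ww = w * valid[yy:yy + 3, xx:xx + 3]
            if ww.sum() > 0:
                out[yy, xx] = (ww * win).sum() / ww.sum()
        return out

    rng = np.random.default_rng(5)
    dem = np.cumsum(rng.normal(0, 0.1, (32, 32)), axis=1)  # smooth-ish surface
    dem[10, 10] += 50.0                                    # inject one bad pixel
    print(abs(repair_bad_pixels(dem)[10, 10] - dem[10, 10]) > 10)  # True: outlier removed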
2013, 35(9): 2168-2174.
doi: 10.3724/SP.J.1146.2012.01064
Abstract:
Multi-channel SAR is an effective way to obtain high-resolution images better than 0.1 m. A radar transceiver scheme consisting of one wideband transmit channel and eight down-converting receive channels is designed and implemented in an airborne SAR system. Following the transceiver design, this paper focuses on a solution for accurately measuring and calibrating the real amplitude-phase errors of the system. A closed-loop space-radiation method extracts the amplitude-phase errors of the broadband transceiver unit, and a frequency-offset error correction technique extracts the sub-channel amplitude-phase errors of the multi-channel receiver unit. The combination of the two is used to compensate the overall system error, which would otherwise seriously degrade bandwidth synthesis and image processing. An accurate test and analysis of the system error is performed, and final airborne experiments demonstrate the validity and feasibility of the system calibration.
2013, 35(9): 2175-2179.
doi: 10.3724/SP.J.1146.2012.01660
Abstract:
The combination of Loran-C and Global Navigation Satellite Systems (GNSS) has become a new application mode that provides reliable positioning, navigation and timing services for users. Based on a study of the Loran-C signal format, and considering the long signal acquisition time and poor noise immunity of Loran-C receivers, this paper proposes a new anti-noise, fast-acquisition method for Loran-C signals based on delay correlation, and verifies its practicality through theoretical analysis and simulation tests. The method solves the problem of fast acquisition of Loran-C signals in heavy noise: the results show an acquisition time below 200 ms, and the method remains effective at signal-to-noise ratios down to -10 dB.
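The delay-correlation idea can be sketched generically: Loran-C pulses within a group repeat at a fixed 1 ms spacing, so multiplying the received signal by a 1 ms delayed copy of itself is phase-coherent at the right alignment while noise-noise terms average out. The waveform parameters below are toy values, not real Loran-C settings.

    import numpy as np

    def delay_correlate(x, lag, win):
        # Multiply the signal by a delayed copy of itself and integrate
        # over a sliding window; peaks mark the pulse-pair alignment.
        prod = x[lag:] * np.conj(x[:-lag])
        return np.abs(np.convolve(prod, np.ones(win), mode="valid"))

    # toy test: two identical pulses 1 ms apart in noise
    fs = 200_000
    lag = fs // 1000                              # 1 ms in samples
    t = np.arange(int(0.2e-3 * fs)) / fs
    pulse = np.sin(2 * np.pi * 20e3 * t) * np.hanning(t.size)
    x = np.zeros(4000)
    x[1000:1000 + pulse.size] += pulse
    x[1000 + lag:1000 + lag + pulse.size] += pulse
    x += 0.3 * np.random.default_rng(6).standard_normal(x.size)
    print(delay_correlate(x, lag, pulse.size).argmax())   # near sample 1000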
2013, 35(9): 2180-2186.
doi: 10.3724/SP.J.1146.2012.01303
Abstract:
Based on binary periodic complementary sequence sets and binary Zero-Correlation Zone (ZCZ) sequence sets, constructions of quaternary periodic complementary sequence sets with a zero-correlation zone are proposed using the inverse Gray mapping. If the initial sequence set is optimal, the resulting quaternary ZCZ periodic complementary set is optimal or almost optimal with respect to the theoretical bound. ZCZ periodic complementary sets have a larger set size than conventional complementary sets and can support more users in communication systems.
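The two building blocks — the inverse Gray mapping and the periodic correlation measure — can be sketched as follows. The bit-pair convention is one common choice, not necessarily the paper's, and the example is a toy pair rather than the paper's construction from optimal binary ZCZ complementary sets.

    import numpy as np

    def inverse_gray_map(a, b):
        # Map two binary sequences to one quaternary sequence:
        # (0,0)->0, (0,1)->1, (1,1)->2, (1,0)->3.
        table = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
        return np.array([table[p] for p in zip(a, b)])

    def periodic_corr(u, v):
        # Periodic cross-correlation with quaternary symbols mapped to
        # 4th roots of unity, q -> j**q.
        x, y = 1j ** u, 1j ** v
        return np.array([(x * np.roll(y, -s).conj()).sum() for s in range(len(x))])

    a = np.array([0, 0, 0, 1, 0, 0, 1, 0])
    q = inverse_gray_map(a, np.roll(a, 1))
    print(q, np.round(periodic_corr(q, q), 2))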
2013, 35(9): 2187-2193.
doi: 10.3724/SP.J.1146.2012.01544
Abstract:
A novel Bussgang-type blind equalization algorithm, the Exponential Expanded Multi-Modulus Algorithm (EEMMA), is proposed. Compared with traditional Bussgang blind equalization algorithms, it further decreases the steady-state error. This paper analyzes the new cost function and the effect of the error function on algorithm performance, as well as the complexity of the algorithm. A graphical approach to computing the constellation characteristic constant R is presented, together with an approximate calculation method for R that reduces the dependence on high-order statistics; the approximation frees the proposed algorithm from any prior knowledge of the constellation. Finally, simulations on dense square and non-square QAM systems demonstrate the effectiveness of the new algorithm.
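For context, the baseline Multi-Modulus Algorithm (MMA) that EEMMA extends fits in a short loop; the exponential expansion of the modulus term itself is not reproduced here, and the tap count and step size below are arbitrary.

    import numpy as np

    def mma_equalize(x, consts, n_taps=11, mu=1e-3):
        # MMA drives the real and imaginary parts of the equalizer
        # output toward the per-dimension modulus R = E[a_R^4]/E[a_R^2].
        aR = consts.real
        R = (aR ** 4).mean() / (aR ** 2).mean()
        w = np.zeros(n_taps, complex)
        w[n_taps // 2] = 1.0                      # center-spike initialization
        y_out = np.empty(len(x) - n_taps, complex)
        for k in range(len(y_out)):
            u = x[k:k + n_taps][::-1]
            y = w @ u
            e = y.real * (y.real ** 2 - R) + 1j * y.imag * (y.imag ** 2 - R)
            w -= mu * e * u.conj()                # stochastic-gradient step
            y_out[k] = y
        return y_out, w

    rng = np.random.default_rng(7)
    sym = ((rng.integers(0, 2, 5000) * 2 - 1) +
           1j * (rng.integers(0, 2, 5000) * 2 - 1)) / np.sqrt(2)   # QPSK
    x = np.convolve(sym, np.array([1.0, 0.3 + 0.2j]))              # mild ISI channel
    x += 0.02 * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))
    y, w = mma_equalize(x, sym)
    print(np.mean(np.abs(np.abs(y[-1000:]) - 1) ** 2))   # small residual modulus error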
2013, 35(9): 2194-2199.
doi: 10.3724/SP.J.1146.2012.01742
Abstract:
Compared with traditional Time Division Multiplexed (TDM) and Frequency Division Multiplexed (FDM) training sequences, Superimposed Training (ST) sequences can effectively improve spectral efficiency. However, the interference between the information and training sequences in ST severely degrades system performance, so the key to improving channel estimation is to cancel the information-sequence interference effectively. This paper first proposes a new channel estimation algorithm based on first-order statistics for time-varying channels, in which the time-varying channel is approximated by a basis expansion model and the information-sequence interference is suppressed by averaging partitioned sequences in the time domain. On this basis, an iterative channel estimation and detection scheme is proposed, exploiting the fact that the information and training sequences undergo the same fading channel. In the new scheme the Deterministic Maximum Likelihood (DML) detector is replaced by a Kalman filtering detector, and the detected symbols are treated as an additional training sequence, which improves channel estimation performance remarkably. Simulation results show that the new scheme not only cancels the information-sequence interference effectively, but also offers better performance and lower computational complexity than other schemes.
2013, 35(9): 2200-2205.
doi: 10.3724/SP.J.1146.2012.01576
Abstract:
For carrier synchronization in short-burst communication systems, a carrier estimation algorithm jointly using Rotational Periodogram Averaging (RPA) and demodulation soft information is proposed. First, the pilot sequence is rotated with different candidate frequency offsets and a coarse carrier estimate is obtained by periodogram averaging; a fine estimate is then obtained by stepwise search. The precise carrier synchronization parameters are determined by the criterion of Maximum Mean-Square Soft Output (M2S2O). Simulation results show that, with short pilots and low complexity, the proposed algorithm achieves a BER performance very close to that of optimal coherent demodulation and removes carrier offsets as large as plus or minus half the data rate. For Bit Error Rates (BER) in the range 10^-2 to 10^-4, the Signal-to-Noise Ratio (SNR) degradation is within 0.3 dB.
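The coarse/fine structure can be sketched for a known pilot. This is a generic two-stage reconstruction, without the M2S2O soft-information criterion; the segment count and search grid are assumptions.

    import numpy as np

    def cfo_estimate(rx_pilot, pilot, fs, n_seg=4, fine_steps=64):
        # Strip the known pilot, take a coarse offset from segment-
        # averaged periodograms, then refine by a stepwise search of the
        # correlation magnitude around the coarse bin.
        z = rx_pilot * np.conj(pilot)                 # carrier-only residual
        seg = len(z) // n_seg
        spec = sum(np.abs(np.fft.fft(z[i * seg:(i + 1) * seg])) ** 2
                   for i in range(n_seg))
        f_coarse = np.fft.fftfreq(seg, 1 / fs)[spec.argmax()]
        n = np.arange(len(z))
        cands = f_coarse + np.linspace(-fs / seg, fs / seg, fine_steps)
        scores = [abs(np.sum(z * np.exp(-2j * np.pi * f / fs * n))) for f in cands]
        return cands[int(np.argmax(scores))]

    rng = np.random.default_rng(8)
    pilot = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 256))   # QPSK pilot
    rx = pilot * np.exp(2j * np.pi * 0.0173 * np.arange(256))
    rx += 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    print(cfo_estimate(rx, pilot, fs=1.0))   # ~0.0173 cycles/sample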
2013, 35(9): 2206-2212.
doi: 10.3724/SP.J.1146.2013.00171
Abstract:
A communication scheme based on spread spectrum combined with synthetic aperture is proposed for long-range acoustic communication in shallow water. The Doppler effect is analyzed and an effective motion compensation method is given, using resampling combined with the All-Phase FFT (AP-FFT) to estimate frequency and phase accurately while eliminating the time ambiguity caused by Doppler. An acoustic toolbox is used to model the acoustic channel for synthetic-aperture communication simulation. The results show that the proposed motion compensation effectively removes the Doppler effect caused by relative movement between the two elements, especially at high speed. Consequently, the method achieves coherent combination of the signals transmitted by the virtual sub-arrays, reduces diversity gain loss and improves communication quality significantly.
2013, 35(9): 2213-2219.
doi: 10.3724/SP.J.1146.2012.01290
Abstract:
Most current wireless packet scheduling algorithms aim at maximizing overall system throughput or achieving some form of fairness among mobile users. However, such content-independent algorithms are ill-suited to wireless video transmission, because different video packets contribute differently to the overall video quality at the receiver. Fully exploiting the differences between video packets and accurately predicting the transmission distortion caused by lost packets can significantly improve the performance of video streaming over resource-constrained wireless networks. In this paper, a packet-level transmission distortion model is proposed to predict the quality degradation of decoded video caused by lost packets, and the model is extended with a packet-level deadline. Based on this model, a gradient-based distortion- and deadline-aware scheduling algorithm is proposed that prioritizes the transmissions of different users by considering distortion impact and deadline requirements, and fully exploits the time, frequency and power resource-allocation flexibility provided by Orthogonal Frequency Division Multiplexing (OFDM). Experimental results demonstrate that the proposed algorithm outperforms content-independent algorithms by as much as 4.3 dB in average Peak Signal-to-Noise Ratio (PSNR).
2013, 35(9): 2220-2226.
doi: 10.3724/SP.J.1146.2012.01343
Abstract:
A novel Quantum Adaptive Particle Swarm Optimization (QAPSO) method is proposed. In this algorithm the particle positions are encoded with quantum bits, whose states are updated dynamically using particle trajectory information; a mutation operation implemented with the quantum NOT gate increases particle diversity and avoids premature convergence to local optima. A Radial Basis Function (RBF) neural network is then trained with QAPSO to optimize the RBF network parameters, and a network traffic prediction model (QAPSO-RBFNN) is established. Forecasting results on real network traffic demonstrate that the proposed method converges faster and predicts more accurately than the traditional RBF network and the PSO-RBFNN, HPSO-RBFNN and APSO-RBFNN variants, and its forecasting performance is stable across different time scales.
2013, 35(9): 2227-2233.
doi: 10.3724/SP.J.1146.2012.01588
Abstract:
Because of their high complexity and slow convergence, traditional methods cannot solve the QoS multicast routing problem well enough to satisfy network requirements. A Harmony Search algorithm based on Child-Node Encoding (CNE-HS) is proposed for better performance. It improves on three aspects: a new method for creating initial and new solutions, which speeds convergence; a new dynamic parameter-adjustment scheme, which balances global and local search; and a new child-node-based encoding mechanism, which accelerates the improvisation of new solutions. Theoretical analysis and simulation results establish the low complexity of CNE-HS and show that it outperforms GA and the HS-based algorithm using Node Parent Index (HSNPI) in both convergence speed and cost.
2013, 35(9): 2234-2239.
doi: 10.3724/SP.J.1146.2012.01527
Abstract:
In sparse-target localization algorithms based on orthogonalization, the orthogonalization preprocessing degrades the sparsity of the original signal. A novel sparse-target localization algorithm based on LU decomposition is proposed. It casts target localization as a compressive sensing problem by gridding the sensing area, and then uses LU decomposition to obtain a new observation dictionary that effectively satisfies the Restricted Isometry Property (RIP). Moreover, the sparsity of the original signal is preserved during the preprocessing of the observed data, which guarantees reconstruction performance and improves localization accuracy. Experimental results show that, compared with the orthogonalization-based algorithm, the proposed algorithm performs much better and markedly improves target localization accuracy.
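A minimal sketch of the preprocessing mechanics, under assumed sizes: factor the sensing matrix as Phi = P L U, apply the triangular solve to the measurements so that U becomes the new observation dictionary, and recover with a generic greedy solver (plain OMP here; the paper's recovery step is not specified in the abstract).

```python
# LU-based preprocessing for gridded sparse-target localization (illustrative sizes).
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(2)
m, n, k = 40, 100, 3                      # measurements, grid cells, targets
phi = rng.standard_normal((m, n))         # stand-in sensing matrix from the grid model
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = 1.0  # sparse target indicator vector
y = phi @ s

p, l, u = lu(phi)                         # Phi = P L U
y_new = solve_triangular(l, p.T @ y, lower=True)  # L^{-1} P^T y = U s
dict_new = u                              # new observation dictionary

# simple OMP recovery against the new dictionary
resid, support = y_new.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(dict_new.T @ resid))))
    sub = dict_new[:, support]
    coef, *_ = np.linalg.lstsq(sub, y_new, rcond=None)
    resid = y_new - sub @ coef

print(sorted(support), np.flatnonzero(s))  # recovered vs. true grid cells
```

Note that the triangular solve only re-expresses the measurements; the unknown vector s, and hence its sparsity, is untouched, which is the point the abstract makes against orthogonalization.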
2013, 35(9): 2240-2246.
doi: 10.3724/SP.J.1146.2012.01590
Abstract:
In the real world, the structure of a social network is not static but evolves over time, and the same holds for its communities, an essential feature of social networks. An incremental dynamic community detection algorithm is proposed to reveal the actual communities in attribute-weighted networks. It associates attribute information with the topology graph, defines a topological-potential attraction between nodes and communities, and updates the current community structure incrementally by comparison with the previous time step. Experiments on real network data show that the proposed algorithm discovers meaningful community structure more effectively and in a more timely fashion, with lower time complexity.
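The following small sketch illustrates the topological-potential attraction idea in one common form: each community member exerts a Gaussian-decaying influence along graph distances, and a node is assigned to the community that attracts it most. The influence factor sigma, the toy graph, and the seed communities are assumptions; the paper's attribute weighting and incremental update are not reproduced.

```python
# Topological-potential attraction between a node and candidate communities (toy example).
import networkx as nx
import numpy as np

g = nx.karate_club_graph()     # stand-in for an attribute-weighted network
sigma = 1.5                    # assumed influence factor

def potential_at(node, members):
    # sum of Gaussian-decaying influence of community members on `node`
    total = 0.0
    for m in members:
        try:
            d = nx.shortest_path_length(g, node, m)
        except nx.NetworkXNoPath:
            continue
        total += np.exp(-(d / sigma) ** 2)
    return total

communities = {0: {0, 1, 2, 3}, 1: {30, 31, 32, 33}}  # assumed seed communities
node = 8
best = max(communities, key=lambda c: potential_at(node, communities[c]))
print(f"node {node} is attracted to community {best}")
```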
2013, 35(9): 2247-2253.
doi: 10.3724/SP.J.1146.2012.01360
Abstract:
Resource sharing is one of the key issues in distributed computing, and load balancing is the fundamental approach to sharing scarce resources in a distributed computing system. However, existing load balancing methods are mostly confined to homogeneous networks. With the growing diversity of computing terminals, load balancing for heterogeneous networks is increasingly needed. In this paper, a diffusion-based dynamic load balancing algorithm is proposed for heterogeneous networks, and it is rigorously proved that all nodes converge to the expected balance point. Numerical results show that the algorithm outperforms the GDA algorithm proposed by Rotaru et al. (2004) and has good convergence properties over many network topologies, including mesh, star, and torus; it converges quickly even on randomly generated networks.
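A compact sketch of capacity-aware diffusion is given below: each node repeatedly exchanges load with its neighbours in proportion to the difference in load-per-capacity, so all nodes converge to a common normalized load, which is the natural balance point in a heterogeneous network. The ring topology, capacities, and diffusion coefficient are illustrative assumptions, not the paper's comparison setup against GDA.

```python
# Diffusion-based load balancing on a heterogeneous ring (assumed topology/parameters).
import numpy as np

adj = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)  # ring of 6 nodes
cap = np.array([1.0, 2.0, 1.0, 4.0, 1.0, 2.0])   # heterogeneous capacities
load = np.array([30.0, 0.0, 0.0, 0.0, 0.0, 0.0]) # all load starts on node 0
alpha = 0.2                                      # diffusion coefficient

for _ in range(500):
    ratio = load / cap                           # normalized load per node
    # symmetric exchange along edges conserves total load
    load = load + alpha * (adj * (ratio[None, :] - ratio[:, None])).sum(axis=1)

print(load / cap)  # converges to total load / total capacity on every node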
2013, 35(9): 2254-2260.
doi: 10.3724/SP.J.1146.2012.01669
Abstract:
Existing failure recovery methods for routing systems do not effectively address storage cost, redundant recovery, or the protection of AS (Autonomous System) interests. Against the background of cascading failures that readily occur under paralyzing attacks, a failure recovery approach called 3R (Robust Route Recovery), based on structured backup subgraphs, is proposed. First, to reduce space complexity, two algorithms, one for topology keypoints and one for important adjacent nodes, are designed to satisfy the demands of small radix and low growth rate, as well as redundant recovery for multiple nodes within the same subgraph. Second, considering AS interests, neighboring links are sorted by traffic weight to trade off failure recovery against private routing policy. Finally, structured backup subgraphs are generated from the redundant recovery sets through multiple iterations. Simulation results show the effectiveness of the 3R approach.
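As a heavily hedged illustration of two building blocks the abstract suggests, the sketch below approximates "topology keypoints" by articulation points (an assumption about what the paper means) and sorts each keypoint's neighbouring links by traffic weight. The paper's actual keypoint criterion and subgraph construction are not reproduced.

```python
# Keypoint detection and traffic-weight link sorting (illustrative approximation).
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    (1, 2, 5.0), (2, 3, 1.0), (3, 4, 2.0),
    (2, 4, 4.0), (4, 5, 3.0),            # weight = observed traffic (assumed)
])

keypoints = list(nx.articulation_points(g))  # nodes whose removal disconnects the graph
print("keypoints:", keypoints)

def links_by_traffic(node):
    # neighbouring links sorted by descending traffic weight
    return sorted(g.edges(node, data="weight"), key=lambda e: -e[2])

for k in keypoints:
    print(k, links_by_traffic(k))
```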
2013, 35(9): 2261-2265.
doi: 10.3724/SP.J.1146.2012.01732
Abstract:
The nonlinear dynamics of the Superbuck converter are investigated, since the Superbuck converter is an important topology in solar photovoltaic power generation systems. A discrete mapping model of the converter is derived by the stroboscopic mapping method from the converter's state equations. The bifurcation diagram of the inductor current is then obtained with the reference voltage as the bifurcation parameter. The nonlinear dynamics are further studied with an experimental circuit, verifying the full evolution from the stable state through period-doubling bifurcations to chaos. In addition, analysis of the power spectra of the inductor current shows that the ElectroMagnetic Interference (EMI) of the system can be effectively reduced by applying chaos theory.
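The generic procedure for building such a bifurcation diagram from a stroboscopic map is sketched below: iterate the map at each parameter value, discard the transient, and plot the retained samples against the swept parameter. The logistic map is used purely as a stand-in that exhibits the same period-doubling route to chaos; it is not the Superbuck model derived in the paper.

```python
# Bifurcation-diagram construction from a discrete map (logistic stand-in).
import numpy as np
import matplotlib.pyplot as plt

pts_x, pts_y = [], []
for r in np.linspace(2.5, 4.0, 800):   # swept bifurcation parameter
    x = 0.3
    for n in range(600):
        x = r * x * (1 - x)            # stand-in stroboscopic map
        if n >= 500:                   # keep post-transient samples only
            pts_x.append(r)
            pts_y.append(x)

plt.plot(pts_x, pts_y, ",k")
plt.xlabel("bifurcation parameter (reference voltage in the paper)")
plt.ylabel("stroboscopically sampled state (inductor current in the paper)")
plt.show()
```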
2013, 35(9): 2266-2271.
doi: 10.3724/SP.J.1146.2012.01036
Abstract:
In storage services, a searchable encryption scheme allows users to access their encrypted data selectively while keeping the search data confidential. Because of its higher search accuracy, conjunctive-keyword searchable encryption (where a query is a Boolean combination of multiple keywords) is especially significant in secure storage applications. However, existing searchable encryption schemes have shortcomings: the conjunctive-keyword trapdoor is too large, search is slow, and multi-user search is not supported. This paper proposes an efficient conjunctive-keyword searchable encryption scheme in which keywords are encrypted successively by the authorized user and by the storage server, and authorized users can search encrypted documents with a trapdoor generated from the conjunctive keywords. The scheme is provably secure under the decisional Diffie-Hellman assumption. Compared with existing schemes, it improves overall computation and communication costs, including the trapdoor size and the speed of keyword encryption and search. The proposed scheme also supports multiple users: users can be added or revoked dynamically, so data can be shared directly on the storage server.
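To show only the structure of "keywords encrypted successively by user and server", here is a toy sketch in a Diffie-Hellman-style group: the user blinds a keyword hash with its key, the server re-blinds with its own, and conjunctive search reduces to checking that every doubly-blinded query keyword appears in the index. This is an illustration of the idea, with toy parameters, and is NOT the paper's construction or a secure scheme.

```python
# Toy successive keyword encryption by user and server (NOT secure, NOT the paper's scheme).
import hashlib

P = 0xFFFFFFFFFFFFFFC5          # the prime 2**64 - 59 (toy-sized group)

def h(word):                    # hash a keyword into the group
    return int.from_bytes(hashlib.sha256(word.encode()).digest(), "big") % P

user_key, server_key = 123457, 789013   # assumed secret keys

def user_encrypt(word):         # first stage, done by the authorized user
    return pow(h(word), user_key, P)

def server_encrypt(c):          # second stage, done by the storage server
    return pow(c, server_key, P)

# indexing a document's keywords
index = {server_encrypt(user_encrypt(w)) for w in ["alert", "routing", "qos"]}

# trapdoor for a conjunctive query: all keywords must match
trapdoor = [user_encrypt(w) for w in ["routing", "qos"]]
match = all(server_encrypt(t) in index for t in trapdoor)
print(match)   # True
```

Because the server holds its own key, revoking a user's key (or rotating the server key) invalidates old trapdoors, which loosely mirrors the dynamic add/revoke property the abstract claims.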
2013, 35(9): 2272-2277.
doi: 10.3724/SP.J.1146.2013.00027
Abstract:
An efficient numerical approach is presented for analyzing the electromagnetic scattering characteristics of conducting targets, based on the Characteristic Basis Function Method (CBFM). Combined with the Improved Fast Dipole Method (IFDM), the matrix-vector product is cast in an aggregation-translation-disaggregation form, which accelerates the matrix-vector multiplications both in generating the Secondary-level Characteristic Basis Functions (SCBFs) and in constructing the reduced matrix. At comparable accuracy, computational time and memory consumption are reduced significantly compared with the traditional combination of the Fast Dipole Method and CBFM (FDM-CBFM). Numerical results demonstrate that the proposed method is accurate and efficient.
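The CBFM reduction step itself can be shown in a few lines of numpy: per-block characteristic basis functions are extracted (here via SVD of random stand-in block solutions) and the full system Z I = V is compressed to the small reduced system Qᴴ Z Q a = Qᴴ V. The dense products below stand in for the IFDM aggregation-translation-disaggregation acceleration, which is the paper's actual contribution and is not reproduced.

```python
# Skeletal CBFM reduction: block CBFs -> reduced matrix -> expanded solution.
import numpy as np

rng = np.random.default_rng(3)
n_blocks, blk, n_cbf = 4, 50, 6
n = n_blocks * blk
z = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in impedance matrix

# block-diagonal basis Q assembled from per-block CBFs (SVD of stand-in solutions)
q = np.zeros((n, n_blocks * n_cbf))
for b in range(n_blocks):
    sols = rng.standard_normal((blk, 3 * n_cbf))  # stand-in block solution vectors
    u_svd = np.linalg.svd(sols, full_matrices=False)[0]
    q[b * blk:(b + 1) * blk, b * n_cbf:(b + 1) * n_cbf] = u_svd[:, :n_cbf]

# excitation chosen so the exact current lies in span(Q), mimicking the CBFM
# assumption that the CBFs capture the solution space
v = z @ (q @ rng.standard_normal(q.shape[1]))

z_red = q.T @ z @ q                 # 24 x 24 reduced matrix instead of 200 x 200
current = q @ np.linalg.solve(z_red, q.T @ v)
print(np.linalg.norm(z @ current - v) / np.linalg.norm(v))  # ~ 0
```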
2013, 35(9): 2283-2286.
doi: 10.3724/SP.J.1146.2012.01497
Abstract:
Neuron Action Potentials (NAPs) carry the most critical information about neuronal activity. Based on a discussion of the sparsity of NAPs in the Discrete Wavelet Transform (DWT) domain and an analysis of the related Compressive Sensing (CS) measurements, a compressed sampling method for NAPs based on random convolution is proposed. Three compression measurement methods are compared with respect to signal recovery and physical realizability. Experimental results show that, of the three, the random-convolution scheme is the best compressed sampling method because its hardware realization is the simplest.
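A short sketch of the random-convolution measurement operator follows: circularly convolve the signal with a random waveform (implemented in the Fourier domain with a random unit-magnitude, conjugate-symmetric spectrum) and keep a random subset of samples. The spike-train test signal and the sizes are assumptions; recovery (e.g., via DWT-domain sparsity) is omitted.

```python
# Random-convolution compressive measurement of a spike-like signal (assumed sizes).
import numpy as np

rng = np.random.default_rng(4)
n, m = 512, 128                        # signal length, number of measurements

x = np.zeros(n)                        # stand-in action-potential-like signal
x[[100, 101, 102, 300, 301]] = [1.0, 3.0, 1.0, -2.0, 2.5]

# random convolution: multiply the spectrum by random unit-magnitude phases,
# kept conjugate-symmetric so the convolved signal stays real
phase = np.exp(2j * np.pi * rng.random(n // 2 - 1))
sigma = np.concatenate(([1.0], phase, [1.0], np.conj(phase[::-1])))
convolved = np.real(np.fft.ifft(np.fft.fft(x) * sigma))

keep = rng.choice(n, m, replace=False) # random subsampling of the convolution output
y = convolved[keep]                    # compressed measurements
print(y.shape)
```

The appeal noted in the abstract is hardware simplicity: convolution with a fixed random waveform followed by subsampling maps onto an analog filter plus a slow ADC, unlike dense random-matrix measurement.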
2013, 35(9): 2278-2282.
doi: 10.3724/SP.J.1146.2012.01380
Abstract:
At the 15th IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC 2012), Fujioka et al. proposed a generic construction of Authenticated Key Exchange (AKE) from a Key Encapsulation Mechanism (KEM), called the GC protocol, which was proven secure in the CK+ security model. This paper shows by cryptanalysis that the GC protocol is not CK+ secure. Concrete attacks are given in which an outside adversary, knowing neither the static nor the ephemeral keys of the users, impersonates a valid user. The errors in the original security proof are also analyzed.