
2011 Vol. 33, No. 9

Articles
Cramer-Rao Bounds for FOA and TOA Estimation from Galileo Search and Rescue Signal
Wang Kun, Wu Si-Liang, Han Yue-Tao
2011, 33(9): 2033-2038. doi: 10.3724/SP.J.1146.2011.00017
Abstract:
Considering the uncertainty of the message bit width, the Cramer-Rao Bounds (CRBs) for Frequency Of Arrival (FOA) and Time Of Arrival (TOA) estimation from a single Galileo Search And Rescue (SAR) signal are investigated. Expressions for the elements of the Fisher information matrix of the Galileo SAR signal are derived. When calculating the sum of the squares of the Dirac delta function, the properties of the Dirac delta function and Parseval's theorem are used to transform the computation from the time domain to the frequency domain, and closed-form analytical solutions of the CRBs for FOA and TOA estimation are derived. Numerical calculation, Monte Carlo simulation, and measurement results validate the derived CRBs, which can be used to evaluate the performance of estimation algorithms for these parameters.
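The time-domain-to-frequency-domain move via Parseval's theorem that the abstract mentions can be illustrated numerically; this is a sketch with a toy sequence of our own choosing, not data from the paper:

```python
import numpy as np

# Parseval's theorem for the DFT: sum |x[n]|^2 = (1/N) * sum |X[k]|^2,
# which lets an awkward time-domain sum be evaluated in the frequency domain.
x = np.array([1.0, -2.0, 3.0, 0.5])
X = np.fft.fft(x)

time_energy = np.sum(np.abs(x) ** 2)            # energy summed in the time domain
freq_energy = np.sum(np.abs(X) ** 2) / len(x)   # the same energy via the DFT
```

For this sequence both sums equal 14.25, as the theorem guarantees for any finite sequence.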
Estimation of the Number of Signal Components Based on Singular Values of Noise Subspace
Zhou Xin-Peng, Han Feng, Wei Guo-Hua, Wu Si-Liang
2011, 33(9): 2039-2044. doi: 10.3724/SP.J.1146.2010.01216
Abstract:
A new method is proposed to improve the estimation of the number of signal components in low Signal-to-Noise Ratio (SNR) environments, based on the singular values of the noise subspace and the principle of Constant False Alarm Rate (CFAR) detection. By studying the relationship between the singular values of the Hankel matrix and the noise energy, and exploiting the fact that the envelope of Gaussian white noise follows an exponential distribution, the detection threshold for the noise-subspace singular values is obtained for a given probability of false alarm. Simulation results show that the method is effective under low-SNR conditions.
Robust Capon Beamforming with Orthomodular Constraint
Xie Bin-Bin, Gan Lu, Li Li-Ping
2011, 33(9): 2045-2049. doi: 10.3724/SP.J.1146.2011.00127
Abstract:
The standard Capon beamformer suffers serious distortion under small snapshot numbers, steering-vector mismatch, or high SNR. To improve the robustness of the beam, a robust Capon Beamforming algorithm with Orthomodular Constraints (OCCB) is proposed, in which an exact mathematical expression for the weights is obtained via the second derivative. On the basis of the Capon beamformer, the algorithm constrains the orthogonality between the array weights and the noise subspace, achieving diagonal loading of the noise eigenvalues and reducing their spread. The influence of this partial diagonal loading on the desired signal, interference, and noise is analyzed. Under small snapshot numbers, steering-vector mismatch, or high SNR, the algorithm yields beams with lower sidelobes and a more accurate mainlobe direction, while interference is better suppressed. Simulation results confirm the effectiveness of the proposed algorithm.
Spline Function Renyi Entropy Based Time Diversity Wavelet Blind Equalization Algorithm
Guo Ye-Cai, Gong Xiu-Li, Zhang Yan-Ping
2011, 33(9): 2050-2055. doi: 10.3724/SP.J.1146.2011.00110
Abstract:
To overcome the heavy computational load and slow convergence of the Time-diversity fractionally spaced Decision feedback blind equalization algorithm (TD), a Spline function Renyi entropy based Time-diversity Decision feedback blind equalization algorithm based on the Wavelet transform (STDW) is proposed. In the proposed algorithm, a spline-function Renyi entropy is defined and used as the cost function for updating the weight vector; a fractionally spaced equalizer is employed to obtain more detailed channel information; an orthogonal wavelet transform is used to reduce the autocorrelation of the input signals and thereby speed up convergence; and time diversity together with a decision feedback filter is used to reduce the influence of multipath fading channels on communication quality. Simulation experiments with an underwater acoustic channel verify the effectiveness of the proposed algorithm.
Real-time Unambiguous Passive Direction Finding for Multiple Sound Sources with a Widely Spaced Microphone Array
Xu Zhi-Yong, Zhao Zhao, Liu Ming
2011, 33(9): 2056-2061. doi: 10.3724/SP.J.1146.2010.01273
Abstract:
A multi-source time-delay estimation algorithm based on an iterative cross-spectrum weighted histogram is studied for real-time passive direction finding of multiple sound sources with a widely spaced microphone array. By using the short-time spectral sparseness and orthogonality assumptions of audio signals, together with the frequency-varying character of the delay ambiguity periods, instantaneous maximum-SNR peaks at the true delays of concurrent sounds can be obtained simultaneously, without the pronounced sidelobes caused by phase-difference wraparound ambiguity. As a result, the limitation common to most existing sparseness-based methods, that the microphone spacing must be no greater than half the minimum wavelength of the signals, is removed, allowing array systems to have both large aperture and low complexity. Simulation results verify the effectiveness of the studied technique.
Efficient Compressed Sensing Quantization of LSP Parameters Based on the Approximate KLT Domain
Xiao Qiang, Chen Liang, Zhu Tao, Huang Jian-Jun
2011, 33(9): 2062-2067. doi: 10.3724/SP.J.1146.2011.00014
Abstract:
For low-bit-rate speech coding applications, it is very important to quantize the Line Spectrum Pair (LSP) parameters accurately using as few bits as possible without sacrificing speech quality. In this paper, the sparsity of LSP parameters in the approximate Karhunen-Loeve Transform (KLT) domain is investigated, and an efficient LSP parameter quantization scheme based on Compressed Sensing (CS) is proposed. In the encoder, the LSP parameters extracted from consecutive speech frames are compressed by CS in the approximate KLT domain to produce a low-dimensional measurement vector, and the measurements are quantized with a split vector quantizer. In the decoder, the original LSP vector is reconstructed from the quantized measurements by the orthogonal matching pursuit method, and the reconstructed LSP vector serves as the final quantized value of the original LSP parameters. Experimental results show that the scheme achieves transparent quality at 5 bits/frame with realistic codebook storage and search complexity.
Waveform Design and Imaging Algorithm Research of Random Frequency Stepped Chirp Signal ISAR
He Jin, Luo Ying, Zhang Qun, Yang Xiao-You
2011, 33(9): 2068-2075. doi: 10.3724/SP.J.1146.2011.00033
Abstract:
The frequency-stepped chirp signal is an effective radar waveform that uses digital signal processing to obtain high range resolution with a relatively narrow instantaneous bandwidth. Because the anti-jamming ability of frequency-stepped chirp radar is limited, a random frequency-stepped chirp signal is designed, and a random frequency-stepped chirp ISAR imaging algorithm based on compressive sensing theory is proposed, with which the range profile and a high-resolution 2-D ISAR image can be obtained from fewer subpulses. The influence of target velocity on frequency-stepped chirp radar imaging is then analyzed, a random frequency-stepped chirp signal containing velocity-measuring subpulses is designed, and a velocity estimation and compensation method combining time-frequency analysis, the Radon transform, and binary mathematical morphology is proposed. Simulations verify the capability of random frequency-stepped chirp ISAR and the effectiveness of the proposed imaging algorithm.
Method of Scattering Centers Association and 3D Reconstruction for Non-cooperative Radar Target
Zhang Ying-Kang, Xiao Yang, Hu Shao-Hai
2011, 33(9): 2076-2082. doi: 10.3724/SP.J.1146.2010.01449
Abstract:
To address the problem of associating 1-D scattering centers observed at unknown viewing angles, which arises in non-cooperative radar target 3-D reconstruction, a scattering-center association method based on geometric constraints is proposed. In the method, a subset of reliable scattering centers is automatically selected and effectively associated by checking the back-projection error. Meanwhile, with the motion parameters estimated from the range data of the associated scattering centers, more scattering centers on the target can be associated and reconstructed. Simulations verify that the proposed algorithm is applicable to complex situations where missing, spurious, and overlapping points exist among the 1-D scattering centers, and that it effectively enhances the robustness of scattering-center association and reconstruction for targets with unknown motion.
Feature Extraction of Precession and Structure of Cone-shaped Object Based on Time-HRRP Distribution
Ai Xiao-Feng, Zou Xiao-Hai, Li Yong-Zhen, Zhao Feng, Xiao Shun-Ping
2011, 33(9): 2083-2088. doi: 10.3724/SP.J.1146.2011.00097
Abstract:
Using multiple features for ballistic target recognition is a trend in the ballistic missile defense field. In the published literature, extracting the structural features of ballistic targets requires knowledge of some precession parameters, while extracting the precession features requires knowledge of some structural parameters, creating a deadlock in ballistic target feature extraction. This paper first analyzes in detail the trace of each scatterer in the time-High Range Resolution Profile (HRRP) distribution of a cone-shaped warhead. Then, based on the time-HRRP distribution and combining the trace and geometric relation of each scatterer, the precession features, including the precession frequency, precession angle, initial phase angle, and the angle between the precession axis and the Line Of Sight (LOS), and the structural features, including the target height, the radius of the undersurface, the distance between the centroid and the undersurface, and the distance between the centroid and the radar, are extracted jointly. Finally, simulation results are given to validate the proposed algorithms.
A Two-dimensional Spectrum for MEO SAR Based on High-order Modified Hyperbolic Range Equation
Bao Min, Xing Meng-Dao, Li Ya-Chao, Bao Zheng
2011, 33(9): 2089-2096. doi: 10.3724/SP.J.1146.2011.00047
Abstract:
Because of the long integration time of Medium Earth Orbit SAR (MEO SAR), the hyperbolic range equation based on a linear-trajectory model is not suitable for MEO SAR. To address this issue, a high-order modified hyperbolic range equation is proposed. By incorporating an additional linear component and a quartic component, its fourth-order Taylor series expansion matches that of the actual MEO SAR range history exactly. The two-dimensional spectrum based on the high-order modified hyperbolic range equation is then derived analytically using an approximate azimuth stationary point, and its accuracy is analyzed with the method of series reversion, showing agreement up to the quartic phase term. Simulation results show that the proposed range equation and two-dimensional spectrum are accurate and can produce finely resolved imagery over the entire aperture.
Waveform Design for Compressive Sensing Radar Based on Minimizing the Statistical Coherence of the Sensing Matrix
He Ya-Peng, Zhuang Shan-Na, Li Hong-Tao, Zhu Xiao-Hua
2011, 33(9): 2097-2102. doi: 10.3724/SP.J.1146.2011.00021
Abstract:
To enhance the target-information extraction ability of Compressive Sensing Radar (CSR), an optimal CSR waveform design method based on minimizing the statistical coherence of the sensing matrix is proposed. First, a general CSR model is established and the waveform-optimization objective function, which minimizes the coherence of the sensing matrix, is derived. Then, the Genetic Algorithm (GA) is employed to solve this problem, taking a polyphase-coded signal as an example. The optimized waveform makes the orthogonality of the sub-sensing matrix approximately optimal. Compared with traditional waveforms, it effectively reduces the target-information estimation error, raises the permissible upper bound on the number of detectable targets, and enhances the accuracy and robustness of CSR target-information extraction. Computer simulations show the effectiveness of the method.
Doppler Ambiguity Resolution Based on Compressive Sensing Theory
Zhang Yu-Xi, Sun Jin-Ping, Zhang Bing-Chen, Hong Wen
2011, 33(9): 2103-2107. doi: 10.3724/SP.J.1146.2011.00073
Abstract:
Resolving Doppler ambiguity for multiple targets is one of the key processing techniques for low Pulse Repetition Frequency (PRF) radar. A new Doppler ambiguity resolution approach based on Compressive Sensing (CS) theory is presented. Exploiting the undersampling in the time domain during the Coherent Processing Interval (CPI) and the sparsity of the Doppler spectrum in a multiple-PRF system, the CS model of Doppler ambiguity resolution is constructed, and the Orthogonal Matching Pursuit (OMP) algorithm is adopted to estimate the unambiguous Doppler spectrum directly. The method is validated through simulations of resolving Doppler ambiguity in multiple-target situations for a staggered multiple-PRF radar system.
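A minimal sketch of the OMP recovery step that this and the previous abstract rely on; the sensing matrix, sparsity level, and the `omp` helper below are illustrative assumptions, not the papers' setups:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy example: a 2-sparse "Doppler spectrum" seen through 20 random samples.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)   # normalize columns of the sensing matrix
x_true = np.zeros(50)
x_true[[7, 31]] = [2.0, -1.5]    # two targets at hypothetical Doppler bins
x_hat = omp(A, A @ x_true, k=2)  # at most k nonzero entries recovered
```

The greedy selection plus a least-squares refit per iteration is what distinguishes OMP from plain matching pursuit.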
A Modified ω-k Algorithm for Wide-field and High-resolution Spaceborne Spotlight SAR
Liu Yan, Sun Guang-Cai, Xing Meng-Dao
2011, 33(9): 2108-2113. doi: 10.3724/SP.J.1146.2011.00150
Abstract:
Based on the equivalent-squint range model, a modified ω-k algorithm for wide-field, high-resolution spaceborne spotlight SAR is proposed. To process wide-field, high-resolution spaceborne SAR data precisely, the Stolt mapping of the classical ω-k algorithm is modified by exploiting the relationship between range and equivalent velocity. The modified Stolt mapping accounts for the range dependence of the equivalent velocity, so that the range cell migration is corrected accurately. Based on the modified Stolt mapping, the modified ω-k algorithm is presented; it guarantees uniform image quality along range and is validated by simulation.
Motion Compensation for Airborne SAR with Synthetic Bandwidth
Zhang Mei, Liu Chang, Wang Yan-Fei
2011, 33(9): 2114-2119. doi: 10.3724/SP.J.1146.2011.00190
Abstract:
The bandwidth synthesis technique provides fine range resolution for advanced airborne SAR systems. In strip-map mode, precise motion compensation is essential to obtain azimuth resolution as fine as the range resolution. This paper studies several key issues of the two-step motion compensation method in a bandwidth-synthesis system. In the first compensation step, precise Inertial Measurement Unit (IMU)/GPS data are used, but the real target distance and the ideal flight line are generally unknown; corner reflectors are used here to estimate these values. In the second step, an improved Phase Gradient Autofocus (PGA) algorithm is used to compensate the residual time-varying phase error. Real-data results verify that, after the two-step compensation, an azimuth resolution better than 0.25 m is finally obtained.
Space-time Separated Interpolation Method for Forward-looking Airborne Radar Clutter Spectrum
Liu Jin-Hui, Liao Gui-Sheng, Li Ming
2011, 33(9): 2120-2124. doi: 10.3724/SP.J.1146.2010.01133
Abstract:
In Forward-Looking Airborne Radar (FLAR), the covariance matrix of the Cell Under Test (CUT) cannot be estimated accurately because of the range-dependence problem, which degrades the clutter rejection performance of Space-Time Adaptive Processing (STAP). The space-time interpolation method is one approach that handles range dependence efficiently; however, it requires a huge computational load, which poses a great challenge to real-time processing. This paper proposes a space-time separated interpolation method, which performs the interpolation transform separately in the spatial and temporal domains. Compared with the space-time interpolation method, the proposed method greatly reduces the computational load at the cost of only a small loss in STAP performance. In addition, it achieves good performance in the presence of antenna errors. Computer simulation results show the validity of the proposed method.
Research on the Estimation of Clutter Rank for Coherent Airborne MIMO Radar
Zhang Xi-Chuan, Zhang Yong-Shun, Xie Wen-Chong, Wang Yong-Liang
2011, 33(9): 2125-2131. doi: 10.3724/SP.J.1146.2010.01135
Abstract:
This paper focuses on the problem of clutter rank estimation for coherent airborne MIMO radar with arbitrary transmitted-waveform synthesis structures. A new constructive method of clutter rank estimation for Multiple Input Multiple Output (MIMO) radar is presented: the clutter rank is estimated from an equivalent matrix constructed from the waveform synthesis structure and the sparse structure, instead of from the computationally expensive direct decomposition of the Clutter Covariance Matrix (CCM). A simple and effective rule for estimating the clutter rank is proposed and strictly proved based on the decomposition of the constructed matrix. The quantitative relationship among the synthesis structure, the sparse structure, and the clutter Degrees Of Freedom (DOF) is established, and the eigenspectrum structure of the clutter under an arbitrary transmitted-waveform synthesis structure can be obtained from the method and rule. Simulations verify their accuracy. The proposed theory can be used to design an optimal synthesis structure and an effective STAP algorithm.
A New Method for Ship Detection in Harbor Region of SAR Images
Chen Qi, Wang Na, Lu Jun, Shi Gong-Tao, Kuang Gang-Yao
2011, 33(9): 2132-2137. doi: 10.3724/SP.J.1146.2011.00018
Abstract:
Ship detection in harbor regions is an important aspect of SAR ocean application research. Fast and accurate ship detection can greatly improve the capability of automatic SAR image interpretation. By analyzing the characteristics of ships berthed in harbors, a new method is proposed for ship detection in the harbor regions of SAR images. First, the SAR image of the harbor coastwise region is extracted based on the harbor coastline. Then, a detailed analysis of the clutter statistics of the harbor coastwise region in the SAR image is presented. Finally, ship detection is performed with a Constant False Alarm Rate (CFAR) detector based on the G0 distribution. Experimental results demonstrate that the proposed method can effectively separate ships of various shapes in the harbor from the land, and is characterized by a high detection rate and a low false alarm rate.
The Impact of Non-ideal Front-end Characteristic on PN Zero Value Measurement of Navigation Receivers
Li Bai-Yu, Chen Lei, Li Cai-Hua, Ou Gang
2011, 33(9): 2138-2143. doi: 10.3724/SP.J.1146.2010.01392
Abstract:
Zero value measurement is an important measurement for navigation receivers. The non-ideal characteristics of an actual receiver channel, including its amplitude-frequency and phase-frequency responses, influence the zero value estimate in different ways, and no universal, quantitative analytic method for this problem has been available. This paper analyzes quantitatively the impact of arbitrary non-ideal channel characteristics on zero value measurement, and then verifies the conclusions on a software receiver; the measured results agree well with the analysis. The analytic model and method in this paper can be extended to analyze the impact of non-ideal channel characteristics on the carrier-phase zero value.
Fast High-dimensional Feature Matching for Retrieving Remote Sensing Images
Chen Hui-Zhong, Chen Yong-Guang, Jing Ning, Chen Luo
2011, 33(9): 2144-2151. doi: 10.3724/SP.J.1146.2011.00074
Abstract:
The key to applying high-dimensional local features to remote sensing image retrieval is improving the efficiency of feature matching. A new Compressed Priority Filter (CPF) algorithm is investigated that quantizes the feature vectors to compress the search space, constructs a high-dimensional index, searches candidates via a priority queue, and evaluates the exact feature vectors to obtain the nearest neighbors. On the basis of CPF, a fast remote sensing image retrieval algorithm based on Speeded Up Robust Feature (SURF) descriptors is then proposed. Analysis and experiments show that CPF reduces disk I/O and floating-point computation, and that when the number of features is large it is much faster and more precise than the classical BBF algorithm. The SURF-based fast retrieval algorithm quickly returns the correct target image from the gallery, together with similar images.
An Image Matching Algorithm Based on SCCH Feature Descriptor
Tang Yong-He, Lu Huan-Zhang, Hu Mou-Fa
2011, 33(9): 2152-2157. doi: 10.3724/SP.J.1146.2011.00007
Abstract:
To address the difficulty of balancing real-time performance and robustness in image matching with local features, an image matching algorithm based on the Signed Contrast Context Histogram (SCCH) feature descriptor is presented. Multi-scale feature points are extracted with the Harris operator in Gaussian pyramid images to reduce the amount of data to be processed. The feature descriptor is built from the means of the gray-value differences in the sub-regions of the feature-point neighborhood, which not only reduces the complexity of building the descriptor and its dimensionality, but also enhances its robustness and distinctiveness. Furthermore, the absolute distance between descriptors is used as the similarity measure for matching feature points, lessening the computation. Simulation results indicate that the proposed algorithm remains invariant under image zoom, rotation, blurring, and illumination changes, as well as small view-angle changes, and matches faster.
A Data-driven Fusion and Its Application to Acoustic Vehicle Classification
Lin Yue-Song, Chen Lin, Guo Bao-Feng
2011, 33(9): 2158-2163. doi: 10.3724/SP.J.1146.2011.00156
Abstract:
Most traditional information fusion methods depend on system models, into which certain simplifications are introduced. However, as applications grow more complex, these models tend to become inadequate and to deviate from the real situation; in some cases, precise models simply cannot be built. To address this problem, two data-driven information fusion methods are presented in this paper. By combining a data-driven feature set with a model-based feature set, the performance of information fusion is improved, since the data-driven features compensate for the deficiencies of the model-based approach. The proposed method is then applied to acoustic vehicle classification, where better classification performance is achieved, demonstrating the feasibility and advantages of introducing data-driven ideas into information fusion.
Quality Assessment of Photoelectric Image Interference
Zeng Kai, Yang Hua, Di Yue, Zhang Hong
2011, 33(9): 2164-2168. doi: 10.3724/SP.J.1146.2010.01400
Abstract:
Assessing the quality of interfered images is a key issue in photoelectric interference, and the assessment result should match human visual perception. First, the characteristics of interfered images and the shortcomings of conventional image quality assessment methods are analyzed. Then, drawing on knowledge of visual perceptual characteristics, the image is divided into edge and non-edge regions by a quadtree. For the edge region, an edge contrast function is constructed to evaluate changes in the edges; for the non-edge region, the structural similarity function is introduced, and the method of dividing the assessment blocks is improved to fit the characteristics of interfered images. A function suited to interfered-image quality assessment is thus proposed. Finally, images interfered by strong light and by smoke are assessed; the results indicate that the method agrees well with visual perception.
Fast Matrix Embedding Based on Bit-control
Wang Chao, Zhang Wei-Ming, Liu Jiu-Fen
2011, 33(9): 2169-2174. doi: 10.3724/SP.J.1146.2010.01410
Abstract:
For good security and a large payload in steganography, it is desirable to embed as many message bits as possible per change of the cover object, i.e., to achieve high embedding efficiency. Matrix embedding is the most popular method for increasing embedding efficiency. Matrix embedding based on random linear codes, proposed by Fridrich, achieves high embedding efficiency but at the cost of high computational complexity. In this paper, Fridrich's matrix embedding is improved by a majority-vote parity check. First, a new cover is constructed from the original cover block and several control bits by exclusive-or operations. Second, matrix embedding is executed on the new cover, and a modification pattern with few changes on the original cover can be found quickly by examining the states of the control bits. Analysis and experimental results show that the proposed method can flexibly trade embedding efficiency for embedding speed, or vice versa. Compared with Fridrich's method, the computational complexity of the new method decreases exponentially with the number of control bits for the same embedding efficiency. Compared with previous fast matrix embedding methods, the proposed method reaches higher embedding efficiency at a faster embedding speed.
Locally Discriminant Projection Algorithm Based on the Block Optimization and Combination Strategy
Zheng Jian-Wei, Wang Wan-Liang, Yao Xin-Wei
2011, 33(9): 2175-2180. doi: 10.3724/SP.J.1146.2010.01358
Abstract:
Traditional projection algorithms are difficult to extend to incremental learning because they use the whole training set to solve for the projection matrix directly. To tackle this problem, a novel method, a Block optimization and Combination strategy for Locally Discriminant Projection (BCLDP), is proposed. The method takes both intra-class and inter-class geometry into account and has the orthogonality property. Furthermore, BCLDP is extended to the nonlinear case using kernel functions, which makes it better suited to diverse applications. Experiments on the ORL face database and a speaker identification task demonstrate the effectiveness of the proposed algorithm.
Fast Decision Using SVM for Incoming Samples
Zhang Zhan-Cheng, Wang Shi-Tong, Deng Zhao-Hong, Chung Fu-lai
2011, 33(9): 2181-2186. doi: 10.3724/SP.J.1146.2011.00107
Abstract:
The number of Support Vectors (SVs) in an SVM is usually large, resulting in substantially slower classification than many other approaches. Fewer SVs mean greater sparseness and higher classification speed, so reducing the number of SVs without loss of generalization performance is a significant problem both theoretically and practically. Based on the sparsity of SVs, it is proven that, when clustering the original SVs, the minimal upper bound on the error between the original decision function and the fast decision function is achieved by K-means clustering of the original SVs in the input space. A new algorithm, the Fast Decision algorithm of Support Vector Machine (FD-SVM), is then proposed: it employs K-means to cluster a dense SV set into a sparse one, uses the cluster centers as the new SVs, and then, to minimize the classification gap between SVM and FD-SVM, solves a quadratic programming model for the optimal coefficients of the new sparse SVs. Experiments on toy and real-world data sets demonstrate that, compared with the original SVM, the number of SVs decreases and classification speed increases, while the loss of accuracy is acceptable at the 0.05 significance level.
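The SV-compression step can be sketched as follows; the tiny `kmeans` helper and the four-point toy SV set are illustrative assumptions, not the FD-SVM implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: reduce `points` to k cluster centers."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct input points.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest center, then recompute the means.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return centers

# Four dense "support vectors" in two tight groups collapse to two new SVs,
# one at the mean of each group.
svs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
new_svs = kmeans(svs, k=2)
```

In FD-SVM the coefficients of these reduced centers would then be re-optimized by quadratic programming; that step is omitted here.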
A Maximum Margin Learning Machine Based on Entropy Concept and Kernel Density Estimation
Liu Zhong-Bao, Wang Shi-Tong
2011, 33(9): 2187-2191. doi: 10.3724/SP.J.1146.2010.01434
Abstract:
To circumvent the deficiencies of the Support Vector Machine (SVM) and its improved variants, this paper presents a Maximum-margin Learning Machine based on the Entropy concept and Kernel density estimation (MLMEK). In MLMEK, the data distribution of the samples is represented by kernel density estimation, and classification uncertainty is represented by entropy. MLMEK treats both the boundary data between classes and the interior data within each class seriously, so it performs better than traditional SVM, and it works for both two-class and one-class pattern classification. Experimental results on UCI data sets verify that the proposed algorithm is effective and competitive.
A Fault Propagation-aware Program Fault Location Method
He Jia-Lang, Meng Jin, Zhang Kun, Zhang Hong
2011, 33(9): 2192-2198. doi: 10.3724/SP.J.1146.2011.00396
Abstract:
Because existing Coverage-Based Fault-Localization (CBFL) methods cannot effectively handle the impact of failure propagation on localization precision, this paper proposes a propagation-aware program fault localization method. The method uses collected program path-coverage information to compress the space of suspicious nodes, effectively reducing computational complexity, and then uses the frequencies with which nodes appear in normal and faulty execution paths to compute each node's initial suspiciousness. By introducing the concept of edge propagation trend, the method traces fault propagation from the node with the maximum initial suspiciousness and finally revises the initial suspiciousness of the related nodes. Analysis and experimental results show that the method effectively reduces the impact of propagation on localization precision and has a clear advantage in time consumption over other methods as program scale grows, giving it high practical value.
A Homomorphic Hashing Based Provable Data Possession
Chen Lan-Xiang
2011, 33(9): 2199-2204. doi: 10.3724/SP.J.1146.2011.00001
Abstract:
In cloud storage, users need to be able to verify that storage service providers keep their data intact. To this end, a homomorphic-hashing based Provable Data Possession (PDP) method is proposed. Owing to the homomorphism of the hash algorithm, the hash value of the sum of two blocks equals the product of their hash values. In the setup stage, all data blocks and their hash values are stored. When the user challenges the storage server, the server returns the sum of the requested data blocks along with their hash values; the user computes the hash of the returned sum and verifies that the two agree. Over the data lifecycle, the user can perform an unlimited number of verifications. The method provides provable data possession and integrity protection at the same time. The user only needs to keep a key K of about 520 bytes, the information transferred for each verification is only about 18 bits, and verification requires only one hash computation. Security and performance analyses show that the method is feasible.
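The homomorphic property the verification relies on, h(b1 + b2) = h(b1) · h(b2), can be sketched with a discrete-exponentiation hash; the modulus, base, and block values below are illustrative parameters, not those of the paper:

```python
# Homomorphic hash h(m) = g^m mod p: hashing a sum of blocks equals
# multiplying the blocks' individual hashes modulo p, so the server's
# claimed block sum can be checked against the stored per-block hashes.
P = 2**61 - 1      # a Mersenne prime (illustrative parameter choice)
G = 37             # fixed public base (illustrative parameter choice)

def h(m):
    return pow(G, m, P)

b1, b2 = 123456789, 987654321              # two "data blocks" as integers
hash_of_sum = h(b1 + b2)                   # what the verifier computes
product_of_hashes = (h(b1) * h(b2)) % P    # what the stored hashes predict
```

The two quantities agree for any pair of blocks, which is exactly what lets the user check a returned sum without holding the blocks themselves.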
Energy Optimization of NoC Based on Voltage-frequency Islands under Processor Reliability Constraints
Zhang Jian-Xian, Zhou Duan, Yang Yin-Tang, Lai Rui, Gao Xiang
2011, 33(9): 2205-2211. doi: 10.3724/SP.J.1146.2010.01266
Abstract:
Considering the energy optimization of Network-on-Chip (NoC) designs that support voltage-frequency islands, a method based on the partitioning, assignment, and task mapping of voltage-frequency islands is proposed. Processor energy consumption is reduced by partitioning the voltage-frequency islands under processor reliability constraints; the number of complex routers between different voltage-frequency islands is decreased by an assignment strategy that selects near-convex regions; and the energy consumption of system communication is reduced through NoC mapping optimization with a quantum-behaved particle swarm algorithm. Experimental results show that the presented algorithm significantly reduces NoC system energy consumption while satisfying the NoC processor reliability requirements.
A TCAM Service Identification Algorithm Based on Access Compression Using Bloom-filter
Chen Zheng-Hu, Lan Ju-Long, Huang Wan-Wei, Li Yu-Feng
2011, 33(9): 2212-2218. doi: 10.3724/SP.J.1146.2011.00058
Abstract:
Constrained by the access width and storage capacity of Ternary Content Addressable Memory (TCAM) chips, TCAM-based pattern matching suffers from low throughput and limited memory. This paper proposes the BF-TCAM algorithm, which compresses accesses through single-byte coding with the Bloom Filter (BF) algorithm, increasing throughput and improving the utilization of storage space. The BF algorithm introduces collisions, which BF-TCAM reduces by using a vector memory instead of a bit vector. Experiments show that the proposed BF-TCAM algorithm improves matching throughput and storage-space utilization while effectively reducing the collision ratio caused by BF compression.
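As background for the compression step, a plain Bloom filter maps each item to k bit positions and tests membership by checking that all k bits are set. The sketch below illustrates only this generic mechanism; the paper's single-byte coding scheme and vector-memory collision reduction are not reproduced, and the hash construction here is an assumption.

```python
# Generic Bloom filter: k hash positions per item over an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 256, k: int = 3):
        self.m, self.k = m_bits, k
        self.bits = 0  # m-bit array stored as a Python integer

    def _positions(self, item: bytes):
        # Derive k positions by salting a cryptographic hash (illustrative).
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:4], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"pattern-A")
assert b"pattern-A" in bf   # inserted items are always found
# Items never inserted may still test positive with small probability;
# this is the collision (false-positive) effect the paper mitigates.
```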
Research on the Real-time Identification Methods for P2P-TV Flows
Hu Chao, Chen Ming, Xu Bo, Li Bing
2011, 33(9): 2219-2224. doi: 10.3724/SP.J.1146.2010.00975
Abstract:
P2P-based IPTV (P2P-TV) is currently one of the fastest-growing Internet applications. Identifying P2P-TV video streams in real time is key to managing P2P-TV traffic and understanding its networking behavior. By analyzing the architecture, communication processes, message frames, and system characteristics of P2P-TV systems represented by PPLive, a Crawler-based Identifying Video Flows (CIVF) algorithm and a Protocol-characteristic-based Identifying Video Flows (PIVF) algorithm for real-time video flow identification are proposed. CIVF identifies P2P-TV flows from peer information obtained by a crawler program, while PIVF exploits the communication sequence and payload characteristics of P2P-TV applications to achieve real-time identification. Experiments in a real Internet environment show that CIVF is easy to implement but has limited accuracy and suffers from outdated peer information, whereas PIVF offers high accuracy, fast operation, and good scalability.
Inter-domain Embedded Carrying Network Construction in ReFlexNet
Qi Ning, Wang Bin-Qiang, Yuan Bo, Zhang Bo, Wang Bao-Jin
2011, 33(9): 2225-2230. doi: 10.3724/SP.J.1146.2011.00026
Abstract:
To address the challenges faced by traditional Internet infrastructure, a new Reconfigurable Flexible Network (ReFlexNet) infrastructure is proposed, and the inter-domain Reconfigurable Embedded Carrying Network (RECN) construction issues in ReFlexNet are discussed. A distributed layered management system and a resource management mechanism are proposed. Based on token passing, a distributed inter-domain RECN construction method is proposed, which can solve the large-scale inter-domain construction problem efficiently. To improve the efficiency of token transfer, an improved token-ring establishment algorithm, called ImprovedSA, is proposed based on the traditional Simulated Annealing (SA) algorithm for solving the Hamiltonian cycle problem. It finds a better cycle rapidly and effectively by refining the result obtained by the traditional algorithm.
Crosstalk Model Based on Neighboring Elements for Small Element IRFPA
Liu Jing, Wang Xia, Jin Wei-Qi, Xu Chao
2011, 33(9): 2231-2236. doi: 10.3724/SP.J.1146.2010.00919
Abstract:
Crosstalk is one of the important parameters for InfraRed Focal Plane Array (IRFPA) performance evaluation. As the size of the IRFPA element shrinks, the testing spot of the traditional small-spot test method becomes close to or even larger than the element itself, so new theory and methods for crosstalk measurement are required. First, the typical situations in which an infrared spot illuminates the IRFPA are analyzed. Then the electrical signals of adjacent IRFPA elements are studied, and a new crosstalk model for small-element IRFPAs is proposed based on 8 neighboring elements; the 4-neighboring-element model and the traditional situation are two special cases of this model. Mathematical analysis shows that the size of the testing spot affects crosstalk noticeably in small-element IRFPAs, and that the neighboring-element crosstalk model enables small-element IRFPA crosstalk measurement with existing equipment, provided that the software is modified.
Design for Zero-crossing Distortion Reduction in Boost Power Factor Correction (PFC) Based on Feedforward Current Control Slope Compensation
Li Ya-Ni, Yang Yin-Tang, Zhu Zhang-Ming, Qiang Wei
2011, 33(9): 2237-2242. doi: 10.3724/SP.J.1146.2010.01262
Abstract:
A feedforward current control slope compensation method and its circuit structure are proposed. The method reduces the zero-crossing distortion of the boundary-mode boost Power Factor Correction (PFC) converter, improving the system's harmonic current limits and operating frequency. Based on the boundary boost PFC converter topology, the modulation of the Pulse Width Modulation (PWM) signal duty cycle by the feedforward current control slope compensation technique is analyzed theoretically. The relationship between the compensation slope and the input line voltage is derived, which forces the current to follow the input voltage in the vicinity of the AC line voltage zero-crossing points. Simulation and experimental results reveal that this method effectively suppresses the zero-crossing distortion of the system and also improves its dynamic performance, especially at high frequency or light load. The measured Total Harmonic Distortion (THD) of the boost PFC converter is only 3.8%, the power factor is 0.988, the load regulation is 3%, the line regulation is less than 1%, and the efficiency is 97.3%. The active die area is 1.61 mm × 1.52 mm.
The Theory and Simulation of the Klystron RF Coupling Coefficient with the AC Space Charge Effects
Huang Chuan-Lu, Ding Yao-Gen, Wang Yong, Quan Ya-Min, Xie Xing-Juan
2011, 33(9): 2243-2247. doi: 10.3724/SP.J.1146.2010.01443
Abstract:
In high-power klystrons, the AC space-charge effects caused by electron bunching in the gap cannot be ignored, owing to the higher beam current and longer gap distance. The traditional coupling coefficient calculation model, based on simplified kinematic theory, does not take these effects into account, so its result is higher than the actual value. This paper develops a new coupling coefficient model based on the Webster debunching theory that considers the AC space-charge effects for an arbitrary gap field distribution. In addition, a simulation study of the coupling coefficient is carried out using a particle-in-cell code. The simulation results agree well with the calculated results.
Compressed Hyperspectral Image Sensing Reconstruction Based on Interband Prediction and Joint Optimization
Liu Hai-Ying, Wu Cheng-Ke, Lv Pei , Song Juan
2011, 33(9): 2248-2252. doi: 10.3724/SP.J.1146.2010.01343
Abstract:
Based on a correlation analysis of Compressed Sensing (CS) measurements of hyperspectral images, a new reconstruction algorithm based on interband prediction and joint optimization is proposed. In the method, linear prediction is first applied to remove the correlations among successive hyperspectral measurement vectors. The resulting residual measurement vectors are then recovered using the proposed joint-optimization POCS (Projections Onto Convex Sets) algorithm with the steepest descent method. In addition, a pixel-guided stopping criterion is introduced to terminate the iteration. Experimental results show that the proposed algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement rates, while converging faster.
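The interband prediction step above can be illustrated with a one-tap linear predictor between the measurement vectors of adjacent bands: the previous band predicts the current one, and only the low-energy residual remains to be reconstructed. The synthetic data and least-squares coefficient below are illustrative assumptions, not the paper's predictor design.

```python
# Sketch: interband linear prediction on CS measurement vectors.
import numpy as np

rng = np.random.default_rng(0)
y_prev = rng.standard_normal(64)                         # band i-1 measurements
y_curr = 0.9 * y_prev + 0.05 * rng.standard_normal(64)   # correlated band i

# One-tap least-squares predictor: a = <y_curr, y_prev> / <y_prev, y_prev>
a = (y_prev @ y_curr) / (y_prev @ y_prev)
residual = y_curr - a * y_prev

# The residual carries far less energy than the raw measurement vector,
# which is what makes recovering it easier than recovering y_curr directly.
assert np.linalg.norm(residual) < np.linalg.norm(y_curr)
```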
A Robust Space-time Super-resolution Reconstruction Algorithm Based on Multiple Video Sequences
Song Hai-Ying, He Xiao-Hai, Wu Yuan-Yuan, Qing Lin-Bo
2011, 33(9): 2253-2257. doi: 10.3724/SP.J.1146.2010.01435
Abstract:
To improve the temporal-spatial resolution of video, a space-time super-resolution reconstruction algorithm using L1-norm minimization is proposed, with space-time regularization based on total variation. The proposed algorithm increases the resolution in both time and space by using multiple low-resolution video sequences of the same scene obtained at sub-pixel and sub-frame misalignments. The algorithm does not require large matrices to be constructed explicitly, which greatly reduces the memory requirements. Experimental results show the effectiveness of the algorithm and verify its robustness to errors in imaging-model estimation.
A New Measurement Method of Eddy Current for Biological Tissue in Magnetic Induction Tomography
Lv Yi , Wang Xu, Yang Dan, Jin Jing-Jing
2011, 33(9): 2258-2262. doi: 10.3724/SP.J.1146.2010.01422
Abstract:
Magnetic Induction Tomography (MIT) is a new technique that reconstructs images of the conductivity of biological tissue. The weak eddy current signal within biological tissue limits the precision of the detection device and the resolution of the reconstructed image. Based on the characteristics of the eddy current signal, a method of reducing the primary magnetic field as much as possible is presented. The sensor coils are modified to cancel the primary field signal, thereby enhancing the relative contribution of the eddy current field. Simulation experiments are carried out on the MIT measurement model presented in the paper to determine the optimal detection-coil parameters for offsetting the primary field, and signal linearity and sensitivity are tested in this mode. The results show that the method improves the signal linearity and sensitivity of MIT.
Research on 140 GHz Data Rate Wireless Communication Technology
Wang Cheng, Lin Chang-Xing, Deng Xian-Jin, Xiao Shi-Wei
2011, 33(9): 2263-2267. doi: 10.3724/SP.J.1146.2010.01431
Abstract:
Institute of Electronic Engineering, China Academy of Engineering Physics, Mianyang 621900, China
Specific Emitter Verification Based on Maximal Classification Margin SVDD
Luo Zhen-Xing, Lou Cai-Yi, Chen Shi-Chuan, Li Shao-Wei
2011, 33(9): 2268-2272. doi: 10.3724/SP.J.1146.2011.00103
Abstract:
Specific Emitter Verification (SEV) is one of the key technologies for identifying a specific emitter. This paper studies a specific emitter verification algorithm based on Support Vector Data Description (SVDD). To address the low target-class acceptance rate of classical SVDD when the target training data are atypical, a Maximal Classification Margin SVDD (MCM-SVDD) using outlier training data is proposed. MCM-SVDD minimizes the hyper-sphere volume while maximizing the margin between the hyper-sphere and both the target and outlier training data, improving generalization in accepting target data. Experiments on data from 20 real communication emitters show that MCM-SVDD achieves a better mean verification rate than SVDD, SVDD-neg, and SVM.
Efficient Implementation of an LDPC Decoding Algorithm Based on Quantization
Ma Zhuo, Du Shuan-Yi, Wang Xin-Mei
2011, 33(9): 2273-2277. doi: 10.3724/SP.J.1146.2011.00041
Abstract:
An implementation scheme for the sum-product algorithm is proposed based on a two-dimensional piecewise-linear approximation, which avoids a look-up table whose size grows exponentially with the number of quantization bits and reduces the memory consumption of the decoder. Then, based on this implementation scheme, a second-minimum-corrected min-sum algorithm is proposed. The algorithm uses the two-dimensional piecewise-linear approximation to correct the min-sum algorithm, and its performance is very close to that of the floating-point sum-product algorithm. The correction process involves only simple arithmetic and logic operations, making it easy to implement on an FPGA.
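For context, the min-sum check-node update that such corrections refine computes, for each edge, the sign product of all other inputs times the minimum of their magnitudes; tracking the two smallest magnitudes lets one message per node use the second minimum. The sketch below shows this standard update with an optional correction hook; the hook's form is a placeholder, not the paper's piecewise-linear correction function.

```python
# Min-sum check-node update using the smallest and second-smallest magnitudes.
import math

def check_node_update(llrs, correct=None):
    """Return the extrinsic LLR sent on each edge of one check node."""
    signs = [1 if x >= 0 else -1 for x in llrs]
    mags = [abs(x) for x in llrs]
    total_sign = math.prod(signs)
    # Indices of the smallest and second-smallest magnitudes.
    order = sorted(range(len(mags)), key=mags.__getitem__)
    i_min, i_2nd = order[0], order[1]
    out = []
    for i in range(len(llrs)):
        # The minimum over "all other" inputs: for the edge holding the
        # global minimum, the second minimum is used instead.
        m = mags[i_2nd] if i == i_min else mags[i_min]
        if correct is not None:  # placeholder for a correction term
            m = max(m - correct(mags[i_min], mags[i_2nd]), 0.0)
        out.append(total_sign * signs[i] * m)
    return out

msgs = check_node_update([2.0, -0.5, 1.5])
```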
Unicast Network Loss Tomography Based on k-th Order Markov Chain
Fei Gao-Lei, Hu Guang-Min
2011, 33(9): 2278-2282. doi: 10.3724/SP.J.1146.2010.00814
Abstract:
This paper addresses the inference of temporally dependent network link loss and presents a k-th order Markov chain based unicast network loss tomography method. The method first introduces a k-th order Markov Chain (k-MC) to describe the link packet-loss process, and then uses a pseudo maximum likelihood method to estimate the state transition probabilities of the chain. If k is large enough, the method can obtain an accurate loss probability estimate for each packet from unicast end-to-end measurements. ns-2 simulations validate the effectiveness of the method.
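The core modeling step can be illustrated by estimating k-th order transition probabilities from an observed loss sequence by empirical frequency counting. This is a simple frequency estimator over an assumed toy sequence, not the paper's pseudo maximum likelihood estimator applied to end-to-end measurements.

```python
# Sketch: k-th order Markov chain transition probabilities from a
# packet loss sequence (1 = received, 0 = lost).
from collections import Counter

def estimate_transitions(seq, k=2):
    """Estimate P(next state | previous k states) by empirical frequencies."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])       # previous k states
        ctx_counts[ctx] += 1
        trans_counts[(ctx, seq[i])] += 1
    return {(ctx, s): c / ctx_counts[ctx]
            for (ctx, s), c in trans_counts.items()}

seq = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
probs = estimate_transitions(seq, k=2)
# probs[((1, 1), 0)] is the estimated loss probability after two
# consecutive successful receptions; here the context (1, 1) occurs
# 4 times and is followed by a loss 3 times, giving 0.75.
```

A larger k captures longer loss bursts (temporal dependence) at the cost of exponentially more states to estimate, which is the trade-off behind choosing k "large enough" in the abstract.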
A Novel Compact Electromagnetic Band-gap Structure for SSN Suppression in High Speed Circuits
Shi Ling-Feng, Hou Bin
2011, 33(9): 2283-2286. doi: 10.3724/SP.J.1146.2011.00022
Abstract:
Based on the physical mechanism of the Electromagnetic Band-Gap (EBG) structure and the equivalent circuit model of the planar EBG structure, a novel compact EBG structure is proposed for Simultaneous Switching Noise (SSN) suppression in high-speed circuits. Its −30 dB stop-band extends from 0.6 GHz to 6.4 GHz, a bandwidth of 5.8 GHz. Compared with the traditional L-bridge EBG structure, the bandwidth increases by 1.8 GHz and the relative bandwidth by 45%. A low band-gap center frequency and wide bandwidth are realized, and good signal integrity is achieved.