
2009 Vol. 31, No. 2

Articles
Nonnegative Matrix-Set Factorization
Li Le, Zhang Yu-Jin
2009, 31(2): 255-260. doi: 10.3724/SP.J.1146.2007.01265
Abstract:
Nonnegative Matrix Factorization (NMF) is a recently developed technique for nonlinearly finding purely additive, parts-based, linear, low-dimensional representations of nonnegative multivariate data, thereby revealing the latent structure, features, or patterns in the data. Although NMF has been successfully applied in several research fields, it faces two main problems, unsatisfactory accuracy and poor generality, when the processed object is a matrix-set: the objects NMF actually processes are vectors, and the vectorization required for every matrix in the set often turns the corresponding NMF learning into typical small-sample learning. In this paper, Nonnegative Matrix-Set Factorization (NMSF) is proposed to overcome these problems while retaining NMF's good properties. Unlike NMF, NMSF directly processes the original data matrices rather than their vectorized forms. Theoretical analysis shows that, when processing a data matrix-set, NMSF should be more accurate and have better generality than NMF. To show how NMSF can be implemented and to validate its properties experimentally, the Bilinear Form-Based NMSF (BFBNMSF) algorithm is formulated as one implementation of NMSF. Results of comparison experiments between BFBNMSF and NMF consistently support the theoretical analysis. Notably, the higher accuracy and better generality indicate that NMSF extracts the essential features of data matrices better than NMF does.
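For orientation, here is a minimal sketch of the standard vector-based NMF baseline discussed above, using the well-known multiplicative updates of Lee and Seung; it is not the paper's BFBNMSF algorithm, and the rank and iteration count are illustrative assumptions.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) as W (m x r) @ H (r x n)."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H
```

In the matrix-set setting criticized above, each data matrix would first be flattened into a column of V, which is exactly the vectorization step that NMSF avoids.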
An Automatic Mosaic Algorithm for Region of Interest Search
Wang Yong, He Xiao-chuan, Liu Qing-hua, Xu Lu-ping
2009, 31(2): 261-264. doi: 10.3724/SP.J.1146.2007.01754
Abstract:
To relax the image collection conditions in image mosaicking, an adaptive Region of Interest (ROI) search algorithm is developed to enhance adaptability and flexibility in the mosaicking process. The method searches for ROIs to realize image registration of the overlapped region from the viewpoint of key feature objects. Image geometry adjustment, color correction, and image fusion then follow the registration. Experimental results indicate that the method relaxes the image collection conditions and enhances adaptability and flexibility while keeping high accuracy and robustness in image mosaicking.
Direction Adaptive Image Interpolation via Wavelet Transform
Cheng Guang-quan, Cheng Li-zhi
2009, 31(2): 265-269. doi: 10.3724/SP.J.1146.2007.01218
Abstract:
Image interpolation is an important technique in image processing. Blurring and jagged artifacts along image details and edges are inevitable in conventional interpolation. To obtain interpolated images of better quality, an improved bilinear interpolation method with adaptive direction is applied to the image. A wavelet transform is used to provide additional high-frequency information, and post-processing is applied to improve the visual quality of the interpolated images. Experiments show that the interpolated images have much clearer edges that are much smoother along their directions, and the jagged and blurred artifacts at edges are effectively removed. The subjective quality and visual effect of images interpolated by the proposed method are obviously improved and accord better with the characteristics of the human visual system.
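As a point of reference, the sketch below implements plain (non-adaptive) bilinear upscaling, the baseline that the direction-adaptive, wavelet-assisted method above improves on; the upscaling factor is an illustrative assumption, and the paper's adaptive-direction and post-processing steps are not reproduced.

```python
import numpy as np

def bilinear_upscale(img, factor=2):
    """Plain bilinear interpolation of a grayscale image (no direction adaptation)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                      # vertical weights
    wx = (xs - x0)[None, :]                      # horizontal weights
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot
```

The jagged edges mentioned above arise precisely because this baseline interpolates across edges regardless of their orientation.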
An Algorithm for the Automatic Annotation Refinement on Large-Scale Web Images
Wang Bin, Yu Neng-hai
2009, 31(2): 270-274. doi: 10.3724/SP.J.1146.2006.02013
Abstract:
When Web images are indexed, the textual information in the hosting web pages is usually used as an approximate image description. However, such information is not accurate enough. In this paper, a framework is proposed that utilizes the visual content, the textual context, and the semantic relations between keywords to refine image annotation. Experiments on a large-scale dataset demonstrate the effectiveness of the proposed method.
Visual Target Tracking Based on the Adaptive Particle Filter in the Complex Situation
Yao Hong-ge, Qi Hua, Hao Chong-yang
2009, 31(2): 275-278. doi: 10.3724/SP.J.1146.2007.01272
Abstract:
This paper presents an adaptive particle filtering algorithm for image tracking based on a weighted color probability contribution. First, a weighted color contribution graph is proposed; using this graph, the similarity between the target template and each particle's region is calculated, which makes target localization more reasonable and efficient. To address particle degeneracy during filtering, a resampling method is proposed that adaptively adjusts the sampled particle set. This improves particle quality and reduces the number of particles, so the computational cost is greatly reduced. Experimental results show that, under complex tracking conditions such as full occlusion and fast, highly maneuverable targets, the proposed algorithm is more robust.
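The resampling step mentioned above is commonly implemented with systematic resampling; the sketch below shows that standard procedure only, not the paper's adaptive particle-count adjustment.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices drawn by standard systematic resampling."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw per slot
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                # guard against round-off
    return np.searchsorted(cumsum, positions)
```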
Tracking DOA of Multiple Targets Based on a Matrix Norm Minimization Technique
Zhang Huai-gen, Zhang Lin-rang, Wu Shun-jun, Liu Yin
2009, 31(2): 279-282. doi: 10.3724/SP.J.1146.2007.01458
Abstract:
By analyzing the inner structure of the covariance matrix, a new multiple-target DOA tracking algorithm based on a matrix norm minimization technique is presented. Furthermore, by solving a set of linear equations, the power of each target is obtained at each time instant. The algorithm can be applied when the power of each target changes with time, and it tracks the angles and powers jointly. Simulation results show that the method achieves high tracking performance.
Kernel Based Orthogonal Locality Preserving Projections for Face Recognition
Jin Yi, Ruan Qiu-qi
2009, 31(2): 283-287. doi: 10.3724/SP.J.1146.2007.01450
Abstract:
In this paper, combining kernel methods and orthogonal basis functions, a new method named the kernel-based orthogonal locality preserving projections algorithm, which aims at discovering an embedding that preserves nonlinear information, is proposed for face representation and recognition. In this algorithm, a nonlinear kernel mapping is first used to map the face data into an implicit feature space, and then a linear transformation that produces orthogonal basis functions is performed to preserve the local geometric structure of the face images. Experiments on both the ORL and Yale face databases demonstrate the effectiveness of the new algorithm.
A Face Recognition Algorithm Based on Selective Ensemble of E-HMMs
Li Jin-xiu, Gao Xin-bo, Yang Yue, Xiao Bing
2009, 31(2): 288-292. doi: 10.3724/SP.J.1146.2007.01224
Abstract:
The performance of face recognition algorithms based on the Embedded Hidden Markov Model (E-HMM) heavily depends on the selection of model parameters. A face recognition algorithm based on a selective ensemble of multiple E-HMMs is proposed, which selects several accurate and diverse models for ensemble face recognition. Compared with the traditional E-HMM based face recognition algorithm, the experimental results illustrate that the proposed method not only obtains better and more stable recognition performance but also achieves higher generalization ability.
Rejecting Nearest Neighbor Classifier Based on a Structural Risk Minimization Self-organizing Multiple-Region Covering Model
Hu Zheng-ping, Jia Qian-wen
2009, 31(2): 293-296. doi: 10.3724/SP.J.1146.2007.01360
Abstract:
Following the rejecting pattern recognition principle, which is based on class description and class separation within unified statistical pattern recognition, a rejecting nearest neighbor classifier based on a structural risk minimization self-organizing multiple-region covering model is presented in this paper. The new model is closer to actual situations than traditional statistical pattern recognition, which relies only on optimal separation as its main principle. First, optimal samples are selected from the training samples based on structural risk minimization and used to describe each class. Then kNN discrimination is applied as a subsequent step to identify the exact class. Simulation results show that the method is valid and efficient.
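A minimal sketch of a nearest-neighbor classifier with a rejection option is given below, assuming a single global covering radius; the paper's self-organizing multiple-region covering model learns such regions from the training data, which is not reproduced here.

```python
import numpy as np

def rejecting_knn(x, X_train, y_train, k=5, radius=1.0):
    """Classify x by kNN, but reject it if it lies outside the covering region,
    i.e. if even its nearest training sample is farther than `radius`."""
    d = np.linalg.norm(X_train - x, axis=1)
    if d.min() > radius:
        return None                                  # rejected as an unknown class
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```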
DBN Model Based Multi-stream Asynchrony Triphone for Audio-Visual Speech Recognition and Phone Segmentation
Lü Guo-yun, Jiang Dong-mei, Fan Yang-yu, Zhao Rong-chun, H. Sahli, W. Verhelst
2009, 31(2): 297-301. doi: 10.3724/SP.J.1146.2007.01216
Abstract:
In this paper, a novel Multi-stream Multi-state Asynchronous Dynamic Bayesian Network based context-dependent TRIphone (MM-ADBN-TRI) model is proposed for audio-visual speech recognition and phone segmentation. The model loosens the asynchrony between the audio and visual streams to the word level. In both the audio and visual streams, a word-triphone-state topology is used. Essentially, the MM-ADBN-TRI model is a triphone model whose basic recognition units are triphones, which captures the variations in real continuous speech spectra more accurately. Recognition and segmentation experiments are conducted on a continuous-digit audio-visual speech database, and the results show that the MM-ADBN-TRI model obtains the best overall performance in word accuracy and in phone segmentation with time boundaries, as well as a more reasonable asynchrony between the audio and visual speech.
Speaker Verification Based on Factor Analysis and SVM
Guo Wu, Dai Li-rong, Wang Ren-hua
2009, 31(2): 302-305. doi: 10.3724/SP.J.1146.2007.01289
Abstract:
In text-independent speaker recognition, the system combining the mean supervector of Gaussian Mixture Models (GMM) with a Support Vector Machine (SVM) can outperform the traditional GMM and Universal Background Model (UBM) system, but session variability remains one of the most important factors that deteriorate performance. In this paper, factor analysis is tailored to address the session variability of the GMM mean supervector. The proposed algorithm outperforms the Nuisance Attribute Projection (NAP) algorithm, and the factor-analysis-based system is more stable than the NAP-based system. On the NIST 2006 SRE corpus, the proposed system achieves an Equal Error Rate (EER) of 6.0%.
Common-Acoustical-Poles/Zeros Modeling of HRTF Using Logarithmic Error Criterion
Wang Lin, Yin Fu-liang, Chen Zhe
2009, 31(2): 306-309. doi: 10.3724/SP.J.1146.2007.01494
Abstract:
Common-Acoustical-Poles/Zeros (CAPZ) approximation is an efficient way to model Head-Related Transfer Functions (HRTF), requiring fewer parameters than the pole/zero model. Conventional methods estimate CAPZ model parameters under the least-squares error criterion, whereas the attributes of human auditory perception are more consistent with a logarithmic error criterion. This paper therefore presents a new method that estimates CAPZ model parameters by minimizing the log-magnitude error. The common acoustical poles are first estimated with Haneda's method; then, under the logarithmic error criterion, the zeros of the CAPZ model are estimated with an iteratively weighted least-squares algorithm. Simulation results validate the effectiveness of the proposed method.
Extraction of Translation Example Based on Shallow Parsing Information
Chen Yin, Zhao Tie-jun, Yang Mu-yun, Li Sheng
2009, 31(2): 310-313. doi: 10.3724/SP.J.1146.2007.01230
Abstract:
The translation example base is the main knowledge source of an example-based machine translation system. In this paper, an approach based on shallow parsing information is proposed to extract translation examples. First, the translation units of the source and target language sentences are segmented according to shallow parsing information. Then, guided by word alignment results, a statistical model is used to align the translation units between the source and target sentences, and translation examples are thus extracted. Experimental results show that the proposed method achieves satisfactory results in both the direct evaluation of the example base and the indirect evaluation through an EBMT system.
Adaptive Spatial Filter Based on ERD/ERS for Brain-Computer Interfaces
Lü Jun, Xie Sheng-li, Zhang Jin-long
2009, 31(2): 314-318. doi: 10.3724/SP.J.1146.2007.01462
Abstract:
For motor-related Brain-Computer Interfaces (BCI), the Common Spatial Patterns (CSP) algorithm is sensitive to outliers and lacks robustness when the sample size is small. In this paper, an Adaptive Spatial Filter (ASF) algorithm is proposed that takes the variances of the filtered samples as features and seeks the spatial filter that maximizes the ratio of the two class means. Unlike CSP, ASF is an iterative algorithm with a soft decision, and it can adaptively decrease the effect of outliers according to the updated filters. Experimental results on two datasets from the BCI competitions of 2003 and 2005 show that ASF outperforms CSP, especially when the training samples are few.
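For comparison with ASF, the CSP baseline mentioned above can be computed with a generalized eigendecomposition of the two class covariance matrices; this is a sketch of standard CSP only, with the number of filters as an illustrative parameter.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    evals, evecs = eigh(Ca, Ca + Cb)          # generalized eigendecomposition
    order = np.argsort(evals)
    idx = np.concatenate([order[:n_filters], order[-n_filters:]])
    return evecs[:, idx].T                    # each row is one spatial filter
```

The features are then the variances of the spatially filtered trials, which is also the quantity ASF works with; ASF differs in how the filter is sought and in its robustness to outliers.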
SVM Based Underdetermined Blind Source Separation
Li Rong-hua, Yang Zu-yuan, Zhao Min, Xie Sheng-li
2009, 31(2): 319-322. doi: 10.3724/SP.J.1146.2007.01370
Abstract:
A new sparsity measure of signals is proposed in this paper. After the number of effective sources is estimated, the observations are classified using a Support Vector Machine (SVM) trained on samples constructed from the direction angles of the sources. Each cluster center is obtained as a weighted sum of the samples belonging to the same class, with the weights adjusted adaptively. This avoids the sensitivity to initial values that seriously affects k-means clustering. Furthermore, an online algorithm is proposed for large-scale samples. Simulations show the stability and robustness of the methods.
2D DOA Estimation Algorithm for Coherently Distributed Source
Han Ying-hua, Wang Jin-kuan, Song Xin
2009, 31(2): 323-326. doi: 10.3724/SP.J.1146.2007.01300
Abstract:
In many two-dimensional (2D) Direction Of Arrival (DOA) estimation approaches for coherently distributed sources, the computational complexity induced by the 2D search is prohibitively high. A decoupled 2D DOA estimation algorithm is proposed. The integral steering vector of a coherently distributed source is shown to be a Schur-Hadamard product of the steering vector of a point source and a real vector. A second-order statistic is then constructed for the data collected at subarray X, and the rotational invariance matrices can be estimated with the propagator method. The azimuth and elevation angles can thus be obtained from the proposed statistic and the rotational invariance matrices even when the elevation angle approaches 90°. In addition, the presented method requires neither peak search nor eigenvalue decomposition, which significantly reduces the computational complexity compared with classical subspace algorithms. Simulation results verify the effectiveness of the proposed algorithm.
A New Method for Joint Estimation of Frequency and 2-D Arrival Angles of Coherent Signals Based on Fourth-Order Cumulant
Du Gang, Zhang Yong-shun, Wang Yong-liang, Jiang Xin-ying
2009, 31(2): 327-330. doi: 10.3724/SP.J.1146.2007.01299
Abstract:
Based on fourth-order cumulants, this paper presents a new method, the CTSS algorithm, for joint estimation of the frequencies and 2-D arrival angles of coherent signals. First, a time-space smoothing matrix is constructed from the temporal and spatial data of two parallel uniform linear arrays using a smoothing technique; the 3-D parameters of the coherent signals are then obtained from its eigenvalues and the corresponding eigenvectors. In Gaussian colored noise, the algorithm can precisely estimate the 3-D parameters of coherent signals, does not need a multidimensional spectral peak search, and pairs the 3-D parameters automatically. In addition, it can still correctly estimate the parameters when signals share the same one- or two-dimensional parameters. Computer simulations confirm its effectiveness.
Joint Diagonalization Algorithm for Harmonic Retrieval
Nie Wei-ke, Feng Da-zheng, Zhang Bin
2009, 31(2): 331-334. doi: 10.3724/SP.J.1146.2007.01286
Abstract:
In this paper, a set of eigen-matrices possessing a diagonal structure is introduced. A new iterative algorithm is proposed to implement the joint diagonalization of these eigen-matrices, and harmonic retrieval is accomplished through the diagonalization procedure. The new algorithm improves the cost function of the well-known ACDC algorithm, reducing it from a fourth-order function to a quadratic one. Each iteration step poses a typical least-squares problem with a unique closed-form solution, so there is no error propagation as in the ACDC algorithm. Simulation results demonstrate that it is a reliable and faster algorithm that remains particularly accurate at extremely low SNR.
A Reduced-Reference Image Quality Assessment Metric Based on Wavelet Transform
Lu Wen, Gao Xin-bo, Wang Ti-sheng
2009, 31(2): 335-338. doi: 10.3724/SP.J.1146.2007.01288
Abstract:
Reduced-reference image quality assessment has become one of the focuses of the image processing field. In this paper, a reduced-reference image quality assessment method based on the wavelet transform is proposed. Based on the characteristics of the human visual system, the variance of the visually sensitive coefficients is taken into account to obtain the quality measure of the distorted image. The proposed approach has very low computational complexity; compared with the traditional RR-WISM method, it achieves a correlation coefficient higher by 3%, an outlier ratio lower by 6%, and 50% less transmitted information, while the computation time is greatly reduced. The experimental results illustrate that the proposed approach predicts quality in good accordance with subjective perception.
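The kind of reduced-reference feature described above can be illustrated with subband variances from a 2-D wavelet decomposition; the wavelet, level, and the omission of any visual-sensitivity weighting are assumptions of this sketch, not the paper's exact metric.

```python
import numpy as np
import pywt

def rr_features(image, wavelet="db4", level=3):
    """Variance of each wavelet detail subband, used as a compact
    reduced-reference descriptor of a grayscale image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for cH, cV, cD in coeffs[1:]:             # skip the approximation subband
        feats += [np.var(cH), np.var(cV), np.var(cD)]
    return np.asarray(feats)
```

The sender transmits only these few numbers alongside the image, and the receiver compares them with the same features computed from the distorted image.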
An Efficient Algorithm for Variable Rate Demodulation
Lai Wei-dong, Zhan Ya-feng, Lu Jian-hua
2009, 31(2): 339-342. doi: 10.3724/SP.J.1146.2007.01372
Abstract:
On spacecraft, hardware resources and energy are limited, which restricts the hardware available for variable rate demodulation in Telemetry and Telecommand (TTC) systems. This paper proposes an algorithm that accomplishes variable rate demodulation based on pseudo high-data-rate demodulation and repeated decoding without reducing the sampling rate. The performance of the method is equivalent to the ideal result of variable rate demodulation, while its hardware consumption is less than half of that of classical Direct Digital Converter (DDC) chips. Computer simulations and practical experiments show that the proposed method is effective for variable rate demodulation in TTC systems.
Research on Application of AEAD Techniques for CCSDS Telecommand Protocol
Zhang Lei, Zhou Jun, Tang Chao-jing
2009, 31(2): 343-348. doi: 10.3724/SP.J.1146.2008.00227
Abstract:
Risk analyses performed by several space agencies have indicated the impact of different threats on several categories of space missions in the context of space communication. After analyzing the performance of various AEAD (Authenticated Encryption with Associated Data) techniques, a method of integrating GCM (Galois/Counter Mode) into the data link layer of the Telecommand protocol is proposed. A simulation framework for CCSDS Telecommand systems is designed and implemented with OPNET. The simulation results show that AEAD techniques can effectively provide confidentiality, integrity, and authentication of Telecommand information without degrading throughput performance.
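As a concrete illustration of the GCM mode discussed above, the sketch below authenticates and encrypts a hypothetical telecommand frame with AES-GCM from the Python cryptography library; the frame and header contents are placeholders, not CCSDS-formatted data.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                         # 96-bit nonce, unique per frame
frame = b"telecommand frame payload"           # hypothetical frame data
header = b"TC transfer frame header"           # authenticated, not encrypted
ciphertext = aesgcm.encrypt(nonce, frame, header)
assert aesgcm.decrypt(nonce, ciphertext, header) == frame
```

This matches the AEAD setting of the paper: the header stays readable for frame handling while still being covered by the authentication tag.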
Improved RDM for SAR Autofocusing
Li Gang, Peng Ying-ning, Xia Xiang-gen
2009, 31(2): 349-352. doi: 10.3724/SP.J.1146.2007.01540
Abstract:
RDM is a Synthetic Aperture Radar (SAR) autofocus processing method whose performance degenerates when the scene contrast is low. This paper proposes an improved RDM for SAR autofocusing. Based on the relationship between the Doppler rate and the range, the proposed method can adaptively overcome the effect of low contrast on the Doppler rate estimation and obtain fine autofocusing results. Experiments with real SAR data show the effectiveness of the proposed method.
A Novel Ground Moving Target Detector in Dual-channel SAR Images Based on Adjacent Average and Orthogonal Projection
Shi Gong-tao, Kuang Gang-yao, Gui Lin
2009, 31(2): 353-357. doi: 10.3724/SP.J.1146.2007.01417
Abstract:
For the detection of ground moving targets in dual-channel SAR images, a novel detector based on adjacent averaging and orthogonal projection is proposed. By obtaining the component of the sample covariance matrix energy vector that is perpendicular to the clutter covariance matrix energy vector, an effective metric is constructed from this orthogonal component. Combined with adjacent averaging, all slow ground moving targets can be detected exactly. Compared with traditional DPCA, the new method achieves better clutter rejection, eliminates the influence of moving-target sidelobes, makes threshold setting easier, and yields a lower false alarm probability. Simulation results prove the effectiveness of this metric.
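The orthogonal-projection idea above reduces to subtracting from the sample energy vector its component along the clutter energy vector; a minimal sketch, with the construction of the energy vectors left out, is shown below.

```python
import numpy as np

def orthogonal_metric(v_sample, v_clutter):
    """Norm of the component of the sample covariance energy vector that is
    perpendicular to the clutter covariance energy vector."""
    u = v_clutter / np.linalg.norm(v_clutter)
    v_perp = v_sample - (v_sample @ u) * u       # remove the clutter direction
    return np.linalg.norm(v_perp)
```

Cells containing moving targets yield a larger metric than clutter-only cells, so a threshold on this value gives the detector.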
STAP Method for Space Based Radar Based on Spectrum Registration with Non-Uniform Frequency Samples
Yu Wen-xian, Zhang Zeng-hui, Hu Wei-dong
2009, 31(2): 358-362. doi: 10.3724/SP.J.1146.2007.01244
Abstract:
Due to the earth's rotation and non-sidelooking radar configurations, the clutter spectrum of Space Based Radar (SBR) varies with range and is thus non-stationary. This non-stationarity significantly degrades Space Time Adaptive Processing (STAP) performance and should be compensated. A new spectrum registration based method is proposed that uses non-uniform discrete frequency sampling points. The mathematical model of the non-uniform spectrum registration method is built, and the choice of the non-uniform discrete frequency samples and the estimation of the clutter covariance matrix after compensation are also studied. Simulations show that the proposed spectrum registration method can compensate the non-stationarity effectively and achieve approximately optimal performance.
An Algorithm Based on Differential Preprocessing of Low Elevation Estimation in VHF Radar
Zhao Guang-hui, Chen Bai-xiao, Wu Xiang-dong, Zhang Shou-hong
2009, 31(2): 363-365. doi: 10.3724/SP.J.1146.2007.01381
Abstract:
In the presence of multipath propagation, it is quite difficult for VHF radar to measure the altitude of a low-elevation target. In this paper, a new algorithm is proposed to estimate the DOA of the target: by using differential preprocessing, the phase of the multipath wave is compensated and the multipath echo is subtracted from the received signal; then conventional DBF can be used to estimate the direction of the direct wave. Monte-Carlo experiments show that the performance of the new algorithm is better than that of the spatial smoothing technique and the forward-backward predictors for bearing estimation. Real data from a VHF radar demonstrate the validity and feasibility of the new algorithm.
The Two-Dimensional Spatial-Variant Properties of Airborne SAR Motion Error and Its Compensation
Tan Ge-wei, Deng Yun-kai
2009, 31(2): 366-369. doi: 10.3724/SP.J.1146.2007.01169
Abstract:
Precise motion compensation is a crucial issue for high-resolution airborne SAR imaging, and it is especially important for small airborne platforms or severe disturbances. To obtain high-quality SAR images, practical motion compensation algorithms are put forward in this paper that are combined with a wavenumber-domain algorithm and take the two-dimensional spatially variant properties of the motion error into account. Simulation and SAR imaging results based on measured data are given to prove the feasibility of the method.
A Real-Time Signal Processing Method for Airborne Three-channel GMTI
Deng Hai-tao, Zhang Chang-yao
2009, 31(2): 370-373. doi: 10.3724/SP.J.1146.2007.01316
Abstract:
In this paper, the performance of several Chinese GMTI systems is briefly introduced first. A method for airborne three-channel GMTI signal processing based on the CSI algorithm is then proposed. The processing diagram of the method is given, the principle of clutter suppression is analyzed, an adaptive phase compensation method for each channel is presented based on the minimum power criterion, and the localization and velocity estimation of moving targets is analyzed. Actual experimental results show that the presented method is efficient.
Multi-Channel SAR-GMTI Technique and Performance Analysis Using Eigen-Decomposition
Yu Jing, Liao Gui-sheng, Zeng Cao
2009, 31(2): 374-377. doi: 10.3724/SP.J.1146.2007.01086
Abstract:
A multi-channel SAR-GMTI technique based on the eigen-decomposition of the covariance matrix is proposed. The variation of the sum of the small eigenvalues of the covariance matrix is used to detect moving targets. The radial velocity of a moving target is estimated in two steps: first, the interferometric phase of two SAR images is used to obtain a coarse radial velocity estimate; then a more precise radial velocity is obtained by searching over the space-domain steering vector of the moving target. This overcomes the sensitivity of the interferometric phase to clutter and noise. The effectiveness of the presented technique is demonstrated on both simulated and measured SAR data.
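The detection statistic described above, the sum of the small eigenvalues of the covariance matrix, can be computed as in the following sketch; the number of dominant eigenvalues to discard is an illustrative assumption.

```python
import numpy as np

def small_eigen_sum(R, n_clutter=1):
    """Sum of the smallest eigenvalues of the covariance matrix R after
    discarding the n_clutter dominant (clutter) eigenvalues."""
    evals = np.linalg.eigvalsh(R)                # ascending order
    return evals[:len(evals) - n_clutter].sum()
```

A moving target raises this sum in its cell relative to clutter-only cells, which is the variation the detector thresholds.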
Detection Performance of MIMO Radar for Coherent Pulses
Qu Jin-you, Zhang Jian-yun
2009, 31(2): 378-381. doi: 10.3724/SP.J.1146.2007.01469
Abstract:
The detection performance of spatial-diversity Multiple-Input Multiple-Output (MIMO) radar for Swerling I and Swerling II targets is analyzed based on coherent pulses. The detection method provides a unified framework for conventional radar and MIMO radar. Simulation results demonstrate that MIMO radar achieves better detection performance when more coherent pulses are used. In contrast to conventional radar, the detection performance of MIMO radar for Swerling I targets outperforms that for Swerling II targets.
Performance Analysis of Surface Current Extraction by HF-SAR Based on First Order Sea Echo
Xue Wen-hu, Zhang Ming-min, Li Xian-mao, Yuan Xiang-hui
2009, 31(2): 382-385. doi: 10.3724/SP.J.1146.2007.01500
Abstract:
Applying the synthetic aperture technique to High Frequency Surface Wave Radar (HFSWR), and thereby extracting surface currents from a single site, is important for reducing observation cost and improving efficiency. However, the existing signal model of High Frequency SAR (HF-SAR) is based on theoretical point targets, so the Bragg scattering effect of sea resolution cells is not considered. To solve this problem, an improved signal model for surface current extraction by HF-SAR is presented based on the first-order sea echo, and the effect of Bragg scattering on the azimuth echo of HF-SAR is analyzed. A velocity estimation algorithm is then designed according to the signal features of the azimuth echo. Finally, the velocity estimation performance of HF-SAR is analyzed on a specified resolution cell by Monte Carlo simulation. Simulation results show that the estimation precision of velocity and direction meets the requirements of engineering applications, indicating that it is feasible to extract surface currents with HF-SAR.
Matching Optical Image to SAR Image Using Nonlinear Equation and Hausdorff Distance
Wang Zi-lu, Li Zhi-yong, Su Yi
2009, 31(2): 386-390. doi: 10.3724/SP.J.1146.2007.01453
Abstract:
This paper describes a new method for matching SAR and optical images. First, regularized anisotropic heat diffusion equations are used to segment closed-boundary regions in the SAR image. After superposing the centers of mass of the closed-boundary regions, the Hausdorff distance and a genetic algorithm are used to determine the scaling and rotation parameters respectively. Finally, the affine-transformed result is refined by binary image correlation to achieve high-precision registration. Experimental results indicate that the method can automatically register, with the required precision, images that differ by translation, rotation, and scaling.
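The Hausdorff-distance matching step can be expressed with SciPy's directed Hausdorff routine, as in this minimal sketch over two 2-D point sets (e.g. region boundary points); the genetic-algorithm search over scaling and rotation is not reproduced.

```python
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets of shape (n, 2)."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
```

During matching, candidate scale and rotation parameters are scored by this distance between the transformed SAR region boundary and the optical region boundary, and the smallest distance wins.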
Fast Algorithm of Subband Discrete Cosine Transform Based on H.264
Jiang Jian-Guo, Lu Xiao-Hong, Qi Mei-Bin, Zhan Shu
2009, 31(2): 391-395. doi: 10.3724/SP.J.1146.2007.01291
Abstract:
Fast DCT (Discrete Cosine Transform) is one of the key issues in H.264. According to the energy distribution of DCT coefficients and the characteristics of the zigzag scan, a fast DCT algorithm based on divided subbands is proposed. In the algorithm, DCT coefficients of the prediction residue that will be quantized to zero (Zero Quantized DCT coefficients, ZQDCT) are predicted and set to zero before the DCT and quantization (Q) are carried out, so the redundant computations are greatly reduced. An adaptive scheme with multiple thresholds is also presented to divide the subbands; with this scheme, only the subbands that are not ZQDCT are computed. The experimental results show that the proposed algorithm outperforms other approaches in the literature and achieves the best reduction in computation while keeping the same picture quality and compression ratio as the traditional algorithms in H.264.
A Low Complexity Scheme for Space-Time Multi-user Iterative Detection
Du Na, Xu Da-zhuan
2009, 31(2): 396-399. doi: 10.3724/SP.J.1146.2007.01446
Abstract:
In multiuser space-time coding systems, existing research has mostly focused on STTC and STBC schemes. For this reason, a multiuser space-time coding system combined with the Turbo-BLAST scheme is proposed. The conventional iterative receiver with Symbol-Level Cancellation (SLC) and detection still has high computational complexity, so a low-complexity iterative receiver with Bit-Level Cancellation (BLC) and detection is proposed, which performs bit-level cancellation by decomposing an M-QAM constellation into a linear combination of binary constellations. Theoretical analysis and simulation results show that the proposed scheme has low complexity with no performance degradation compared with the conventional iterative receiver, while the high spectral efficiency inherited from BLAST is retained.
Local Projection Noise Reduction Based on Nonlinear Constraints
Han Min, Liu Yun-xia
2009, 31(2): 400-404. doi: 10.3724/SP.J.1146.2007.01330
Abstract:
An improved method is proposed for noise reduction of chaotic time series based on phase space reconstruction theory. A recurrence map is first used to analyze the chaotic characteristics of the observed time series; nonlinear constraints are then introduced into the local projection method, and Singular Spectrum Analysis (SSA) is combined in the local neighborhood, using the main components that represent the attractor to reconstruct the time series. The improved method overcomes the inability of traditional local projection to fully characterize the nonlinear relationships of the system, reduces the reconstruction deviation, and improves the signal-to-noise ratio. The chaotic time series generated by the Lorenz model and the sunspot time series are used for simulation analysis, and the numerical results confirm the noise reduction effect of the proposed method on observed time series.
Linear Transceiver Design for Multiuser MIMO Downlink
Xu Dao-feng, Yang Lu-xi, Huang Yong-ming
2009, 31(2): 405-409. doi: 10.3724/SP.J.1146.2007.01461
Abstract:
An iterative linear transceiver design scheme under the Sum Mean Squared Error (SMSE) minimization criterion is proposed. By modifying the structure of the transceiver, the complex computation of the Lagrangian multiplier in traditional MMSE transceiver design is effectively avoided, so the overall system complexity is greatly reduced. Because the Lagrangian multiplier has an analytical solution, the transmit precoding matrix also has a closed-form solution and can be solved easily with fixed-point iterations. The receive filter is computed independently at each terminal with the MMSE criterion, so downlink signaling of the receive filters from the base station is unnecessary. Simulations demonstrate that the proposed scheme is effective.
A Joint Channel Estimation Algorithm Based on Strong Paths Selection
Song Xiao-qin, Hu Ai-qun, Li Ke
2009, 31(2): 410-413. doi: 10.3724/SP.J.1146.2007.01326
Abstract:
High complexity and the small number of interfering users that can be estimated jointly are the main disadvantages of high-accuracy joint multi-cell channel estimation algorithms. A joint channel estimation algorithm based on strong path selection is proposed that fully exploits the difference between strong paths and weak paths in the channel estimation results. Complexity and Bit Error Rate (BER) analyses show that the proposed algorithm reduces the complexity by 50% and slightly improves the BER performance.
Performance Analysis of Selection Amplify-and-Forward Cooperative Communication in Nakagami Fading Channels
Fang Zhao-xi, Shan Hang-guan, Wang Zong-xin
2009, 31(2): 414-417. doi: 10.3724/SP.J.1146.2007.01549
Abstract:
The behavior at the origin of the probability density function of the harmonic mean of two independently distributed non-negative random variables is analyzed. This result is then applied to study the performance of the selection amplify-and-forward cooperation protocol in Nakagami fading channels, and a closed-form expression for the Symbol Error Rate (SER) in the high-SNR region is provided. Both analytical and numerical results show that the selection amplify-and-forward cooperation protocol maintains the same diversity order as the conventional amplify-and-forward protocol while achieving better SER performance.
EBPSK Demodulation Analysis Based on Second-order Phase Locked Loop
Qi Chen-hao, Chen Guo-qiang, Wu Le-nan
2009, 31(2): 418-421. doi: 10.3724/SP.J.1146.2007.01325
Abstract:
High-efficiency modulation and demodulation techniques are significant for data transmission. Based on the present EBPSK transmission system, a demodulation method that employs a second-order PLL and its PD output structure is discussed in this paper. First, a linear PLL model is established to comparatively analyze the phase-step error response and the phase rectangular-pulse error response under different damping ratios, and the ideal PD waveform expression is derived. Then, under narrowband Gaussian noise, an optimal integration length for EBPSK demodulation is explored, with corresponding simulation results. Finally, simulations demonstrate that, as long as the PLL can recover the tracking state from the capture state at a given SNR, effective EBPSK demodulation is achievable.
Clustering Based Blind Despread Method of Tamed Direct Sequence Spread Spectrum Signals
Wang Hang, Guo Jing-bo, Wang Zan-ji
2009, 31(2): 422-425. doi: 10.3724/SP.J.1146.2007.01251
Abstract:
Blind despreading of multi-sequence Direct Sequence Spread Spectrum signals (tamed DSSS signals) with unknown spreading codes is discussed in this paper. The Dominant Mode DeSpreading (DMDS) algorithm has been shown to be a successful solution for blind estimation of conventional DSSS signals, but it proves not applicable to tamed DSSS signals. Borrowing ideas from unsupervised cluster analysis, a novel method named the K-means Clustering DeSpreading (KCDS) algorithm is proposed for tamed DSSS signals. The KCDS algorithm divides the tamed DSSS signal into non-overlapping segments and then exploits the clustering property of these segments to estimate the spreading codes. The delay time and the number of spreading codes can be estimated by maximizing the average silhouette width. Simulation results for a 32-ary DSSS signal in zero-mean noise demonstrate its effectiveness.
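The silhouette-based selection of the number of spreading codes described above can be sketched with scikit-learn; the segment matrix layout and the search range are assumptions of this illustration.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_code_count(segments, k_range=range(2, 33)):
    """segments: array of shape (n_segments, chips_per_segment), one row per
    non-overlapping signal segment. Returns the cluster count (candidate
    number of spreading codes) that maximizes the average silhouette width."""
    best_k, best_s = None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(segments)
        s = silhouette_score(segments, labels)
        if s > best_s:
            best_k, best_s = k, s
    return best_k
```

The cluster centers found at the selected k then serve as estimates of the spreading code waveforms.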
Performance Analysis of Limited Feedback Multiuser MIMO Transmission with Channel Estimation Error
Zeng Er-lin, Zhu Shi-hua, Liao Xue-wen
2009, 31(2): 426-429. doi: 10.3724/SP.J.1146.2007.01588
Abstract:
In this paper, the impact of channel estimation error on the performance of multiuser Multiple-Input Multiple-Output (MIMO) transmission based on limited feedback is analyzed. For zero-forcing beamforming without multiuser scheduling, a sum capacity lower bound is derived based on the quantization cell approximation, which shows that, in the presence of channel estimation error, the worst-case sum capacity converges to a finite ceiling at asymptotically high SNR regardless of how fast the codebook size B increases. It is also shown that the larger the variance of the channel estimation error, the earlier the sum capacity begins to converge with respect to B. The case with multiuser selection diversity is also investigated, and it is shown that the sum capacity is bounded when the number of active users approaches infinity. These results contrast with conclusions in the recent literature and are verified by simulations.
QPID-AVQ: A Novel PID-Controlled Adaptive Virtual Queue Algorithm Based on Queue Length
Kang Qiao-yan, Meng Xiang-ru, Wang Jian-feng, Ma Hai-yuan
2009, 31(2): 430-434. doi: 10.3724/SP.J.1146.2007.01340
Abstract:
To address the problems of the AVQ algorithm and to further improve system stability and disturbance rejection, a novel adaptive virtual queue algorithm, termed PID-AVQ, is proposed by adding an integral control term to the PD-AVQ algorithm. Furthermore, considering both queue length and packet arrival rate, a novel PID-controlled adaptive virtual queue algorithm based on queue length, termed QPID-AVQ, is proposed. QPID-AVQ sets its parameters according to the actual network status, which keeps the queue length close to the expected value, and it adopts the PID-AVQ algorithm to update the virtual capacity. Simulation results show that QPID-AVQ adapts well to changes in network conditions and keeps the queue length close to the expected value regardless of the number of FTP connections. Compared with the PD-AVQ and RED algorithms, QPID-AVQ has better stability, better disturbance rejection, and higher link utilization.
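The PID control law at the core of PID-AVQ and QPID-AVQ has the usual discrete form sketched below; the gains, set-point, and the mapping of the output onto the virtual capacity are illustrative assumptions, not the paper's tuning.

```python
class PID:
    """Discrete PID controller driving a control signal toward a set-point."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the QPID-AVQ setting the measurement would be the instantaneous queue length and the controller output would adjust the virtual capacity at each update interval.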
Global Avalanche Characteristics and Nonlinearity of Boolean Function with the Hamming Weight k
Zhou Yu, Wang Wei-qiong, Xiao Guo-zhen
2009, 31(2): 435-438. doi: 10.3724/SP.J.1146.2007.01276
Abstract:
Some properties of the autocorrelation and cross-correlation coefficients are given. The constrained relationship among n (the number of variables), wt(f) (the Hamming weight of the Boolean function f(x)), and t (the order of the propagation criterion) is derived, and a lower bound on the sum-of-squares indicator of any Boolean function with Hamming weight k is then obtained. Finally, these results generalize an upper bound on the nonlinearity of a Boolean function that depends only on its Hamming weight, improving on known results.
Design and Analysis of Security Protocols for RFID Based on LPN Problem
Tang Jing, Ji Dong-yao
2009, 31(2): 439-443. doi: 10.3724/SP.J.1146.2007.01240
Abstract:
The existing security protocols for RFID based on the LPN problem are systematically analyzed and their security vulnerabilities are summarized. To overcome these security flaws, a new RFID security protocol named HB# is designed as an improved version of the HB+ protocol. Finally, the HB# protocol is proved secure in the random oracle model.
A Correctness Proof of WAPI Key Management Protocol Based on PCL
Tie Man-xia, Li Jian-dong, Wang Yu-min
2009, 31(2): 444-447. doi: 10.3724/SP.J.1146.2007.01356
Abstract:
Based on PCL, a formal correctness proof of the WAPI key management protocol is presented. First, the unicast key negotiation and multicast key announcement sub-protocols are analyzed, and separate proofs of the specific security properties of SSA and KS are detailed for an unbounded number of participants and sessions. Second, according to the sequential rule and the staged composition theorem, no principal executes both the ASUE and AE roles, and the precondition of each sub-protocol is preserved by the one that follows it in the chain; therefore, the WAPI key management protocol possesses the required security properties and achieves its predefined goals.
A Forward-Secure Ring Signature Scheme Based on Bilinear Pairing in Standard Model
Wang Ling-ling, Zhang Guo-yin, Ma Chun-guang
2009, 31(2): 448-452. doi: 10.3724/SP.J.1146.2007.01264
Abstract:
Since existing ring signatures suffer from the key exposure problem, a new forward-secure ring signature scheme based on bilinear pairings is proposed. Forward security means that even if the secret key of the current time period is compromised, some security remains: it is impossible to forge signatures relating to past periods. The secret key evolves from period to period while the public key stays fixed over the lifetime of the scheme. The scheme is proven secure against adaptive chosen message attacks in the standard model.
Reconfigurable Clustered Architecture of Block Cipher Processor
Meng Tao, Dai Zi-bin
2009, 31(2): 453-456. doi: 10.3724/SP.J.1146.2007.01586
Abstract:
This paper presents a reconfigurable clustered architecture for a block cipher processor. Directed by instructions, the data path of the architecture can be dynamically configured into three modes: 4-cluster, 2-cluster, and single-cluster. Different operations can be performed in different modes, which improves the flexibility of the processor. Based on the clustered architecture, an explicit-decomposition low-power design method is presented, which reduces power consumption by 36.1%. With a 5-stage pipeline and wave pipelining, the processor works at a high rate, and the throughputs for AES, DES, and IDEA reach 689.6 Mbit/s, 400 Mbit/s, and 416.7 Mbit/s respectively.
A Multi-radio Based QoS Guarantee Mechanism for Wireless Ad hoc Networks
Cao Zhi-yan, Ji Zhen-zhou, Hu Ming-zeng
2009, 31(2): 457-461. doi: 10.3724/SP.J.1146.2007.01384
Abstract:
To improve the capacity of wireless Ad hoc networks and satisfy the QoS requirements of multimedia sessions, the multi-radio technique is introduced and the stateless QoS model for single-interface single-channel networks (SWAN) is extended to a stateless QoS model for multi-interface multi-channel networks (MMSWAN). At the same time, a multi-interface multi-channel QoS routing protocol (MMQAODV) is proposed and combined with MMSWAN to implement a cross-layer QoS guarantee mechanism. Simulation shows that the mechanism improves the QoS of multimedia sessions and the performance of best-effort sessions. In comparison with SWAN, end-to-end delay is reduced to 2%~27% of its value and the delivered best-effort data increase to 1.29~3.55 times.
An Adaptive Active Queue Management Algorithm: Self-Adaptive BLUE
Liu Wei-yan, Sun Yan-fei, Zhang Shun-yi, Liu Bin
2009, 31(2): 462-466. doi: 10.3724/SP.J.1146.2007.01263
Abstract:
Compared with RED (Random Early Detection), BLUE, a classical Active Queue Management (AQM) algorithm, has many advantages; it uses packet loss and link idle events to manage congestion. However, there are still some deficiencies in BLUE's parameter setting; in particular, dramatic changes in the number of TCP connections lead to queue overflow and underflow. Based on a study of BLUE, a novel self-adaptive BLUE is proposed. NS simulation results show that the algorithm can effectively stabilize queue occupancy independent of the number of active TCP connections, mitigate queue overflow and underflow, improve link utilization, and decrease the packet loss rate.
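For context, the original BLUE update that the self-adaptive variant builds on is sketched below; the step sizes and freeze time are the usual illustrative defaults, and the paper's on-line adaptation of these parameters is not reproduced.

```python
class Blue:
    """Marking-probability update of the original BLUE AQM algorithm."""
    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
        self.pm = 0.0                       # packet marking/dropping probability
        self.d1, self.d2 = d1, d2
        self.freeze_time = freeze_time
        self.last_update = -freeze_time

    def on_packet_loss(self, now):          # queue overflow event
        if now - self.last_update > self.freeze_time:
            self.pm = min(1.0, self.pm + self.d1)
            self.last_update = now

    def on_link_idle(self, now):            # link idle event
        if now - self.last_update > self.freeze_time:
            self.pm = max(0.0, self.pm - self.d2)
            self.last_update = now
```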
Fuzzy Flow Awareness Based Dynamical Priority Fair Scheduling Algorithm
Qiu Gon-gan, Zhang Shun-yi, Hu Jun
2009, 31(2): 467-471. doi: 10.3724/SP.J.1146.2007.00891
Abstract:
A flow-awareness based priority fairness scheduling scheme dynamically applies distinct forwarding policies to different traffic flows to adapt to network changes and enhance scheduling fairness. Fuzzy flow awareness with load state information can identify different services consistently along the path. The dynamic priority fairness scheduling algorithm based on fuzzy flow awareness adjusts the forwarding priority between streaming flows and elastic flows dynamically by changing the threshold of the priority queue. The algorithm emphasizes scheduling fairness under light load and the priority of real-time applications under heavy load, owing to their delay requirements. The fairness analysis and simulation results show that the proposed algorithm greatly enhances the admission probability of elastic flows by reasonably increasing the priority queue length, while also achieving high average link throughput and resource utilization.
General Template-Operation Based Mobility Model Discovery Mechanism in DTN
Zhou Xiao-bo, Zhang Xing, Peng Min, Lu Han-cheng, Hong Pei-lin
2009, 31(2): 472-475. doi: 10.3724/SP.J.1146.2007.01454
Abstract:
DTN (Delay-Tolerant Network) describes situations in which long-lasting partitions often occur. DTN does not assume the existence of an end-to-end path, so it focuses on the successful delivery ratio of packets. Node mobility models, such as group mobility, are an attractive field in DTN research. This paper concentrates on macro-mobility and presents a framework to detect and use mobility models: TOM2D (Template-Operation based Mobility Model Discovery). The main idea is that each node maintains a 3D matrix containing link capacities gathered through routing information exchange, detects mobility models from the matrix by template operations, and finally stores them in a universal data structure. This paper gives an example of using TOM2D with OLSR and DSDV, and simulation results show that TOM2D performs well.
A Multipath Routing Algorithm for Ad hoc Networks Based on Channel Resistance
Liu Yong-guang, Ye Wu, Feng Sui-li
2009, 31(2): 476-479. doi: 10.3724/SP.J.1146.2007.01584
Abstract:
A multipath dynamic source routing algorithm based on channel resistance is presented. In the algorithm, the concept of channel resistance is defined and used to distribute data flow among different paths. Because the link quality parameters are comprehensively considered in calculating channel resistance, the algorithm can reasonably distribute data flow to different paths according to their transmission ability. Simulations in the NS2 environment show that the new algorithm performs better in balancing the network load and improving network throughput.
Arbitrary Dual-band Microstrip Branch-line Coupler Using Composite Right/Left-handed Transmission Lines
Liu Xiao, Li Chao, Li Fang
2009, 31(2): 480-483. doi: 10.3724/SP.J.1146.2007.01351
Abstract:
The dual-band technique based on Composite Right/Left Handed (CRLH) Transmission Lines (TLs) is improved in this paper. By properly choosing the parameters of the CRLH-TLs, the structure yields the required electrical lengths at the two operating frequencies. Employing this structure, an arbitrary dual-band microstrip Branch-Line Coupler (BLC) operating at 0.9 GHz and 1.8 GHz is achieved. Simulated and measured results demonstrate that, with the other parameters unchanged, this BLC is significantly more compact than the previously proposed CRLH approach and is as small as a conventional BLC based on quarter-wavelength TLs with a single operating frequency of 0.9 GHz. The proposed method mitigates the conflict between dual-band operation and the need for compact size, which is of great use in practical applications.
An Architecture and VLSI Implementation for Adaptive Reed-Solomon Decoder
Qiu Xin, Zhang Hao, Qi Zhong-rui, Liu Yi, Chen Jie
2009, 31(2): 484-488. doi: 10.3724/SP.J.1146.2007.01279
Abstract:
This paper proposes the architecture of an adaptive Reed-Solomon (RS) decoder. The adaptive decoder can decode shortened RS codes with variable block length as well as variable message length. The proposed RS decoder is independent of the interval between codeword blocks, so it can work not only in burst mode but also in continuous mode. Furthermore, the decoder preserves the interval information between codeword blocks, satisfying the requirements of special services in wireless communication and Ethernet. A VLSI implementation of an RS (255,239) decoder based on the adaptive architecture is also presented; it has been designed and implemented with TSMC 0.18 μm CMOS technology. Testing results validate the function of the adaptive RS decoder, and the port rate reaches 1.6 Gb/s.
A SAR Interferogram Noise Reduction Algorithm Based on the SNR Threshold and Wavelet Transform
Li Chen, Zhu Dai-yin
2009, 31(2): 497-500. doi: 10.3724/SP.J.1146.2007.01415
Abstract:
This paper first introduces a DWT-based noise reduction algorithm, whose phase noise model and processing flow are discussed in detail. Using the Stationary Wavelet Transform (SWT), an improvement of this algorithm is developed, and both algorithms are simulated with raw data. Based on the analysis of the simulation results, a new scheme is presented that performs well both in reducing phase noise and in maintaining the continuity of the interferometric fringes, especially in highly noisy regions. In addition, a SAR interferogram noise reduction algorithm based on an SNR threshold is proposed with a detailed flow graph analysis. Raw-data simulation results show that the new algorithm is feasible and effective.
Using Multi-central Frequency Distributed Small Satellite SAR to Achieve Wide Swath and Two Dimensional High Resolution
Xia Yu-li, Lei Hong, Huang Yao
2009, 31(2): 501-504. doi: 10.3724/SP.J.1146.2007.01327
Abstract:
Distributed small-satellite SAR can overcome the traditional contradiction between swath width and azimuth resolution. In this paper, an algorithm is proposed to achieve both two-dimensional high resolution and a wide swath. Every small satellite works in the active state, transmitting and receiving signals in different frequency bands simultaneously. First, the azimuth ambiguity is eliminated in each frequency sub-band; the signal is then shifted to zero Doppler frequency, after which range spectral synthesis is performed to achieve high range resolution. In addition, this active working mode increases the system's average transmit power and improves the SNR.
High-Performance Architecture of Deblocking Filter for AVS Video Coding
Fang Jian, Ling Bo, Wang Kuang
2009, 31(2): 505-508. doi: 10.3724/SP.J.1146.2007.01437
Abstract:
In AVS video decoders, the deblocking filter is one of the bottlenecks for real-time processing. An implementation architecture for the deblocking filter is proposed in this paper. With a novel filtering order, the unfiltered data storage is reduced to a 16×8 block instead of the whole 16×16 macroblock. With a data reuse strategy, the intermediate data storage is also reduced efficiently. Experiments show that the proposed design achieves 50 MHz with a gate count of only 9.2k using 0.18 μm CMOS technology. Clocked at 50 MHz, the proposed design can support real-time deblocking for HD1080 (1920×1088@30Hz) video applications.
Discussions
A New Secure Universal Designated Verifier Signature Proof System
Chen Guo-min, Chen Xiao-feng
2009, 31(2): 489-492. doi: 10.3724/SP.J.1146.2007.01585
Abstract:
The notion of Universal Designated Verifier Signature (UDVS) allows any holder of a signature to convince any designated verifier that the signer indeed generated the signature, without revealing the signature itself, while the verifier cannot transfer the proof to convince anyone else of this fact. Such signature schemes protect the privacy of signature holders and have applications in certification systems. However, they require the designated verifier to create a public key using the signer's public key parameters and have it certified to ensure that the resulting public key is compatible with the setting the signer provided, which is unrealistic in some situations. To solve this problem, Baek et al. introduced the concept of Universal Designated Verifier Signature Proof (UDVSP) at Asiacrypt 2005. In this paper, it is first shown that there exists a security flaw in this UDVSP: it does not satisfy non-transferability. A new secure UDVSP system is then proposed and proved to achieve the desired security notions.
Cryptanalysis of Three Blind Proxy Multi-signature Schemes
Wang Tian-yin, Liu Mai-xue, Wen Qiao-yan
2009, 31(2): 493-496. doi: 10.3724/SP.J.1146.2007.01338
Abstract:
Through the cryptanalysis of three blind proxy multi-signature schemes, it shows that in Li Yuan et al.s scheme, any original signer can sign a valid blind proxy multi-signature by the means of forging proxy key, and in Kang Li et al.s first type blind proxy multi-signature scheme, attacker not only can forge any proxy signers proxy sub-key, but also can forge blind proxy multi-signatures on any message, and in Kang Li et al.s second type blind proxy multi-signature scheme, attacker can sign a valid blind proxy multi-signature by the means of forging proxy key, therefore the three schemes are not secure.