2014 Vol. 36, No. 4
2014, 36(4): 763-768.
doi: 10.3724/SP.J.1146.2013.00971
Abstract:
Cognitive radio can make full use of idle spectrum for data transfer and therefore improve spectrum utilization. Sparse channel estimation exploits the sparsity of wireless channels, which reduces the pilot overhead and further improves spectrum efficiency. This paper investigates sparse channel estimation in cognitive radio systems together with the associated pilot optimization, formulating channel estimation as a sparse recovery problem. With the objective of minimizing the cross-correlation of the measurement matrix, a fast pilot optimization algorithm is then proposed. By flexibly setting the numbers of outer and inner loop iterations, each entry of the pilot pattern can be sequentially updated and optimized. Simulation results show that, compared with Least Squares (LS) channel estimation, sparse channel estimation reduces the pilot overhead by 72.4% and improves spectrum efficiency by 8.2%. Moreover, the proposed pilot optimization algorithm outperforms the existing random search algorithm, saving 5 dB of Signal-to-Noise Ratio (SNR) at the same Bit Error Rate (BER) of 0.012.
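As a rough illustration of the coherence-minimization idea (not the paper's algorithm), the sketch below greedily updates each pilot position over nested outer and inner loops to reduce the mutual coherence (maximum cross-correlation) of a partial-DFT measurement matrix; the dimensions, loop counts, and DFT channel model are assumptions made for the example.

```python
import numpy as np

def coherence(A):
    """Maximum cross-correlation between distinct normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

def optimize_pilots(N=64, P=8, L=16, outer=5, seed=0):
    """Sequentially re-place each pilot subcarrier to minimize the coherence
    of the measurement matrix (pilot rows of an N-point DFT, first L taps)."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft(np.eye(N))[:, :L]          # N x L DFT submatrix
    pilots = list(rng.choice(N, P, replace=False))
    for _ in range(outer):                    # outer loop over the full pattern
        for i in range(P):                    # inner loop over each pilot entry
            best, best_mu = pilots[i], coherence(F[pilots, :])
            for cand in range(N):             # try every unused subcarrier
                if cand in pilots:
                    continue
                trial = pilots.copy()
                trial[i] = cand
                mu = coherence(F[trial, :])
                if mu < best_mu:
                    best, best_mu = cand, mu
            pilots[i] = best
    return sorted(pilots), coherence(F[sorted(pilots), :])
```

The returned coherence is the quantity a sparse-recovery guarantee would bound; a lower value generally permits recovering more channel taps from the same number of pilots.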
2014, 36(4): 769-774.
doi: 10.3724/SP.J.1146.2013.01091
Abstract:
In a practical cognitive radio system, the requirement for real-time spectrum sensing without a priori information on the primary user, under fading channels and dynamically varying noise levels, poses a major challenge to classical spectrum sensing algorithms. In this paper, a novel spectrum sensing algorithm based on Power spectral density Segment Cancellation (PSC) is proposed. It makes use of the asymptotic normality and independence of the Fourier transform to obtain the stochastic properties of the Power Spectral Density (PSD). The proposed algorithm takes the ratio of a subset of PSD lines to all of them as the detection statistic. The mathematical expressions for the probabilities of false alarm and correct detection under different channel models are derived. In accordance with the Neyman-Pearson criterion, a closed-form expression for the decision threshold is calculated. Theoretical analysis and simulation results show that the PSC algorithm is robust to noise uncertainty, and its sensing performance does not vary with the ambient noise level of secondary users when the Signal-to-Noise Ratio (SNR) is fixed. Meanwhile, the PSC algorithm offers a high probability of detection at a low probability of false alarm over a wide range of SNR in white Gaussian noise and flat slow-fading channels. The PSC spectrum sensing algorithm has low computational complexity and can be completed within microseconds.
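The core of the PSC-style statistic, the ratio of the PSD on a set of candidate lines to the total PSD, can be sketched in a few lines. The tone frequency, band, and signal model below are illustrative choices for the example, not values from the paper.

```python
import numpy as np

def psc_statistic(x, band):
    """Ratio of PSD energy on candidate lines to total PSD energy.
    Being a ratio, the statistic is invariant to the absolute noise level."""
    psd = np.abs(np.fft.fft(x)) ** 2 / len(x)
    return psd[band].sum() / psd.sum()

rng = np.random.default_rng(1)
n = 1024
noise = rng.standard_normal(n)
tone = 2.0 * np.cos(2 * np.pi * 100 * np.arange(n) / n)  # primary-user stand-in
band = np.arange(95, 106)                  # PSD lines covering the tone

t_h0 = psc_statistic(noise, band)          # noise only: ratio near len(band)/n
t_h1 = psc_statistic(noise + tone, band)   # signal present: ratio much larger
```

Because both numerator and denominator scale with the noise power, the statistic does not drift with the ambient noise level, which is the robustness property the abstract emphasizes.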
2014, 36(4): 775-779.
doi: 10.3724/SP.J.1146.2013.00871
Abstract:
A joint random sensing policy that uses the inactive secondary users in a cognitive network is proposed, which differs from previous cognitive radio sensing research. The policy addresses the uncertainty of primary user states and the inability of the random sensing policy to find vacant channels rapidly. With inactive secondary users performing detection, the sensing performance of the cognitive network can be derived by modeling the list of available channels stored at the cognitive base station as a Markov model. Both theoretical analysis and simulation results show that sensing with inactive cognitive users can reduce the service overhead and improve the cognitive network throughput. Taking the report overhead of secondary users into account, the optimal number of inactive users that yields the highest network throughput is obtained through an optimization algorithm.
2014, 36(4): 780-786.
doi: 10.3724/SP.J.1146.2013.00778
Abstract:
To solve the problem that wireless physical-layer secrecy coding cannot achieve strong security when the legitimate channel is noisy, a strong security coding method based on coset partitioning is proposed. First, it is proved that the method maintains strong security if and only if the minimum Hamming distance of the dual code of the coset mother code is larger than the number of leaked bits. It is also proved that several properties of the coset partition reduce the complexity of computing the Hamming distance between cosets to a single table lookup, and a depth-first tree search algorithm is proposed to obtain the maximum available coset set. Finally, the resistance to information leakage over the eavesdropper channel, the noise resistance over the legitimate channel, and the corresponding maximum available coset sets of typical linear block codes are presented. Compared with the traditional method, the proposed method reduces the required legitimate channel quality by 5 dB while maintaining strong security when the mother code is the dual code of BCH(15,11).
2014, 36(4): 787-791.
doi: 10.3724/SP.J.1146.2013.00872
Abstract:
Suboptimal and optimal adaptive transmit power allocation algorithms are proposed to minimize the Bit Error Rate (BER) of a Turbo-BLAST system with channel feedback delay. The conditional probability density function of the instantaneous Signal-to-Noise Ratio (SNR) is derived through system modeling and performance analysis, and the system BER is then computed via mathematical transformation. At the transmitter, under the total transmit power constraint, the suboptimal and optimal transmit power matrices are calculated using the Lagrange multiplier method and the Newton iteration technique, respectively. At the receiver, an iterative soft interference cancellation algorithm based on the Zero Forcing (ZF) rule is used to detect the received symbols. Simulation results show that the proposed algorithms improve the system BER performance. The optimal power allocation algorithm improves the BER further at the cost of increased computational complexity, and the system performance can be further improved by iterative detection.
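The paper derives BER-optimal allocations for its specific model; as a hedged stand-in, the sketch below shows the generic Lagrange-multiplier (water-filling) pattern for splitting a total power budget across sub-streams, which illustrates the constrained-optimization step without claiming to reproduce the paper's objective.

```python
import numpy as np

def water_filling(gains, p_total):
    """Lagrangian water-filling: maximize sum log(1 + g_i * p_i)
    subject to sum p_i = p_total, p_i >= 0. The multiplier (water
    level mu) is found by iteratively dropping sub-streams whose
    allocation would go negative."""
    g = np.asarray(gains, dtype=float)
    active = np.ones_like(g, dtype=bool)
    while True:
        # water level from the power constraint over the active set
        mu = (p_total + (1.0 / g[active]).sum()) / active.sum()
        p = np.where(active, mu - 1.0 / g, 0.0)
        if (p[active] >= 0).all():
            return np.clip(p, 0.0, None)
        active &= p > 0          # drop sub-streams below the water level
```

Stronger sub-streams receive more power, and very weak ones are switched off entirely, the same qualitative behavior an adaptive allocation exhibits under a total-power constraint.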
2014, 36(4): 792-796.
doi: 10.3724/SP.J.1146.2013.00905
Abstract:
A direct blind recovery method based on the SemiDefinite Relaxation (SDR) technique is proposed to detect Short Burst Data (SBD) signals in cooperative relay networks. Without channel state information or signal statistics, SBD sequences can be recovered directly by the proposed method. First, the mathematical model of blind SBD signal detection in a cooperative communication system with one relay is presented. Then, the cost function of least-squares estimation for high-order Quadrature Amplitude Modulation (QAM) signals is formulated. Meanwhile, the SDR technique is introduced to solve the optimization problem, which efficiently yields a low-rank approximation of the optimal solution. Finally, simulation results verify that the proposed method achieves good performance and a fast convergence rate, suggesting it as a reference for other SBD signal detection scenarios.
2014, 36(4): 797-803.
doi: 10.3724/SP.J.1146.2013.01008
Abstract:
Opportunistic Distributed Space-Time Coding (O-DSTC) is a new type of cooperative communication. Under a Symbol Error Rate (SER) guarantee, a residual-energy-balanced cooperative partner selection algorithm for O-DSTC is presented based on benefit and regret functions. When distance is considered and only the channel statistics are available, the average SER of O-DSTC cannot be computed exactly, so an approximate SER formula is presented. Using this formula, appropriate nodes are selected as cooperative partner candidates; each candidate node sets its delay timer in a distributed manner and contends to become the cooperative partner. Simulation results demonstrate the reliability and effectiveness of the approximate formula, and show that the partner selection algorithm can guarantee the SER, increase the minimum residual energy, and reduce the contention access time.
2014, 36(4): 804-809.
doi: 10.3724/SP.J.1146.2013.00774
Abstract:
This paper presents an approach for analyzing coverage time and handoff number in mobile LEO satellite communication systems that fully reflects the random distribution of users. Based on the distribution of user locations, statistical models of satellite and beam coverage time are proposed, and lower bounds on the expected numbers of inter-beam and inter-satellite handoffs are derived. Simulation results based on the Iridium system model, including the constellation and earth station parameters and the multi-beam array antenna model, demonstrate the effectiveness of the proposed algorithms.
2014, 36(4): 810-816.
doi: 10.3724/SP.J.1146.2013.00845
Abstract:
The national TV industry standard AVS+, which targets 3D and high-definition applications, achieves a significant improvement in coding efficiency, but at the cost of higher complexity. Parallel coding is an efficient way to handle highly complex encoders. A new parallel video coding framework is proposed that restructures the traditional video coding feedback loop to enable efficient parallel coding. The framework is applied to an AVS+ real-time encoder, and a parallel video coding algorithm based on AVS+ real-time coding is implemented. Experimental results demonstrate that the algorithm makes full use of the computing capability of multi-core processors and significantly improves the video coding rate.
2014, 36(4): 817-822.
doi: 10.3724/SP.J.1146.2013.00858
Abstract:
A novel Multiple Description video coding scheme based on Predictive Error Resilience (MD-PER) is proposed. At the encoder, the possible error caused by single-channel reconstruction is first predicted, and the necessary redundancy information is then inserted into each description. For compression efficiency, different coding modes are designed to compress the generated redundant information. At the decoder, the redundancy information is used to achieve high-quality recovery of lost video frames. Experimental results demonstrate that, compared with the traditional temporal sampling method, the proposed scheme achieves better rate-distortion performance.
2014, 36(4): 823-827.
doi: 10.3724/SP.J.1146.2013.00759
Abstract:
JPEG-LS is well suited to lossless/near-lossless image compression because of its simple algorithm and high performance. However, JPEG-LS has difficulty controlling the bit rate precisely, which makes it hard to apply in bandwidth-constrained scenarios such as satellite image coding. To solve this issue, the mathematical relation between the bit rate and the quantization parameter is obtained by analyzing the coding characteristics of JPEG-LS, from which a novel dynamic bit-rate control algorithm based on a look-up table is proposed. Experimental results show that the proposed algorithm is superior to the current JPEG-LS rate control scheme in both bit-rate accuracy and bit-rate convergence speed.
2014, 36(4): 828-833.
doi: 10.3724/SP.J.1146.2013.00870
Abstract:
The pseudorandomness of the Lai-Massey scheme is studied in this paper. First, it is proved that if the bijection is an affine almost orthomorphism, the 3-round Lai-Massey scheme cannot achieve pseudorandomness, which provides a counterexample to a result of the Lai-Massey scheme's designers. Then, it is proved that at least 3 rounds are needed for pseudorandomness when the bijection is an arbitrary orthomorphism, and at least 4 rounds are needed for super pseudorandomness when the bijection is an orthomorphism. These results suggest that, to construct a Lai-Massey scheme with better pseudorandomness, the bijection should be designed as a nonlinear orthomorphism or almost orthomorphism.
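A minimal sketch of one Lai-Massey round may help fix ideas. An orthomorphism is a bijection sigma such that x maps to sigma(x) xor x is also a bijection; the sketch uses the classic orthomorphism on 16-bit words split into bytes, sigma(a, b) = (b, a xor b). The round function F below is a toy stand-in, not any real cipher's.

```python
MASK16 = 0xFFFF

def sigma(x):
    """Classic orthomorphism on 16-bit words viewed as byte pairs:
    (a, b) -> (b, a ^ b). Both sigma and x -> sigma(x) ^ x are bijections."""
    a, b = x >> 8, x & 0xFF
    return (b << 8) | (a ^ b)

def F(x, k):
    """Toy keyed round function (hypothetical stand-in)."""
    return ((x * 0x9E37 + k) ^ (x >> 3)) & MASK16

def lai_massey_round(L, R, k):
    """One Lai-Massey round over (Z_2^16, xor): both halves absorb
    F(L ^ R), and sigma on the left half breaks the symmetry."""
    t = F((L ^ R) & MASK16, k)
    return sigma(L ^ t), (R ^ t) & MASK16
```

Without sigma, L xor R would be invariant across rounds (both halves are xored with the same t), which is exactly why the orthomorphism is structurally necessary.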
2014, 36(4): 834-839.
doi: 10.3724/SP.J.1146.2013.00700
Abstract:
Designing the measurement matrix is one of the key issues in applying Compressed Sensing (CS) to practical problems. In this paper, a class of probabilistic sparse random matrices is designed for compressive data gathering in Wireless Sensor Networks (WSNs). Besides cutting the number of nodes that compute projections, the probabilistic sparse random matrices also concentrate the locations of those nodes, which leads to a further reduction in communication overhead. An optimization method for probabilistic sparse random matrices is also proposed to improve the accuracy of network data reconstruction. Compared with existing data gathering methods using sparse random matrices and sparse Toeplitz matrices, the proposed method significantly reduces both the energy consumption and the reconstruction error.
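A probabilistic sparse random measurement matrix of the general kind described (each entry nonzero only with some probability, so only a few nodes contribute to each projection) might be generated as below; the density p and the +/-1 nonzero values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sparse_random_matrix(m, n, p=0.1, seed=0):
    """m x n measurement matrix whose entries are nonzero with
    probability p. Nonzeros are +/- 1/sqrt(m*p) so each column has
    unit expected squared norm. Row sparsity means only about p*n
    sensor nodes participate in each of the m projections."""
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < p
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return mask * signs / np.sqrt(m * p)
```

In a data-gathering round, row i of this matrix tells which nodes must add their (weighted) readings into projection i, so lowering p directly lowers the transmission count.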
2014, 36(4): 840-846.
doi: 10.3724/SP.J.1146.2013.00960
Abstract:
Based on the low-rank characteristic of the Internet latency matrix, the completion problem for an incomplete latency matrix in a fully decentralized environment is studied by setting an a priori estimate of the l0 norm of this matrix. First, the problem is decomposed into a pair of convex optimization problems, so it can be solved by the alternating direction method. Then, to achieve low computation cost along with good generalization, an Adaptive Distributed Matrix Completion (ADMC) algorithm is proposed. ADMC doubles the upper bound of the step-size search region and introduces several kinds of loss functions as latency estimation error measures. Experiments show that, without losing accuracy, ADMC significantly reduces the computation cost with no additional measurement or communication cost, and the various loss functions also improve the robustness of the algorithm.
2014, 36(4): 847-854.
doi: 10.3724/SP.J.1146.2013.00866
Abstract:
Multivariate Time Series (MTS) are used in very broad areas such as medicine, finance, and multimedia. A new method for similar pattern matching based on 2D Singular Value Decomposition (2DSVD) is proposed. 2DSVD is an extension of the standard SVD that explicitly captures the 2D nature of MTS. First, the MTS is decomposed by 2DSVD. Second, the eigenvectors of the row-row and column-column covariance matrices of the MTS samples are computed to form feature pattern matrices. Then, the Euclidean distance is adopted to measure the similarity between feature pattern matrices. Finally, through comparison with direct Euclidean distance, principal component analysis, trend distance, and a matching method based on point distribution on 3 different data sets, experimental results show that the method readily characterizes the nature of MTS and processes series data of various scales more efficiently.
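The 2DSVD steps described above (averaged row-row and column-column covariance matrices, their top eigenvectors, and Euclidean distance between the projected feature matrices) can be sketched as follows; the sample shapes and the choice of k are illustrative.

```python
import numpy as np

def twod_svd_features(samples, k=2):
    """2DSVD feature extraction for a list of MTS sample matrices
    (each of shape time x variables): project every sample onto the
    top-k eigenvectors of the averaged row-row covariance F and
    column-column covariance G."""
    F = sum(X @ X.T for X in samples) / len(samples)   # row-row covariance
    G = sum(X.T @ X for X in samples) / len(samples)   # column-column covariance
    _, U = np.linalg.eigh(F)                           # eigh: ascending order
    _, V = np.linalg.eigh(G)
    U = U[:, ::-1][:, :k]                              # top-k eigenvectors
    V = V[:, ::-1][:, :k]
    return [U.T @ X @ V for X in samples]              # k x k feature matrices

def pattern_distance(M1, M2):
    """Euclidean (Frobenius) distance between feature pattern matrices."""
    return np.linalg.norm(M1 - M2)
```

Because each series is reduced to a small k x k matrix, series of very different lengths compare at the same cost, which is the scalability advantage the abstract points to.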
2014, 36(4): 855-861.
doi: 10.3724/SP.J.1146.2013.00799
Abstract:
To address the low detection performance and high computational complexity of existing initial ranging algorithms for Orthogonal Frequency Division Multiple Access (OFDMA) systems, a low-complexity initial ranging algorithm with iterative interference cancellation is presented. Using the Parallel Interference Cancellation (PIC) scheme in an iterative fashion, the proposed algorithm detects the valid paths of active users in parallel at the receiver by the strongest-power criterion, and mitigates the channel estimation interference by employing the parameter estimates of the valid paths. It then rebuilds and cancels the detected multiuser signals in parallel. Simulation results show that, with 8 initial ranging users in one ranging time slot and a Signal-to-Noise Ratio of 9 dB, the computational complexity of the proposed algorithm is approximately 25% of that of the Successive MultiUser Detection (SMUD) with interference cancellation algorithm, while its detection performance is improved by 5% compared with SMUD.
2014, 36(4): 862-867.
doi: 10.3724/SP.J.1146.2013.00921
Abstract:
The time-varying characteristics of radio frequency signals make practical multi-object Device-Free Localization (DFL) difficult. A novel algorithm based on compressive sensing and the fingerprint method is proposed in this paper to localize two objects. It utilizes a link-centric probabilistic coverage model to construct the mapping from the single-object radio map to the two-object radio map, which reduces the offline training labor caused by the increased number of objects. Furthermore, K-means clustering is applied to classify the established two-object radio map. By comparing the online measurement with the center element of every cluster, the possible locations of the two objects are limited to a smaller area, which shortens the computing time. Then, compressive sensing is adopted to transform the localization problem into a sparse signal reconstruction problem. Experiments confirm that the proposed algorithm outperforms the Radio Tomographic Imaging (RTI) based algorithm.
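The cluster-then-match step can be sketched as below: fingerprint vectors are clustered with K-means (here with a simple farthest-point seeding for determinism, an implementation choice not taken from the paper), and an online measurement is matched to the nearest cluster center to restrict the candidate locations.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means on fingerprint vectors (rows of X)."""
    rng = np.random.default_rng(seed)
    C = [X[rng.integers(len(X))]]
    for _ in range(k - 1):           # farthest-point seeding
        d = np.min([np.linalg.norm(X - c, axis=1) for c in C], axis=0)
        C.append(X[d.argmax()])
    C = np.array(C)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None, :], axis=2)
        lab = d.argmin(1)            # assign each fingerprint to a cluster
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C, lab

def candidate_set(meas, C, lab):
    """Indices of fingerprints in the cluster nearest the online measurement;
    only this reduced set needs to enter the sparse reconstruction."""
    j = np.linalg.norm(C - meas, axis=1).argmin()
    return np.flatnonzero(lab == j)
```

Only the fingerprints returned by candidate_set need to be searched (or fed to the sparse recovery), which is where the computing-time saving comes from.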
2014, 36(4): 868-874.
doi: 10.3724/SP.J.1146.2013.00827
Abstract:
To address the difficulty of carrier frequency estimation for time-frequency overlapped signals in Alpha-stable distribution noise, a novel carrier frequency estimation method is proposed. First, the Generalized Fourth-Order Cyclic Cumulant (GFOCC) of the time-frequency overlapped signals is defined. Then, the carrier frequency of each signal component is estimated by detecting the cycle frequency of the GFOCC amplitude spectrum, which corresponds to a discrete spectral line. Finally, theoretical asymptotic analysis demonstrates that the estimator is asymptotically unbiased and consistent. Simulation results show that the proposed method achieves good estimation performance and robustness in Alpha-stable distribution noise environments.
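The GFOCC itself is beyond a short sketch, but its final step, reading a frequency off a discrete spectral line, can be illustrated with a plain DFT peak search (a simplified stand-in, not the paper's estimator):

```python
import cmath
import math

def dft_peak_freq(samples, fs):
    """Return the frequency (Hz) of the largest-magnitude DFT bin in the positive half."""
    n = len(samples)
    best_k, best_mag = 0, -1.0
    for k in range(n // 2):
        # direct DFT of bin k (O(n^2) overall; fine for a sketch)
        x = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(x) > best_mag:
            best_k, best_mag = k, abs(x)
    return best_k * fs / n

# hypothetical tone: 10 Hz sampled at 64 Hz over 64 samples (exactly bin 10)
fs, f0, n = 64.0, 10.0, 64
tone = [math.cos(2 * math.pi * f0 * t / fs) for t in range(n)]
print(dft_peak_freq(tone, fs))  # 10.0
```

In the paper the spectrum being searched is that of the GFOCC, whose cycle-frequency lines survive the impulsive Alpha-stable noise that defeats ordinary second-order statistics.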
2014, 36(4): 875-881.
doi: 10.3724/SP.J.1146.2013.00946
Abstract:
Stereo video technology provides depth perception and an immersive visual experience, but it can also cause visual fatigue and degrade the quality of experience. How to effectively evaluate the visual comfort of stereoscopic images/video therefore remains a research focus. In this paper, an objective visual comfort assessment metric based on visual important regions is proposed. First, the Visual Important Regions (VIR) are obtained from image saliency and disparity map information. Then, disparity amplitude, disparity gradient, and spatial frequency features are extracted and fused into a feature vector. Finally, the objective assessment values are predicted by Support Vector Regression (SVR). Experimental results show that, compared with existing methods, the proposed metric achieves higher consistency with subjective visual comfort assessment of stereoscopic images.
2014, 36(4): 882-887.
doi: 10.3724/SP.J.1146.2013.00846
Abstract:
This paper proposes a robust contrast-based local feature description method, the Independent Elementary Contrast Histogram (IECH) descriptor. First, the contrast values between each pixel in the feature region and randomly sampled pixels are computed. Then, taking the dominant orientation of the feature as the reference in polar coordinates, the local feature region is partitioned into 32 sub-regions, and two-dimensional positive and negative contrast histograms are accumulated separately. Finally, the statistics are normalized to produce a 64-dimensional IECH descriptor vector. Experimental results show that the method achieves matching performance comparable to SIFT while offering faster feature generation and lower feature dimensionality. Compared with the Contrast Context Histogram (CCH) method, which has the same time complexity and feature dimensionality, the proposed descriptor performs significantly better and is more suitable for real-time applications.
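The descriptor construction can be sketched as sign-separated contrast histograms over angular sub-regions; the sampling scheme below (a single reference value instead of random pixel pairs, and no orientation normalization) is a simplification for illustration:

```python
import math

def contrast_histogram(region, ref_val, n_bins=32):
    """Sketch of the IECH idea: accumulate positive and negative contrasts into
    angular sub-region bins, then L2-normalize into a 2*n_bins descriptor.
    `region` is a list of (dx, dy, value) samples around the keypoint."""
    pos = [0.0] * n_bins
    neg = [0.0] * n_bins
    for dx, dy, v in region:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        b = min(int(angle / (2 * math.pi) * n_bins), n_bins - 1)
        c = v - ref_val                    # contrast against the reference pixel
        if c >= 0:
            pos[b] += c                    # positive-contrast histogram
        else:
            neg[b] += -c                   # negative-contrast histogram
    desc = pos + neg                       # 64-dimensional for n_bins = 32
    norm = math.sqrt(sum(x * x for x in desc)) or 1.0
    return [x / norm for x in desc]

# hypothetical samples: (dx, dy, pixel value)
region = [(1, 0, 5.0), (0, 1, -3.0), (-1, 0, 2.0), (0, -1, -1.0)]
d = contrast_histogram(region, 0.0)
print(len(d))  # 64
```

Because only additions and one normalization are involved, the per-keypoint cost stays far below gradient-histogram descriptors such as SIFT.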
2014, 36(4): 888-895.
doi: 10.3724/SP.J.1146.2013.00826
Abstract:
Self-training based discriminative tracking methods use their own classification results to update the classifier. However, these methods easily suffer from drifting because classification errors accumulate during tracking. To overcome this disadvantage, a novel co-training tracking algorithm, termed Co-SemiBoost, is proposed based on online semi-supervised boosting. The proposed algorithm employs a new online co-training framework in which unlabeled samples are used to collaboratively train classifiers built on two feature views. Moreover, the pseudo-labels and weights of unlabeled samples are iteratively predicted by combining the decisions of a prior model and an online classifier. The proposed algorithm effectively improves the discriminative ability of the classifier and is robust to occlusions, illumination changes, etc., so it adapts better to object appearance changes. Experimental results on several challenging video sequences show that the proposed algorithm achieves promising tracking performance.
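The co-training idea, each view pseudo-labeling only the samples it is confident about for the other view, can be sketched as follows (thresholds and scores are illustrative; this is not Co-SemiBoost itself):

```python
def confident_label(score, thresh=0.7):
    """Return a pseudo-label (1/0) if the classifier score is confident, else None."""
    if score >= thresh:
        return 1
    if score <= 1 - thresh:
        return 0
    return None

def co_training_round(scores_view1, scores_view2):
    """One round of co-training: each view pseudo-labels the samples the
    *other* view will train on, keeping only confident decisions."""
    labels_for_view2 = [confident_label(s) for s in scores_view1]
    labels_for_view1 = [confident_label(s) for s in scores_view2]
    return labels_for_view1, labels_for_view2

# hypothetical confidence scores on three unlabeled samples, one list per view
l1, l2 = co_training_round([0.9, 0.5, 0.1], [0.2, 0.8, 0.6])
print(l1, l2)  # [0, 1, None] [1, None, 0]
```

Because each classifier is updated only with labels the other view was confident about, a single view's mistakes are less likely to be fed back into itself, which is the mechanism that mitigates drift.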
2014, 36(4): 896-902.
doi: 10.3724/SP.J.1146.2013.00623
Abstract:
To address the difficulty of ballistic target recognition, a Probabilistic Neural Network (PNN) optimized by Particle Swarm Optimization (PSO) is proposed in this paper, through which the fusion of multispectral infrared data is achieved. Firstly, the temperature and emissivity-area of targets are extracted using a novel multi-colorimetric technique; then, the parameters of the PNN are optimized with Gaussian PSO (GPSO); finally, four typical ballistic targets are classified by the optimized PNN. The method fuses multi-spectral and multiple dynamic features, which makes the algorithm quite robust. In addition, it fully exploits the PNN's high stability and fault-tolerance mechanism. The simulation experiments use multi-spectral infrared radiation intensity sequences of four ballistic targets, and the results show that the proposed PNN-based method is able to recognize the multiple ballistic targets.
2014, 36(4): 903-907.
doi: 10.3724/SP.J.1146.2013.00887
Abstract:
Dual-channel SAR systems and single-look InSAR systems are characterized by strong anti-interference capability. In this paper, a countermeasure against dual-channel cancellation based on two jammers is proposed. The principle of dual-channel cancellation is reviewed, the interference signals of the two jammers are analyzed, and the requirement on the interference phase of the two jammers, together with its estimation method, is studied. Theoretical analysis and simulation experiments verify the effectiveness of the proposed method.
2014, 36(4): 908-914.
doi: 10.3724/SP.J.1146.2013.01095
Abstract:
An improved four-component decomposition method is proposed based on the Polarimetric Interferometric Similarity Parameter (PISP). This method addresses the vegetation component overestimation problem of traditional polarimetric SAR decomposition. The PISP is calculated from three optimized mechanisms obtained from PolInSAR datasets; it is therefore sensitive to the dimensional distribution of terrain targets and is rotation invariant. The proposed method uses the PISP to improve the volume scattering model, with which the volume scattering power of different terrain targets can be calculated adaptively. The effectiveness of the proposed method is demonstrated with the German Aerospace Center's (DLR) E-SAR L-band PolInSAR datasets. The experimental results show that buildings and forest can be well distinguished with the proposed method.
2014, 36(4): 915-922.
doi: 10.3724/SP.J.1146.2013.00859
Abstract:
The Medium-Earth-Orbit SAR (MEOSAR) is a potential next-generation spaceborne SAR, and analysis of ionospheric effects is one of the critical techniques for its development. An analysis model for the background ionospheric effect on MEOSAR is established based on the system characteristics of MEOSAR and the spatio-temporal variability of the ionosphere. The degradation of image quality, including resolution loss and displacement distortion, induced by the background ionosphere and its spatio-temporal variability is analyzed. The results show that the ionosphere and its spatio-temporal variability critically affect the quality of the obtained images. Under identical ionospheric conditions and at the same resolution, the degradation of resolution in both azimuth and range, the distortion of the range image, and the displacement of the azimuth image all become more severe as the SAR orbit height increases.
2014, 36(4): 923-930.
doi: 10.3724/SP.J.1146.2013.00673
Abstract:
The squint Terrain Observation by Progressive Scans (TOPS) SAR imaging mode is confronted with three issues: azimuth spectrum aliasing, severe range-azimuth coupling, and azimuth time output aliasing. To solve these issues, a subaperture imaging algorithm based on the SPECtral ANalysis (SPECAN) technique is proposed. Firstly, the subaperture data are expanded in the azimuth time domain so that a two-dimensional spectrum free from aliasing can be obtained. Then, a modified Range Migration Algorithm (RMA) is used to complete Range Cell Migration Correction (RCMC) and range compression. After that, the full-aperture azimuth spectrum is derived by subaperture recombination. Finally, the signal is focused in the Doppler domain by the SPECAN technique. Simulation results demonstrate the effectiveness of the proposed method.
2014, 36(4): 931-937.
doi: 10.3724/SP.J.1146.2013.00576
Abstract:
Considering the sparsity of the frequency-aspect backscattered data in the attributed scattering center model parameter domain, a novel method based on sparse representation is proposed to extract attributed scattering centers and estimate their parameters in the frequency-aspect domain. Because the model parameter space is high dimensional, a single high dimensional joint dictionary would require massive storage. In this paper, two low dimensional dictionaries, covering the localization parameters and the aspect attribute parameters respectively, are constructed in place of the high dimensional joint dictionary; this decouples the range and aspect characteristics and reduces resource cost. Orthogonal Matching Pursuit (OMP) combined with RELAX is used to solve the minimum l0-norm optimization problem and estimate the localization and aspect attribute parameters simultaneously. With the extracted attributed scattering centers, the geometrical dimensions of the target or its main structure can be estimated. Numerical results on both electromagnetic computation data and measured data verify the validity of the proposed method.
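The greedy sparse-selection step can be illustrated with plain matching pursuit on a toy orthonormal dictionary (a simplified stand-in for the paper's OMP+RELAX procedure; all names and data are illustrative):

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    support, coefs = [], []
    for _ in range(n_iter):
        # inner products of the residual with every dictionary atom
        corr = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        support.append(best)
        coefs.append(corr[best])
        residual = [r - corr[best] * a for r, a in zip(residual, atoms[best])]
    return support, coefs

# toy orthonormal dictionary; signal = 3*atom0 + 2*atom2 (two "scattering centers")
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
signal = (3.0, 0.0, 2.0)
support, coefs = matching_pursuit(signal, atoms)
print(sorted(support), coefs)  # [0, 2] [3.0, 2.0]
```

In the paper, the two dictionaries index location and aspect attributes separately, so the greedy search runs over two small grids instead of one huge product grid.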
2014, 36(4): 938-945.
doi: 10.3724/SP.J.1146.2013.00011
Abstract:
The Fourier Transform Pair (FTP) between the image domain and the corresponding range-compressed phase history domain is critical for autofocus. Unlike in frequency-domain algorithms, this FTP requirement is both complicated and difficult to meet in time-domain algorithms. For swift and effective reconstruction and autofocus processing of imagery created by a time-domain algorithm, the necessary improvement and optimization of Fast Factorized Back-Projection (FFBP) is performed, yielding an Improved FFBP (IFFBP) algorithm. Through a pseudo-polar coordinate system, the IFFBP paves the way for autofocus application. Moreover, in view of practical data processing requirements, the IFFBP uses a moderate-accuracy Inertial Measurement Unit (IMU) for coarse compensation and combines it with Phase Gradient Autofocus (PGA) for fine compensation. Finally, simulation results and collected data sets verify and validate the proposed approach.
2014, 36(4): 946-952.
doi: 10.3724/SP.J.1146.2013.00891
Abstract:
In Through-the-Wall Imaging (TWI), wall reflections are often stronger than the target return and therefore strongly interfere with imaging and detection of the target. Spatial filtering based on single input and single output is a traditional method for wall-clutter mitigation, but it is not applicable to MIMO Through-the-Wall Radar (TWR). In this paper, the spatial signatures of the wall and the target in MIMO TWR measurements are analyzed. Results based on parametric models show that the wall reflections are independent of the antenna array positions and exhibit symmetry properties, whereas the target reflections do not. Exploiting this difference, a new method called symmetry subtraction is introduced to suppress wall reflections. Simulation results indicate that the proposed method can efficiently suppress the wall reflections without affecting the target signal.
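The core idea of symmetry subtraction, cancelling the array-symmetric wall component while preserving the asymmetric target component, can be sketched as follows (the measurement values are hypothetical):

```python
def symmetry_subtraction(measurements):
    """Cancel the component that is symmetric across the array by
    subtracting the spatially mirrored measurement vector."""
    mirrored = measurements[::-1]
    return [m - r for m, r in zip(measurements, mirrored)]

# hypothetical array snapshot: symmetric wall clutter plus a target seen at one end
wall = [4.0, 1.0, 1.0, 4.0]      # symmetric across the array -> cancelled
target = [0.0, 0.0, 0.0, 0.5]    # asymmetric -> survives
received = [w + t for w, t in zip(wall, target)]
print(symmetry_subtraction(received))  # [-0.5, 0.0, 0.0, 0.5]
```

The wall term vanishes identically because it equals its own mirror image, while the target contribution, being asymmetric, is preserved (in antisymmetric form) for subsequent imaging.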
2014, 36(4): 953-959.
doi: 10.3724/SP.J.1146.2013.00955
Abstract:
A new central angle estimation method for coherently distributed sources in bistatic MIMO radar is proposed based on a second virtual array aperture extension. Firstly, the bistatic MIMO radar data model for coherently distributed sources with an identical deterministic angular distribution function and distribution parameter is built on a nonuniform array, and the second virtual array aperture extension is realized by the colocated difference arrays of the minimum redundancy arrays. Furthermore, a new correlation matrix is obtained through matrix transformation, redundancy elimination, and dimension adjustment. Finally, the central angles of the DODs and DOAs are estimated without a pairing algorithm using the idea of ESPRIT. Owing to the second virtual array aperture extension, the proposed method provides greater parameter identifiability and better parameter estimation performance than conventional bistatic MIMO radar. The effectiveness of the proposed method is verified by computer simulation.
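The aperture extension by difference arrays can be illustrated directly: a 4-element minimum-redundancy array with unit spacing generates every integer lag of a 13-element virtual aperture (the specific element positions are a textbook example, not taken from the paper):

```python
def difference_coarray(positions):
    """Virtual sensor positions generated by all pairwise position differences."""
    return sorted({a - b for a in positions for b in positions})

# a 4-element minimum-redundancy array with aperture 6 (unit spacing)
mra = [0, 1, 4, 6]
coarray = difference_coarray(mra)
print(coarray)  # [-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6]
```

Four physical elements thus behave like a 13-lag virtual array, which is the kind of identifiability gain the abstract attributes to the second virtual aperture extension.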
2014, 36(4): 960-966.
doi: 10.3724/SP.J.1146.2013.01007
Abstract:
To enhance the performance of range-Doppler parameter estimation in the presence of mismatch between the sensing matrix and the target information vector in Compressive Sensing Radar (CSR), a robust blind-sparsity target parameter estimation algorithm is proposed. First, a two-dimensional sparse sensing model for range-Doppler estimation is established in the presence of CSR system model mismatch, and a waveform optimization objective function is derived based on minimizing the Coherence of the Sensing Matrix (CSM). Then, a novel blind-sparsity CSR algorithm is employed to correct the system sensing matrix and estimate the range-Doppler parameters by iteratively optimizing the transmit waveform, the system mismatch error, and the target information vector. Compared with traditional CSR algorithms, the proposed method reduces the range-Doppler estimation error and enhances the accuracy and robustness of CSR target information estimation. The validity of the proposed method is demonstrated by numerical simulation.
2014, 36(4): 967-973.
doi: 10.3724/SP.J.1146.2013.00817
Abstract:
The Wide Area Surveillance Ground Moving Target Indication mode is important in surveillance radar systems because it can monitor an extensive area in a short time, which enables moving target tracking. A method for tracking ground moving targets with an airborne wide-area surveillance radar system is proposed in this paper. First, the locations of the moving targets are estimated from the moving target information obtained in the detection process. Then a correlation threshold is selected for each target to decide which candidates are used to compute the weighted correlation rate. Finally, the globally optimal association is computed. This paper also analyzes the applicable error scope of the method. In simulations, the algorithm is compared with two other methods and proved to be effective; moreover, real data results demonstrate the feasibility of the proposed method.
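The gating step can be sketched with a minimal per-target nearest-neighbor association (a simplified stand-in for the paper's weighted, globally optimal correlation; positions are one-dimensional and hypothetical):

```python
def associate(tracks, detections, gate):
    """Gated nearest-neighbor association: each track takes the closest
    detection within its correlation gate, or None if the gate is empty."""
    pairs = []
    for t in tracks:
        # keep only detections inside the gate, then pick the nearest
        in_gate = [(abs(t - d), d) for d in detections if abs(t - d) <= gate]
        pairs.append(min(in_gate)[1] if in_gate else None)
    return pairs

# hypothetical predicted track positions and new detections (1-D for clarity)
print(associate([10.0, 50.0], [11.0, 30.0, 49.0], gate=5.0))  # [11.0, 49.0]
```

The gate keeps implausible pairings out of the subsequent weighted-correlation computation, which is what makes the global optimization tractable over a wide surveillance area.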
2014, 36(4): 974-980.
doi: 10.3724/SP.J.1146.2013.00686
Abstract:
A new SAR image partition model is constructed based on an 8-neighbor grid code and solved quickly by region merging. Using a multi-direction ratio edge detector to construct the Ratio Edge Strength Map (RESM) of a SAR image, a novel thresholding method is proposed to suppress the minima in homogeneous regions of the RESM, which reduces the number of regions in the initial partition produced by applying the watershed transform to the thresholded RESM. A sub-optimal solution of the partition model is obtained by iteratively merging adjacent region pairs. The Region Adjacency Graph (RAG) and its Nearest Neighbor Graph (NNG) are used to speed up the region merging procedure. Precision (P) and Recall (R) are introduced to evaluate the boundary localization precision of segmentation methods. Compared with three widely used methods, the proposed method has higher boundary localization precision and lower computational complexity.
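The iterative merging of the most similar adjacent region pair can be sketched as follows (region means, sizes, and the stopping threshold are illustrative; the actual method merges on the RAG/NNG of a watershed partition with its own merging criterion):

```python
def merge_regions(regions, edges, threshold):
    """regions: {id: (mean, size)}; edges: set of frozenset pairs of adjacent ids.
    Repeatedly merge the adjacent pair with the smallest mean difference until
    the best remaining pair differs by more than the threshold."""
    regions = dict(regions)
    edges = set(edges)
    while edges:
        pair = min(edges, key=lambda e: abs(regions[min(e)][0] - regions[max(e)][0]))
        a, b = sorted(pair)
        if abs(regions[a][0] - regions[b][0]) > threshold:
            break  # no sufficiently similar adjacent pair remains
        ma, sa = regions[a]
        mb, sb = regions[b]
        regions[a] = ((ma * sa + mb * sb) / (sa + sb), sa + sb)  # size-weighted mean
        del regions[b]
        rewired = set()
        for e in edges:  # rewire b's adjacencies to the surviving region a
            if b in e:
                other = next(iter(e - {b}))
                if other != a:
                    rewired.add(frozenset({a, other}))
            else:
                rewired.add(e)
        edges = rewired
    return regions

# hypothetical initial partition: three regions, two adjacencies
merged = merge_regions({1: (10.0, 4), 2: (11.0, 4), 3: (50.0, 4)},
                       {frozenset({1, 2}), frozenset({2, 3})}, threshold=5.0)
print(merged)  # {1: (10.5, 8), 3: (50.0, 4)}
```

Maintaining the adjacency set explicitly is what the RAG does in the paper; the NNG additionally caches each region's most similar neighbor so the `min` search need not rescan every edge.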
2014, 36(4): 981-987.
doi: 10.3724/SP.J.1146.2013.00848
Abstract:
A low-rank constrained eigenphone speaker adaptation method is proposed. The original eigenphone speaker adaptation method performs well when the amount of adaptation data is sufficient, but it suffers from severe overfitting when the adaptation data are insufficient, possibly resulting in lower performance than the unadapted system. Firstly, a simplified estimation algorithm for the eigenphone matrix is derived for a Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) based speech recognition system with diagonal covariance matrices. Then, a low-rank constraint is applied to the estimation of the eigenphone matrix, with the nuclear norm used as a convex approximation of the matrix rank; the weight of the norm is adjusted to control the complexity of the adaptation model. Finally, an accelerated proximal gradient method is adopted to solve the optimization problem. Experiments on a Mandarin Chinese continuous speech recognition task show that the performance of the original eigenphone method is improved remarkably, and the new method outperforms maximum likelihood linear regression followed by maximum a posteriori (MLLR+MAP) adaptation under 5~50 s adaptation data conditions.
2014, 36(4): 988-992.
doi: 10.3724/SP.J.1146.2013.00306
Abstract:
Passive detection and localization of an underwater target can be realized with a single three-axis seismic sensor. To increase the resolving capability when only a single Ocean Bottom Seismometer (OBS) is deployed, the MUSIC algorithm is introduced into Direction Of Arrival (DOA) estimation. To address the performance degradation of the traditional MUSIC algorithm in the presence of coherent signal sources, the oblique projection polarization separation method is employed to separate the multipath propagating signals, which enables high-resolution DOA estimation in the various polarization-state subspaces based on the estimated spatial spectrum of the underwater target. Simulation results show that the proposed algorithm achieves high-resolution DOA estimation with only one OBS in a shallow-water multipath environment. Experimental results using data collected in a lake demonstrate the effectiveness of the proposed algorithm.
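The baseline MUSIC step referred to above can be sketched for a generic uniform linear array (a stand-in geometry; the paper's single-OBS polarization processing and the oblique projection separation are not reproduced here):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a half-wavelength-spaced uniform linear array.

    X: snapshot matrix, shape (n_sensors, n_snapshots).
    The signal eigenvectors of the sample covariance span the signal
    subspace; steering vectors orthogonal to the noise subspace produce
    peaks in the pseudospectrum.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # ascending eigenvalues
    En = vecs[:, : n_sensors - n_sources]        # noise subspace
    k = np.arange(n_sensors)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# One narrowband source at 20 degrees, light noise.
rng = np.random.default_rng(1)
true_deg, n_sensors, n_snap = 20.0, 8, 200
k = np.arange(n_sensors)
a = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_deg)))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                            + 1j * rng.standard_normal((n_sensors, n_snap)))
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(music_spectrum(X, 1, grid))]
```

With coherent multipath arrivals the sample covariance becomes rank-deficient and this plain form degrades, which is exactly the failure mode the oblique projection polarization separation in the paper is meant to remove.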
2014, 36(4): 993-997.
doi: 10.3724/SP.J.1146.2013.00282
Abstract:
A scanning sonar is used to localize an underwater robot in the pools of a nuclear power plant. First, the signal characteristics of the scanning sonar are analyzed, and a preprocessing stage of threshold denoising, distance limiting, and downsampling is applied to reduce signal interference, eliminate redundant data, and improve computational efficiency. Then, a probabilistic iterative correspondence algorithm is proposed that accounts for the measurement noise of the sonar: the nearest matching points between the sonar image and the map of the pools are computed with the Mahalanobis distance, a confidence measure is used to improve the matching accuracy, and the absolute position and orientation of the underwater robot are estimated by iterative optimization. Compared with the traditional iterative closest point algorithm, the proposed algorithm improves the pose estimation accuracy of the underwater robot. Finally, experiments carried out in a pool verify the effectiveness of the proposed algorithm.
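One iteration of the matching-and-alignment idea, nearest neighbors under a Mahalanobis distance followed by a closed-form rigid update, can be sketched as follows. This is a minimal 2-D illustration; the covariance `S`, the toy map, and the single-iteration structure are assumptions, and the paper's confidence weighting is not reproduced:

```python
import numpy as np

def mahalanobis_icp_step(scan, map_pts, S):
    """Match each scan point to its nearest map point under the Mahalanobis
    distance induced by a (hypothetical) 2x2 sonar noise covariance S, then
    estimate the rigid transform by the closed-form Kabsch solution."""
    Sinv = np.linalg.inv(S)
    matches = []
    for p in scan:
        d = map_pts - p                                # residuals to all map points
        m2 = np.einsum('ij,jk,ik->i', d, Sinv, d)      # squared Mahalanobis
        matches.append(map_pts[np.argmin(m2)])
    q = np.array(matches)
    ps, qs = scan - scan.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(ps.T @ qs)                # Kabsch alignment
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])      # guard against reflection
    R = Vt.T @ D @ U.T
    t = q.mean(0) - R @ scan.mean(0)
    return R, t                                        # q ~= R @ scan + t

# Toy example: the scan is the map rotated by 10 degrees and shifted.
th = np.deg2rad(10.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
map_pts = np.array([[0., 0.], [4., 0.], [4., 3.], [0., 3.], [2., 1.5]])
scan = (map_pts @ R_true.T) + np.array([0.5, -0.2])
R_est, t_est = mahalanobis_icp_step(scan, map_pts, S=np.eye(2) * 0.01)
```

In a full probabilistic iterative correspondence loop this step repeats until convergence, with per-match confidence down-weighting unreliable correspondences.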
2014, 36(4): 998-1002.
doi: 10.3724/SP.J.1146.2013.00841
Abstract:
To meet the growing demand for efficient implementation of ultra-long-sequence FFTs in application systems, an implementation method for ultra-long-sequence FFTs is proposed. The method overcomes the resource limitations of existing processing platforms: it saves a large amount of memory by optimizing the storage of twiddle factors and by accessing the two-dimensional matrix by rows and columns instead of performing three explicit matrix transpositions. The hierarchical memory structure of the processor is analyzed to optimize the matrix partitioning rules, which improves access efficiency. Experimental results show that the method improves the execution efficiency of ultra-long-sequence FFTs while saving nearly half of the memory resources.
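The row/column decomposition behind this kind of implementation is the standard four-step FFT: column FFTs, a twiddle-factor multiply, row FFTs, and a transposed read-out. The sketch below (NumPy, with an illustrative 64x64 split; the paper's platform-specific memory optimizations are not modeled) verifies the decomposition against a direct FFT:

```python
import numpy as np

def fft_rowcol(x, rows, cols):
    """N-point FFT (N = rows*cols) via the row/column (four-step) scheme.

    Each pass works on short contiguous pieces rather than one huge
    transform, which is why the scheme fits ultra-long FFTs into a
    hierarchical memory.
    """
    N = rows * cols
    A = np.asarray(x, dtype=complex).reshape(rows, cols)
    A = np.fft.fft(A, axis=0)                     # step 1: column FFTs
    p = np.arange(rows)[:, None]
    c = np.arange(cols)[None, :]
    A = A * np.exp(-2j * np.pi * p * c / N)       # step 2: twiddle factors
    A = np.fft.fft(A, axis=1)                     # step 3: row FFTs
    return A.T.reshape(N)                         # step 4: transposed read-out

x = np.random.default_rng(2).standard_normal(4096)
y = fft_rowcol(x, rows=64, cols=64)
```

Because only the small per-row and per-column twiddle tables are needed, rather than a full N-point table, the twiddle storage shrinks roughly from O(N) to O(rows + cols), which is the kind of saving the abstract refers to.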
2014, 36(4): 1003-1007.
doi: 10.3724/SP.J.1146.2013.00269
Abstract:
Because it considers only the influence of neighborhood pixels, the traditional Bayesian segmentation method based on Markov Random Fields (MRF) cannot suppress speckle noise effectively. In the traditional prior model, every pixel in the neighborhood is assumed to influence the center pixel equally, which makes the description of edges imprecise and the segmentation ineffective. Thus, an adaptive Bayesian segmentation method fusing local and non-local information is proposed. For the multiplicative noise model of SAR images, a similarity measure based on the ratio probability is introduced, and non-local similar pixel blocks are adopted to guide the segmentation of the current pixel. Furthermore, the Coefficient of Variation (CV) method is employed to obtain an image template of the edge areas; in the edge regions, the structure index and the size of the search window are adaptively adjusted to balance over-smoothing against structure preservation. In the experimental analysis, segmentation results on several SAR images are given and compared with traditional methods. The proposed algorithm yields more accurate segmentation results, suppressing speckle noise while preserving detailed structures effectively.
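The Coefficient of Variation used to flag edge areas is simply the local standard deviation divided by the local mean, a natural heterogeneity indicator under multiplicative (speckle) noise: in homogeneous speckled regions the CV stays roughly constant, while it rises at edges. A minimal sketch (window size and the toy image are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_cv(img, win=3):
    """Local Coefficient of Variation (std/mean) over a sliding window.

    High values mark edges and point targets; thresholding this map gives
    the kind of edge-area template used to adapt the search window.
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    w = sliding_window_view(padded, (win, win))
    mu = w.mean(axis=(-2, -1))
    sd = w.std(axis=(-2, -1))
    return sd / np.maximum(mu, 1e-12)

# Two flat regions separated by a vertical edge at column 4.
img = np.ones((8, 8))
img[:, 4:] = 5.0
cv = local_cv(img)
```

Thresholding `cv` separates edge regions (where the adaptive method shrinks its search window to preserve structure) from homogeneous regions (where larger windows give stronger speckle suppression).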
2014, 36(4): 1008-1012.
doi: 10.3724/SP.J.1146.2013.00863
Abstract:
Data compression is an effective measure to reduce the costs of data transmission and storage. A new and effective bit-recombination mark coding method for lossless data compression is proposed for integer data sequences with small mean squared values. In the new method, a bit-recombination process is first applied to the integer data sequence to increase the occurrence probabilities of some values; then, a suitable coding format is adaptively selected to encode the data stream according to the occurrence probability distribution of the local data. Integer data sequences with small mean squared values are used to test the proposed method against several other lossless compression methods, and the compression performance is compared and analyzed. The test results show that such integer data sequences can be compressed and decompressed losslessly by the proposed method. Moreover, its compression performance is superior to that of classical arithmetic coding, the LZW method, the general-purpose WinRAR software, and the professional audio compression software FLAC, demonstrating that the proposed method has good application prospects.
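The abstract does not specify the bit-recombination mapping itself, but the underlying principle can be illustrated: any reversible transform preserves the information content, and the empirical entropy of the (local) value distribution is the bound that the adaptively selected code must approach. The sketch below uses a zig-zag signed-to-unsigned map as a hypothetical stand-in for the reversible preprocessing step:

```python
import numpy as np
from collections import Counter

def entropy_bits(seq):
    """Empirical Shannon entropy in bits/symbol: the lower bound that any
    lossless code for this value distribution must respect."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Small-magnitude integers (small mean squared value). The zig-zag map
# 0,-1,1,-2,2,... -> 0,1,2,3,4,... is reversible, so it keeps the entropy
# bound unchanged while making short codes assignable to frequent values.
data = [0, -1, 1, 0, 0, 2, -1, 0, 1, 0]
zigzag = [2 * v if v >= 0 else -2 * v - 1 for v in data]
```

An adaptive coder would then inspect the local distribution of `zigzag` (here heavily skewed toward 0) and pick a short-code format for the frequent small values, which is the mechanism the abstract's method exploits.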
2014, 36(4): 1013-1016.
doi: 10.3724/SP.J.1146.2013.00899
Abstract:
A syndrome-assisted list decoding algorithm is proposed for the BCH codes of the B1I navigation signal in China's BeiDou satellite navigation system. First, lists of error patterns are built based on the syndrome and the Hamming weight. Then, the syndrome of the hard-decision sequence is used to select the list for decoding. Finally, the optimal error pattern is found by using a correlation function difference metric. Simulation results show that the SNR gap between the proposed algorithm and Maximum-Likelihood (ML) decoding is less than 0.08 dB at a BER of 10^-5, which illustrates that the syndrome-assisted list decoding algorithm is a near-optimal decoding algorithm for the BCH codes of the BeiDou B1I signal. Additionally, the new algorithm has linear complexity and can be implemented in parallel.
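The syndrome-to-error-pattern lookup at the heart of such a decoder can be sketched for the BCH(15,11) code with generator polynomial g(x) = x^4 + x + 1 (the single-error-correcting code used for the B1I navigation message). This minimal hard-decision version builds only the weight-1 pattern per syndrome; the paper's list decoder additionally keeps multiple candidate patterns per list and selects among them with a soft correlation metric, which is not shown here:

```python
G_POLY = 0b10011          # g(x) = x^4 + x + 1, degree 4

def _remainder(bits):
    """Remainder of the bit polynomial (MSB first, 15 bits) modulo g(x)."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
    for i in range(14, 3, -1):
        if reg >> i & 1:
            reg ^= G_POLY << (i - 4)
    return reg & 0b1111

def bch15_11_encode(msg):
    """Systematic encoding: append the 4-bit remainder of m(x)*x^4 mod g(x)."""
    parity = _remainder(msg + [0, 0, 0, 0])
    return msg + [parity >> k & 1 for k in (3, 2, 1, 0)]

# Syndrome -> error-pattern table (here one weight-1 pattern per syndrome).
TABLE = {}
for pos in range(15):
    e = [0] * 15
    e[pos] = 1
    TABLE[_remainder(e)] = pos

def decode(code):
    """Hard-decision syndrome decoding: a non-zero syndrome selects the
    stored error pattern, which is then applied before extracting the
    11 message bits."""
    s = _remainder(code)
    if s:
        code = code[:]
        code[TABLE[s]] ^= 1
    return code[:11]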