2011 Vol. 33, No. 11
2011, 33(11): 2541-2546.
doi: 10.3724/SP.J.1146.2011.00218
Abstract:
Since compressing depth maps with existing video coding techniques yields unacceptable distortion when rendering virtual views, depth maps need to be compressed in a way that minimizes distortion in the rendered views. A distortion model is proposed that approximates the rendering distortion caused by depth changes introduced by depth coding. First, the relationship between distortion in the coded depth map and distortion in the rendered view is derived. Then, a region-based distortion model driven by video characteristics is proposed for precise estimation of view-synthesis distortion. Finally, the new distortion metric is used in encoding mode decisions for Rate-Distortion (RD) optimization, reducing rendering distortion. Simulation results show that the proposed techniques improve the objective quality of the rendered virtual views by up to 2 dB over a Lagrangian-optimization-based mode selection technique that considers distortion only in the depth map.
2011, 33(11): 2547-2552.
doi: 10.3724/SP.J.1146.2011.00126
Abstract:
A serious problem in ordinary vector quantization is edge degradation: it cannot accurately preserve edge information. To tackle this problem, a novel classified vector quantization scheme based on the Reversible integer Time Domain Lapped Transform (RTDLT) is proposed. First, the image is divided into blocks and the RTDLT is applied to the original image. Second, each image block is classified according to its gradient magnitude and RTDLT coefficients. Finally, the RTDLT coefficients of the different block classes are coded using fuzzy c-means vector quantization. Simulation results indicate that the proposed approach can compress images at lower bit rates and reconstruct them with a higher peak signal-to-noise ratio than other approaches such as JPEG2000.
An Iterative Side Information Refinement Method Based on MHMCP Denoising in Distributed Video Coding
2011, 33(11): 2553-2558.
doi: 10.3724/SP.J.1146.2011.00355
Abstract:
In Distributed Video Coding (DVC), the quality of the Side Information (SI) has a critical impact on coding efficiency and Rate-Distortion (RD) performance. Improving SI quality by extracting motion information from decoded frames has become an active research area in recent years. Exploiting the noise correlation among the SI, the partially decoded Wyner-Ziv (WZ) frame, and the source, a novel iterative SI refinement method based on Multi-Hypothesis Motion-Compensated Prediction (MHMCP) denoising is proposed. In this scheme, the original SI is first refined by MHMCP denoising so that a better motion-compensated frame is generated instead of a merely similar one. It is then refined iteratively by the refinement module after each bit-plane is decoded. Experimental results show that the proposed strategy can significantly improve the quality of WZ frames and reduce the bit rate, thereby effectively improving the RD performance of DVC.
2011, 33(11): 2559-2563.
doi: 10.3724/SP.J.1146.2011.00172
Abstract:
Pilot Symbol Assisted Modulation (PSAM) is widely used in digital burst transmission systems. The frequency estimate of a Pre+Post pilot structure departs from its Cramér-Rao Lower Bound (CRLB) when the frequency error is large and the SNR is low. In this paper, the distribution of the main lobe and side lobes of the maximum-likelihood metric is derived for frequency estimation with symmetric pilot structures, and the estimation ambiguity issue is analyzed. Then, a novel mixed pilot structure is proposed and principles for selecting its parameters are given. The new pilot structure can handle high Doppler frequency shifts with high precision at low SNR. Simulation results show the advantages of this pilot structure.
2011, 33(11): 2564-2568.
doi: 10.3724/SP.J.1146.2011.00389
Abstract:
A performance analysis of a multiuser Amplify-and-Forward (AF) cooperative communication network is presented based on outdated Channel State Information (CSI). The system architecture is point-to-multipoint, and user selection is based on the outdated CSI. Asymptotic expressions for the outage probability, channel capacity, and Symbol Error Rate (SER) are derived. The analysis applies to both Independent Identically Distributed (IID) and Independent Non-identically Distributed (IND) fading channels. Finally, simulations are carried out to verify the correctness of the theoretical analysis and to illustrate the effect of the parameters on performance. The results show that performance does not improve as the number of users increases, and that relocating the relay node can be regarded as one way to enhance performance.
2011, 33(11): 2569-2574.
doi: 10.3724/SP.J.1146.2011.00247
Abstract:
Femtocells are a promising solution for enhancing indoor coverage and system capacity. Hybrid access methods reach a compromise between the performance of subscribers and non-subscribers. In this paper, a Weighted Proportional Fair (WPF) algorithm, which assigns different weights to different users, is proposed for downlink hybrid-access OFDMA femtocell networks. In addition, two algorithms are developed to obtain the user scheduling weights: one adaptively adjusts the scheduling weight, while the other is based on an asymptotic analysis of the WPF algorithm. Simulation results show that the two scheduling algorithms allow hybrid-access femtocells to allocate resources reasonably between subscribers and non-subscribers and to meet the rate requirements of different users.
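The selection step of a weighted proportional-fair scheduler can be sketched generically (a textbook WPF metric, not the paper's specific weight-update rules):

```python
def wpf_schedule(inst_rates, avg_thrpts, weights):
    """Pick the user maximizing the weighted PF metric w_k * r_k / T_k
    (plain proportional fair is the w_k == 1 special case)."""
    scores = [w * r / t for r, t, w in zip(inst_rates, avg_thrpts, weights)]
    return scores.index(max(scores))

# A subscriber (weight 2) beats a non-subscriber (weight 1) under
# identical channel conditions and throughput history:
print(wpf_schedule([10.0, 10.0], [5.0, 5.0], [2.0, 1.0]))  # 0
```

Raising a user's weight biases the scheduler toward that user without removing the fairness pressure of the throughput denominator.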
2011, 33(11): 2575-2581.
doi: 10.3724/SP.J.1146.2011.00101
Abstract:
To reduce the packet collision probability, the Binary Exponential Backoff (BEB) algorithm is specified in the IEEE 802.11 standard. The BEB, however, suffers from Contention Window (CW) oscillation when the packet collision probability is large: the CW size has to be doubled several times from its minimum value before the node can transmit a frame successfully, after which the node resets the CW to the minimum value again, and this cycle repeats frequently. To overcome CW oscillation, a Two-step BEB (TBEB) algorithm is proposed in this paper. The statistics of the TBEB, such as the probability distribution of the backoff, the average CW size, the average number of backoffs, the time a node needs to transmit a frame, and the throughput, are all derived from a two-dimensional Markov model and validated by simulations. The TBEB is able to maximize the throughput by resetting its CW to the best size, obtained by solving a simple optimization problem proposed in this paper.
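The CW oscillation described above can be seen in a minimal sketch of standard BEB (generic 802.11-style parameters; the paper's TBEB modification is not shown):

```python
import random

CW_MIN, CW_MAX = 16, 1024  # typical 802.11 DCF contention-window bounds

def beb_backoff(cw, collided):
    """One step of standard Binary Exponential Backoff: double the CW on
    a collision, reset it on success (the reset causes CW oscillation)."""
    if collided:
        cw = min(2 * cw, CW_MAX)   # double, capped at the maximum
    else:
        cw = CW_MIN                # reset to the minimum
    slot = random.randrange(cw)    # uniform backoff slot in [0, cw)
    return cw, slot

# A burst of collisions followed by one success shows the cycle:
cw = CW_MIN
for collided in (True, True, True, False):
    cw, slot = beb_backoff(cw, collided)
print(cw)  # 16 -- back to CW_MIN, ready to oscillate again
```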
2011, 33(11): 2582-2587.
doi: 10.3724/SP.J.1146.2011.00309
Abstract:
An Ultra Wide Band (UWB) communication system with Dechirp processing and a complex polyphase filter bank is proposed. The transmitted signal uses On-Off Keying (OOK) modulation and chirp spread spectrum. The received signal goes through Dechirp pulse compression, low-pass filtering, and analog-to-digital conversion; subchannel division is then carried out by the complex polyphase filter bank, followed by subchannel selection and maximal-ratio combining. Finally, coarse timing synchronization, fine timing synchronization, SNR estimation, and OOK demodulation via energy detection are realized. Through theoretical analysis and simulation experiments, the performance of this UWB communication system is evaluated over the AWGN channel and the IEEE 802.15.3a CM1 and CM4 channels. The results show that the system achieves high processing gain and good anti-multipath capability, and that it is suitable for extended-range communications.
2011, 33(11): 2588-2593.
doi: 10.3724/SP.J.1146.2011.00090
Abstract:
Multiplication by a constant modulo 2^n, a common building block, is widely used in ciphers such as Sosemanuk, RC6, and MARS. When the constant c is odd, this operation is a permutation with strong nonlinearity and good implementation efficiency, yet no published work has analyzed it with differential cryptanalysis. In this paper, the differential properties of the operation are studied. The structural characteristics and counts of the input differentials, output differentials, and constants for which the differential probability equals 1 are given for the first time. A recurrence for the counts of its carries is then derived. Based on that, an algorithm for computing the differential probability of this operation is given, whose time complexity is O(n) on average.
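The differential behavior of multiplication by an odd constant modulo 2^n can be explored by exhaustive counting for small n (an illustrative brute-force check, not the paper's O(n) algorithm):

```python
def xor_diff_prob(c, n, a, b):
    """Exhaustive XOR-differential probability of y = c*x mod 2^n:
    Pr over all x of ((c*(x ^ a)) ^ (c*x)) mod 2^n == b."""
    mask = (1 << n) - 1
    hits = sum((((c * (x ^ a)) & mask) ^ ((c * x) & mask)) == b
               for x in range(1 << n))
    return hits / (1 << n)

# Zero input difference always maps to zero output difference, so the
# differential (0 -> 0) has probability 1 for any constant:
print(xor_diff_prob(5, 8, 0, 0))  # 1.0
```

For c = 1 the map is the identity, so the differential (a -> a) also holds with probability 1; nontrivial odd constants spread the probability mass over many output differences.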
2011, 33(11): 2594-2599.
doi: 10.3724/SP.J.1146.2011.00112
Abstract:
A new scheme for joint channel coding and Physical-layer Network Coding (PNC) based on Trellis Coded Modulation (TCM) is proposed for the multiple-access channel in the Multiple Access Control (MAC) stage of the investigated two-way relay scenario. Because TCM combines channel coding with modulation, the scheme increases the free distance of the coded sequence, and thus a larger coding gain is obtained. In addition, the proposed scheme exploits the linearity of the convolutional code and of MAC-XOR network coding, so that the network-coded bits can be estimated directly; in this way, the decoding complexity at the relay node is reduced by almost 50%. The proposed scheme addresses the joint design of channel coding, modulation, and PNC, so the system not only increases the information transmission rate but also guarantees reliability.
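The linearity being exploited, enc(a) XOR enc(b) = enc(a XOR b) for a linear code, which lets the relay estimate the XOR of the two messages directly, can be checked with a toy rate-1/2 feed-forward convolutional encoder (a generic (7,5)-octal code for illustration, not the paper's TCM design):

```python
def conv_encode(bits, gens=(0o7, 0o5)):
    """Rate-1/2 feed-forward convolutional encoder, generators (7,5) octal.
    Each output bit is a parity of register taps, hence linear over GF(2)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # 3-bit register window
        for g in gens:
            out.append(bin(state & g).count('1') % 2)  # tap parity
    return out

a = [1, 0, 1, 1, 0]
b = [0, 1, 1, 0, 1]
xor = [x ^ y for x, y in zip(a, b)]
lhs = [x ^ y for x, y in zip(conv_encode(a), conv_encode(b))]
print(lhs == conv_encode(xor))  # True -- codeword XOR encodes the message XOR
```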
2011, 33(11): 2600-2604.
doi: 10.3724/SP.J.1146.2010.01395
Abstract:
A power-interference pricing model is presented using a pricing scheme based on utility optimization, and a distributed joint channel assignment and power allocation algorithm is then proposed for multi-channel wireless mesh networks. Each node adjusts its power price according to the amount of power expended, and its interference price according to the interference it suffers. To maximize the network utility, channel assignment and power allocation are adjusted through the power prices and interference prices. Simulation results show that the proposed algorithm converges to an approximately optimal solution rapidly and smoothly. The influence of the number of available channels, radios, and the power available at each node is also simulated, which provides a reference for network configuration.
2011, 33(11): 2605-2609.
doi: 10.3724/SP.J.1146.2011.00191
Abstract:
A 3-Dimensional Mobile Ad hoc NETwork (3-D MANET) is a mobile ad hoc network whose nodes are distributed in three-dimensional space. The analysis of link dynamics is a fundamental issue in the study of 3-D MANETs, and its conclusions can form the basis of network protocol design. Based on the Constant Velocity (CV) mobility model, analytical expressions for the expected link lifetime and its distribution are derived from a probability model. Simulation results verify the accuracy of the analytical expressions. The conclusions are valuable for the research and application of 3-D MANETs.
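Link lifetime under a constant-velocity assumption lends itself to a quick Monte Carlo sketch (an illustrative setup with a fixed relative speed; the paper's CV model and closed-form derivation may differ):

```python
import numpy as np

def mean_link_lifetime(radius=100.0, speed=10.0, trials=5000, seed=0):
    """The relative position starts uniformly inside a ball of the link
    radius; the relative velocity has fixed magnitude `speed` and a random
    3-D direction. The link lives until the distance exceeds the radius."""
    rng = np.random.default_rng(seed)
    lifetimes = []
    for _ in range(trials):
        p = rng.normal(size=3)
        p *= radius * rng.random() ** (1 / 3) / np.linalg.norm(p)
        v = rng.normal(size=3)
        v *= speed / np.linalg.norm(v)
        # smallest t >= 0 with |p + v*t| = radius (positive quadratic root)
        a, b, c = v @ v, 2 * p @ v, p @ p - radius ** 2
        lifetimes.append((-b + np.sqrt(b * b - 4 * a * c)) / (2 * a))
    return float(np.mean(lifetimes))

print(mean_link_lifetime())  # bounded above by 2*radius/speed = 20
```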
2011, 33(11): 2610-2615.
doi: 10.3724/SP.J.1146.2011.00323
Abstract:
To approach the theoretical limit of the decode-and-forward strategy for the half-duplex relay channel, a cooperative LDPC coding structure and a corresponding optimization scheme are proposed. Unlike the structure of Bilayer-Expurgated LDPC (BE-LDPC) codes, the new structure makes the extra parity bits generated by the relay part of the overall code. Using the messages from both the source and relay nodes, the destination node can decode the information of the source node. To analyze the performance of cooperative LDPC codes, a bilayer EXtrinsic Information Transfer (EXIT) chart based on the message-error rate is devised as an extension of the standard EXIT chart. Based on the bilayer EXIT chart, a design methodology for cooperative LDPC codes is proposed, in which the degree distribution with the maximum noise threshold is iteratively improved through Differential Evolution (DE). Experimental results show that the cooperative LDPC codes consistently outperform BE-LDPC codes.
2011, 33(11): 2616-2621.
doi: 10.3724/SP.J.1146.2011.00324
Abstract:
The Belief Propagation (BP) decoding algorithm for Repeat-Accumulate (RA) codes approaches the Shannon limit, but it has high complexity because the check-node update uses the hyperbolic tangent function and its inverse. To reduce the complexity with little performance loss, a modified decoding algorithm combining two methods, lookup tables and piecewise function approximation, is proposed in this paper. The algorithm approximates with a piecewise function the original function obtained by transforming and simplifying the check-node update of the BP algorithm, and uses a very small lookup table to correct the error between the original and approximate functions. This avoids the complex calculations while keeping the approximation error minimal. The algorithm greatly reduces the decoding complexity and performs close to the BP decoding algorithm.
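The check-node function in question is the "box-plus" operation L1 ⊞ L2 = 2·atanh(tanh(L1/2)·tanh(L2/2)). A standard identity (not the paper's specific piecewise scheme) splits it into a min-sum core plus a small correction term, the kind of quantity a tiny lookup table can store:

```python
import math

def boxplus_exact(l1, l2):
    """Exact BP check-node update: 2*atanh(tanh(l1/2) * tanh(l2/2))."""
    return 2 * math.atanh(math.tanh(l1 / 2) * math.tanh(l2 / 2))

def boxplus_corrected(l1, l2):
    """Min-sum core plus the standard log correction; mathematically
    identical to boxplus_exact, so any error comes only from how the
    correction is tabulated or piecewise-approximated in hardware."""
    core = math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))
    corr = (math.log1p(math.exp(-abs(l1 + l2)))
            - math.log1p(math.exp(-abs(l1 - l2))))
    return core + corr

print(abs(boxplus_exact(1.3, -0.7) - boxplus_corrected(1.3, -0.7)) < 1e-9)  # True
```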
2011, 33(11): 2622-2627.
doi: 10.3724/SP.J.1146.2011.00322
Abstract:
To overcome the high complexity of Log-Likelihood Ratio (LLR) generation and the accompanying sorting in Extended Min-Sum (EMS) decoding of non-binary Low-Density Parity-Check (LDPC) codes, a high-speed, low-complexity LLR derivation algorithm is proposed in this paper for non-binary LDPC-coded BPSK modulation systems. The proposed algorithm employs an iterative computation method to generate and sort the LLRs. The front end of a decoder implementing the proposed algorithm can work in pipelined mode, which accelerates the decoding process and increases the throughput of the decoder. Simulation results show that the proposed algorithm incurs negligible performance loss, making it a good candidate for the hardware implementation of the front end of non-binary LDPC decoders.
2011, 33(11): 2628-2633.
doi: 10.3724/SP.J.1146.2011.00303
Abstract:
An accurate and efficient Dynamic Framed-Slotted ALOHA (DFSA) anti-collision algorithm based on unequal timeslots is proposed for Radio Frequency IDentification (RFID) systems. Considering the influence of idle and collided timeslots on RFID system efficiency, the algorithm adopts an optimized DFSA anti-collision strategy based on unequal timeslots: it determines the frame length from an optimized parameter and the number of unread tags, estimates the tag population with an optimized form of Chebyshev's inequality, and analyzes the identification process with a Markov model to control the read cycles. Analysis and simulations show that the proposed algorithm achieves better system performance, reduces identification time, and has higher accuracy than the Lower Bound (LB), Schoute, and C-ratio estimators.
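Two classical DFSA ingredients mentioned here, backlog estimation and frame-length choice, can be sketched as follows (the well-known Schoute estimator and power-of-two frame sizing, shown only as the kind of baseline the paper compares against):

```python
import math

def schoute_estimate(collided_slots):
    """Schoute's backlog estimate: under a Poisson traffic assumption each
    collision slot holds about 2.39 tags on average."""
    return round(2.39 * collided_slots)

def next_frame_length(unread_tags):
    """With equal-length slots, throughput peaks when the frame length
    matches the backlog; here rounded to a power of two, as Gen2-style
    readers (which signal frames via a Q parameter) require."""
    q = max(0, round(math.log2(max(unread_tags, 1))))
    return 2 ** q

est = schoute_estimate(10)
print(est)                     # 24
print(next_frame_length(est))  # 32
```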
2011, 33(11): 2634-2639.
doi: 10.3724/SP.J.1146.2011.00221
Abstract:
The Non-Local Means (NLM) filter is an effective method for image denoising. However, it considers only the geometric structure of the image, ignoring the appearance model and directional information. In this paper, a new Non-Subsampled Shearlet Descriptor (NSSD) is proposed and employed to model the appearance of image patches, making the similarity measure between two patches more robust. Building on the NSSD, a more effective Shearlet Non-Local Means (SNLM) algorithm is obtained by combining the descriptor with the non-local computation model. Furthermore, for texture images with directional information, a direction-enhanced window is proposed, which increases the weights along the main direction of the neighborhood window in the similarity measure. Experimental results show that the proposed algorithm denoises natural images better than the traditional NLM algorithm. Moreover, for texture images, the algorithm based on the direction-enhanced neighborhood window not only removes noise but also preserves details such as edges and textures, showing a clear advantage in denoising.
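For reference, the classic NLM computation whose patch-distance term the NSSD descriptor replaces can be sketched for a single pixel (plain NLM with assumed window and smoothing parameters, not the proposed SNLM):

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=10.0):
    """Denoise one pixel with classic Non-Local Means: average pixels in a
    search window, weighted by exp(-patch_distance / h^2)."""
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), r + s, mode='reflect')
    ci, cj = i + r + s, j + r + s                   # centre in padded coords
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            p = pad[ci + di - r:ci + di + r + 1, cj + dj - r:cj + dj + r + 1]
            w = np.exp(-np.sum((ref - p) ** 2) / (h * h))
            num += w * pad[ci + di, cj + dj]
            den += w
    return num / den

noisy = np.random.default_rng(0).normal(100.0, 5.0, (16, 16))
print(nlm_pixel(noisy, 8, 8))  # a weighted average near 100
```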
2011, 33(11): 2640-2646.
doi: 10.3724/SP.J.1146.2011.00151
Abstract:
To improve the recovery accuracy of greedy algorithms, a Bayesian hypothesis Testing Match Pursuit (BTMP) algorithm is proposed. First, a Bayesian hypothesis-testing model is presented to identify the indices of the nonzero elements of a sparse signal in the noisy case. Second, the index set output by a pursuit algorithm is used as the candidate set of this model, and every element of the set is tested to eliminate redundant indices. Finally, the estimate of the sparse signal is reconstructed from the pruned index set by least squares. Simulation results show that, under the same conditions, the BTMP algorithm leaves no redundant indices and exhibits better anti-jamming ability and recovery accuracy than traditional greedy algorithms.
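The greedy baseline with least-squares re-estimation reads roughly as follows (standard Orthogonal Matching Pursuit; the Bayesian hypothesis-testing pruning step of BTMP is not shown):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit all picked columns by
    least squares and update the residual."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, 3)
print(sorted(np.flatnonzero(x_hat).tolist()))
```

In the noise-free overdetermined-support case OMP typically recovers the true support; BTMP's contribution is pruning the false indices that greedy selection admits under noise.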
2011, 33(11): 2647-2651.
doi: 10.3724/SP.J.1146.2011.00338
Abstract:
To meet the high-precision requirement for TOA estimation in the Galileo Search And Rescue (SAR) system, and considering the uncertainty of the message bit width and the modulation asymmetry in actual COSPAS-SARSAT signals, a new TOA estimation algorithm based on Multi-dimensional Joint Maximum Likelihood Estimation (MJMLE) is proposed. The COSPAS-SARSAT signal model containing the modulation asymmetry is introduced first. Then the principle of the algorithm is derived and a concrete realization is given. Monte Carlo simulations show that when the Carrier-to-Noise Ratio (CNR) equals the threshold of 34.8 dBHz and the modulation asymmetry takes the extreme values of 5% and -5%, the Root-Mean-Square Error (RMSE) of the TOA estimate is less than 13.5 μs, which satisfies the system requirement of 15 μs and is better than that of other estimation algorithms.
2011, 33(11): 2652-2657.
doi: 10.3724/SP.J.1146.2010.01333
Abstract:
In many acoustic signal processing systems (e.g., hearing aids), a directional algorithm is adopted to process signals from spatially separated sources. However, reverberation in a living room or a conference room usually degrades the noise reduction of a directional system, while existing dereverberation algorithms cannot effectively suppress interfering noise. In this paper, an adaptive dual-microphone directional algorithm with two closely spaced omnidirectional microphones is proposed for reverberant environments. The proposed algorithm combines the Adaptive Null-Forming (ANF) structure with a statistical-model-based dereverberation algorithm to achieve adaptive directionality under reverberation. Compared with existing directional or dereverberation algorithms, the proposed algorithm uses a simple structure to realize directionality and dereverberation simultaneously, with low complexity and real-time capability. Finally, the directional performance of the proposed algorithm in reverberant environments is verified by simulation results.
2011, 33(11): 2658-2664.
doi: 10.3724/SP.J.1146.2011.00208
Abstract:
This paper presents a method for 3D human pose estimation using shape and motion information from multiple synchronized video streams. It separates the whole human body into head, torso, and limbs. The state of each part in the current frame is predicted from motion information, and the shape information serves as a detector for the pose. The use of complementary cues alleviates the twin problems of drift and convergence to local minima, and it also lets the system initialize automatically and recover from failures. Meanwhile, the use of multiple views also handles problems due to self-occlusion and kinematic singularity. Experimental results on sequences with different kinds of motion illustrate the effectiveness of the approach, whose performance is better than that of the Condensation algorithm and the annealed particle filter.
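The Condensation baseline the abstract compares against is a bootstrap particle filter. A minimal 1-D version (predict, weight by observation likelihood, resample) can be sketched as follows; the state model, noise levels, and function name are illustrative assumptions, not the paper's tracker.

```python
import numpy as np

def particle_filter(observations, n=500, q=0.1, r=0.5, seed=0):
    """Bootstrap (Condensation-style) particle filter for a 1-D random-walk
    state observed in Gaussian noise. Returns the posterior-mean estimate
    at every time step."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n)        # initial prior N(0, 1)
    estimates = []
    for z in observations:
        particles = particles + q * rng.standard_normal(n)  # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)       # likelihood
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))       # posterior mean
        idx = rng.choice(n, size=n, p=w)                    # resample
        particles = particles[idx]
    return estimates
```

A full pose tracker would replace the scalar state with the articulated body configuration and the Gaussian likelihood with shape- and motion-based cues, which is where the paper's complementary-cue design enters.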
2011, 33(11): 2665-2671.
doi: 10.3724/SP.J.1146.2011.00045
Abstract:
Exterior Computed Tomography (CT) reconstruction mainly addresses reconstructing cross-sectional images of pipe walls in practice. The Subset Average-Total Variation Minimization-Projection Onto Convex Sets (SA-TVM-POCS) algorithm is an exterior reconstruction method that is stable and yields good reconstructed image quality in the non-destructive testing of pipes. One disadvantage of the SA-TVM-POCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. This obstacle can be overcome by implementing the reconstruction algorithm on the Compute Unified Device Architecture (CUDA). In this work the SA-TVM-POCS algorithm is implemented in CUDA to significantly reduce the reconstruction time, and the mathematical and computational details of the implementation are explored. Experimental results show that the algorithm can be sped up by a factor of up to 20 without reducing the quality of the reconstructed image. Thus, the SA-TVM-POCS algorithm for exterior computed tomography can be accelerated effectively by CUDA.
2011, 33(11): 2672-2678.
doi: 10.3724/SP.J.1146.2010.01426
Abstract:
A novel learning-based image inpainting method is presented. As a further development of the classical sparse representation model, non-local self-similar patches are unified for joint sparse representation and dictionary learning, in which every patch of a self-similar group shares the same sparsity pattern. The method ensures that self-similar patches remain similar when projected into the sparse space and efficiently builds the sparse association among them; this association is then taken as prior knowledge for image inpainting. The paper uses numerous samples and non-local patches of the input image to train an overcomplete dictionary, so the method takes into account both the prior knowledge of the samples and the non-local self-similarity of the input image. Experiments on large- and small-region inpainting and text removal on natural images show the good performance of the method.
2011, 33(11): 2679-2685.
doi: 10.3724/SP.J.1146.2011.00113
Abstract:
Terrain Observation by Progressive scans SAR (TOPSAR) is a novel spaceborne imaging mode with wide swath coverage. Imaging algorithms for this mode must resolve three problems: Doppler spectrum aliasing, large range cell migration, and azimuth output time folding. To address these issues, a new imaging algorithm based on the two-dimensional Chirp-Z transform is proposed. The complete derivation of the algorithm and the expression of each transfer function are given in detail. In this algorithm, azimuth pre-filtering with a deramp operation resolves the Doppler spectrum aliasing problem with fewer azimuth samples than other methods. Chirp-Z transforms in the range and azimuth domains implement the large Range Cell Migration Correction (RCMC) and azimuth focusing, respectively. The presented algorithm requires only a limited number of additional azimuth samples and is interpolation-free, so it is computationally efficient. Simulation results validate its effectiveness.
2011, 33(11): 2686-2693.
doi: 10.3724/SP.J.1146.2011.00289
Abstract:
Because of the long integration time of Geosynchronous Earth Orbit Synthetic Aperture Radar (GEO SAR), imaging algorithms based on a linear trajectory model are not suitable for GEO SAR and may induce considerable distortion. Thus, this paper establishes the range equation of a curved trajectory model using a high-order approximation based on the characteristics of GEO SAR motion. The two-dimensional spectrum is then derived by the method of series reversion, based on which an improved Chirp Scaling (CS) imaging algorithm for GEO SAR with the curved trajectory model is presented. Simulation results show that the proposed range equation is more precise, and the proposed algorithm effectively corrects the range migration and produces high-resolution imagery over the entire aperture.
2011, 33(11): 2694-2701.
doi: 10.3724/SP.J.1146.2011.00148
Abstract:
The high-accuracy Digital Elevation Model (DEM) generated by Interferometric Synthetic Aperture Radar (InSAR) relies on interferometric calibration. However, the requirement for sufficient Ground Control Points (GCPs) is time-consuming and impractical for cartographic surveying by InSAR over large areas. This paper describes how to implement interferometric calibration when few GCPs are available. First, tie points are automatically detected between adjacent scenes; then the mathematical model of combined block adjustment across multiple strips and scenes is derived. Based on this model, experimental results on X-band InSAR data validate the effectiveness of the presented method.
2011, 33(11): 2702-2708.
doi: 10.3724/SP.J.1146.2011.00281
Abstract:
To address the deficiencies of existing design methods for orthogonal coded signals, such as notable computational complexity and limited code length and signal number, a random discrete frequency coding signal based on chaotic series is proposed, exploiting attractive characteristics of chaotic series such as noise-like behavior, sensitivity to initial values, and ease of generation and use. Based on the signal coding model, the time-Doppler ambiguity function of the designed signal is derived in detail, the range and Doppler resolutions are analyzed, and the pseudo-orthogonality is discussed. Signal performance is compared across different chaotic series. The proposed signals perform well in terms of the ambiguity function, pseudo-orthogonality, ease of generation, unlimited code length, and large signal number, and can serve as a promising class of radar signals.
2011, 33(11): 2709-2713.
doi: 10.3724/SP.J.1146.2011.00111
Abstract:
The channel mismatch error between separate radar instruments is one of the challenges in a single-pass Interferometric SAR (InSAR) system. A channel amplitude and phase mismatch error model is introduced. The correlation coefficient under channel mismatch is obtained from a statistical signal model of an extended scene, and the influence of the mismatch on the InSAR phase bias and variance is analyzed. Finally, a ground-based hardware-in-the-loop simulation is carried out to study the impact of actual radar channel mismatch on InSAR performance. The simulation results agree with the theoretical analysis, which validates the correctness of the influence analysis of the channel mismatch error.
2011, 33(11): 2714-2719.
doi: 10.3724/SP.J.1146.2011.00271
Abstract:
Based on the cross-correlation of the received data of Ground Penetrating Radar (GPR), a novel Back Projection (BP) algorithm is proposed to suppress the artifacts in GPR imaging results. Compared with the standard BP algorithm, the proposed Cross-correlated Back Projection (CBP) algorithm adds a step that computes the cross-correlation of the received data, without adding any extra channel for a reference signal. Both theoretical analysis and experimental results show the superiority of the CBP algorithm over the standard BP algorithm in artifact suppression, along with a slight improvement in image resolution.
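The standard BP algorithm that CBP extends is delay-and-sum imaging: each pixel accumulates each trace sampled at the round-trip delay from the antenna to that pixel. The sketch below is a generic monostatic version in normalized units; the geometry, function name, and wave speed are illustrative assumptions.

```python
import numpy as np

def back_projection(traces, rx_pos, t_axis, grid, c=1.0):
    """Standard delay-and-sum BP: for every candidate pixel (x, z), sum
    each antenna's trace evaluated at the two-way travel time from the
    antenna at (xr, 0) to the pixel."""
    image = np.zeros(len(grid))
    for trace, xr in zip(traces, rx_pos):
        for k, (x, z) in enumerate(grid):
            delay = 2.0 * np.hypot(x - xr, z) / c   # two-way travel time
            image[k] += np.interp(delay, t_axis, trace)
    return image
```

The CBP algorithm of the abstract would additionally weight or gate each pixel by the cross-correlation between traces, so that incoherent contributions (the source of artifacts) are suppressed.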
2011, 33(11): 2720-2726.
doi: 10.3724/SP.J.1146.2011.00252
Abstract:
Sparse representation of signals is of great significance in many applications. In this paper, a sparse representation for chirp echoes in broadband radar over an orthogonal dictionary is proposed. The stretch processing of broadband radar is reformulated in matrix form, and an orthogonal dictionary is established. Combining the orthogonal sparse representation with the theory of compressed sensing yields a novel sampling mechanism for chirp echoes, called random selection. Simulation results show that this representation performs better than one over a redundant dictionary of Gabor atoms; furthermore, sparse representation over the orthogonal dictionary is much more computationally efficient. A real-data experiment validates the feasibility of the random-selection sampling mechanism for chirp echoes.
2011, 33(11): 2727-2734.
doi: 10.3724/SP.J.1146.2011.00213
Abstract:
Bistatic scattering centres are usually modeled in the same way as monostatic scattering centres, ignoring the fact that the location of a scattering centre changes with the bistatic configuration. This paper focuses on the bistatic scattering characteristics of a rotationally symmetric cone-shaped target. First, the locations of the bistatic scattering centres on the edge of the base are derived with the method of equivalent currents: they are the intersection points of the base edge with the plane formed by the symmetry axis and the bisector of the bistatic angle. Based on this conclusion, the wideband bistatic echo model of the target is derived and the theoretical bistatic High-Range Resolution Profile (HRRP) is constructed. Finally, the bistatic HRRPs of a cone-shaped target computed with the FEKO software agree with the theoretical bistatic HRRPs, validating the derivation. The scattering-centre locations revealed in this paper provide an exact mathematical model for wideband echo simulation, imaging, feature extraction, and Automatic Target Recognition (ATR) of rotationally symmetric targets in bistatic radar.
2011, 33(11): 2735-2741.
doi: 10.3724/SP.J.1146.2011.00261
Abstract:
Radar signal sorting methods based on traditional clustering algorithms suffer from high time complexity and poor accuracy. To address this, a new sorting method is developed based on the Cone Cluster Labeling (CCL) method for the Support Vector Clustering (SVC) algorithm. The CCL method labels clusters in data space and therefore avoids the high complexity caused by computing the adjacency matrix in feature space. This method is introduced into radar signal sorting and modified, by handling outliers, for lower complexity and higher accuracy. Meanwhile, a new cluster validity index, the Similitude Entropy (SE) index, is proposed, which assesses the compactness and separation of clusters using information entropy. Experimental results show that the strategy improves efficiency without sacrificing sorting accuracy.
2011, 33(11): 2742-2747.
doi: 10.3724/SP.J.1146.2011.00491
Abstract:
Remote sensing images are commonly affected by both blur and noise, which makes their quality difficult to assess because the degradation cannot be attributed to a single distortion type. According to the natural scene statistics of natural images, the mean coefficient amplitudes of wavelet subbands decrease approximately linearly with the scale index. Both noise and blur destroy this linearity, in different ways; by quantitatively analyzing the degree of deviation, both the blur strength and the noise strength of an image can be obtained, and their weighted sum is taken as the final quality index of the remote sensing image. Experiments show that, compared with the Peak Signal-to-Noise Ratio (PSNR) index, the proposed index is more consistent with the Structural SIMilarity (SSIM) index and can effectively and correctly evaluate images degraded by noise, blur, or both.
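For reference, the PSNR baseline the abstract compares against is the standard full-reference formula, 10·log10(peak²/MSE). A minimal implementation (function name and default peak value are illustrative):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    distorted image: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR needs an undistorted reference, which is exactly what is unavailable in the remote sensing setting; the abstract's index is no-reference, estimating blur and noise strength directly from the wavelet-subband statistics of the degraded image.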
2011, 33(11): 2748-2752.
doi: 10.3724/SP.J.1146.2011.00397
Abstract:
A more accurate moment-method solution for a longitudinal slot in a rectangular waveguide is presented, taking into account both the distribution of the transverse electric field across the slot aperture and the finite wall thickness. The resonant length of the slot is calculated, and the calculated results show quite high accuracy compared with both measured results and software simulations. The convergence of the method is discussed; it is found to converge with up to about 20 basis functions and 20×20 waveguide modes. The influence of the transverse electric field distribution on the calculated resonant slot length is also analyzed. The results show that the influence is significant in cases including half-height waveguide, large slot offset, wide slot width, and thin wall thickness.
2011, 33(11): 2753-2758.
doi: 10.3724/SP.J.1146.2011.00137
Abstract:
Recent studies on soft errors focus on low-cost fault-tolerant techniques, motivating an early and accurate evaluation of microprocessor reliability, i.e., the Architectural Vulnerability Factor (AVF). However, current AVF evaluation tools have limitations in accuracy and applicability. To improve the accuracy of AVF estimation for key microprocessor structures (i.e., memories) in low-cost fault-tolerant design, this paper proposes a Hybrid AVF Evaluation Strategy (HAES) that combines memory-access analysis and instruction identification. HAES is then integrated into a general simulator, yielding an improved AVF evaluation framework. Experimental results show that, compared with other AVF evaluation tools, the AVF computed by the framework is reduced by 22.6% on average. The AVFs estimated with the improved framework reflect the reliability of memories more accurately and play a significant role in low-cost fault-tolerant design.
2011, 33(11): 2759-2763.
doi: 10.3724/SP.J.1146.2011.00284
Abstract:
On-chip routers that adopt packet-connection circuit switching establish links by sending a request packet and transfer data by circuit switching. Conventional routing algorithms are not suited to the new features of a Network-on-Chip (NoC) system based on packet-circuit switching. Addressing these features, this paper proposes a new routing algorithm, the Retrograde-Turn (RT) routing algorithm, to improve the performance of the NoC. Experimental results demonstrate that, compared with the dynamic XY routing algorithm, the new algorithm improves average throughput and average latency by up to 26.7% and 11.6%, respectively.
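The RT algorithm itself is not described in the abstract, but the deterministic XY baseline it is compared against is standard and easy to state: route along the X dimension to the destination column first, then along Y. A minimal sketch on a 2D mesh (coordinate convention assumed):

```python
def xy_route(src, dst):
    """Deterministic XY routing on a 2D mesh NoC.

    Moves hop by hop along X until the destination column is reached,
    then along Y; returns the list of intermediate/final router coordinates.
    """
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                    # X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                    # then Y dimension
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops
```

XY routing is deadlock-free on a mesh because it forbids Y-to-X turns; adaptive or turn-based schemes such as RT relax this ordering to spread load.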
2011, 33(11): 2764-2770.
doi: 10.3724/SP.J.1146.2011.00480
Abstract:
A high-performance, low-power fixed-point Special Function Unit (SFU) for mobile vertex processors is presented in this paper. The unit supports the fixed-point format of OpenGL ES 1.X and implements faithfully rounded reciprocal, square root, reciprocal square root, logarithm, and exponential functions with 16 bits of fractional precision. The functions are approximated by a piecewise quadratic interpolation technique. A square-root-of-2 circuit is used in the unit, and the lookup table size is reduced by 29% with respect to previously proposed techniques, without any loss in accuracy. Based on an analysis of computational and truncation errors, the speed and area of the lookup table, squaring unit, multiplier, and fused accumulation tree are optimized. The SFU is implemented in a 0.18 μm CMOS technology; the circuit operates at clock frequencies up to 300 MHz, with a power dissipation of 12.8 mW at 300 MHz and an area of only 0.112 mm². The results show that the fixed-point SFU is well suited to computing elementary functions in mobile vertex processors.
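Piecewise quadratic interpolation, the approximation scheme named above, can be illustrated in software: split the argument range into segments, store quadratic coefficients per segment in a table, and evaluate c₂t² + c₁t + c₀ on the local offset t. The sketch below approximates 1/x on [1, 2); the segment count and least-squares fitting are illustrative choices, not the paper's table design or word lengths.

```python
import numpy as np

SEGS = 64                                   # table size (illustrative)
edges = np.linspace(1.0, 2.0, SEGS + 1)

# Per-segment quadratic fit of 1/x in the local offset t = x - segment_start.
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    xs = np.linspace(lo, hi, 16)
    coeffs.append(np.polyfit(xs - lo, 1.0 / xs, 2))   # [c2, c1, c0]

def recip(x):
    """Table-based piecewise quadratic approximation of 1/x on [1, 2)."""
    i = min(int((x - 1.0) * SEGS), SEGS - 1)          # segment index
    return np.polyval(coeffs[i], x - edges[i])        # c2*t^2 + c1*t + c0
```

The hardware version evaluates the same polynomial with a squaring unit, a multiplier, and a fused accumulation tree; quadratic error falls off as the cube of the segment width, which is why a modest table reaches 16-bit fractional accuracy.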
2011, 33(11): 2771-2774.
doi: 10.3724/SP.J.1146.2010.01285
Abstract:
A novel, high-performance electric field microsensor based on Silicon-On-Insulator (SOI) fabrication technology is presented. To improve the sensitivity and Signal-to-Noise Ratio (SNR) of the sensor, a unique shutter design covering the side walls of the sensing electrodes is used, which reduces the effect of the shutter's fringing fields. Moreover, the electrode structure parameters of the sensor are optimized by Finite Element Simulation (FES). The new sensor achieves a resolution of 50 V/m at atmospheric pressure and an uncertainty of better than 2% over an electric field range of 0~50 kV/m.
2011, 33(11): 2775-2779.
doi: 10.3724/SP.J.1146.2011.00337
Abstract:
System calibration is very important for the Ultra-WideBand Virtual Aperture Radar (UWB-VAR), which can penetrate the ground to detect flush-buried targets with weak scattering. The usual calibration method used in high-frequency narrowband radar is based on a single calibration object, and it can no longer be applied to the UWB-VAR system because of the ultra-wide bandwidth and the inconsistency among channels. In this paper, after analyzing the system errors and the electromagnetic characteristics of both the calibration objects and the landmines, a new method based on fusing multiple calibrators and multiple bands is introduced. The new method not only calibrates the system errors efficiently but also enhances the imaging performance. Finally, its effectiveness is verified with real data.
2011, 33(11): 2780-2784.
doi: 10.3724/SP.J.1146.2011.00196
Abstract:
A modeling and optimization method for the electromechanical coupling of cavity filters is proposed to improve the electrical performance and yield of assembled filters. In the method, a coupling model that reveals the effect of manufacturing precision on the electrical performance of cavity filters is developed by an improved multi-kernel linear-programming support vector regression, using data from the filter manufacturing process. The manufacturing precision is then optimized with the coupling model, and the optimal mechanical structure is obtained. Experiments on a practical filter confirm the effectiveness of the proposed approach, which is particularly suitable for the computer-aided manufacturing of filters in volume production.
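The core regression idea can be sketched generically: a multi-kernel SVR combines several base kernels (e.g., RBF plus linear) into one Gram matrix before training. The paper's improved linear-programming variant learns the kernel combination; the fixed weights, toy data, and use of scikit-learn's standard SVR below are all illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.svm import SVR

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def multi_kernel(A, B, w=(0.7, 0.3)):
    """Fixed-weight combination of an RBF and a linear kernel."""
    return w[0] * rbf_kernel(A, B) + w[1] * (A @ B.T)

# Toy stand-ins: inputs could be machining tolerances, the response an
# electrical figure of merit; both are synthetic here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

model = SVR(kernel='precomputed', C=10.0).fit(multi_kernel(X, X), y)
pred = model.predict(multi_kernel(X, X))
```

Passing a precomputed Gram matrix is the standard way to use a custom kernel combination with an off-the-shelf SVR; a learned combination would add the weights as optimization variables.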
2011, 33(11): 2785-2789.
doi: 10.3724/SP.J.1146.2011.00384
Abstract:
To address personalized service recommendation in mobile telecommunication networks, a collaborative filtering algorithm based on context similarity for mobile users is proposed, which incorporates mobile users' context information into the collaborative filtering recommendation process. The algorithm first calculates user-based context similarities to construct a set of contexts similar to the current context of the active user. It then reduces the mobile user-mobile service-context 3D model to a mobile user-mobile service 2D model using a context pre-filtering recommendation method. Finally, it predicts unknown user preferences and generates recommendations with the traditional 2D Collaborative Filtering (CF) algorithm. Experimental results indicate that the algorithm can predict user preferences in a mobile network service environment and achieves better recommendation accuracy than the traditional CF algorithm.
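The pre-filtering step described above can be sketched: keep only the ratings whose context is similar to the active user's current context, which collapses the user-service-context cube to an ordinary 2D user-service matrix ready for standard user-based CF. The context encoding (a vector), cosine similarity, and threshold are illustrative assumptions.

```python
import numpy as np

# (user, service, context_vector, rating) tuples; contexts are toy one-hot
# vectors (e.g. "at home" vs "commuting").
ratings = [
    (0, 0, (1, 0), 5.0), (0, 1, (1, 0), 3.0),
    (1, 0, (1, 0), 4.0), (1, 1, (0, 1), 1.0),
    (1, 1, (1, 0), 2.0),
]

def prefilter(current_ctx, n_users=2, n_services=2, thresh=0.9):
    """Reduce the 3D user-service-context data to a 2D rating matrix,
    keeping only ratings given in contexts similar to current_ctx."""
    m = np.full((n_users, n_services), np.nan)
    for u, s, ctx, r in ratings:
        ctx = np.asarray(ctx, dtype=float)
        sim = ctx @ current_ctx / (np.linalg.norm(ctx)
                                   * np.linalg.norm(current_ctx))
        if sim >= thresh:
            m[u, s] = r          # only context-matching ratings survive
    return m

M = prefilter(np.array([1.0, 0.0]))   # 2D matrix for standard user-based CF
```

With the current context set to the first context, user 1's rating of service 1 given in the other context is filtered out, and any traditional 2D CF algorithm can run on `M` unchanged.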
2011, 33(11): 2790-2794.
doi: 10.3724/SP.J.1146.2011.00398
Abstract:
The Fast Dipole Method (FDM), based on the Equivalent Dipole-moment Method (EDM), is applied to the fast calculation of electromagnetic scattering from composite metallic and material targets. In the FDM, a simple Taylor series expansion and a grouping scheme naturally transform the Matrix-Vector Product (MVP) into an aggregation-translation-disaggregation form, which accelerates the MVP remarkably. Furthermore, the impedance elements related to far group pairs are not stored, which saves considerable memory. In addition, the EDM is used to speed up the calculation of mutual impedance elements for the near-field groups. Simulation results demonstrate the efficiency and satisfactory accuracy of the method.