2007 Vol. 29, No. 7
2007, 29(7): 1525-1528.
doi: 10.3724/SP.J.1146.2006.00030
Abstract:
How to allocate power across antennas and subcarriers is a key problem when adaptive power allocation is applied to multi-antenna, multi-carrier Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems. In this paper, a novel power allocation algorithm is proposed. The algorithm first combines OFDM with a closed-loop transmit diversity technique, then allocates power across the subcarriers to minimize the Bit Error Rate (BER). The optimality of the algorithm is proved mathematically. For a system with two transmit antennas, one receive antenna, and 4 OFDM subcarriers, simulation results show a 6.5 dB gain over the traditional OFDM-STBC method at a BER of 0.1%.
2007, 29(7): 1529-1532.
doi: 10.3724/SP.J.1146.2005.01698
Abstract:
For Orthogonal Frequency-Division Multiplexing (OFDM) systems operating at high mobile speeds, the frequency offset caused by Doppler shift and carrier frequency differences introduces Inter-Carrier Interference (ICI), which degrades performance. Based on an analysis of the ICI mechanism, a highly efficient ICI cancellation scheme based on differential coding with channel estimation is introduced. The scheme improves the spectral efficiency of ICI self-cancellation. Compared with a normal OFDM system, the proposed scheme achieves a 4 dB improvement in channel estimation and avoids the error floor caused by ICI.
2007, 29(7): 1533-1536.
doi: 10.3724/SP.J.1146.2005.01648
Abstract:
Based on an analysis of OFDMA system capacity, a stochastic model, the M/M/m/n Markov queue model, is proposed. Based on this model, a call admission control strategy is presented in which the number of call connections admitted to the network is limited by the system throughput and the users' QoS requirements. The results demonstrate that the M/M/m/n model is suitable for adaptive OFDMA systems supporting FTP service, and the curves of blocking probability versus arrival rate and average service time versus arrival rate derived from the model coincide with the simulation results.
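The blocking probability of an M/M/m/n queue follows directly from its birth-death steady-state distribution. A minimal sketch (not the paper's code; m channels, total system capacity n, arrival rate lam, service rate mu):

```python
from math import factorial

def mmmn_blocking(lam, mu, m, n):
    """Blocking probability of an M/M/m/n queue (m servers, capacity n)."""
    a = lam / mu  # offered load in Erlangs
    # Unnormalized steady-state probabilities pi_k for k = 0..n
    pi = []
    for k in range(n + 1):
        if k <= m:
            pi.append(a**k / factorial(k))
        else:
            pi.append(a**k / (factorial(m) * m**(k - m)))
    # P(system full) = call blocking probability
    return pi[n] / sum(pi)
```

With n = m (no waiting room) this reduces to the Erlang-B formula; a call admission controller would reject new calls whenever the predicted blocking probability exceeds the QoS target.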
2007, 29(7): 1537-1541.
doi: 10.3724/SP.J.1146.2005.01149
Abstract:
In this paper, a computationally efficient algorithm for transmit power and bit allocation in wireless OFDM systems is proposed; the aim is to minimize the total transmit power under constraints on the data rate and the maximum Bit Error Rate (BER). By exploiting the relation between the water-filling level and the system data rate, the proposed algorithm searches for the water-filling level iteratively without a preset step size or initial value, and then performs the final bit and power allocation with a simplified greedy algorithm over a subset of subcarriers. By combining the water-filling and greedy algorithms effectively, the proposed algorithm avoids the problems of convergence probability, preset initial values, and optimal step-size selection found in traditional adaptive water-filling algorithms, and its computational efficiency is high. Simulation results verify the performance of the proposed algorithm.
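For reference, the classic water-filling solution that such algorithms build on can be sketched as follows (standard textbook form, not the paper's iteration): given per-subcarrier noise levels and a total power budget, raise a common water level mu and pour power only into subcarriers whose noise lies below it.

```python
def water_filling(noise, total_power):
    """Classic water-filling: p_i = max(mu - noise_i, 0) with sum(p_i) = total_power."""
    active = sorted(noise)
    while active:
        # Candidate water level if all remaining subcarriers receive power
        mu = (total_power + sum(active)) / len(active)
        if mu >= active[-1]:
            break             # every remaining subcarrier is below the water level
        active.pop()          # drop the noisiest subcarrier and recompute
    return [max(mu - n, 0.0) for n in noise]
```

The best subcarriers receive the most power; subcarriers whose noise exceeds the final water level are switched off.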
2007, 29(7): 1542-1545.
doi: 10.3724/SP.J.1146.2005.01672
Abstract:
For mobile frequency-nonselective fading MIMO channels, this paper proposes a 3D statistical channel model and a new spatial function for the receive antennas. The effects of the antenna spacing parameters on correlation and capacity are examined by computer simulation. The simulation results show that the channel capacity is inversely related to the correlation.
2007, 29(7): 1546-1550.
doi: 10.3724/SP.J.1146.2005.01551
Abstract:
A novel fast recursive V-BLAST detection algorithm is proposed in this paper. Since a simple recursion for computing pseudoinverses is exploited in the algorithm, the Zero-Forcing (ZF) weight matrix and the ZF weight vector at each iteration can be computed directly from those determined at the previous iteration. It is shown that the proposed algorithm not only retains the optimal detection performance, but also has lower computational complexity and faster processing speed than existing algorithms.
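The paper's fast recursion is not reproduced here, but the ZF building block it accelerates is standard: apply the pseudoinverse of the channel matrix and slice. A minimal pure-Python sketch for a real-valued channel and BPSK symbols (via the normal equations rather than an explicit pseudoinverse):

```python
def zf_detect(H, y):
    """Zero-forcing detection x_hat = pinv(H) @ y for a tall real channel H,
    followed by BPSK slicing. Solves the normal equations H^T H x = H^T y."""
    rows, cols = len(H), len(H[0])
    G = [[sum(H[k][i] * H[k][j] for k in range(rows)) for j in range(cols)]
         for i in range(cols)]                      # G = H^T H
    z = [sum(H[k][i] * y[k] for k in range(rows)) for i in range(cols)]  # z = H^T y
    # Gauss-Jordan elimination with partial pivoting on [G | z]
    for i in range(cols):
        p = max(range(i, cols), key=lambda r: abs(G[r][i]))
        G[i], G[p] = G[p], G[i]
        z[i], z[p] = z[p], z[i]
        for r in range(cols):
            if r != i:
                f = G[r][i] / G[i][i]
                G[r] = [a - f * b for a, b in zip(G[r], G[i])]
                z[r] -= f * z[i]
    x = [z[i] / G[i][i] for i in range(cols)]
    return [1 if v >= 0 else -1 for v in x]        # BPSK hard decision
```

Full V-BLAST additionally orders the layers and cancels detected symbols; the recursion in the paper avoids recomputing the pseudoinverse after each cancellation.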
2007, 29(7): 1551-1555.
doi: 10.3724/SP.J.1146.2005.01552
Abstract:
The main impairment for locating terminals in wireless communication systems is the Non-Line-Of-Sight (NLOS) condition. This paper presents a robust location tracking architecture for NLOS situations. It mitigates the NLOS error in the raw measurements using biased Kalman filtering, and then a reckoning mechanism is introduced into the Kalman filtering of the location estimates. Simulation results demonstrate that, with the proposed architecture, location estimates with good accuracy can be obtained even under severe NLOS propagation conditions.
2007, 29(7): 1556-1559.
doi: 10.3724/SP.J.1146.2006.00013
Abstract:
Mobile location using only the serving base station is relatively easy to implement and has a higher positioning probability, but its major challenge is poor accuracy. A single-base-station mobile tracking technique based on the particle filter, a powerful method for highly non-linear, non-Gaussian problems, is proposed. Simulation results indicate that this technique achieves high accuracy and converges faster than location algorithms based on the extended Kalman filter.
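A bootstrap particle filter of the kind referenced above can be sketched in a few lines; this is a generic 1-D illustration (random-walk motion model, Gaussian measurement likelihood), not the paper's tracking formulation:

```python
import random
from math import exp

def particle_filter_1d(measurements, n_particles=500, proc_std=0.5,
                       meas_std=1.0, seed=1):
    """Bootstrap particle filter estimating a 1-D position from noisy measurements."""
    rng = random.Random(seed)
    particles = [rng.gauss(measurements[0], meas_std) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Predict: propagate each particle through a random-walk motion model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # Update: weight each particle by the Gaussian measurement likelihood
        weights = [exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # MMSE estimate = weighted particle mean
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Because the weight update accepts any likelihood, the same skeleton handles the non-linear range/angle measurement models that defeat the extended Kalman filter.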
2007, 29(7): 1560-1563.
doi: 10.3724/SP.J.1146.2005.01597
Abstract:
A real-time smoothing scheme for MPEG-4 video over CDMA systems is proposed in this paper. Due to the high peak rate and frequent rate variations of MPEG-4 video, transmitting real-time variable-bit-rate video over CDMA systems is a challenge. In the proposed scheme, a real-time video smoothing algorithm within a Group of Pictures (GOP) is carried out first, and the transmitter then chooses the transmission rate from the CDMA physical-layer rate set according to the smoothed rate. The scheme considerably reduces the peak rate, rate variance, and bandwidth of the transmitted signal. Simulation results show that the proposed smoothing scheme achieves higher system stability and better QoS; moreover, the performance gain is more pronounced when the channel condition is relatively bad.
2007, 29(7): 1564-1568.
doi: 10.3724/SP.J.1146.2005.01441
Abstract:
In TDD-CDMA systems, unbalanced slot allocation between the uplink (UL) and downlink (DL) causes cross-slot interference that can be harmful to the system. To reduce this interference, a Dynamic Channel Allocation (DCA) strategy based on path gain with a buffer (PGBDCA) is proposed. In PGBDCA, the cell is divided into three regions: besides the inner and outer regions, a buffer region is introduced to manage the slots whose path gain is near the threshold. In static system-level simulations, the PGBDCA algorithm shows better performance than conventional algorithms in terms of the probability of users gaining access. The simulation results also show that the position and size of the buffer are key factors influencing system performance.
2007, 29(7): 1569-1572.
doi: 10.3724/SP.J.1146.2005.01609
Abstract:
An FFA- and TFA-based interception method for LPI frequency-hopping signals is presented in this paper. The algorithm can detect frequency-hopping signals effectively at low Signal-to-Noise Ratio (SNR). By properly selecting the range of the folding period and the resolution, the parameters of the frequency-hopping signal, such as the hop duration, time offset, and hop frequencies, can be estimated. Finally, simulation and performance analysis illustrate that the algorithm can intercept frequency-hopping signals at SNRs below 0 dB and outperforms adaptive threshold detection.
2007, 29(7): 1573-1575.
doi: 10.3724/SP.J.1146.2005.01291
Abstract:
In 2000, Tang, Fan, and Matsufuji presented the theoretical bound for an (L, M, Zcz)-ZCZ sequence family: Zcz &lt;= L/M - 1. In this paper, for given positive integers n and L, a construction algorithm for interleaved ZCZ sequence families is proposed, by which a class of binary (2^(n+1)L, 2L, 2^n - 1)-ZCZ sequence families can be generated from an orthogonal sequence family composed of L sequences of period L. If n &gt;= 2 and 4 divides the period, the correlation value between the even-indexed and odd-indexed sequences of the family at the corresponding shift is zero. Furthermore, by choosing different orthogonal sequence families or different shift sequences, different ZCZ sequence families can be generated by this construction algorithm.
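The Tang-Fan-Matsufuji bound cited above, in display form, together with a check that the construction meets it with equality:

```latex
% For an (L, M, Z_{cz})-ZCZ family of M sequences of period L:
Z_{cz} \le \frac{L}{M} - 1
% The constructed family has parameters L' = 2^{n+1}L, M' = 2L, so
\frac{L'}{M'} - 1 = \frac{2^{n+1}L}{2L} - 1 = 2^{n} - 1 = Z_{cz},
% i.e. the bound is attained with equality (the family is optimal).
```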
2007, 29(7): 1576-1579.
doi: 10.3724/SP.J.1146.2005.01671
Abstract:
The m-sequence is one of the most widely used codes in spread-spectrum communications. The Triple Correlation Function (TCF) of the m-sequence, the partial TCF of the m-sequence, and their peak features are studied and described in this paper. A detection method and a recognition criterion for m-sequences are then proposed based on the peak feature of the partial TCF. Simulation verifies that the peak feature of the partial TCF is the same as that of the full TCF over the corresponding intercepted section. Using this peak feature, m-sequences can be detected and recognized, which is the basis for detecting and recognizing direct-sequence spread-spectrum signals. The proposed detection method and recognition criterion are validated by simulation.
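The TCF peak exploited here follows from the shift-and-add property of m-sequences: the product of two shifts of a +/-1-mapped m-sequence is again a shift of it, so the triple correlation hits the full value N at exactly one offset. A small illustration (length-7 m-sequence from the primitive polynomial x^3 + x + 1; an assumption for the demo, not the paper's setup):

```python
def lfsr_mseq(length=7):
    """Length-7 binary m-sequence from the LFSR x^3 + x + 1, mapped to +/-1."""
    state = [1, 0, 0]
    seq = []
    for _ in range(length):
        seq.append(1 - 2 * state[-1])      # bit 0 -> +1, bit 1 -> -1
        fb = state[2] ^ state[0]           # feedback taps for x^3 + x + 1
        state = [fb] + state[:-1]
    return seq

def tcf(s, t1, t2):
    """Cyclic triple correlation C(t1, t2) = sum_t s(t) s(t+t1) s(t+t2)."""
    N = len(s)
    return sum(s[t] * s[(t + t1) % N] * s[(t + t2) % N] for t in range(N))
```

Sweeping t2 for a fixed nonzero t1 yields the peak value N = 7 at a single offset and the off-peak value -1 everywhere else, which is the signature the detection method searches for.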
2007, 29(7): 1580-1583.
doi: 10.3724/SP.J.1146.2006.00023
Abstract:
This paper focuses on LUT-based digital pre-distortion techniques in OFDM systems. The defects of conventional LUT-based pre-distortion techniques are pointed out, and an improved method is proposed. The performance of the algorithms is compared and analyzed in terms of BER, PSD, and convergence speed. Simulation results and analysis demonstrate the better performance of the proposed method.
2007, 29(7): 1584-1587.
doi: 10.3724/SP.J.1146.2005.01692
Abstract:
OFDM systems are very sensitive to time-varying channels, which cause inter-carrier interference (ICI). In this paper, a layered ICI cancellation scheme for time-varying channels is proposed, together with a detailed frequency-domain analysis of the ICI. The proposed scheme compensates for the ICI terms that most significantly affect the bit-error rate of the system. Simulation results are also given to show the performance achieved by the new scheme.
2007, 29(7): 1588-1591.
doi: 10.3724/SP.J.1146.2005.01610
Abstract:
Using the Minimum Mean Square Error (MMSE) criterion, two improved Belief Propagation (BP)-based algorithms, the scaled BP-based and offset BP-based algorithms, are designed for decoding short Low-Density Parity-Check (LDPC) codes on the fast Rayleigh fading channel. Based on the MMSE criterion, theoretical formulas and numerical calculations for the optimum factors of these two BP-based algorithms are provided. Simulation results for (3,6)-regular LDPC codes of lengths 504 and 1008 on the fast Rayleigh fading channel demonstrate that the scaled and offset BP-based algorithms with the proposed factors achieve better performance than the BP algorithm.
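The two decoder variants differ only in their check-node update: the scaled version multiplies the min-sum magnitude by a factor alpha, and the offset version subtracts a constant beta (floored at zero). A sketch of these updates with illustrative factor values (the paper derives the optimum factors via MMSE; 0.8 and 0.15 here are placeholders):

```python
def check_node_scaled(msgs, alpha=0.8):
    """Scaled BP-based (normalized min-sum) check-node update for one edge:
    alpha * prod(sign(m)) * min(|m|) over the incoming extrinsic messages."""
    sign = 1
    for m in msgs:
        if m < 0:
            sign = -sign
    return alpha * sign * min(abs(m) for m in msgs)

def check_node_offset(msgs, beta=0.15):
    """Offset BP-based update: min-sum magnitude reduced by beta, floored at 0."""
    sign = 1
    for m in msgs:
        if m < 0:
            sign = -sign
    return sign * max(min(abs(m) for m in msgs) - beta, 0.0)
```

Both corrections shrink the min-sum magnitude toward the exact BP value, which is why a well-chosen factor recovers most of the gap to full BP at a fraction of its cost.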
2007, 29(7): 1592-1595.
doi: 10.3724/SP.J.1146.2005.01642
Abstract:
The main problem in decoding Turbo TAST codes is reducing their dimension, and the performance of different reduction methods varies greatly. The null-space method is no longer valid for TAST codes, while reduction to single-input single-output with MMSE equalization achieves a good balance between performance and computational complexity. The link-level capacity is derived for these methods and agrees well with the simulation results.
2007, 29(7): 1596-1599.
doi: 10.3724/SP.J.1146.2005.01664
Abstract:
This paper presents eight problems in the design of the video encoding and decoding subsystem of an H.323 video conference system, such as motion estimation and rate control. For each problem, after introducing its background, the paper presents a solution and describes the algorithm steps in detail. Some technical innovations are also discussed. These results are incorporated into a real-life H.323 system, which achieves an outstanding improvement in video performance.
2007, 29(7): 1600-1603.
doi: 10.3724/SP.J.1146.2005.01586
Abstract:
This paper presents a new modeling method for microwave VCOs (Voltage-Controlled Oscillators). Based on this model, a novel broadband microwave noise generator, composed of a digital algorithm part and an analog modulation part, is simulated. The noise generator features a broad adjustable bandwidth (from 30 MHz to 300 MHz) and a wide central frequency range (from 8 GHz to 18 GHz). A comparison of the simulated and experimental results of the generator is given, and the causes of their agreements and disagreements are analyzed. The accordance between the simulated and experimental results, such as the 3 dB bandwidth and in-band flatness, indicates the validity of the presented modeling method.
2007, 29(7): 1604-1607.
doi: 10.3724/SP.J.1146.2005.01573
Abstract:
In environments with coherent interferences, the performance of an adaptive array declines dramatically, and the general remedy is spatial smoothing. However, an adaptive array using conventional Uniform Spatial Smoothing (USS) has a poor ability to suppress coherent interferences and loses array aperture. In this paper, an improved approach to suppressing coherent interferences is proposed. First, an adaptive Weighted Spatial Smoothing (WSS) algorithm is presented, which de-correlates the coherent interferences effectively through weighted averaging of the correlation matrices of the sub-arrays. Then, based on WSS and using the Linearly Constrained Minimum Variance (LCMV) criterion, the optimal weight vector of the sub-array beamformer is obtained. Finally, considering the phase relationship among the sub-arrays, an approach for full-array beamforming is proposed. The approach greatly improves the array's ability to suppress coherent interferences and avoids the aperture loss caused by conventional spatial smoothing. Theoretical analysis and computer simulation confirm the effectiveness and robustness of the algorithm.
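The weighted averaging step can be sketched generically: slide a sub-array window along the full array and form a weighted average of the corresponding diagonal sub-blocks of the covariance matrix. This shows the mechanics only; the paper's adaptive choice of the weights is not reproduced (uniform weights 1/n_sub recover conventional USS):

```python
def weighted_spatial_smoothing(R, sub_len, weights):
    """Weighted forward spatial smoothing: weighted average of the sub_len x sub_len
    diagonal sub-blocks of an N x N array covariance matrix R (complex entries)."""
    n_sub = len(R) - sub_len + 1               # number of overlapping sub-arrays
    assert len(weights) == n_sub and abs(sum(weights) - 1.0) < 1e-9
    Rs = [[0j] * sub_len for _ in range(sub_len)]
    for k in range(n_sub):                     # k-th sub-array starts at sensor k
        for i in range(sub_len):
            for j in range(sub_len):
                Rs[i][j] += weights[k] * R[k + i][k + j]
    return Rs
```

The averaging restores the rank of the smoothed covariance matrix in the presence of coherent sources, at the cost of reducing the effective aperture from N to sub_len, which is the loss the full-array beamforming step in the paper recovers.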
2007, 29(7): 1608-1611.
doi: 10.3724/SP.J.1146.2005.01596
Abstract:
A parallel allocation algorithm is proposed as a modification of the CSGC (Color Sensitive Graph Coloring) algorithm. Under the constraint of maximizing system utilization, the parallel algorithm obtains the same allocation matrix as CSGC while reducing the allocation period, so that it can meet the agile sensing requirement of cognitive radio. Simulation results and analysis support this conclusion.
2007, 29(7): 1612-1616.
doi: 10.3724/SP.J.1146.2005.01558
Abstract:
iRGRR (iterative Request-Grant-based Round-Robin) is a scheduling algorithm for input-queued crossbars with many good features, such as simplicity, scalability, and fine performance. This paper proposes a new packet scheduling scheme based on iRGRR, called iRGRR/PM (iRGRR with Packet Mode), for high-speed crossbars. The iRGRR/PM algorithm is appropriate for scheduling IP packets and can be used in high-speed, large-capacity routers. Compared with iRGRR, iRGRR/PM not only simplifies the design of the packet output reassembly module, but also improves the bandwidth utilization of the crossbar. The relation between the packet delays of the two algorithms is briefly analyzed, and detailed simulation studies are carried out. The results show that iRGRR/PM achieves higher throughput under the same circumstances, in particular reaching 100% throughput under nonuniform traffic. In addition, iRGRR/PM provides better delay performance for larger packets.
2007, 29(7): 1617-1621.
doi: 10.3724/SP.J.1146.2006.00109
Abstract:
Previous capacity estimation techniques cannot measure path capacity and available bandwidth simultaneously. In this article, an asymptotically accurate available-bandwidth estimator is obtained through a stochastic analysis of a single congested node. Based on this idea, major revisions are made to the algorithm of Kapoor (2004), and a new capacity and available-bandwidth estimation method is presented. The method can estimate both metrics from the same group of samples. Simulation validates the theoretical results of the algorithm.
2007, 29(7): 1622-1627.
doi: 10.3724/SP.J.1146.2006.00194
Abstract:
To address the disadvantages of the existing AODV and AODV-BR schemes in ad hoc networks, a route reconstruction scheme based on cache bypass and local recovery is proposed in this paper. A mobile node listens to all frames, including data packets and routing control signaling, in its free time, and maintains a neighbor list and a local route cache, which greatly reduces the signaling cost of periodic HELLO messages and gains more usable route information. Once an intermediate node detects a broken link, it attempts local recovery instead of broadcasting RREQ messages for route discovery from the source node. Fast route discovery and local recovery are achieved using the local route cache and the neighbors' route caches, so routing control signaling and the packet drop ratio are greatly reduced.
2007, 29(7): 1628-1632.
doi: 10.3724/SP.J.1146.2006.00154
Abstract:
In large-scale P2P networks, it is difficult to deal with the different types of attacks issued by malicious peers. To solve this problem, a novel trust model based on reputation and risk evaluation is proposed in this paper, in which the uncertainty of trust relationships is considered. Risk is quantified using information entropy, and the trust degree and uncertainty degree are then presented in a uniform form. Simulation results show that the proposed trust model evidently enhances the successful transaction ratio of the system and efficiently helps peers establish trust relationships in open P2P networks.
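The core idea of quantifying risk by information entropy can be sketched as follows. This is a hypothetical minimal illustration, not the paper's exact model: trust is taken as the empirical success ratio, and the binary entropy of the outcome distribution serves as the uncertainty (risk) degree.

```python
import math

def trust_with_uncertainty(successes, failures):
    """Return (trust degree, uncertainty degree) for a peer.

    Trust is the empirical success probability of past transactions;
    uncertainty is the binary information entropy of the outcome
    distribution (1.0 = maximally uncertain, 0.0 = fully predictable).
    """
    total = successes + failures
    if total == 0:
        return 0.5, 1.0  # no history: neutral trust, maximal uncertainty
    p = successes / total
    entropy = 0.0
    for q in (p, 1 - p):
        if q > 0:
            entropy -= q * math.log2(q)
    return p, entropy

reliable = trust_with_uncertainty(9, 1)   # high trust, low uncertainty
erratic = trust_with_uncertainty(5, 5)    # neutral trust, maximal uncertainty
```

A peer with a 50/50 record gets maximal entropy, so a risk-aware policy can prefer a consistently good (or even consistently bad, hence avoidable) peer over an unpredictable one.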
2007, 29(7): 1633-1637.
doi: 10.3724/SP.J.1146.2005.01323
Abstract:
Combining the delivery of query messages with the setup of the data transmission structure, a novel distributed data collection scheme for wireless sensor networks, TBDCS (Tree Based Data Collection Scheme), is proposed in this paper. Using a flooding-avoidance method, TBDCS sets up a tree with the minimum number of intermediate nodes, which also serve as data aggregators when sensor nodes send data back. Theoretical analysis proves that TBDCS changes neither the network connectivity nor the length of the shortest paths between the sink and any other sensor node. Simulations show that it significantly reduces traffic and achieves a longer system lifetime.
2007, 29(7): 1638-1641.
doi: 10.3724/SP.J.1146.2005.01628
Abstract:
This paper describes how to reduce the complexity of the particle filter in order to conserve the energy of sensor nodes. First, a structure for a distributed particle filter is proposed. Second, the impact of the particle number on the performance of the particle filter is studied according to the Law of Large Numbers. Third, a sensor selection criterion is proposed based on the sensors' position information and the measurement equation; under this criterion, particle filtering is performed in a distributed way. Finally, the performance of the algorithm is demonstrated by computer simulations.
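For reference, one update step of a generic bootstrap particle filter is sketched below; it is the baseline whose cost motivates distributing the computation, not the paper's distributed variant. The Gaussian likelihood and the scalar state are illustrative assumptions.

```python
import math
import random

def particle_filter_step(particles, weights, measurement, sigma=1.0):
    """One bootstrap particle-filter update on a scalar state:
    reweight each particle by a Gaussian measurement likelihood,
    then resample so likely particles are duplicated."""
    # Weight update: likelihood of the measurement given each particle.
    w = [wi * math.exp(-0.5 * ((measurement - p) / sigma) ** 2)
         for p, wi in zip(particles, weights)]
    total = sum(w)
    w = [wi / total for wi in w]
    # Multinomial resampling; weights reset to uniform afterwards.
    new_particles = random.choices(particles, weights=w, k=len(particles))
    return new_particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
parts, wts = particle_filter_step([0.0, 1.0, 5.0, 10.0], [0.25] * 4,
                                  measurement=1.2)
```

The per-step cost grows linearly with the particle count, which is exactly the quantity the Law-of-Large-Numbers analysis in the paper allows one to trade against estimation accuracy.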
2007, 29(7): 1642-1644.
doi: 10.3724/SP.J.1146.2005.01562
Abstract:
Recently, Xie et al. proposed a multi-secret sharing authentication scheme based on double secret shadows, in which a participant must use his two secret sub-shadows to prove his honesty in the secret reconstruction stage. This article demonstrates, however, that their scheme cannot withstand participant cheating. Finally, this article improves Xie et al.'s scheme to overcome the weakness and reduces the computation by using a parallel algorithm in the secret recovery phase.
2007, 29(7): 1645-1648.
doi: 10.3724/SP.J.1146.2005.01327
Abstract:
A signal can be decomposed sparsely, with its energy concentrated, over an over-complete dictionary using Matching Pursuit (MP). This paper proposes a modified MP method that decomposes signals more sparsely. In each iteration of the modified MP, the over-complete dictionary is split into two separate dictionaries containing the selected and the unselected atoms; through a simulated-annealing threshold function, the algorithm is given a greater chance than the original MP of choosing the optimal atom from the selected-atom dictionary, thereby yielding a sparser decomposition. Decomposition results for a cosine-modulated exponential signal and an actual speech signal show that the proposed modified MP decomposes signals more sparsely.
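The original MP that the paper modifies is simple to state: repeatedly pick the unit-norm atom most correlated with the residual and subtract its contribution. A minimal pure-Python sketch (tiny illustrative dictionary, not a speech-scale one):

```python
import math

def matching_pursuit(signal, dictionary, n_iter=3):
    """Plain Matching Pursuit over a dictionary of unit-norm atoms:
    each iteration selects the atom with the largest |inner product|
    with the residual, records its coefficient, and subtracts."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        # Correlate the residual with every atom.
        corrs = [sum(r * a for r, a in zip(residual, atom))
                 for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(corrs[i]))
        coef = corrs[best]
        decomposition.append((best, coef))
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
    return decomposition, residual

# Over-complete dictionary of unit-norm atoms in R^2.
s2 = 1 / math.sqrt(2)
atoms = [[1.0, 0.0], [0.0, 1.0], [s2, s2]]
decomp, res = matching_pursuit([3.0, 3.0], atoms, n_iter=1)
# The diagonal atom matches exactly, so one atom captures the signal.
```

The modification described in the abstract biases the atom choice of the `best = max(...)` step toward previously selected atoms via an annealing threshold, reusing atoms to shorten the expansion.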
2007, 29(7): 1649-1652.
doi: 10.3724/SP.J.1146.2005.01699
Abstract:
In multiscale product coefficient hard thresholding, determining the optimal threshold is the main problem because of the discontinuity of the MSE. Here, a semi-soft thresholding function is constructed as the product of a shrinkage-coefficient function and the wavelet coefficients. This function is infinitely differentiable with respect to the wavelet coefficient and adaptively shrinks wavelet coefficients in the neighborhood of the threshold. By minimizing the Stein Unbiased Risk Estimate (SURE) based on this function, the optimal threshold, which varies with the signal and noise, is obtained in the Mean Square Error (MSE) sense. In simulations denoising several classic noisy signals, multiscale product coefficient thresholding is improved by the proposed semi-soft thresholding function.
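To make the contrast concrete, here are hard and soft thresholding next to one smooth shrinkage of the semi-soft type. The `semi_soft` formula is a hypothetical illustration of the shape described (infinitely differentiable, near-hard for large coefficients, gradual near the threshold); the paper's exact function may differ.

```python
import math

def hard(w, t):
    """Hard thresholding: keep or kill; discontinuous at |w| = t."""
    return w if abs(w) > t else 0.0

def soft(w, t):
    """Soft thresholding: shrink everything toward zero by t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def semi_soft(w, t):
    """A smooth semi-soft shrinkage: w times a C-infinity shrinkage
    coefficient that is ~0 for |w| << t and ~1 for |w| >> t."""
    return w * (1.0 - math.exp(-(w / t) ** 2))

t = 1.0
for w in (0.2, 1.0, 3.0):
    print(hard(w, t), soft(w, t), semi_soft(w, t))
```

Because `semi_soft` is differentiable everywhere, the SURE risk built on it is smooth in the threshold, so the optimal threshold can be found by ordinary minimization rather than a grid search over discontinuities.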
2007, 29(7): 1653-1656.
doi: 10.3724/SP.J.1146.2005.01611
Abstract:
The gain and phase error calibration of Linear Equi-spaced Arrays (LEA) is considered in this paper. The perturbation of the covariance matrix along its different diagonals due to the finite number of snapshots is first analyzed, and an explicit formula for the variance of the gain and phase perturbations along the different diagonals is derived. This result reveals that the perturbation statistics differ from one diagonal to another. Based on this analytic result, it is found that the optimal gain and phase error calibration methods are based on the main diagonal and the first upper diagonal, respectively. Computer simulation results verify the analyses.
2007, 29(7): 1657-1661.
doi: 10.3724/SP.J.1146.2006.00645
Abstract:
A blind Signal-to-Noise Ratio (SNR) estimator based on a modified PASTd (Projection Approximation Subspace Tracking with deflation) algorithm is proposed in this paper for Intermediate Frequency (IF) signals in the Additive White Gaussian Noise (AWGN) channel. The orthogonality of the estimated eigenvectors is guaranteed by introducing the modified Gram-Schmidt orthogonalization process into the original PASTd method. Computer simulations are performed for commonly used IF signals, such as MPSK (M=2,4,8) and MQAM (M=16,64,128,256) signals. The results show that the performance of the algorithm is robust: when the true SNR is in the range from 5 dB to 25 dB, the estimation bias is under 1 dB and the corresponding standard deviation is within 0.3. Compared with the Eigenvalue Decomposition (ED)-based method, the proposed algorithm achieves more accurate estimation with lower computational complexity.
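The modified Gram-Schmidt step that restores eigenvector orthogonality can be sketched in isolation; the PASTd recursion itself is omitted, and real eigenvectors stand in for the complex ones an IF estimator would use.

```python
import math

def mgs(vectors):
    """Modified Gram-Schmidt orthonormalization: project out each
    already-accepted basis direction from the working vector one at a
    time, which is numerically more stable than the classical variant
    that computes all projections from the original vector."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            proj = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - proj * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

q1, q2 = mgs([[3.0, 1.0], [2.0, 2.0]])
dot = sum(a * b for a, b in zip(q1, q2))  # should be ~0: orthogonal
```

Once the signal-subspace eigenvectors are kept orthogonal, the noise power (and hence the SNR) can be read off from the energy outside that subspace.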
2007, 29(7): 1662-1665.
doi: 10.3724/SP.J.1146.2006.00001
Abstract:
A novel approach to Ground Moving Target Indication (GMTI) based on subapertures and masking is developed in this paper. First, the Range Compressed and Azimuth Unfocused (RCAU) complex SAR image is split into two subapertures symmetrically in azimuth. Because moving and stationary targets behave differently between the two subapertures, subtraction between them cancels the stationary targets and preserves the moving targets. The trajectory of a moving target can be recovered, and an image mask can be formed from this difference image. The phase history of the moving target is then restored by multiplying the mask with the RCAU complex image. Next, the moving target is focused with a wider-Doppler-band matched filter and refined with Phase Gradient Autofocus (PGA) to aggregate energy and enhance the detection probability. Finally, the moving target is detected with a 2-D Constant False Alarm Rate (CFAR) detector. Combined with Along-Track Interferometry (ATI), the approach can greatly extend the detectable velocity range of moving targets.
2007, 29(7): 1666-1669.
doi: 10.3724/SP.J.1146.2005.01653
Abstract:
Due to the presence of multiplicative speckle noise, registering InSAR images is more difficult than traditional image registration. In this paper, an automatic InSAR image registration method based on B-spline curve fitting and matching is presented that can handle InSAR image registration efficiently. After the registration parameters are estimated by the least squares method, a two-step method is applied to achieve sub-pixel registration accuracy. The experimental results demonstrate that the algorithm is robust, efficient, and accurate.
2007, 29(7): 1670-1673.
doi: 10.3724/SP.J.1146.2005.01647
Abstract:
Motion compensation is essential for wide-beam airborne Synthetic Aperture Radar (SAR). This paper proposes a motion compensation algorithm for wide-beam airborne SAR based on phase correction in the frequency domain over azimuth blocks. Using the time-frequency relationship of the chirp signal, residual errors in the time domain are mapped into the frequency domain and compensated. Short Fourier transforms are used to gain high efficiency. The paper analyzes the principle, limitations, processing steps, and efficiency of the algorithm. Simulations of point targets and images from a P-band airborne SAR with low-frequency motion errors validate the proposed algorithm.
2007, 29(7): 1674-1677.
doi: 10.3724/SP.J.1146.2005.01639
Abstract:
One of the major problems in continuous-wave bistatic radar is direct-path interference. The conventional solution is an adaptive antenna that steers a null towards the interference; unfortunately, the null depth obtainable with this technique is not sufficient for surveillance radar. This paper first analyzes the direct-path interference that appears in a bistatic radar based on an FM broadcast transmitter; it then introduces a solution based on adaptive fractional delay estimation; finally, it discusses an experimental bistatic radar system based on an FM broadcast transmitter. Simulation results with real collected data show that applying the method improves target detection performance.
2007, 29(7): 1678-1682.
doi: 10.3724/SP.J.1146.2005.01564
Abstract:
In general, Doppler parameter errors are the main cause of degraded SAR (Synthetic Aperture Radar) imaging quality. The main existing algorithms for estimating Doppler parameters are Mapdrift, Phase Gradient Autofocus (PGA), and so on. Their drawback lies in the fact that high-order Doppler parameters cannot be estimated and that iteration is needed during estimation. In this paper, the Product High-order Ambiguity Function (PHAF) is introduced to estimate the Doppler parameters in synthetic aperture radar. The new algorithm can estimate high-order parameters, needs no initial information on the Doppler rate, and can perform clutter lock at the same time. The PHAF-based algorithm is presented and analyzed in detail. Autofocus results are compared between PHAF and Mapdrift under low Signal-to-Noise Ratio (SNR) and in the presence of high-order phase errors. The comparison shows that PHAF is faster, more robust, and more accurate, and that exact results remain available at low SNR; finally, the imaging results indicate that PHAF can greatly improve the resolution of SAR images.
2007, 29(7): 1683-1686.
doi: 10.3724/SP.J.1146.2005.01543
Abstract:
Traditionally, in radar network simulation, the aircraft is regarded as a point whose RCS (Radar Cross Section) is constant. In fact, the RCS is a function of the radar frequency and of the aspect angle between the aircraft and the radar. A radar target characteristic database is created by building electromagnetic models of the targets, calculating the RCS over the relevant angles and frequencies, and storing the data in a database. A radar network simulation system is then established, composed of a central computer, the radar target characteristic database, and several radars. The simulation results indicate that simulations in which the RCS of the aircraft varies, using the target characteristic database, are more credible than those with a fixed RCS, and that radar networking greatly increases the detection probability.
2007, 29(7): 1687-1690.
doi: 10.3724/SP.J.1146.2006.00089
Abstract:
This paper presents the geometry of the bistatic radar DPCA (Displaced Phase Center Antenna) technique based on two antennas, together with computer simulation results. The bistatic radar DPCA technique requires that the spacing D between the phase centers of the two antennas be equal to mVa/PRF, where m is a positive integer, PRF is the pulse repetition frequency, and Va is the velocity of the moving platform, just as in the monostatic case. When the condition D = mVa/PRF is not satisfied, clutter cannot be cancelled completely.
2007, 29(7): 1691-1694.
doi: 10.3724/SP.J.1146.2005.01645
Abstract:
In this paper, a new method is proposed to solve the geo-referencing problem of satellite SAR imagery without GCPs by exploiting consecutive imaging parameters. That is, the imaging parameters of consecutive images are calculated from GCPs in order to set up a prediction formula, from which the imaging parameters of new satellite SAR imagery can be forecast. The radar collinearity equation model is then introduced to locate the imagery precisely. RADARSAT imagery is used for testing, and the geo-referencing accuracy reaches 6-7 pixels.
2007, 29(7): 1695-1699.
doi: 10.3724/SP.J.1146.2005.01325
Abstract:
This paper presents a Waveform Interpolation (WI) speech coder whose characteristic waveform extraction rate adapts to the features of the input frame. An efficient pitch estimation algorithm is based on the principle of maximizing a doubly weighted Long-Term Prediction (LTP) gain and uses forward pitch detection. The waveform extraction rate and the update rates of the SEW (Slowly Evolving Waveform) and REW (Rapidly Evolving Waveform) are based on three features: the pitch cycle, the voicing degree, and the stationarity of the waveform surface. Tests indicate that the proposed WI coding algorithm has a lower average bit rate and computational complexity than a fixed-extraction-rate WI coder, and delivers noticeably better quality than FS1016 CELP at 4.8 kbps.
2007, 29(7): 1700-1702.
doi: 10.3724/SP.J.1146.2006.00787
Abstract:
This paper proposes a voice conversion algorithm based on mixtures of linear transformations that avoids the parallel training corpus required by conventional approaches. In a maximum-likelihood framework, the EM algorithm is used to compute the parameters of the transfer function, and the chirp Z-transform is utilized to enhance the spectral envelope smoothed by linear weighted averaging. The proposed voice conversion system is evaluated using both objective and subjective measures. The experimental results demonstrate that the proposed approach effectively transforms speaker identity and achieves results comparable to those of conventional methods that require a parallel corpus.
2007, 29(7): 1703-1706.
doi: 10.3724/SP.J.1146.2005.01659
Abstract:
The voiced/unvoiced decision is an important component of speech signal processing. In this paper, the different topological structures appearing in Recurrence Plots (RPs) are described for the different physical models of speech production. By statistically analyzing the determinism and the normalized maximal length of the diagonal structures obtained from Recurrence Quantification Analysis (RQA), a flexible and efficient decision framework is proposed. Compared with some traditional methods, the proposed algorithm has a lower error rate. The method provides a new way of approaching feature extraction and speech recognition.
2007, 29(7): 1707-1712.
doi: 10.3724/SP.J.1146.2005.01587
Abstract:
Nonlinear methods perform well in multiple classifier fusion, but the nonlinear methods used for fusion have poor comprehensibility. As a nonlinear method, fuzzy rule-based pattern recognition has good comprehensibility, yet it has not been applied to multiple classifier fusion. This paper therefore introduces fuzzy systems into classifier fusion: the design issues for accurate and comprehensible fuzzy systems are studied, and an improved support-vector-based fuzzy rule system design method is proposed. Experiments are carried out on four data sets from the ELENA project database and the UCI database. The experimental results show that the proposed method can fuse multiple classifiers with a low classification error rate based on comprehensible fuzzy systems.
2007, 29(7): 1713-1716.
doi: 10.3724/SP.J.1146.2005.01631
Abstract:
This paper is concerned with the ill-posed inverse problem of super-resolution reconstruction. First, the mathematical model of super-resolution reconstruction is given, and the ill-posedness of the least squares estimate is analyzed. Then, the ill-posed problem is regularized by modifying the original energy functional, and an adaptive dynamic method is proposed for choosing the regularization coefficient. Finally, the convergence of the iteration formula and the choice of parameters are studied thoroughly. Experimental results demonstrate the effectiveness of the proposed method.
2007, 29(7): 1717-1721.
doi: 10.3724/SP.J.1146.2006.00965
Abstract:
Detection errors undermine the credibility of a watermarking system. This paper designs an optimal blind image watermark detector in the discrete wavelet domain that adopts dual-channel detection combining bispectrum detection and energy detection. Even at a low signal-to-noise ratio, the detector can achieve a high detection probability when sufficient bispectrum information is available. When the bispectrum is small, the dual-channel detector reduces to a likelihood detector and remains optimal. Experimental results demonstrate that the proposed detector is robust against the attacks produced by the popular watermark benchmark Stirmark, such as cropping, JPEG compression, rotation, scaling, and random geometric distortion, and that it outperforms existing watermark detectors.
2007, 29(7): 1722-1725.
doi: 10.3724/SP.J.1146.2005.01216
Abstract:
To address the nonlinearity of Bearings-Only Tracking (BOT), an Unscented Particle Filter (UPF) tracking method is proposed. Based on the unscented transformation, the UPF incorporates the most recent observations to generate the proposal distribution of the nonlinear Particle Filter (PF). The concrete application steps of the UPF are derived in combination with the BOT model. Simulations of both a constant-velocity target and a maneuvering target compare the UPF with other filters, analyzing its performance and root-mean-square error. The results show that the UPF not only avoids the linearization loss of the extended Kalman filter, but is also more accurate than the plain PF in BOT.
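A plain bootstrap particle filter on a simplified bearings-only problem (static target, moving observer) illustrates the particle mechanics the abstract builds on. The UPF additionally constructs its proposal distribution with the unscented transformation, which is omitted in this sketch; the scenario and noise levels below are illustrative assumptions.

```python
import math, random

random.seed(0)

def bearing(obs, tgt):
    return math.atan2(tgt[1] - obs[1], tgt[0] - obs[0])

def particle_filter(observer_path, bearings, n=2000, noise=0.05):
    """Bootstrap particle filter for a *static* target (simplified;
    the paper's UPF additionally builds its proposal with the
    unscented transform, which is omitted here)."""
    # Initialize particles uniformly over a search box.
    parts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n)]
    for obs, z in zip(observer_path, bearings):
        # Weight particles by bearing likelihood (Gaussian in angle error,
        # with the error wrapped via atan2 of sin/cos).
        w = []
        for p in parts:
            err = math.atan2(math.sin(bearing(obs, p) - z),
                             math.cos(bearing(obs, p) - z))
            w.append(math.exp(-err * err / (2 * noise * noise)))
        total = sum(w)
        w = [wi / total for wi in w]
        parts = random.choices(parts, weights=w, k=n)   # resample
        # Small jitter to fight sample impoverishment.
        parts = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
                 for x, y in parts]
    ex = sum(p[0] for p in parts) / n
    ey = sum(p[1] for p in parts) / n
    return ex, ey

# Observer moves along the x-axis; the target sits at (5, 5).
target = (5.0, 5.0)
path = [(float(t), 0.0) for t in range(11)]
meas = [bearing(o, target) + random.gauss(0, 0.02) for o in path]
est = particle_filter(path, meas)
```

The moving observer is what makes the bearings-only problem observable: successive bearings from different positions triangulate the target.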
2007, 29(7): 1726-1730.
doi: 10.3724/SP.J.1146.2006.00037
Abstract:
Data-based machine learning explores rules learned from observed data to predict new data. In this paper, a novel classification decision method called the Cover Algorithm (CA) is presented. In the training procedure, representative samples of the training set are obtained using a cover rule. In the classification phase, the classifier decides according to the distances from a test sample to the representatives: the class of the test sample is determined by the representative closest to it. Compared with the nearest-neighbor method, the presented method requires less computation and memory, since the representatives are only a small part of the training set. Furthermore, the cover algorithm is suitable for automated classification of large data sets because, unlike SVM, it requires no kernel-function selection, and its main computation is distance evaluation between samples. Experimental results show that the cover algorithm is robust and achieves high classification accuracy on normal-galaxy and star data sets.
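The two phases described above can be sketched as follows. The greedy cover rule used here is an assumption for illustration, since the abstract does not specify the paper's exact rule.

```python
import math

def train_cover(samples, labels, radius):
    """Greedy sketch of a cover rule (an assumption; the paper's exact
    rule is not given in the abstract): repeatedly pick an uncovered
    sample as a representative and cover every same-class sample within
    `radius` of it."""
    reps = []
    covered = [False] * len(samples)
    for i, (x, y) in enumerate(zip(samples, labels)):
        if covered[i]:
            continue
        reps.append((x, y))
        for j, (x2, y2) in enumerate(zip(samples, labels)):
            if y2 == y and math.dist(x, x2) <= radius:
                covered[j] = True
    return reps

def classify(reps, x):
    # Class of the representative closest to x.
    return min(reps, key=lambda r: math.dist(r[0], x))[1]

# Toy 2-D data: two well-separated clusters reduce to two representatives.
X = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
y = [0, 0, 0, 1, 1, 1]
reps = train_cover(X, y, radius=1.0)
```

With two representatives instead of six stored samples, classification needs a third of the distance computations of 1-NN on this toy set, which is the cost saving the abstract describes.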
2007, 29(7): 1731-1734.
doi: 10.3724/SP.J.1146.2006.00974
Abstract:
Standard Kernel Fisher Discriminant Analysis (KFDA) may suffer from high computational complexity and slow feature extraction when the number of training samples is large. To tackle these problems, a fast KFDA algorithm is presented. First, an optimized procedure based on linear-correlation theory finds a basis of the subspace spanned by the training samples mapped into the feature space, while avoiding matrix inversion. Then, expressing the optimal projection vectors as linear combinations of this basis and combining this with the Fisher criterion in the feature space, a novel criterion for computing the optimal projection vectors is derived, which only requires the eigen-decomposition of a matrix whose size equals the number of basis elements. In addition, feature extraction for a sample only requires evaluating the kernel functions between the basis and that sample. Experimental results on different data sets demonstrate the validity of the presented algorithm.
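Finding a basis of the span of the mapped training samples without matrix inversion can be done, for example, by greedy Gram-Schmidt in the kernel-induced feature space; the paper's exact construction may differ from this sketch.

```python
import math

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def select_basis(samples, kernel=rbf, tol=1e-8):
    """Greedy kernel Gram-Schmidt: keep a sample only if its image in
    feature space is not (numerically) a linear combination of the
    images of the samples already kept. One way to find a basis of the
    mapped training set without a matrix inversion; the paper's exact
    procedure may differ."""
    basis = []   # selected samples
    ortho = []   # orthonormal vectors, as coefficients over `basis`
    for x in samples:
        k_bx = [kernel(b, x) for b in basis]
        projs = [sum(c * k for c, k in zip(q, k_bx)) for q in ortho]
        res2 = kernel(x, x) - sum(p * p for p in projs)   # squared residual
        if res2 > tol:
            ortho = [q + [0.0] for q in ortho]   # pad with a zero coefficient
            basis.append(x)
            coeff = [0.0] * len(basis)
            coeff[-1] = 1.0
            for p, q in zip(projs, ortho):       # subtract projections
                for i, c in enumerate(q):
                    coeff[i] -= p * c
            norm = math.sqrt(res2)
            ortho.append([c / norm for c in coeff])
    return basis

# Duplicated samples map to the same feature vector, so they add
# nothing to the span and are skipped.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = select_basis(X)
```

Afterwards, feature extraction for a new sample only needs the kernel values between that sample and the (usually much smaller) basis, which is the speedup the abstract claims.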
2007, 29(7): 1735-1738.
doi: 10.3724/SP.J.1146.2005.01332
Abstract:
Harris corner detection is a classical algorithm, but it is not scale invariant. In this paper, the multi-resolution idea is introduced into the Harris algorithm: a wavelet-based formula for measuring image intensity variation is developed, and an auto-correlation matrix reflecting scale-variation information is obtained. A novel wavelet-based multi-scale Harris corner detection algorithm is then presented, which can detect corners at different scales and overcomes the drawback that the single-scale Harris detector tends either to miss significant corners or to report false corners caused by noise. Compared with the Harris algorithm, the presented algorithm detects corners more efficiently, locates them accurately, and is robust to noise.
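For reference, the single-scale Harris measure that the multi-scale method builds on can be sketched as follows; this is the classical baseline, not the paper's wavelet-based detector.

```python
def harris_response(img, k=0.04):
    """Single-scale Harris response R = det(M) - k*trace(M)^2, where M is
    the gradient structure tensor summed over a 3x3 window. (The paper
    derives a wavelet-based multi-scale version of this measure; this
    sketch is the classical single-scale baseline.)"""
    h, w = len(img), len(img[0])
    # Central-difference gradients, clamped at the border.
    Ix = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    Iy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ix, iy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += ix * ix
                    b += ix * iy
                    c += iy * iy
            det = a * c - b * b
            R[y][x] = det - k * (a + c) ** 2
    return R

# 8x8 test image: a bright square whose top-left corner is at (3, 3).
img = [[255.0 if x >= 3 and y >= 3 else 0.0 for x in range(8)]
       for y in range(8)]
R = harris_response(img)
best = max((R[y][x], (x, y)) for y in range(8) for x in range(8))
```

On this synthetic image the response peaks exactly at the square's corner, while edge pixels (strong gradient in one direction only) score low because det(M) collapses there.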
2007, 29(7): 1739-1743.
doi: 10.3724/SP.J.1146.2005.01572
Abstract:
Skin color detection is an important problem in computer vision. A new elliptical model based on the KL transform is proposed for skin color detection. In the proposed algorithm, the training skin-color samples are first made to distribute uniformly; the elliptical boundary equation of the skin-color region is then obtained by means of the KL transform. The method is simple and intuitive. Experimental results on images taken under a wide range of environments demonstrate its efficiency and a clear improvement over methods based on explicit skin-region boundaries and on a single Gaussian model.
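The elliptical boundary obtained through the KL transform amounts to thresholding in the principal-axis coordinates of the skin-color samples. The sketch below is a 2-D chroma illustration; the `scale` of the ellipse is an assumed parameter, not the paper's fitted boundary.

```python
import math

def skin_ellipse(samples, scale=2.0):
    """Fit an elliptical decision region to 2-D chroma samples using the
    KL transform (here: eigen-decomposition of the sample covariance).
    `scale` sets the semi-axes in units of standard deviation along each
    principal axis; the value 2.0 is an assumption, not the paper's."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / n
    cyy = sum((s[1] - my) ** 2 for s in samples) / n
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    # Principal axes of the 2x2 covariance matrix (closed form).
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    t, d = cxx + cyy, cxx * cyy - cxy * cxy
    root = math.sqrt(max(t * t / 4 - d, 0.0))
    l1, l2 = t / 2 + root, t / 2 - root
    return (mx, my, theta, scale * math.sqrt(l1), scale * math.sqrt(l2))

def inside(model, p):
    mx, my, theta, a, b = model
    dx, dy = p[0] - mx, p[1] - my
    u = dx * math.cos(theta) + dy * math.sin(theta)    # KL-transformed coords
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

# Toy chroma samples clustered around (120, 150).
skin = [(120 + dx, 150 + dy) for dx in (-2, -1, 0, 1, 2)
                             for dy in (-2, -1, 0, 1, 2)]
model = skin_ellipse(skin)
```

Unlike a single Gaussian, the decision only needs one inequality test per pixel once the five ellipse parameters are fixed.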
2007, 29(7): 1744-1748.
doi: 10.3724/SP.J.1146.2005.01567
Abstract:
Margin plays an important role in machine-learning research. Margin-based feature selection methods choose feature weights from the viewpoint of classification. This paper analyzes different types of margin and proposes improvements to the Sequential Backward Selection (SBS) method that use the sample margin and the hypothesis margin, respectively, as the feature selection criterion. An SVM polynomial classifier with optimized hyper-parameters is then designed for face recognition. Experiments are conducted on the FERET face database, comparing the recognition accuracy of the proposed methods with that of the Relief feature selection method, using both SVM and Nearest Neighbor (NN) classifiers. Experimental results indicate that the proposed feature selection and recognition methods are effective for face recognition.
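A sketch of SBS driven by the hypothesis margin: score a feature subset by the summed hypothesis margins of the training samples, then repeatedly drop the feature whose removal hurts that score least. The toy data and the stopping rule (`keep`) are illustrative assumptions.

```python
import math

def hypothesis_margin(X, y, feats):
    """Sum of hypothesis margins over all samples, using only the
    features in `feats`: margin(x) = (||x - nearmiss|| - ||x - nearhit||)/2,
    where nearhit/nearmiss are the nearest same-class/other-class
    neighbours of x."""
    def d(a, b):
        return math.sqrt(sum((a[f] - b[f]) ** 2 for f in feats))
    total = 0.0
    for i, xi in enumerate(X):
        hits = [d(xi, xj) for j, xj in enumerate(X) if j != i and y[j] == y[i]]
        misses = [d(xi, xj) for j, xj in enumerate(X) if y[j] != y[i]]
        total += (min(misses) - min(hits)) / 2
    return total

def sbs_margin(X, y, keep):
    """Sequential backward selection with the hypothesis margin as the
    criterion: drop the feature whose removal keeps the margin highest."""
    feats = set(range(len(X[0])))
    while len(feats) > keep:
        worst = max(feats, key=lambda f: hypothesis_margin(X, y, feats - {f}))
        feats.remove(worst)
    return sorted(feats)

# Feature 0 separates the classes; feature 1 is pure noise.
X = [(0.0, 0.3), (0.1, 0.9), (0.2, 0.1), (5.0, 0.5), (5.1, 0.2), (5.2, 0.8)]
y = [0, 0, 0, 1, 1, 1]
best = sbs_margin(X, y, keep=1)
```

The hypothesis margin is positive when each sample's nearest neighbour of another class is farther than its nearest same-class neighbour, so noisy features drag the criterion down and are discarded first.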
2007, 29(7): 1749-1752.
doi: 10.3724/SP.J.1146.2005.01129
Abstract:
On-line identification and tracking control of the nonlinear uncertain Chua's chaotic system using dynamic neural networks are studied in this paper. The passivity technique is applied to establish properties of the neuro-identifier, showing that the gradient-descent algorithm for weight adjustment is stable. An optimal controller based on the identification model is then designed to steer the Chua's chaotic system toward a desired target trajectory, with the tracking error guaranteed to be bounded. Finally, simulations demonstrate the effectiveness of the proposed approach.
2007, 29(7): 1753-1756.
doi: 10.3724/SP.J.1146.2006.00050
Abstract:
An adaptive T-S fuzzy model, whose membership functions, structure, and parameters are optimized by a genetic-algorithm/simulated-annealing strategy, is proposed to identify chaotic systems. On this basis, an asymptotic-stability fuzzy control algorithm with simple and effective control laws is employed. It is also proved that, provided the adaptive T-S fuzzy model is sufficiently accurate, the system can efficiently track objective functions that may be either periodic orbits or continuously varying functions. Simulations controlling the Logistic and Henon chaotic system models show the effectiveness and feasibility of the proposed method.
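For orientation, the Logistic system mentioned above is the map x(k+1) = r*x(k)*(1 - x(k)), chaotic at r = 4. The sketch below stabilizes its unstable fixed point with simple local proportional feedback; this only illustrates chaos control on that model and is not the paper's T-S fuzzy controller.

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def stabilize(x0, steps=100, K=2.0, window=0.1, target=0.75):
    """Free-run the chaotic Logistic map (r = 4) until the orbit wanders
    within `window` of the unstable fixed point x* = 1 - 1/r = 0.75, then
    apply small proportional feedback u = K*(x - x*). K = 2 cancels the
    local slope f'(x*) = -2, so the controlled map is locally deadbeat.
    This is an illustration only, not the paper's fuzzy controller."""
    x = x0
    for _ in range(steps):
        u = K * (x - target) if abs(x - target) < window else 0.0
        x = logistic(x) + u
    return x

x_final = stabilize(0.3)
```

Near the fixed point the controlled update reduces to x(k+1) = 0.75 - 4*d^2 with d = x(k) - 0.75, so the deviation shrinks quadratically once the orbit enters the control window.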
2007, 29(7): 1757-1760.
doi: 10.3724/SP.J.1146.2005.01615
Abstract:
The CLEAN algorithm is carried over from non-instantaneous u-v coverage aperture synthesis in radio astronomy to the instantaneous case of 2-D airborne millimeter-wave imaging, in order to remove the degradation of image quality caused by undersampled u-v plane coverage and thus eliminate the negative effects of high sidelobes. This paper mainly studies how the type of clean impulse, its FWHM, and the scale coefficient affect the quality of the restored image, and concludes that a Gaussian clean impulse, an FWHM equal to that of the system response, and a scale coefficient of about 1/256 are favorable. This conclusion is valuable for practical airborne millimeter-wave aperture-synthesis imaging projects.
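The core CLEAN iteration (Hogbom style, in 1-D for brevity) can be sketched as below. The final re-convolution of the components with a Gaussian clean impulse, whose FWHM and scale coefficient the paper tunes, is omitted from this sketch.

```python
def clean(dirty, psf, gain=0.1, iters=500, threshold=1e-6):
    """1-D Hogbom CLEAN sketch: repeatedly find the brightest point in
    the dirty image, subtract a scaled copy of the point-spread function
    (dirty beam) there, and record the component. The components would
    then be re-convolved with a narrow clean beam (e.g. the Gaussian
    clean impulse the paper studies); that step is omitted here.
    Assumes psf is peak-normalized (psf[len(psf)//2] == 1)."""
    n, m = len(dirty), len(psf)
    half = m // 2
    residual = list(dirty)
    components = [0.0] * n
    for _ in range(iters):
        peak = max(range(n), key=lambda i: abs(residual[i]))
        if abs(residual[peak]) < threshold:
            break
        amp = gain * residual[peak]           # loop gain times peak value
        components[peak] += amp
        for j in range(m):
            i = peak + j - half
            if 0 <= i < n:
                residual[i] -= amp * psf[j]   # subtract shifted dirty beam
    return components, residual

# A point source of amplitude 2.0 at position 10, observed through a
# dirty beam with sidelobes.
psf = [0.2, -0.3, 1.0, -0.3, 0.2]   # peak-normalized, with sidelobes
n = 21
dirty = [0.0] * n
for j, p in enumerate(psf):
    dirty[10 + j - 2] += 2.0 * p
comps, resid = clean(dirty, psf)
```

Because each iteration removes only a `gain` fraction of the peak, the sidelobes of the dirty beam are subtracted along with the main lobe, which is exactly how CLEAN suppresses the high-sidelobe artifacts the abstract targets.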
2007, 29(7): 1761-1764.
doi: 10.3724/SP.J.1146.2005.01575
Abstract:
Reusability is a key element in today's VLSI design. By sharing its design documents and IP modules, open-source hardware provides a more thorough and effective route to design reuse than traditional paid, closed-source IP, and SoC design methods based on open-source hardware are being adopted and practiced by more and more designers. This paper introduces the relevant concepts, benefits, problems, and prospects of open-source hardware in detail, and discusses an open-source-hardware-based design flow in depth, using an open-source processor design as an example.
2007, 29(7): 1765-1768.
doi: 10.3724/SP.J.1146.2005.01612
Abstract:
The stochastic resonance of an over-damped linear oscillator driven by an amplitude-modulated signal and random telegraph noise is investigated. Exact expressions for the Output Amplitude Gain (OAG) of the Upper Side-frequency Component (USC) and the Lower Side-frequency Component (LSC) are obtained from linear-system theory. It is shown that the OAG of the USC (or LSC) is a non-monotonic function of the strength and correlation time of the noise, as well as of the frequency of the USC (or LSC). Furthermore, with appropriate parameters of the noise and the oscillator, the OAG of the USC and LSC of the noisy oscillator can exceed that of the noise-free oscillator. The effects of the noise strength, the side-frequency component frequencies, and the oscillator parameters on the OAG are discussed.
2007, 29(7): 1769-1771.
doi: 10.3724/SP.J.1146.2005.01568
Abstract:
An important characteristic of beam-wave interaction in a traveling-wave tube is that the velocity modulation and bunching of the electron beam, as well as its energy exchange with the RF field, occur continuously and simultaneously along the whole slow-wave structure; this is why a traveling-wave tube can deliver large output power over a very broad band. Building on cold-cavity characterization, a quantitative analysis of the large-signal beam-wave interaction of a coupled-cavity traveling-wave tube is performed using a three-dimensional PIC simulation code. In addition, an X-band CW coupled-cavity traveling-wave tube is designed with the following parameters: operating frequency from 7.1 GHz to 8.5 GHz, bandwidth 18%, and maximum output power 3 kW.
2007, 29(7): 1775-1778.
doi: 10.3724/SP.J.1146.2005.01670
Abstract:
This paper presents a new general method for the optimal design of stable high-order interpolative ΣΔ A/D modulators. The design principle and procedure are given in detail using an interpolative structure in state space together with a transfer-function conversion; the noise transfer function is then realized with optimized zeros and Butterworth poles. The stability condition of the modulator is studied, and the structure coefficients are obtained by minimizing the state energy in order to make the modulator more stable. Finally, illustrative examples demonstrate the effectiveness of the method.
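For orientation, the basic noise-shaping loop that high-order interpolative designs generalize is the first-order sigma-delta modulator sketched below; the paper's state-space interpolative structure and optimized zero/pole placement are not reproduced here.

```python
def sigma_delta(u, n):
    """First-order discrete-time sigma-delta modulator: integrate the
    error between the constant input u (in [-1, 1]) and the 1-bit
    feedback, and quantize the integrator state. The paper's designs
    are high-order and interpolative; this first-order loop only
    illustrates the basic noise-shaping principle."""
    state = 0.0
    bits = []
    for _ in range(n):
        y = 1.0 if state >= 0.0 else -1.0   # 1-bit quantizer
        bits.append(1 if y > 0 else 0)
        state += u - y                      # integrate input minus feedback
    return bits

# The running average of the +/-1 output stream tracks the DC input,
# because the bounded integrator state forces sum(y) ~ n*u.
bits = sigma_delta(0.25, 4000)
avg = sum(2 * b - 1 for b in bits) / len(bits)
```

Since the integrator state stays bounded, the average output error after n samples is at most (state range)/n, i.e. the quantization noise is pushed to high frequencies and averages out.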
2007, 29(7): 1772-1774.
doi: 10.3724/SP.J.1146.2005.01623
Abstract:
Wang XM proposed a group signature scheme (2003) and claimed that it resists all kinds of forgery attacks. Careful analysis, however, reveals two defects: first, it cannot revoke group members effectively; second, it cannot resist forgery attacks. This paper presents an effective attack on the scheme and gives two properties that a secure group signature scheme should possess.