2010 Vol. 32, No. 4
2010, 32(4): 763-769.
doi: 10.3724/SP.J.1146.2009.00542
Abstract:
For multi-antenna amplify-and-forward two-way relay systems, a low-complexity closed-form expression for the MSE-optimal relay processing matrix is derived based on the Minimum Sum Mean Square Error (MSMSE) criterion. To exploit spatial diversity and frequency diversity jointly, resource allocation in OFDM two-way relay systems is investigated, and a low-complexity layered subcarrier pairing and optimized power allocation strategy is proposed. Simulation results show that the proposed relay processing scheme outperforms other two-way relaying schemes in system sum rate and bit error rate, with both improving as the number of relay antennas grows, and that the layered subcarrier pairing strategy combined with power allocation improves the system sum rate dramatically, with performance approaching that of the optimal strategy.
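The pairing idea can be pictured with the common sorted subcarrier pairing heuristic: match subcarriers across the two hops in order of channel gain. This is a minimal illustration of gain-ordered pairing, not the paper's exact layered algorithm.

```python
import numpy as np

def sorted_subcarrier_pairing(gains_hop1, gains_hop2):
    """Pair first-hop and second-hop subcarriers by sorted channel gain:
    the strongest first-hop subcarrier is matched with the strongest
    second-hop subcarrier, the second strongest with the second, etc."""
    order1 = np.argsort(gains_hop1)[::-1]  # first-hop indices, strongest first
    order2 = np.argsort(gains_hop2)[::-1]  # second-hop indices, strongest first
    return list(zip(order1.tolist(), order2.tolist()))

pairs = sorted_subcarrier_pairing([0.2, 0.9, 0.5], [0.7, 0.1, 0.4])
```

With the toy gains above, first-hop subcarrier 1 (gain 0.9) is paired with second-hop subcarrier 0 (gain 0.7).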
2010, 32(4): 770-774.
doi: 10.3724/SP.J.1146.2008.00728
Abstract:
This paper presents a new proportional-fairness-based scheme for MIMO-OFDM downlink resource allocation that maximizes the sum of user data rates subject to constraints on total power, bit error rate, and proportionality among user data rates. The scheme simultaneously exploits space, frequency, and multi-user diversity to improve spectral efficiency. Based on MIMO channel state information, eigen-channels are used to determine subcarrier and power allocation, which forms the foundation of the scheme. By relaxing the strict user-rate proportionality constraints, a low-complexity linear, non-iterative power allocation method is derived. Simulation results show that this adaptive allocation scheme achieves a good tradeoff between capacity and fairness while requiring significantly less computation.
2010, 32(4): 775-780.
doi: 10.3724/SP.J.1146.2008.00401
Abstract:
Traditional OFDMA uplink resource allocation focuses on two objectives: maximizing the transmission rate of each user, or minimizing power. Neither, however, considers the power efficiency of each user. To address this problem, this paper proposes a novel joint power and subcarrier allocation scheme for uplink OFDMA systems based on game theory. The goal is to maximize the power efficiency of each user under a peak power constraint. To this end, the necessary condition for optimality is derived using the Karush-Kuhn-Tucker conditions, and the existence of a Nash equilibrium is proved. The subcarrier and power allocation algorithm is then presented. Simulation results show that the power efficiency of the proposed algorithm greatly exceeds that of MaxRt+WF (Maximal marginal Rate subcarrier and WaterFilling power allocation), the optimal algorithm for maximal transmission rate, and of MaxFA+WF (Fixed subcarrier Allocation and WaterFilling power allocation). Moreover, if the pricing factor is chosen properly (five in the simulation model), the total power efficiency is maximized.
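Both baselines above rely on water-filling power allocation across subcarriers. A minimal sketch of the classic water-filling rule, assuming per-subcarrier gain-to-noise ratios g_i are given (this is the textbook rule, not the paper's game-theoretic algorithm):

```python
def water_filling(gains, total_power):
    """Water-filling over parallel channels: allocate p_i = max(0, mu - 1/g_i)
    with the water level mu chosen so that sum(p_i) == total_power."""
    idx = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    inv = [1.0 / gains[i] for i in idx]     # inverse gains, best channel first
    powers = [0.0] * len(gains)
    for k in range(len(gains), 0, -1):      # try the k best channels as active
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:                 # water level covers all k channels
            for j in range(k):
                powers[idx[j]] = mu - inv[j]
            break
    return powers
```

For two channels with gains 2.0 and 1.0 and unit total power, the water level is 1.25, giving allocations 0.75 and 0.25; a sufficiently weak channel receives zero power.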
2010, 32(4): 781-785.
doi: 10.3724/SP.J.1146.2009.00346
Abstract:
In the high-SNR region, the power offset is the zero-order term of the SNR-capacity curve along the SNR axis, and optimizing it helps improve capacity. In this paper, based on fitting the determinant curve of a tri-diagonal Toeplitz matrix, an expression for the extreme points is derived to analyze the power offset of two-transmit, multiple-receive single-user MIMO systems with a uniform linear antenna array of fixed length. These extreme points are determined by the correlation between receive antenna elements and by the larger of the numbers of transmit and receive antenna elements. Using the obtained expression, simulation results show that the optimal power offset can be achieved by selecting a suitable number of receive antennas.
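The determinant whose curve is fitted above obeys a standard three-term recurrence: for an n x n tridiagonal Toeplitz matrix with diagonal a, superdiagonal b, and subdiagonal c, D_n = a*D_{n-1} - b*c*D_{n-2} with D_0 = 1, D_1 = a. A minimal sketch:

```python
def tridiag_toeplitz_det(a, b, c, n):
    """Determinant of the n x n tridiagonal Toeplitz matrix via the
    recurrence D_n = a*D_{n-1} - b*c*D_{n-2}, D_0 = 1, D_1 = a."""
    if n == 0:
        return 1.0
    d_prev, d = 1.0, a
    for _ in range(n - 1):
        d_prev, d = d, a * d - b * c * d_prev
    return d
```

For example, with a = 2 and b = c = 1 the determinants are 3 for n = 2 and 4 for n = 3, matching direct expansion.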
2010, 32(4): 786-789.
doi: 10.3724/SP.J.1146.2009.00509
Abstract:
Since different delay offsets are applied to the spatially multiplexed data streams of asynchronous Vertical Bell Labs Layered Space-Time (V-BLAST), the symbol-by-symbol power allocation method used in synchronous systems is invalid. To address this issue, a block-by-block power allocation algorithm is proposed to minimize the Block Average Bit Error Rate (BABER). The algorithm first computes the instantaneous SNR per symbol, then derives the BABER of the asynchronous block, and finally obtains the optimal transmit power of each antenna by solving an optimization problem. Simulation results in a flat Rayleigh fading channel show that the proposed algorithm provides a 2 dB gain at a BER of 10^-3 over 2Tx2Rx asynchronous V-BLAST with BPSK modulation and zero-forcing detection.
Iterative Detection Scheme for Multiuser Turbo-BLAST System with Imperfect Channel State Information
2010, 32(4): 790-793.
doi: 10.3724/SP.J.1146.2009.00076
Abstract:
The space-time multiuser detection scheme is applied to the Turbo-BLAST system to form a multiuser Turbo-BLAST system, and a de-correlating and interference cancellation detection scheme is proposed for the case of imperfect channel state information. At the transmitter, the V-BLAST structure is combined with CDMA for spatial multiplexing. At the receiver, a de-correlating algorithm first removes the multiuser interference; an iterative interference cancellation scheme that accounts for channel estimation errors then mitigates the co-antenna interference. Simulation results show that the proposed space-time multiuser model is effective under imperfect channel state information: it makes the detection of each user comparatively independent and facilitates the application of the traditional Turbo-BLAST scheme.
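The de-correlating stage amounts to inverting the cross-correlation matrix of the users' spreading signatures. A minimal noiseless sketch of a generic decorrelator (not the paper's full iterative receiver; the signatures below are made-up toy codes):

```python
import numpy as np

def decorrelating_detector(S, r):
    """Decorrelating multiuser detector: solve R x = S^T r with
    R = S^T S, the signature cross-correlation matrix, which removes
    multiple-access interference from the matched-filter outputs."""
    R = S.T @ S
    return np.linalg.solve(R, S.T @ r)

# Two users with correlated length-3 signatures (columns of S).
S = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, -1.0])        # transmitted symbols
out = decorrelating_detector(S, S @ b)  # noiseless received signal
```

In the noiseless case the decorrelator recovers the transmitted symbols exactly; with noise, the sign of each output would be taken as the bit decision.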
2010, 32(4): 794-798.
doi: 10.3724/SP.J.1146.2008.01691
Abstract:
In MIMO systems, the tradeoff between multiplexing gain and diversity gain, whose product is fixed, is generally obtained by decreasing the efficiency of the space-time code. In this paper, by analyzing the relationships among V-BLAST, HARQ, and STBC, a novel HARQ scheme is proposed that smoothly combines space-time coding with HARQ. While preserving full multiplexing gain, the HARQ delay is transformed into SNR gain, yielding surplus diversity gain. Furthermore, the relationships among channel correlation, retransmission count, and spatial diversity are analyzed deeply and systematically. Simulation results show the effectiveness of the proposed scheme.
2010, 32(4): 799-804.
doi: 10.3724/SP.J.1146.2009.00788
Abstract:
A Distributed Precoded Non-Orthogonal Cooperative Diversity (DP-NOCD) system with limited feedback is proposed, which exploits the multiple antennas at the destination so that the preprocessed relay signals can be transmitted on shared channel resources; thus the spectral efficiency and reliability of the traditional Orthogonal Cooperative Diversity (OCD) system are improved simultaneously. Using the Decode-and-Forward (DF) relay channel model, with the virtual two-input multiple-output channel decomposed into two orthogonal sub-channels in the vector space, a precoding scheme is proposed to minimize the system Bit Error Rate (BER). The proposed scheme effectively improves BER performance with low feedback overhead. Simulation results show that, in ideal cooperation scenarios, the DP-NOCD system outperforms the non-cooperative system and the interference-free OCD system by 5~6.2 dB and 1~1.2 dB, respectively, at a BER of 10^-3.
2010, 32(4): 805-810.
doi: 10.3724/SP.J.1146.2009.01145
Abstract:
A novel improved frequency-domain initial ranging algorithm is proposed for the uplink OFDMA system based on IEEE 802.16e. The initial ranging process is completed by correlating the received signal with the ranging codes in the frequency domain, and detecting the ranging code in use and the timing offset of each user according to the power of the ranging signal. Combined with serial interference cancellation, the proposed algorithm increases the correct detection probability and decreases the false alarm rate when multiple users access the system simultaneously. The proposed algorithm is compared with existing time-domain and frequency-domain algorithms by simulation, and the results show that it outperforms both in performance and computational complexity.
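Frequency-domain correlation of this kind is typically computed with FFTs: multiplying the received block's spectrum by the conjugate spectrum of a ranging code yields the circular correlation, whose peak power indicates code presence and whose peak index estimates the timing offset. A minimal sketch, with a seeded random ±1 code standing in for an actual 802.16e ranging code:

```python
import numpy as np

def ranging_correlate(rx, code):
    """Circular correlation of the received block with a ranging code,
    computed in the frequency domain; returns the peak index (timing
    offset estimate) and the correlation power profile."""
    spec = np.fft.fft(rx) * np.conj(np.fft.fft(code))
    power = np.abs(np.fft.ifft(spec)) ** 2
    return int(np.argmax(power)), power

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=64)          # stand-in ranging code
offset, power = ranging_correlate(np.roll(code, 5), code)
```

Here the received block is the code delayed by 5 samples, and the correlation peak lands at index 5 with power 64^2 = 4096.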
2010, 32(4): 811-815.
doi: 10.3724/SP.J.1146.2009.00475
Abstract:
A new method of PseudoNoise (PN) code acquisition is proposed in this paper to realize code acquisition in weakly dependent non-Gaussian impulsive channels. Modeling acquisition as a hypothesis testing problem, a detector is derived for dependent non-Gaussian impulsive noise, modeled as a First-Order Moving Average (FOMA) SαS noise model, based on the locally optimum detection technique. Based on the proposed detector, a simpler structure is also derived. Numerical results show that the proposed detector offers substantial performance improvement over conventional schemes in weakly dependent non-Gaussian impulsive noise channels, and that it performs better as the impulsiveness becomes higher.
2010, 32(4): 816-820.
doi: 10.3724/SP.J.1146.2009.00456
Abstract:
An intelligent eavesdropper can easily intercept the frequency transition relations if a constant G function is used in a Differential Frequency Hopping (DFH) system, because of the poor two-dimensional continuity of the constant DFH sequence. To solve this problem, this paper puts forward a DFH code generator construction method that combines the G function with a PN sequence. The two-dimensional (2D) continuity of the DFH patterns generated by the constant G function and by the proposed code generator is tested and compared. The Symbol Error Rate (SER) performance in AWGN is analyzed theoretically, and the corresponding simulation results are given. Theoretical analysis and simulation results show that the DFH code generator combining the G function with a PN sequence increases the security of the DFH system without degrading SER performance.
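The idea can be pictured with a toy transition function whose output is whitened by a PN bit: the same data symbol then maps to different frequency transitions at different hops, so a fixed transition table no longer reveals the data. The G function below is purely illustrative (the paper's construction is not specified here), chosen only to be invertible in the data symbol.

```python
NUM_FREQS = 64  # illustrative hop-set size

def g_function(prev_freq, data_symbol, pn_bit):
    """Toy differential-hopping transition: next frequency depends on the
    current frequency and data symbol, offset by a PN bit so the
    transition relation varies over time. Illustrative assumption."""
    return (5 * prev_freq + data_symbol + pn_bit * (NUM_FREQS // 2) + 1) % NUM_FREQS

def recover_symbol(prev_freq, next_freq, pn_bit):
    """Receiver side: invert the toy G function given the shared PN bit."""
    return (next_freq - 5 * prev_freq - pn_bit * (NUM_FREQS // 2) - 1) % NUM_FREQS

def hop_sequence(f0, data, pn):
    """Generate the hop pattern for a data sequence and PN sequence."""
    freqs = [f0]
    for d, p in zip(data, pn):
        freqs.append(g_function(freqs[-1], d, p))
    return freqs
```

A receiver that shares the PN sequence recovers each symbol from consecutive frequencies; an eavesdropper observing only the hop pattern sees PN-dependent transitions.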
2010, 32(4): 821-824.
doi: 10.3724/SP.J.1146.2009.00430
Abstract:
In this paper, a new class of generalized cyclotomic sequences of period p^m (p an odd prime, m ≥ 1) with arbitrary order is constructed and its minimal polynomial is determined, from which its linear complexity is obtained. The possible values of the linear complexity are shown to be p^m - 1, p^m, (p^m - 1)/2, and (p^m + 1)/2, and the linear complexity always takes one of these values when the corresponding characteristic sets satisfy certain conditions. The results show that most of these sequences have good linear complexity.
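The linear complexity of any concrete binary sequence can be checked with the Berlekamp-Massey algorithm, which finds the shortest LFSR generating the sequence. A compact GF(2) implementation:

```python
def linear_complexity(bits):
    """Linear complexity of a binary sequence (list of 0/1) via the
    Berlekamp-Massey algorithm over GF(2)."""
    n = len(bits)
    c = [0] * n   # current connection polynomial
    b = [0] * n   # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

The all-ones sequence has linear complexity 1; a sequence of zeros ending in a single 1 attains the maximum, its own length; and an m-sequence from a degree-3 LFSR has linear complexity 3.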
2010, 32(4): 825-829.
doi: 10.3724/SP.J.1146.2009.00388
Abstract:
The Reduced List Syndrome Decoding (RLSD) algorithm and QC-LDPC codes are investigated in this paper, and a new BP-RLSD concatenated algorithm for QC-LDPC codes is proposed. When the Belief Propagation (BP) algorithm fails, the soft LLR reliability information is passed to the RLSD algorithm. Based on the regular structure of the permutation submatrices, a method is proposed to reduce the search space of error patterns according to the weight of the syndrome, together with a fast look-up table method to locate part of the error positions. Combined with the information of the Least Reliable Independent Positions (LRIPs), these methods achieve an efficient search for the Maximum-Likelihood (ML) codeword and substantially reduce computation time. Simulation results show that the proposed methods are effective, and that the improved algorithm concatenated with the BP algorithm achieves a good tradeoff between computational complexity and decoding performance.
2010, 32(4): 830-835.
doi: 10.3724/SP.J.1146.2009.00489
Abstract:
A Node Staying Probability based Path Compression Algorithm (NSP-PCA) is proposed in this paper. In NSP-PCA, the stability of a new local path is predicted by computing the probability that one node keeps staying within another node's transmission range, and the compression operation is performed based on this prediction to reduce the blindness of compression. Simulation results show that NSP-PCA noticeably reduces ephemeral and duplicated short-cuts, and achieves lower end-to-end delay, lower routing overhead, and a higher packet delivery rate than both SHORT and PCA.
2010, 32(4): 836-840.
doi: 10.3724/SP.J.1146.2009.00270
Abstract:
In a multi-domain environment, symptoms caused by inter-domain fault propagation degrade fault diagnosis algorithms. A distributed dependency model is proposed to capture the dependencies in a service system. Based on this model, a distributed fault diagnosis algorithm is proposed and improved in three respects: reduced communication cost, a more accurate effect evaluation function, and handling of spurious symptom probability. Simulation results show that the fault diagnosis algorithm is efficient in multi-domain service environments.
2010, 32(4): 841-845.
doi: 10.3724/SP.J.1146.2009.00481
Abstract:
To address the low efficiency of topology design for large-scale Service Overlay Networks (SON), a linear programming model based on multi-commodity flow, together with an algorithm subject to bandwidth capacity constraints, is proposed, reducing both time complexity and space complexity. Simulation results demonstrate that the proposed algorithm improves construction efficiency and resource usage.
2010, 32(4): 846-851.
doi: 10.3724/SP.J.1146.2009.00435
Abstract:
This paper first improves the combinatorial-double-auction-based grid resource allocation and pricing model and proposes a unit-price-based pricing algorithm. An equivalent price algorithm is then proposed, which designs a trust-based price adjusting function that maps the bid prices of nodes with different trust values into equivalent prices under the base trust degree. Finally, grid resources are allocated by combinatorial double auction using these equivalent prices. Simulations show that the algorithm achieves a high trade rate and can prevent malicious nodes from entering the trade, while the trade utility gives buyers and sellers incentives to raise and lower their equivalent bid prices, respectively.
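The trust-based mapping can be pictured with a toy adjusting function: a seller's bid is scaled so that nodes below the base trust degree quote less competitive (higher) equivalent prices. The functional form below, a power of base_trust/trust, is an illustrative assumption, not the paper's function.

```python
def equivalent_price(bid_price, trust, base_trust=0.5, alpha=1.0):
    """Map a seller's bid price to an equivalent price under the base
    trust degree: the lower the node's trust relative to base_trust,
    the higher (less competitive) its equivalent ask. The power-law
    form and the parameters base_trust/alpha are assumptions."""
    return bid_price * (base_trust / trust) ** alpha
```

A node at exactly the base trust degree bids unchanged; a node with half the base trust sees its ask doubled, pricing untrusted sellers out of the auction.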
2010, 32(4): 852-856.
doi: 10.3724/SP.J.1146.2009.00169
Abstract:
Ad hoc networks are characterized by limited energy and memory. To address these constraints, a novel entity authentication scheme based on HuffMHT is proposed for ad hoc networks. Using the concept of HuffMHT, an effective security strategy is obtained; symmetric-key and public-key algorithms are combined to reduce the authentication delay effectively, increase the network lifetime, and enhance the security of the network. Moreover, when cluster heads are elected and the HuffMHT is built in the ad hoc network, a least-power-consumption algorithm is designed and the Christofides algorithm is employed, respectively; these reduce the distances over which nodes transmit, lower the power that nodes consume, and increase the network lifetime.
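The Huffman construction underlying a HuffMHT-style tree can be sketched as follows: leaves with larger weights end up closer to the root, so their verification paths are shorter. Interpreting the weights as authentication frequencies is an assumption for illustration.

```python
import heapq

def huffman_depths(weights):
    """Build a Huffman tree over the given leaf weights and return each
    leaf's depth. In a Huffman-shaped hash tree, a leaf's depth is the
    length of its verification path, so heavier leaves verify faster."""
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depths = [0] * len(weights)
    while len(heap) > 1:
        w1, leaves1 = heapq.heappop(heap)   # two lightest subtrees
        w2, leaves2 = heapq.heappop(heap)
        for i in leaves1 + leaves2:         # merging pushes leaves one level down
            depths[i] += 1
        heapq.heappush(heap, (w1 + w2, leaves1 + leaves2))
    return depths
```

With weights [5, 1, 1, 1], the heavy leaf sits at depth 1 while the light leaves sit at depths 2 and 3, which is exactly the skew that shortens frequent verification paths.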
2010, 32(4): 857-863.
doi: 10.3724/SP.J.1146.2009.00342
Abstract:
In this paper, an energy cost function constructed from the residual energy, number of neighbors, and communication cost of each node is defined as the topology weight, so as to jointly reflect the energy efficiency of a dominator and its contribution to reducing overall energy consumption. An Energy-Cost-based topology control algorithm for the Minimum-total-weight Connected Dominating Set (ECMCDS) is proposed to address the problem that the energy consumption of a minimum connected dominating set is not necessarily minimal. The algorithm locally selects low-weight nodes to undertake the dominating mission, constructing a connected dominating set of minimum total weight and minimizing the total energy consumption of the network. Experimental results show that the algorithm not only saves energy but also ensures the reliability of topology links and efficiently extends the network lifetime.
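The weight-driven selection idea can be pictured with a greedy minimum-weight dominating set sketch: repeatedly pick the node with the lowest weight per newly covered node. Note this toy version is centralized and does not enforce connectivity, which ECMCDS additionally guarantees by local selection.

```python
def greedy_weighted_ds(adj, weight):
    """Greedy weighted dominating set: adj[v] lists v's neighbors,
    weight[v] is v's topology weight. Each round selects the node
    with the smallest weight per newly dominated node."""
    n = len(adj)
    covered = [False] * n
    dominators = []
    while not all(covered):
        best, best_ratio = None, None
        for v in range(n):
            newly = sum(1 for u in [v] + adj[v] if not covered[u])
            if newly == 0:
                continue
            ratio = weight[v] / newly       # cost per newly covered node
            if best_ratio is None or ratio < best_ratio:
                best, best_ratio = v, ratio
        dominators.append(best)
        covered[best] = True
        for u in adj[best]:
            covered[u] = True
    return dominators
```

On a star graph the center alone dominates everything; on a path whose middle node is heavy, the two light endpoints are chosen instead, showing how the weight steers selection away from energy-poor nodes.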
2010, 32(4): 864-868.
doi: 10.3724/SP.J.1146.2009.00519
Abstract:
A localization and tracking algorithm suitable for mobile wireless sensor networks is proposed. The algorithm uses a controlled flooding method to improve the utilization efficiency of the anchor nodes, a crossover operation to accelerate the sampling process, and an interpolation operation to predict velocity and angle. An estimation precision function is also proposed so that a node can make full use of the information from all qualified neighbor nodes. Simulation results show that the algorithm outperforms the traditional algorithm in convergence speed, localization accuracy, and required anchor density.
2010, 32(4): 869-874.
doi: 10.3724/SP.J.1146.2009.00349
Abstract:
In this paper, a lightweight key establishment protocol for wireless sensor networks is proposed. By optimizing the information exchanges in the key establishment process, this temporal-initial-key-based protocol achieves better extensibility and lower energy consumption. Theoretical analysis of the finish time and the fully connected probability verifies that the protocol is feasible. Simulation results show that the connected probability exceeds 97% for typical network densities. Compared with similar protocols, this protocol finishes much faster while maintaining a sufficient connected probability: the finish time is less than 5.2 s at a network density of 30 nodes per hop. Moreover, its energy consumption is only 25% of that of similar protocols, which makes it more suitable for resource-constrained sensor nodes.
2010, 32(4): 875-879.
doi: 10.3724/SP.J.1146.2009.00408
Abstract:
This paper presents a TPM-based architecture, DIMA (Dynamic Integrity Measurement Architecture), which helps administrators check the integrity of processes and modules dynamically. Compared with other measurement architectures, DIMA uses a new mechanism to provide dynamic measurement of running processes and kernel modules. Some attacks on running processes that used to be invisible to other integrity measurement architectures can now be detected; in this way, DIMA solves the TOC-TOU (time-of-check to time-of-use) problem that has troubled earlier architectures. In addition, instead of measuring the whole file on the hard disk, the measured object is divided into small pieces (code, parameters, stack, and so on) to produce a fine-grained measurement result. Finally, the DIMA implementation using the Trusted Platform Module (TPM) is discussed and performance data are presented.
2010, 32(4): 880-883.
doi: 10.3724/SP.J.1146.2009.00410
Abstract:
Based on correlative matrices, which are designed by dividing the secret images and shares into vertical areas, a new multi-secret visual cryptography scheme is presented in this paper. Compared with previous schemes, the proposed scheme places no restriction on the number of secret images and clearly improves the pixel expansion and the relative difference.
2010, 32(4): 884-888.
doi: 10.3724/SP.J.1146.2009.00359
Abstract:
To address the blind recognition of channel coding, a method for the blind recognition of (n, 1, m) convolutional codes under high bit error rates is proposed. Firstly, the mathematical model of blind recognition is given, and the application field of the Walsh-Hadamard transform is enlarged: the authors show that applying the Walsh-Hadamard transform to the intercepted codewords solves the blind recognition of convolutional codes, a problem commonly arising in adaptive communication, information interception, and cryptanalysis. Simulation experiments show that the proposed method can recognize the convolutional coding parameters effectively.
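The transform at the core of the method above is the fast Walsh-Hadamard transform. As a minimal sketch (a generic textbook butterfly implementation, not the paper's code; the peak-search step that actually recovers the code parameters from intercepted codewords is omitted, and the name fwht is ours):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^k vector.

    Applies log2(n) stages of in-place butterflies (x, y) -> (x+y, x-y),
    costing O(n log n) instead of O(n^2) for a direct matrix product."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

Since the Hadamard matrix satisfies H H = nI, applying `fwht` twice and dividing by the length recovers the input, which is a quick self-check.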
2010, 32(4): 889-893.
doi: 10.3724/SP.J.1146.2009.00547
Abstract:
In passive millimeter-wave imaging, the poor resolution of the acquired image stems mainly from antenna size limitations. To improve the resolution, a Projected Wavelet-domain Maximum A Posteriori (PWMAP) estimation super-resolution algorithm is proposed in this paper. The algorithm restores the in-band spectrum in the wavelet domain using a generalized Gaussian distribution and the MAP estimate, and then extrapolates the spectrum using a non-linear projection operation. It not only provides a more accurate prior model than previous algorithms, but also updates the model parameter adaptively at each iteration. Experimental results show the effectiveness and superiority of the algorithm.
2010, 32(4): 894-897.
doi: 10.3724/SP.J.1146.2009.01202
Abstract:
A moving object detection algorithm based on three-frame differencing and edge information is presented in this paper. Firstly, three consecutive edge images are obtained by edge extraction from three consecutive frames; then, the motion information is detected with three-frame differencing; finally, the object is extracted with threshold segmentation and morphological operations. Results show that the algorithm detects objects accurately and quickly.
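The three-frame differencing step named above can be sketched as follows. This is a generic illustration under our own assumptions (the threshold value, the AND combination of the two difference masks, and the function name three_frame_diff are ours), not the paper's exact segmentation:

```python
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=20):
    """Motion mask from three consecutive frames (or edge images).

    The differences f2-f1 and f3-f2 are thresholded separately and
    ANDed, so only pixels that change across both intervals are kept
    as moving; this suppresses ghosting left by two-frame differencing."""
    d1 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d2 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d1 & d2
```

In the full pipeline of the paper, the inputs would be edge images and the resulting mask would still be cleaned up by morphological operations.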
2010, 32(4): 898-901.
doi: 10.3724/SP.J.1146.2009.00394
Abstract:
To resolve the problem of track correlation in distributed multi-sensor systems, a track correlation algorithm based on multi-dimensional assignment and gray theory is presented in this paper. Firstly, gray theory is applied to acquire a global statistical vector in the distributed multi-sensor system. Then, a multi-dimensional gray similarity matrix is built according to the global statistical vector. Based on this matrix, the track correlation results are obtained by a multi-dimensional assignment method. Finally, the algorithm is compared with the gray track correlation algorithm. Simulation results show that in dense multi-target environments with many crossing, splitting, and maneuvering tracks, its performance is much better than that of the gray track correlation algorithm: its correct correlation rate is improved by about 8.8 percent.
2010, 32(4): 902-907.
doi: 10.3724/SP.J.1146.2009.00455
Abstract:
In the presence of clutter, conventional multi-channel SAR (Synthetic Aperture Radar) performs moving target detection immediately after clutter cancellation and ignores the influence of range migration. This approach, however, may miss some fast targets; slower targets can still be detected, but their estimated velocity may be ambiguous, which makes accurate target location difficult. Therefore, a three-frequency three-aperture SAR is presented in this paper to remove the Doppler ambiguity of fast targets after clutter suppression. Dual Frequency Conjugate Processing (DFCP) and the Keystone transform are employed to remove the Doppler ambiguity and correct the range migration of moving targets. Thus the Signal-to-Noise Ratio (SNR) is improved, and target detection, velocity estimation, and unambiguous location can be completed. The proposed method achieves the same location accuracy as the existing method while greatly increasing the range of fast-target velocities that can be detected and located. Simulation results show the effectiveness of the proposed method.
2010, 32(4): 908-912.
doi: 10.3724/SP.J.1146.2009.00493
Abstract:
A new detection algorithm based on the Expectation Maximization (EM) algorithm and information-theoretic criteria is proposed for a spatially distributed, range-walking and rotating target during a Coherent Processing Interval (CPI). The detector estimates the signal in every range cell at each given velocity through the information-theoretic criteria and the EM method, exploiting the characteristics of the strong scattering cells related to the target's scattering geometry and the correlation between adjacent velocities. Furthermore, the Constant False Alarm Rate (CFAR) property with respect to the unknown noise power is proved. Finally, experimental results on measured data of two planes illustrate that the proposed algorithm achieves a visible performance improvement compared with the conventional GLRT and non-coherent integration.
2010, 32(4): 913-918.
doi: 10.3724/SP.J.1146.2009.00336
Abstract:
A new theoretical model for target scattering characteristic measurement based on an ordinary mono-pulse radar system is proposed in this paper. The complexity of the polarization structure of the mono-pulse antenna is proved first, and the antenna is verified to be sensitive to the polarization of target returns. Based on the polarization characteristics of the sum and difference channels of mono-pulse radar, the Polarization Scattering Matrix (PSM) of a target can be measured by processing the received signal within only one pulse interval, which greatly reduces the development complexity and production cost of fully polarimetric radar. The validity of this work is demonstrated by processing electromagnetic computation data and by simulation experiments. These results are significant for exploiting the polarimetric measurement capability of current radar equipment and enhancing its information acquisition and processing capacity.
2010, 32(4): 919-924.
doi: 10.3724/SP.J.1146.2009.00291
Abstract:
A method for estimating the mixing matrix in underdetermined source separation is proposed for the case where the sources are not sparse enough to estimate the mixing matrix directly. Many sub-matrices are obtained by applying Independent Component Analysis (ICA) to the observed signals; after removing the elements that do not belong to the mixing matrix, the mixing matrix is estimated precisely by C-means clustering agglomeration. Then, the source signals are recovered with the statistically sparse decomposition principle. Experiments show that the method estimates the mixing matrix with better accuracy and validity than the K-means and time-domain searching-and-averaging methods.
2010, 32(4): 925-931.
doi: 10.3724/SP.J.1146.2009.00512
Abstract:
Based on a statistical model in the stationary wavelet domain, an algorithm for SAR image despeckling is developed. Firstly, a non-logarithmic additive model is applied to the SAR image, and then a statistical distribution, the Local Translation-Rayleigh Distribution Model (LTRDM), is proposed for the noise within the non-logarithmic additive model in the image domain. Finally, based on this model and in the stationary wavelet domain, the real signal coefficients are solved for using Maximum A Posteriori (MAP) estimation. Experiments show that the local translation-Rayleigh distribution model is effective, and that the LTRDM-based despeckling algorithm proposed in this paper is robust and outperforms many traditional algorithms.
2010, 32(4): 932-936.
doi: 10.3724/SP.J.1146.2009.00502
Abstract:
This paper deals with general spotlight bistatic SAR data processing based on series reversion. For general bistatic SAR it is rather difficult to obtain the two-dimensional frequency spectrum, which hampers subsequent processing. By using series reversion, the two-dimensional point target spectrum can be easily obtained, secondary range compression can be finished in the 2-D frequency domain, and the range cell migration can be corrected in the range-Doppler domain. This algorithm has the advantages of the range-Doppler algorithm and is suitable for large synthetic apertures. The accuracy of the proposed approach is verified by simulation.
2010, 32(4): 937-940.
doi: 10.3724/SP.J.1146.2009.00480
Abstract:
For spaceborne Synthetic Aperture Radar (SAR) processing, the Doppler property directly influences the azimuth performance and the imaging accuracy. In this paper, the Doppler centroid frequency expression for an elliptical orbit is derived analytically, and the influences of yaw steering and pitch steering are analyzed respectively. Then, a new method called elliptical-orbit total zero-Doppler steering is proposed, and a simulation with TerraSAR-X parameters shows its advantages. The simulation results indicate that this method reduces the Doppler centroid to about 1/100 of that of currently applied yaw steering methods, and to about 1/5 of that of circular-orbit total zero-Doppler steering, which demonstrates the applicability of the method.
2010, 32(4): 941-947.
doi: 10.3724/SP.J.1146.2009.00377
Abstract:
This paper analyzes the important factors that influence the accuracy of airborne differential SAR interferometry. The error induced by the differential interferometry processing procedure is considered first, and it is pointed out that an external DEM is indispensable for achieving high accuracy in detecting and monitoring deformations of the earth's surface. Then several factors, i.e., system parameters, coherence, and the atmosphere, are discussed in detail. Among these, baseline length and orientation play the most crucial role, which means high-quality motion compensation is necessary. By connecting coherence with the accuracy of airborne differential SAR interferometry, it is shown that the flight path for repeat-pass interferometry has to be precisely controlled to meet the baseline requirement. Like spaceborne SAR, airborne SAR also suffers from atmospheric effects. After discussing all these factors, mathematical expressions for the accuracy of airborne differential SAR interferometry are presented.
2010, 32(4): 948-952.
doi: 10.3724/SP.J.1146.2008.01348
Abstract:
An elevation adaptive algorithm is presented according to the characteristic that clutter Doppler varies with range in airborne phased array radar. Making use of the elevation degree of freedom of a planar array antenna, the algorithm first selects training data from the Doppler cells in the short-range clutter region to estimate the covariance matrix, and then calculates the optimum elevation weight. When the PRF is chosen such that the radar is range-ambiguous, the algorithm can eliminate the short-range clutter while preserving the long-range clutter. Thus, the range dependence of the clutter is alleviated, which greatly helps subsequent STAP in the azimuth and time domains.
2010, 32(4): 953-958.
doi: 10.3724/SP.J.1146.2009.00515
Abstract:
A high-resolution algorithm for 2D DOA estimation is proposed to reduce the computational complexity of traditional high-resolution methods. The objective function of the optimization problem, based on a norm constraint, is developed first. Then the sparse solution corresponding to the received data along the azimuth dimension is deduced by solving the minimization problem iteratively; it is used to obtain the angular frequencies in which the azimuth and elevation angles are coupled, and signals of different angular frequencies are separated. Finally, the sparse solution relating to each signal is obtained to get the elevation angle and then compute the corresponding azimuth angle. A modified method is presented to overcome the blind angular region problem that occurs in the algorithm. Compared with traditional high-resolution methods, the proposed method has a lower SNR threshold and a simpler procedure, achieving high precision with a lower sidelobe level. Numerical simulation results verify the effectiveness of the method.
2010, 32(4): 959-962.
doi: 10.3724/SP.J.1146.2009.00425
Abstract:
Compared with the traditional imaging algorithm based on Delay And Sum (DAS), the Capon algorithm can increase the horizontal resolution of medical ultrasound images, but the contrast is not improved. A new algorithm named Chirp_Capon is proposed in this paper, which combines ultrasound coded excitation with the Capon algorithm, so that the excellent correlation properties of the coded signal compensate for the modest contrast of the Capon algorithm. Simulation results show that, compared with the Capon algorithm, the new algorithm not only improves the horizontal resolution, but also gives a high-contrast image with less noise.
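The Capon (minimum-variance) weight computation underlying the comparison above can be sketched as follows. This is the standard MVDR formula w = R⁻¹a / (aᴴR⁻¹a), not the paper's ultrasound-specific implementation; the diagonal loading term and the name capon_weights are our assumptions:

```python
import numpy as np

def capon_weights(R, a, loading=1e-3):
    """Capon/MVDR weights w = R^-1 a / (a^H R^-1 a).

    R is the (estimated) sample covariance matrix of the array data,
    a the steering vector toward the focal point.  A small diagonal
    loading proportional to tr(R)/N is added for numerical stability."""
    n = len(R)
    Rl = R + loading * np.trace(R) / n * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)          # R^-1 a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)        # normalize for unit gain at a
```

By construction the weights satisfy the distortionless constraint wᴴa = 1, which is what preserves the signal from the focal direction while minimizing output power from everywhere else.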
2010, 32(4): 963-966.
doi: 10.3724/SP.J.1146.2009.00175
Abstract:
A novel version of the PM-Root-MUSIC algorithm is developed in this paper. The algorithm is based on the Manifold Separation Technique (MST) for fast DOA estimation with a Uniform Circular Array (UCA) whose elements are sparse. The method does not suffer from the mapping error caused by the classic beamspace transform and does not need eigenvalue decomposition, so the computational burden is greatly reduced while the estimation accuracy is close to the CRB. Simulation results show the method is effective.
2010, 32(4): 967-972.
doi: 10.3724/SP.J.1146.2008.01176
Abstract:
A new DOA estimation method based on orthogonal joint diagonalization of high-order cumulants is proposed. Several high-order cumulant matrices are utilized jointly for DOA estimation: by processing them with the new orthogonal joint diagonalization technique, the spatial spectrum can be defined by both the joint diagonalizing matrix and a set of diagonal matrices. The method is proved to handle coherent sources and can be used in colored noise environments. Compared with cumulant-based methods using a single high-order cumulant matrix, the proposed method has higher resolution, lower RMSE, and stronger robustness.
2010, 32(4): 973-977.
doi: 10.3724/SP.J.1146.2009.00532
Abstract:
A design method for a class of symmetric biorthogonal wavelets is proposed in this paper. The filter banks of the wavelets possess a lattice structure, the analysis and synthesis filter banks satisfy the biorthogonality and regularity conditions, and all filter coefficients are real binary (dyadic) numbers. The wavelet transform is therefore suitable for high-speed VLSI implementation. Both the mathematical derivations and the design examples in the paper verify the effectiveness of the proposed method.
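The paper's specific filters are not reproduced here, but the appeal of symmetric biorthogonal wavelets with dyadic coefficients can be seen in the well-known LeGall 5/3 wavelet, sketched below in lifting form with periodic extension: every coefficient is 1/2 or 1/4, so hardware needs only shifts and adds, and perfect reconstruction holds by construction.

```python
import numpy as np

def legall53_forward(x):
    """One level of the LeGall 5/3 wavelet via lifting (periodic extension,
    even-length input). Coefficients are dyadic: halves and quarters only."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict: detail = odd sample minus average of its even neighbors
    d = odd - (even + np.roll(even, -1)) / 2
    # update: approximation = even sample plus quarter of neighboring details
    s = even + (np.roll(d, 1) + d) / 4
    return s, d

def legall53_inverse(s, d):
    """Undo the lifting steps in reverse order -- exact by construction."""
    even = s - (np.roll(d, 1) + d) / 4
    odd = d + (even + np.roll(even, -1)) / 2
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

# usage: perfect reconstruction on an arbitrary even-length signal
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
s, d = legall53_forward(x)
xr = legall53_inverse(s, d)
```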
2010, 32(4): 978-982.
doi: 10.3724/SP.J.1146.2009.00402
Abstract:
For Multiple-Input Multiple-Output (MIMO) space-time coding architectures with distributed transmit antennas, the locations of the transmit antennas affect system performance. To address this problem, the Area-Averaged Bit Error Ratio (AABER) of V-BLAST with two distributed transmit antennas in a linear cell is studied, taking into account channel propagation delay, path loss, shadow fading, multipath fading, and white Gaussian noise. Theoretical analysis shows that the antennas should be located symmetrically about the cell center to achieve the best AABER, and the optimal location can be computed numerically. Simulation results confirm the correctness of the theoretical analysis.
2010, 32(4): 983-987.
doi: 10.3724/SP.J.1146.2009.00358
Abstract:
This paper focuses on the Hopf bifurcation analysis of a time-delayed fluid-flow model of the congestion control algorithm in wireless networks. Choosing the communication delay as the bifurcation parameter, the existence of a Hopf bifurcation in the model is proved. Formulas determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are obtained by applying the center manifold theorem and normal form theory. Finally, a numerical simulation is presented to verify the theoretical results.
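The paper's fluid-flow model is not reproduced here, but the delay-induced loss of stability it analyzes can be illustrated with the scalar test equation x'(t) = -a x(t - tau), whose equilibrium is stable exactly when a*tau < pi/2; a fixed-step Euler scheme with a history buffer is enough to see the transition.

```python
import numpy as np

def simulate_dde(a, tau, x0=1.0, dt=0.001, t_end=60.0):
    """Euler integration of x'(t) = -a * x(t - tau), constant history x0.
    The delayed state is read from the stored trajectory."""
    delay_steps = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x_delayed = x[i - delay_steps] if i >= delay_steps else x0
        x[i + 1] = x[i] + dt * (-a * x_delayed)
    return x

# a*tau = 1 < pi/2: equilibrium stable, solution decays to zero
decay = simulate_dde(a=1.0, tau=1.0)
# a*tau = 2 > pi/2: delay destabilizes the equilibrium, oscillation grows
grow = simulate_dde(a=1.0, tau=2.0)
```

Sweeping the delay through the critical value a*tau = pi/2 is exactly the kind of bifurcation-parameter experiment the paper performs on its own model.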
2010, 32(4): 988-992.
doi: 10.3724/SP.J.1146.2009.00634
Abstract:
The heterogeneous nature of grid environments means that task scheduling is constrained by a number of factors, such as schedule length, security performance, and scheduling cost. Firstly, based on the characteristics of grid task scheduling, a security benefit function and an efficient dynamic node-credibility evaluation model are constructed, and a constrained multi-objective grid task scheduling model is proposed. Secondly, using membership degree functions, the multi-objective optimization is transformed into a single-objective optimization problem. Thirdly, through the design of new evolutionary operators, a new genetic algorithm is proposed and its convergence is analyzed. Simulation results show that the proposed algorithm outperforms the compared ones in terms of schedule length, security efficiency value, reliability, and scheduling cost.
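A toy version of the scalarize-then-evolve approach can be sketched as follows. The cost function below simply weights makespan against a security shortfall; it stands in for the paper's membership-degree scalarization, whose exact form (and its evolutionary operators) are not reproduced, and all parameter values are illustrative.

```python
import random

def evolve(lengths, speeds, security, w=(0.7, 0.3),
           pop_size=40, generations=60, pm=0.1, seed=1):
    """Toy GA for heterogeneous task scheduling: a chromosome assigns each
    task to a node; elitism plus tournament selection, one-point crossover
    and point mutation minimize a weighted single objective."""
    rng = random.Random(seed)
    n_tasks, n_nodes = len(lengths), len(speeds)

    def cost(assign):
        finish = [0.0] * n_nodes
        for t, node in enumerate(assign):
            finish[node] += lengths[t] / speeds[node]
        makespan = max(finish)
        sec_shortfall = 1.0 - sum(security[a] for a in assign) / n_tasks
        return w[0] * makespan + w[1] * sec_shortfall

    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=cost)]                   # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=cost)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=cost)
            cut = rng.randrange(1, n_tasks)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_tasks):                 # point mutation
                if rng.random() < pm:
                    child[i] = rng.randrange(n_nodes)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=cost)
    return best, cost(best)

# usage: 8 tasks, 3 nodes with different speeds and security levels
lengths = [4, 7, 2, 9, 5, 3, 8, 6]
speeds = [1.0, 2.0, 1.5]
security = [0.9, 0.5, 0.7]
best, best_cost = evolve(lengths, speeds, security)
```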
2010, 32(4): 993-997.
doi: 10.3724/SP.J.1146.2009.00127
Abstract:
A new frequency-domain method of Time Of Arrival (TOA) estimation for acoustic ranging between wireless sensor nodes is proposed in this paper. It gives more accurate results at low signal-to-noise ratios than time-domain methods such as amplitude detection. The method is based on the Goertzel algorithm for short-time frequency analysis. A fixed-point implementation is obtained by adjusting the ranging-signal frequency and the window length, so that the computation can be completed within one sampling period of the microcontroller. A multi-magnitude-threshold TOA estimation method is also studied to reduce the errors further. The algorithm has been tested on a node with a dsPIC6014A microprocessor. Experimental results show that the performance is better than that of time-domain methods; the acoustic ranging error is less than 3% at a distance of 25 meters.
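The Goertzel recursion that makes this practical on a microcontroller can be sketched as follows (floating-point here for clarity; the paper's fixed-point scaling and TOA thresholding are not reproduced): one real coefficient and two delay registers give the power of a single DFT bin.

```python
import math

def goertzel_power(samples, k, n):
    """Power of DFT bin k over an n-sample window via the Goertzel
    recursion: s[i] = x[i] + coeff*s[i-1] - s[i-2], one multiply per sample."""
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples[:n]:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of bin k without computing the full DFT
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# usage: 1 kHz tone at 8 kHz sampling, 200-sample window (bin 25 = 1 kHz)
fs, f, n = 8000, 1000.0, 200
samples = [math.sin(2.0 * math.pi * f * i / fs) for i in range(n)]
p_signal = goertzel_power(samples, k=25, n=n)   # bin containing the tone
p_empty = goertzel_power(samples, k=50, n=n)    # bin at 2 kHz, no energy
```

Per sample the loop is one multiply and two adds, which is why the whole update fits inside one sampling period of a small MCU.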
2010, 32(4): 998-1002.
doi: 10.3724/SP.J.1146.2009.00337
Abstract:
To test the measurement capability of the HF ground wave radar system with compact receiving antenna (OSMAR-S), verification tests of the OSMAR-S system against in-situ measurements were carried out on Nov. 12-17, 2007. An observation method allowing continuous comparison was used in the tests: it combined fixed-point surveys at fixed depths with bottom-mounted section surveys, and compared the surface current data acquired by OSMAR-S with current data acquired by other instruments at different depths. The results confirm the detection precision and detection depth of OSMAR-S and indicate that OSMAR-S meets the needs of real-time sea-state monitoring. The tests also fill a gap, as no previous test had confirmed the detection depth of HF ground wave radar.
2010, 32(4): 1003-1007.
doi: 10.3724/SP.J.1146.2009.00343
Abstract:
To better exploit the correlation between different acoustic units in speech recognition, a novel model training approach based on the Spatial Correlation Transformation (SCT) framework is proposed in this paper, in which the speaker-independent model parameters are re-estimated using the spatial correlation information in the training data. In this algorithm, SCT is applied to all training data to decrease the correlation among the training data, making the re-estimated model less dependent on the training data and thus improving its performance. Experiments show that combining SCT-based model training with SCT-based feature transformation achieves a relative reduction of 18% in average syllable error rate compared to the baseline system.
2010, 32(4): 1008-1011.
doi: 10.3724/SP.J.1146.2009.00392
Abstract:
Based on the characteristic that a vague set has both a truth-membership function and a false-membership function, this paper proposes the concepts of the (T, S)-vague equivalence relation and of dominance for a pair of t-norms, built on a t-norm and a t-conorm. Furthermore, a one-to-one correspondence between (T, S)-vague equivalences and (T, S)-vague partitions is presented. The refinement of (T, S)-vague partitions is discussed, leading to the necessary and sufficient condition under which the (T*, S*)-refinement of any two (T, S)-vague partitions is again a (T, S)-vague partition.
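For reference, the standard forms of these notions (under the usual conventions in the fuzzy-relation literature; the paper's exact notation may differ) can be written as follows. A vague relation $R$ with truth-membership $t_R$ and false-membership $f_R$ is $(T,S)$-transitive when

```latex
t_R(x,z) \ge \sup_{y} T\bigl(t_R(x,y),\, t_R(y,z)\bigr), \qquad
f_R(x,z) \le \inf_{y} S\bigl(f_R(x,y),\, f_R(y,z)\bigr),
```

and a t-norm $T^*$ dominates a t-norm $T$ when

```latex
T^*\bigl(T(a,b),\, T(c,d)\bigr) \ge T\bigl(T^*(a,c),\, T^*(b,d)\bigr)
\quad \text{for all } a,b,c,d \in [0,1].
```

Dominance is what makes the refinement of two transitive partitions transitive again, which is why it appears in the paper's necessary and sufficient condition.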
2010, 32(4): 1012-1016.
doi: 10.3724/SP.J.1146.2009.00247
Abstract:
In a sigma-delta ADC, the digital filter usually takes up most of the chip area. In this paper, a novel digital filter topology is proposed in which the differentiator is constructed from a control unit and a single adder instead of the multiple adders of the Hogenauer-structure filter, so that the digital circuit area is reduced. A fourth-order digital filter employing this topology is implemented in a Cyclone-II FPGA and uses 29% fewer chip resources than the Hogenauer structure.
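The Hogenauer (CIC) baseline that the paper improves on can be sketched behaviorally as follows: cascaded integrators at the input rate, a downsampler, then cascaded combs (differentiators) at the output rate, using only adders and registers. The decimation phase and parameter values below are illustrative.

```python
import numpy as np

def cic_decimate(x, order=4, rate=8):
    """Order-N CIC (Hogenauer) decimator, behavioral model.
    DC gain is rate**order (differential delay 1), here 8**4 = 4096."""
    y = np.asarray(x, dtype=np.int64)      # integer arithmetic, as in hardware
    for _ in range(order):                 # integrator stages at input rate
        y = np.cumsum(y)
    y = y[rate - 1::rate]                  # downsample by `rate`
    for _ in range(order):                 # comb (differentiator) stages
        y = np.diff(y, prepend=0)
    return y

# usage: a constant input settles to the DC gain 8**4 = 4096 after the
# N-sample startup transient of the comb section
y = cic_decimate(np.ones(256))
```

It is exactly this comb section, one adder per stage in the Hogenauer form, that the paper replaces with a control unit and a single shared adder.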