2011 Vol. 33, No. 6
2011, 33(6): 1271-1276.
doi: 10.3724/SP.J.1146.2010.01104
Abstract:
Utilizing Shim's identity-based signature scheme, a new identity-based verifiably encrypted signature scheme is proposed. As a building block of fair exchange protocols, this approach provides verifiability without any zero-knowledge proofs and thus avoids most of the costly computations. Compared with previous identity-based verifiably encrypted signature schemes, the proposed scheme is more efficient. The analysis shows that the scheme is provably secure in the random oracle model under the CDH assumption.
2011, 33(6): 1277-1281.
doi: 10.3724/SP.J.1146.2010.00899
Abstract:
Key-Dependent Message (KDM) security was introduced to address the case where the message is a function of the encryption scheme's secret key. Using a universal hash function, this paper presents a stateless symmetric encryption scheme that is information-theoretically KDM secure in the standard model, based on an extended version of the leftover hash lemma. The scheme remains secure in the face of a bounded, nearly exponential number of encryptions whose messages depend in an arbitrary way on the secret key. Finally, by choosing parameters properly and comparing with existing schemes, the constructed scheme is shown to improve both security and efficiency.
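The universal hash family underlying the leftover hash lemma can be illustrated with a minimal sketch; the multiply-shift family below is a generic textbook example chosen for illustration, not the construction used in the paper:

```python
import secrets

# Multiply-shift universal hash family (illustrative, not the paper's scheme):
# h_a(x) = ((a * x) mod 2^64) >> (64 - m) maps 64-bit inputs to m-bit outputs,
# where the random odd multiplier a indexes the family member.
def sample_hash(m):
    a = secrets.randbits(64) | 1  # force a to be odd
    def h(x):
        return ((a * x) % (1 << 64)) >> (64 - m)
    return h

h = sample_hash(16)
digest = h(0xDEADBEEF)
assert 0 <= digest < 2 ** 16  # output lies in the m-bit range
```

The leftover hash lemma then says that applying such a family to a high-entropy secret yields an output close to uniform, which is the extraction step the scheme builds on.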
2011, 33(6): 1282-1289.
doi: 10.3724/SP.J.1146.2010.01119
Abstract:
The efficiency of contact probing has an important impact on the performance of Delay-Tolerant Mobility Sensor Networks (DTMSN). An Adaptive Contact Probing Scheme (ACPS) for DTMSN is proposed, based on a study of the stochastic properties of the Random Way-Point (RWP) mobility model. The main idea of ACPS is to adaptively adjust the timing and number of contact probes according to the arrival rate of the contact arrival process. The scheme effectively reduces the probing energy cost and contact discovery delay by improving probe efficiency and accuracy. Simulation results show that the proposed ACPS achieves a higher discovery ratio and lower discovery delay than the Fixed-cycle Probing Scheme (FPS).
2011, 33(6): 1290-1293.
doi: 10.3724/SP.J.1146.2010.01106
Abstract:
To improve network performance under dynamic traffic loads, an adaptive polling-period MAC protocol called AA-MAC is proposed. Based on short-preamble sampling, nodes in AA-MAC perform some additional polling periods after receiving messages rather than switching off their radios immediately. This allows several transmissions upon one rendezvous between the sender and its destination, especially when network traffic is high. To give insight into the protocol, both energy consumption and network latency are modeled. Simulations on a 9-hop linear topology illustrate that AA-MAC is superior to S-MAC under all traffic conditions, performs equally to X-MAC under light traffic, and outperforms X-MAC under high traffic. When the traffic load is high, AA-MAC decreases network latency by 56% compared with X-MAC.
2011, 33(6): 1294-1300.
doi: 10.3724/SP.J.1146.2010.01072
Abstract:
As mobile ad hoc networks have become increasingly intricate and intelligent, selfish mobile nodes with different global goals need to be stimulated to cooperate and share resources. Emotion has a crucial impact on an agent's cognition and behaviors, and enhances its flexibility to adapt to resource-restricted and unpredictable conditions. This paper presents an Emotion-driven Negotiation of Selfish Nodes in MANETs (ENSNM), in which emotion models such as achievement motivation and the Weber-Fechner law operate throughout negotiation as inherent attributes of mobile nodes. Achievement motivation determines whether a node sponsors or participates in a negotiation, as well as the principle for setting the initial price in the preliminary stage. Moreover, the deadline of a negotiator is dynamically modified by the Weber-Fechner law throughout the negotiation. Simulation results demonstrate that the model can improve negotiation efficiency and reduce network traffic and energy consumption.
2011, 33(6): 1301-1306.
doi: 10.3724/SP.J.1146.2010.01130
Abstract:
How to construct Virtual Networks (VNs) that efficiently satisfy users' demands under limited resources is a hot issue. A mathematical model of VN construction is analyzed. Under several important principles of VN construction, a balanced-link-load VN construction algorithm and a balanced-node-load VN construction algorithm are given. Based on these two algorithms, a Balanced Adaptive VN Construction Algorithm (BACA) is proposed. The remapping of failed virtual links and VN reconfiguration when link failures occur are also discussed. The efficiency of BACA is evaluated by emulation experiments in terms of the construction-request acceptance ratio and the link and node load-balance ratios of the whole substrate network.
2011, 33(6): 1307-1313.
doi: 10.3724/SP.J.1146.2010.01120
Abstract:
With the rapid growth and wide application of Web services, how to find desired Web services accurately, efficiently and rapidly has become a challenging research subject. To improve the efficiency and precision of Web service discovery, a semantic Web service discovery framework based on a kernel batch SOM neural network is proposed. First, by introducing WordNet and Latent Semantic Indexing (LSI) into the VSM lexical vectors to extend semantics and reduce dimension, the resulting VSM semantic vectors describe the true semantic characteristics of Web services well. Second, by using the kernel trick to modify the regular batch SOM's weight-updating rule, a kernel batch SOM neural network is proposed to cluster Web services automatically. Third, a kernel cosine-based similarity matching mechanism is presented to estimate the similarity of Web services. Finally, experiments performed on a real-world Web service collection demonstrate the feasibility of the proposed approaches.
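The kernel cosine similarity used for matching can be sketched as below; the RBF kernel and the helper names are illustrative assumptions, since the abstract does not fix a specific kernel:

```python
import math

# Kernel cosine similarity: cos_K(x, y) = K(x, y) / sqrt(K(x, x) * K(y, y)).
# An RBF kernel is used purely for illustration; the paper's kernel may differ.
def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_cosine(x, y, k=rbf):
    return k(x, y) / math.sqrt(k(x, x) * k(y, y))

s = kernel_cosine([1.0, 0.0], [0.8, 0.2])
assert 0.0 < s <= 1.0  # for the RBF kernel the similarity lies in (0, 1]
```

Identical service vectors score exactly 1, and the score decays as the vectors move apart in the kernel-induced feature space.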
2011, 33(6): 1314-1318.
doi: 10.3724/SP.J.1146.2010.00179
Abstract:
Classical trust models for P2P networks calculate the global trust value by iterating over local trust values. Every transaction triggers iteration throughout the whole network, resulting in high computational complexity and huge communication traffic. These models also face collusion attacks, smear attacks, sleeping attacks and others caused by sparse transaction data and inaccurate computation results. To ensure the density of transaction data and the accuracy of computed results, a novel P2P global Probability-and-Statistics-based trust (PStrust) model is presented. The history records of transactions are used to compute the trust value of every peer by maximum likelihood estimation and hypothesis testing, and every peer trades with peers of high credibility. Mathematical analysis and simulation show that PStrust can resist attacks by malicious peers and improve the successful download rate of the whole P2P system compared with the traditional EigenTrust model.
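As a rough illustration of estimating trust by maximum likelihood and hypothesis testing, one can model a peer's transaction outcomes as Bernoulli trials; the threshold `p0` and the one-sided z-test below are hypothetical simplifications, not PStrust's exact procedure:

```python
from math import sqrt

# MLE of a peer's trust value under a Bernoulli model: the observed success
# fraction of its transaction history.
def trust_mle(successes, total):
    return successes / total

# One-sided z-test (normal approximation) against a hypothetical credibility
# threshold p0: we keep trading unless the data reject "trustworthy" at 5%.
def is_credible(successes, total, p0=0.8, z_crit=1.645):
    p_hat = trust_mle(successes, total)
    se = sqrt(p0 * (1 - p0) / total)
    return (p_hat - p0) / se > -z_crit

assert trust_mle(90, 100) == 0.9
```

A peer with 90 successes in 100 transactions passes the test, while one with 40 successes is rejected as untrustworthy.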
2011, 33(6): 1319-1325.
doi: 10.3724/SP.J.1146.2010.01207
Abstract:
Although the Feedback-based Two-stage Switch Architecture (FTSA) shows excellent performance in simulation, it cannot be realized with present technology because of the time restrictions on its scheduling algorithms. To relax the time constraint of FTSA, this paper proposes an improved two-stage switch architecture called FTSA-2-SS (FTSA using the 2-Staggered Symmetry connection pattern), which enables cell transmission to take place in parallel with the scheduling process by adopting the 2-staggered symmetry connection pattern, thus extending the time available to the scheduling algorithm to the whole time slot. In addition, FTSA-2-SS uses a double-cell-buffer mode and a Re-sequencing Buffer (RB) to solve the consequent problems of cell conflict and disordering. Theoretical analysis shows that FTSA-2-SS has the same stability as FTSA, and simulation results show that FTSA-2-SS has better delay performance than other non-feedback two-stage switch architectures.
2011, 33(6): 1326-1331.
doi: 10.3724/SP.J.1146.2010.01090
Abstract:
Considering the high energy consumption characteristic of underwater acoustic sensor networks, an Energy-efficient Routing protocol Based on Spatial Wakeup (ERBSW) is presented. It divides the three-dimensional network space into wakeup layers and sleep layers, and each node makes a local decision on whether to wake up or sleep according to its current depth. In addition, ERBSW obtains wakeup neighbor sets by broadcasting Hello packets periodically, and delivers data from nodes in higher wakeup layers to nodes in lower wakeup layers, which avoids the energy consumption caused by idle listening and unnecessary data reception at redundant nodes. Compared with the Vector-Based Forwarding (VBF) protocol, simulation tests show that the proposed protocol saves about 16%~48% of energy cost at various network densities.
2011, 33(6): 1332-1338.
doi: 10.3724/SP.J.1146.2010.01134
Abstract:
To solve the problem of Motion Compensation (MC) for multiple standards efficiently, a modified hardware-efficient computing architecture for MC interpolation is developed with the proposed Rounding Last (RL) and Diagonal Two Step (DTS) strategies. A re-configurable MC interpolation hardware module based on the new computing architecture is implemented efficiently with variable block sizes. Compared with the fixed-size 4×4 block-based MC in JM8.4, the bandwidth reduction is about 27%~50%, and the average burst length of each access to the external memory is improved to 1.22~2.25 times longer. Working at 125 MHz, the MC hardware is capable of real-time decoding of video streams of the supported standards at 1080p (1920×1080) 30 f/s.
2011, 33(6): 1339-1344.
doi: 10.3724/SP.J.1146.2010.01128
Abstract:
Conventional Rate Control (RC) schemes mostly take objective metrics as the distortion measurement, which cannot achieve optimal subjective quality. This paper applies Structural SIMilarity (SSIM) based subjective distortion to Rate Distortion Optimization (RDO) and RC in H.264 video coding, and proposes an SSIM-optimal MacroBlock (MB) layer RC algorithm. First, an empirical SSIM linear distortion model is put forward. Then an improved quadratic Rate-Quantization (R-Q) model is combined with it to obtain the closed-form solution of the SSIM-optimal MB-layer quantization step via the Lagrange multiplier method. Experimental results show that the proposed method preserves much more image structural information and thus achieves better subjective quality compared with the objective-quality-optimal MB-layer RC scheme JVT-O016.
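For reference, the SSIM index that drives the distortion model can be computed as below. This sketch evaluates SSIM globally over two equal-size sample blocks, whereas practical implementations apply it over local sliding windows; the constants follow the usual choices for 8-bit data:

```python
# Global SSIM over two flat sample blocks (illustrative; the standard form
# averages local windowed SSIM values). L is the dynamic range (255 for 8-bit).
def ssim(x, y, L=255, k1=0.01, k2=0.03):
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2  # stabilizing constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Identical blocks give SSIM = 1, the maximum.
assert abs(ssim([10, 20, 30, 40], [10, 20, 30, 40]) - 1.0) < 1e-9
```

Unlike mean squared error, this index factors quality into luminance, contrast, and structure terms, which is why it tracks subjective quality more closely.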
2011, 33(6): 1345-1349.
doi: 10.3724/SP.J.1146.2010.01189
Abstract:
Existing adaptive predistortion structures hinder the use of efficient least-squares algorithms in the parameter update of a Hammerstein predistorter. To solve this problem, a novel adaptive structure is proposed. Using this structure, the errors of the two subsystems of a Hammerstein predistorter can be obtained separately, so efficient least-squares algorithms can be used directly in the parameter update. By this means, the degradation of predistorter performance induced by structure error and imprecise subsystem errors is avoided. Computer simulation confirms that, with the proposed adaptive structure, the Hammerstein predistorter compensates the nonlinear distortion of a power amplifier with memory effects more efficiently.
2011, 33(6): 1350-1355.
doi: 10.3724/SP.J.1146.2010.01117
Abstract:
The optimum design of the cooperative precoding matrix is investigated for a system consisting of cooperative multi-antenna base stations and one multi-antenna Mobile Terminal (MT). A mathematical model is first established for the cooperative precoding matrix optimization based on minimization of the mean squared error. To deal with the difficulty arising from the block-diagonal structure of the cooperative precoding matrix, the original problem is then converted into an equivalent problem involving a long column vector consisting of all non-zero elements of the block-diagonal cooperative matrix. With the equivalent problem, the Lagrange multiplier method is finally employed to obtain the optimum solution to the original problem in analytical form. Moreover, based on this analytical expression of the Cooperative Multi-Point (CoMP) precoder, an iterative algorithm is developed to jointly optimize the CoMP precoder and the MT receiver. Numerical simulations show the effectiveness of the proposed cooperative precoding schemes in terms of bit error rate, symbol error rate and the spectral efficiency of the whole system.
2011, 33(6): 1356-1360.
doi: 10.3724/SP.J.1146.2010.01074
Abstract:
This paper addresses the waveform adaptation issue of multiple competitive Multiple-Input Multiple-Output Cognitive Radios (MIMO-CRs), each maximizing its own information rate under the interference-temperature constraint of primary users. The issue is formulated as a Nash equilibrium problem from a non-cooperative game-theoretic viewpoint. Conditions for the existence and uniqueness of the Nash equilibrium are provided, and a decentralized Iterative Water-Filling Algorithm (IWFA) with a punishing price is proposed to solve it; the punishing price is imposed on the interference generated by the MIMO-CRs so that the interference-temperature constraint is satisfied when the MIMO-CRs reach the Nash equilibrium. Simulation results show that, compared with the classical IWFA, which does not consider the interference-temperature constraint, the proposed algorithm satisfies the constraint and hence is applicable to cognitive radio networks.
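The water-filling step at the core of IWFA can be sketched for scalar parallel channels; the pricing term of the proposed algorithm is omitted here, and the channel gains and bisection depth are illustrative:

```python
# Classical water-filling over parallel channels with gains g_i and total
# power budget P: allocate p_i = max(mu - 1/g_i, 0), finding the water
# level mu by bisection so that the allocations sum to P.
def water_filling(gains, P, iters=60):
    lo, hi = 0.0, P + max(1.0 / g for g in gains)  # hi surely overfills
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > P:
            hi = mu
        else:
            lo = mu
    return [max(mu - 1.0 / g, 0.0) for g in gains]

p = water_filling([1.0, 0.5, 0.25], 3.0)
assert abs(sum(p) - 3.0) < 1e-6  # full power budget is used
```

Stronger channels (higher gain, lower inverse-gain "floor") receive more power, and very weak channels may receive none, which is the behavior each MIMO-CR's best response exhibits.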
2011, 33(6): 1361-1366.
doi: 10.3724/SP.J.1146.2010.01187
Abstract:
To design a precise and feasible ranging method for Impulse Radio Ultra-WideBand (IR-UWB) signals under energy detection, two new Time Of Arrival (TOA) estimation algorithms, based on optimal and suboptimal thresholds respectively, are proposed. For the optimal method, using the relationship between the energy statistics at the receiver and the small-scale attenuation, a closed-form threshold is derived, and the TOA estimate is obtained under the Minimum Mean Square Error (MMSE) criterion. For the suboptimal method, based on the optimal threshold analysis, a recursive threshold selection using Newton iteration is developed under a false-alarm probability constraint. Simulations show that, compared with similar algorithms, the optimal method greatly enhances ranging accuracy, while the suboptimal method is easier to implement with only a small performance loss.
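The threshold-crossing idea behind both TOA estimators can be illustrated minimally: the TOA estimate is the index of the first energy block exceeding the threshold. The threshold value below is arbitrary, standing in for the paper's optimal (closed-form MMSE) or suboptimal (Newton-iterated) derivations:

```python
# Threshold-crossing TOA estimation on energy-detection block samples:
# return the index of the first block whose energy exceeds the threshold.
def toa_estimate(energies, threshold):
    for i, e in enumerate(energies):
        if e > threshold:
            return i
    return None  # no block crossed the threshold (missed detection)

# Noise-only blocks followed by the signal's leading edge at index 3.
blocks = [0.2, 0.3, 0.25, 1.8, 2.6, 0.9]
assert toa_estimate(blocks, 1.0) == 3
```

The whole design problem in the paper reduces to choosing that threshold well: too low raises false alarms on noise blocks, too high misses weak leading-edge paths.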
2011, 33(6): 1367-1372.
doi: 10.3724/SP.J.1146.2010.01091
Abstract:
To overcome the disadvantages of traditional spectrum sensing methods in Cognitive Radio (CR), a novel cooperative sensing algorithm based on the maximum eigenvalue and average energy of the received-signal covariance matrix is presented in this paper. The proposed algorithm exploits the ratio of the Maximum Eigenvalue to the Energy Detection statistic (ME-ED) to determine whether the Primary User (PU) is present. Theoretical analysis shows that the ME-ED scheme works well without knowledge of the PU signal or the noise power. In addition, simulations show that ME-ED is not sensitive to noise uncertainty, and obtains the best sensing performance and the strongest robustness under noise uncertainty compared with MED and ED.
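The ME-ED test statistic (maximum eigenvalue of the sample covariance matrix over its average energy) can be sketched as follows; the synthetic data and any decision threshold are illustrative, not the paper's:

```python
import numpy as np

# ME-ED statistic: ratio of the maximum eigenvalue of the sample covariance
# matrix to the average energy (trace / dimension). The noise power cancels
# in the ratio, which is why the test tolerates noise uncertainty.
def me_ed_statistic(samples):
    # samples: (num_antennas, num_snapshots) array of received data
    R = samples @ samples.conj().T / samples.shape[1]  # sample covariance
    lam_max = np.linalg.eigvalsh(R)[-1].real           # eigvalsh is ascending
    avg_energy = np.trace(R).real / R.shape[0]
    return lam_max / avg_energy

rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 1000))
stat = me_ed_statistic(noise)
assert stat >= 1.0  # max eigenvalue always >= average eigenvalue
```

Under noise only the statistic stays near 1; a correlated PU signal concentrates energy into the top eigenvalue and pushes the ratio well above it, so comparing against a threshold decides presence.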
2011, 33(6): 1373-1378.
doi: 10.3724/SP.J.1146.2010.00876
Abstract:
In wireless fading channel environments, cooperative relaying is an effective way to obtain additional diversity gain, and network coding can be used to increase network throughput. Conventional network coding uses XOR (exclusive or) to mix packets from two different sources; its robustness is low and each relay can only serve two sources. This paper presents an adaptive network convolutional coding scheme for cooperative relaying, which employs convolutional coding at the relay station instead of XOR to combine packets from two or more sources, and then forwards the mixed packets, which can be seen as the output of a convolutional encoder, to the destination. For channels in deep fading, a loose adaptive network convolutional coding scheme is further proposed, relaxing the conditions for a relay to participate in forwarding. The proposed method offers increased robustness, adapts to changing network topology, and reduces the number of relays required compared with network-coded cooperative relaying. Theoretical analysis and simulations indicate that, compared with network coding for cooperative relaying, the proposed method achieves a substantial performance gain, especially as the number of collaborating nodes increases, and ensures full diversity gain with additional stability.
2011, 33(6): 1379-1384.
doi: 10.3724/SP.J.1146.2010.01311
Abstract:
A new algorithm for jointly estimating multiple source parameters (frequency, Direction Of Arrival (DOA), and range) is proposed. The algorithm requires no spectral peak search, can be applied in arbitrary Gaussian noise environments, and reduces aperture loss. Moreover, the fourth-order cumulant matrices are constructed from the outputs of specially chosen sensors, so rank reduction of these matrices is avoided when far-field sources impinge on the array. The proposed algorithm can therefore estimate the parameters of near-field, far-field, and mixed sources. Its performance is validated by simulations.
2011, 33(6): 1385-1389.
doi: 10.3724/SP.J.1146.2010.01139
Abstract:
This paper considers the detection of a signal in underwater colored noise with a small-aperture array, and proposes an Adaptive Matched Filter based on Maximum Likelihood (ML-AMF). The Direction Of Arrival (DOA) of the signal is first estimated, and energy detection is then carried out using the pre-estimated DOA. The test statistic is derived. The ML-AMF method is robust to uncertainties in the steering vector. Simulation and experimental results show the effectiveness of the method. Experimental results on an 8-element array show that ML-AMF outperforms Minimum Variance Distortionless Response (MVDR) and Conventional BeamForming (CBF) by 1~5 dB and 12~17 dB, respectively.
2011, 33(6): 1390-1394.
doi: 10.3724/SP.J.1146.2010.01077
Abstract:
Detecting the number of spatial signals is one of the key issues in array signal processing. In view of the poor performance of traditional detection methods at low signal-to-noise ratios, a new method called the Detection Technique based on Approximate Eigenvectors (DTAE) is proposed to improve the detection performance of sensor arrays in that regime. In the proposed method, the direction of the centroid of the signal cluster is first estimated by beam scanning over the space; next, the approximate eigenvectors of the data covariance matrix are calculated from the centroid estimate; then, the array output data are weighted by the approximate eigenvectors; finally, the number of signals is estimated by manipulating the peak-to-average power ratio of the weighted data in the frequency domain. Simulations show that DTAE performs much better than the Akaike Information Criterion (AIC) and other methods at low signal-to-noise ratios, which is valuable in engineering practice.
2011, 33(6): 1395-1400.
doi: 10.3724/SP.J.1146.2010.01118
Abstract:
An equivalent definition of the Discrete Time Fourier Transform (DTFT) is introduced in this paper, and the relationship and differences between the DTFT and the Chirp-Z transform are analyzed. It is pointed out that the DTFT, with its spectrum-zoom property, is a special form of the Chirp-Z transform. Moreover, a fast algorithm for the DTFT and its detailed procedure are given. Computational complexity analysis shows that the fast DTFT algorithm is less complex than the Chirp-Z transform at the same frequency resolution. Simulation results confirm the theoretical results and the advantage of the DTFT in frequency estimation.
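The spectrum-zoom use of the DTFT for frequency estimation can be sketched as follows: a coarse FFT estimate is refined by evaluating the DTFT on a dense grid over a narrow band around the coarse peak. The signal parameters below are illustrative, not from the paper, and the direct matrix evaluation stands in for the paper's fast algorithm.

```python
import numpy as np

# Refine a coarse FFT frequency estimate via dense DTFT evaluation
# (spectrum zoom) over a narrow band around the coarse peak.
fs, n = 1000.0, 256
t = np.arange(n) / fs
f_true = 123.4
x = np.exp(2j * np.pi * f_true * t)

# Coarse estimate from an ordinary FFT (resolution fs/n, about 3.9 Hz).
coarse = np.argmax(np.abs(np.fft.fft(x))) * fs / n

# Dense DTFT evaluation on a two-bin band around the coarse peak.
f_grid = np.linspace(coarse - fs / n, coarse + fs / n, 2001)
dtft = np.exp(-2j * np.pi * np.outer(f_grid, np.arange(n)) / fs) @ x
fine = f_grid[np.argmax(np.abs(dtft))]

assert abs(fine - f_true) < 0.01  # well below one FFT bin
```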
2011, 33(6): 1401-1406.
doi: 10.3724/SP.J.1146.2010.01087
Abstract:
For inhomogeneous images, segmenting Regions Of Interest (ROI) is difficult and often ineffective. To address this problem, this paper proposes an image segmentation algorithm based on the active contour model. Unlike traditional level set techniques, which use only a single source of information, a new energy function is defined by combining object edge information with regional statistical information. The edge information helps the contours evolve to the object boundaries quickly and accurately. The regional statistical information consists of both local and global statistics inside and outside the evolving contours: the local region information enables the method to deal with intensity inhomogeneity, while the global region information prevents the evolving contour from being trapped in local minima. In addition, during contour evolution a Gaussian filter is adopted to quickly regularize the level set function, which avoids computationally expensive re-initialization or regularization. Experimental results on synthetic and real images show that the proposed approach can not only effectively segment objects with weak boundaries in inhomogeneous images, but also accurately segment objects with complex structure and multiple gray levels. The method is also robust to noise and to the initial contour.
2011, 33(6): 1407-1412.
doi: 10.3724/SP.J.1146.2010.01092
Abstract:
A novel super-resolution reconstruction method based on non-local simultaneous sparse approximation is presented, which combines simultaneous sparse approximation with non-local self-similarity. The sparse association between high- and low-resolution patch pairs in cross-scale self-similar sets is defined via simultaneous sparse coding, and this association is used as prior knowledge for super-resolution reconstruction. The method keeps the same sparsity pattern for each patch pair, makes efficient use of self-similarity information, and enhances adaptability. Experiments on natural images show that the presented method outperforms several other learning-based super-resolution methods.
2011, 33(6): 1413-1419.
doi: 10.3724/SP.J.1146.2010.01042
Abstract:
Human pose estimation is an essential issue in computer vision, with many applications such as human activity analysis, human-computer interaction, and visual surveillance. This paper addresses 2D human pose estimation in monocular images and videos. The observation model and the inference method are improved within a part-based graph inference framework: a rotation-invariant edge field feature is designed, on which a Boosting classifier is learned as the observation model, and pose estimation is performed with a particle-based belief propagation inference method. Experiments show the effectiveness and speed of the proposed method.
2011, 33(6): 1420-1426.
doi: 10.3724/SP.J.1146.2010.01124
Abstract:
During the descent of a missile-borne SAR platform, the high vertical velocity and acceleration make the assumption of azimuth-invariant echo signals inaccurate, which complicates SAR imaging. For this reason, an imaging algorithm for missile-borne SAR based on azimuth NonLinear Chirp Scaling (NLCS) is proposed in this paper. After range cell migration correction and range compression in the 2-D frequency domain via the Method of Series Reversion (MSR), the azimuth variation of the Doppler FM rate of the echo signal is compensated by the azimuth nonlinear chirp scaling operation, which effectively improves the focusing depth and focusing quality. Simulation results validate the effectiveness of the proposed algorithm.
2011, 33(6): 1427-1433.
doi: 10.3724/SP.J.1146.2010.01309
Abstract:
To solve the registration problem of image sequences in forward-looking imaging radar, this paper proposes an image registration algorithm that combines the imaging theory of forward-looking array radar with the Hausdorff distance. First, sensor information is used to estimate the range offset between images. Then the length of the array aperture is modified so that the image resolution can be corrected. To overcome the difficulty in angle estimation caused by the isotropy of landmines, the Hausdorff distance is introduced and, combined with imaging theory, mapped from the image domain to the echo domain. Resolution correction and image registration are thereby incorporated into one step, which improves the speed and precision of registration. Real data prove that the method is applicable to forward-looking array radar and that the registration precision and detection rate are improved.
2011, 33(6): 1434-1439.
doi: 10.3724/SP.J.1146.2010.01068
Abstract:
Two main problems arise in the implementation of the Polar Format Algorithm (PFA). First, a Residual Video Phase (RVP) error arises after the dechirp operation. Second, the interpolation affects both computational efficiency and imaging precision. This paper proposes a novel algorithm in which range resampling is based on the scaling principle and the Chirp-Z transform is adopted in the azimuth dimension. The presented approach consists only of FFTs and complex multiplications, which effectively decreases the computational burden and improves imaging quality. Moreover, the presented algorithm is much simpler than the existing range CZT approach. Point target simulation validates the effectiveness of the presented algorithm.
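As background to the claim that the processing reduces to FFTs and multiplications: the Chirp-Z transform itself can be computed that way via Bluestein's algorithm. The sketch below is the generic textbook CZT, not this paper's specific range-resampling scheme, and the test parameters are arbitrary.

```python
import numpy as np

# Bluestein's algorithm: the Chirp-Z transform evaluated with only FFTs
# and pointwise multiplications, using n*k = (n^2 + k^2 - (k-n)^2) / 2.
def czt(x, m, w, a):
    """Evaluate X[k] = sum_n x[n] * a**(-n) * w**(n*k), k = 0..m-1."""
    n = len(x)
    ks = np.arange(max(n, m))
    chirp = w ** (ks ** 2 / 2.0)
    L = 1 << int(np.ceil(np.log2(n + m - 1)))       # FFT length
    y = np.zeros(L, dtype=complex)
    y[:n] = x * a ** (-np.arange(n)) * chirp[:n]
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / chirp[:m]
    v[L - n + 1:] = 1.0 / chirp[1:n][::-1]          # wrapped negative lags
    conv = np.fft.ifft(np.fft.fft(y) * np.fft.fft(v))
    return conv[:m] * chirp[:m]

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
m, w, a = 10, np.exp(-2j * np.pi * 0.01), np.exp(2j * np.pi * 0.05)
direct = np.array([np.sum(x * a ** (-np.arange(16)) * w ** (np.arange(16) * k))
                   for k in range(m)])
assert np.allclose(czt(x, m, w, a), direct)
```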
2011, 33(6): 1440-1446.
doi: 10.3724/SP.J.1146.2010.01171
Abstract:
High-resolution, wide-swath Synthetic Aperture Radar (SAR) imaging severely increases the data transmission and storage load. To mitigate this problem, a compressive sensing imaging method based on a wavelet sparse representation of the scattering coefficients is proposed for stripmap-mode SAR. In the presented method, the signal is first sparsely and randomly sampled in the azimuth direction. Then a matched filter is used to perform pulse compression in the range direction. Finally, with a wavelet basis adopted as the sparse basis, the azimuth scattering coefficients are reconstructed by solving an l1-minimization problem. Even with fewer azimuth samples, the proposed algorithm produces an unambiguous SAR image. Experiments on real SAR data demonstrate the effectiveness and stability of the proposed algorithm.
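The l1-minimization step can be sketched on a toy problem with iterative soft thresholding (ISTA). This is only a stand-in for the paper's azimuth reconstruction: the sparsity basis here is the identity rather than a wavelet basis, the measurement matrix is random Gaussian, and all sizes are illustrative.

```python
import numpy as np

# Toy compressed-sensing recovery: min ||y - Ax||^2 / 2 + lam * ||x||_1
# solved by ISTA (gradient step followed by soft thresholding).
rng = np.random.default_rng(1)
n, m, k = 64, 32, 3
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = [3.0, -2.0, 1.5]

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                 # compressed measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / largest eigenvalue of A^T A
lam = 0.05
x = np.zeros(n)
for _ in range(1000):
    z = x + step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# The largest entries of the estimate sit on the true support.
assert set(np.argsort(np.abs(x))[-k:]) == set(support)
assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 0.2
```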
2011, 33(6): 1447-1452.
doi: 10.3724/SP.J.1146.2010.01089
Abstract:
A bistatic Frequency Scaling (FS) algorithm is proposed based on an exact analytical bistatic Point Target (PT) spectrum obtained with a Geometry-based Bistatic Formula (GBF) method in the TanDEM configuration. Since numerical calculation is no longer needed to obtain the key parameter, the Half Quasi Bistatic Angle (HQBA), fast imaging is achieved. Unlike existing algorithms, the method's precise spectrum avoids the influence of the baseline-to-range ratio, so it can handle bistatic data with a large baseline (even in extreme conditions). The proposed method is validated with simulated data: the images obtained by the proposed method and by the computationally heavy numerical calculation method are almost identical, which further confirms the advantages and correctness of the proposed method.
2011, 33(6): 1453-1458.
doi: 10.3724/SP.J.1146.2010.01192
Abstract:
The Doppler parameters of the missile-borne SAR received signal vary significantly with slant range due to the missile's high speed and non-ideal motion, so the classical wave-number domain algorithm can hardly achieve high precision in missile-borne SAR imaging. This paper proposes a modified wave-number domain algorithm based on the classical one to meet the demands of high resolution and wide swath for missile-borne SAR imaging. Azimuth compression is implemented in the range time (or range space) domain instead of the two-dimensional wave-number domain used by the classical algorithm, in which the Doppler parameters cannot track their variation with slant range. The modified approach thus eliminates the phase error introduced by the classical algorithm's use of the same Doppler parameters throughout. Simulation results illustrate the validity of the approach.
2011, 33(6): 1459-1464.
doi: 10.3724/SP.J.1146.2010.01131
Abstract:
The impact of Keystone formatting on STAP is first analyzed. The study shows that Keystone formatting degrades STAP performance by broadening the clutter ridge and increasing the clutter degrees of freedom. Based on this, a novel STAP method is proposed for detecting fast, dim air-moving targets when the clutter has no range walk: the clutter is removed first, Keystone formatting is then applied to compensate the targets' range walk, and finally the target is accumulated by conventional space-time beamforming. The effects of Keystone formatting on the clutter distribution, and hence on STAP performance, are thus avoided, and good detection performance for fast, dim air-moving targets is achieved. The effectiveness of the new method is verified via simulation examples.
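The range walk compensation step can be illustrated with the standard Keystone formulation (not this paper's full STAP chain): each range-frequency row is resampled onto a virtual slow time so that the range-frequency/slow-time coupling disappears. All radar parameters below are illustrative toy values.

```python
import numpy as np

# Toy demonstration of the Keystone transform removing linear range walk
# for a single moving point target.
c, fc, B = 3e8, 1e9, 1e8
nf, nt, T = 128, 1024, 0.1
f = np.linspace(-B / 2, B / 2, nf)        # range-frequency samples
t = np.linspace(0, T, nt)                 # slow-time samples
R0, v = 30.0, 100.0                       # 10 m of range walk over the CPI

# Dechirped echo of one target: range frequency and slow time are coupled.
R = R0 + v * t[None, :]
s = np.exp(-4j * np.pi * (fc + f[:, None]) * R / c)

def range_peaks(sig):
    """Index of the strongest range bin at each slow-time sample."""
    return np.argmax(np.abs(np.fft.fft(sig, axis=0)), axis=0)

# Keystone transform: resample each range-frequency row onto the virtual
# slow time tau with t = fc / (fc + f) * tau (linear interpolation of the
# real and imaginary parts separately).
ks = np.empty_like(s)
for i in range(nf):
    ts = fc / (fc + f[i]) * t
    ks[i] = np.interp(ts, t, s[i].real) + 1j * np.interp(ts, t, s[i].imag)

# Ignore the final slow-time samples, where the resampling runs off the
# edge of the data; the walk is visible well inside the aperture.
before = range_peaks(s)[: -nt // 8]
after = range_peaks(ks)[: -nt // 8]
assert np.ptp(before) >= 3   # peak bin migrates before the transform
assert np.ptp(after) <= 1    # and stays essentially fixed afterwards
```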
2011, 33(6): 1465-1470.
doi: 10.3724/SP.J.1146.2010.01176
Abstract:
In high-resolution, wide-swath spaceborne SAR imaging, Digital Beam-Forming (DBF) can be employed to receive and process echo data with multiple channels in elevation, increasing the receive gain and the Signal-to-Noise Ratio (SNR). According to the geometric relationship among these channels, this paper analyzes the impulse response of the elevation channel and derives its explicit expression. On this basis, a DBF processing scheme combining time-variant weighting with Finite Impulse Response (FIR) filtering is proposed, and a block diagram of the system realization is presented. Compared with other DBF methods, this approach maximizes the receive gain and optimizes system performance for long pulses without increasing the complexity of the digital beamformer hardware. Simulation results indicate that it allows the system to achieve the theoretically optimal performance.
2011, 33(6): 1471-1474.
doi: 10.3724/SP.J.1146.2010.01136
Abstract:
A novel compact Tapered Slot Antenna (TSA) for high-resolution polar ice-penetrating radar (0.5 GHz-2 GHz) is proposed. The antenna is fed by a CoPlanar Waveguide (CPW) to slot-line transition. The lateral edges of the antenna are corrugated, with corrugation lengths gradually decreasing from the feed end to the open aperture. Simulated results show that the lowest operating frequency of this antenna is lower than that of a conventional TSA, and the peak gain is enhanced by about 3 dB at low frequencies. The measured bandwidth is greater than 10:1 (S11 < -10 dB), agreeing well with the simulated results except at about 550 MHz (S11 = -8.2 dB). Furthermore, in the operating band of shallow-icecap detection radar, the measured patterns agree well with the simulated results at most frequencies, except that the back lobes are higher, and the measured gain exceeds 3.9 dBi.
2011, 33(6): 1475-1480.
doi: 10.3724/SP.J.1146.2010.00954
Abstract:
As the amount of data increases rapidly and the types of data to be handled become more varied, a new algorithm with better generalization performance and higher classification accuracy is needed. This paper proposes a new hybrid algorithm that combines the insensitivity to input data of the Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples (DECORATE) algorithm with the efficiency of the radial basis function neural network model. An asymptotic P value is used to judge, from the relationship between a feature's area under the receiver operating characteristic curve and 0.5, whether the feature is redundant, and oppositionally relabeled artificial data are used to train the classifiers. A new classifier is added only if it lowers the training error of the original model, and majority voting is used to obtain the decision fusion result. Finally, the method is applied to UCI datasets; the results show that it adapts to different kinds of data and yields higher classification accuracy.
2011, 33(6): 1481-1486.
doi: 10.3724/SP.J.1146.2010.01114
Abstract:
Interconnect delay and power consumption are two of the main issues in deep-submicron and nanometer technologies. This paper proposes a long-chain design method that takes both power consumption and delay into consideration. A hybrid-evolution particle swarm algorithm is proposed which, by introducing an inertia-weight operator and a hybrid mutation operation, overcomes drawbacks such as slow convergence, prematurity, and local convergence. Tests on benchmark functions prove that the proposed algorithm is valid and efficient. The algorithm is applied to long-chain design based on the minimum energy-delay product. HSPICE simulation results show that the PDP in the minimum power-delay model is 26.34% lower than in the minimum-delay model, while the EDP in the minimum energy-delay model is 18.74% lower than in the minimum-delay model, indicating the efficacy of the design method.
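The inertia-weight mechanism mentioned above is the standard particle swarm update; a minimal sketch on the 2-D sphere function is shown below. This is generic textbook PSO (not the paper's hybrid variant), and the coefficients `w`, `c1`, `c2` are common textbook choices, not values from the paper.

```python
import random

# Minimal particle swarm optimization with an inertia weight w.
def pso(obj, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(0)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia term plus cognitive and social attraction.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda p: sum(x * x for x in p))
assert best_val < 1e-3
```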
2011, 33(6): 1487-1491.
doi: 10.3724/SP.J.1146.2010.01174
Abstract:
This paper analyzes the problem of calculating the threshold for energy detection when an estimated noise power is used. The corresponding closed-form detection performance of energy detection is derived in terms of the Q function. To achieve the expected detection performance, the threshold cannot be obtained by simply replacing the exact noise power with its estimate and must be modified. Moreover, closed-form expressions for the modified thresholds are given, which simplify both the analysis of the detection performance and the calculation of the threshold. Simulation results show that when the sample number is 20, the false-alarm probability based on the modified threshold is 15% lower than with the original threshold, so the throughput of Cognitive Radio (CR) networks can be effectively increased.
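For context, the usual Q-function-based threshold (with the noise power assumed exactly known) can be sketched as follows under the common large-sample Gaussian approximation; this is the textbook baseline expression, not the paper's modified thresholds for estimated noise power.

```python
from statistics import NormalDist

def ed_threshold(noise_power, n_samples, p_fa):
    """Energy-detection threshold under the large-sample Gaussian
    approximation:  lambda = sigma^2 * (1 + Q^{-1}(Pfa) * sqrt(2/N)),
    where Q^{-1}(p) = Phi^{-1}(1 - p) is the inverse Gaussian tail."""
    q_inv = NormalDist().inv_cdf(1.0 - p_fa)
    return noise_power * (1.0 + q_inv * (2.0 / n_samples) ** 0.5)

# Example: N = 20 samples, target false-alarm probability 0.1.
lam = ed_threshold(noise_power=1.0, n_samples=20, p_fa=0.1)
```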
2011, 33(6): 1492-1495.
doi: 10.3724/SP.J.1146.2010.01108
Abstract:
To optimize the locations of base stations in a Wideband Code Division Multiple Access (WCDMA) network, a location-optimization solution based on an immune algorithm is proposed. The cell area subject to cell capacity is analyzed, the framework of the immune optimization algorithm is given, and simulation experiments are carried out to validate the algorithm. Experimental results show that the proposed solution meets coverage requirements at a relatively low network-construction cost and has good application value.
2011, 33(6): 1496-1500.
doi: 10.3724/SP.J.1146.2010.00890
Abstract:
For undistorted images, the wavelet coefficients of adjacent scales in the same orientation are correlated, while compression coding reduces this correlation. Cosine similarity is used in this work to model the correlation between between-scale subbands, and statistical regression is applied to analyze the relationship between human subjective Mean Opinion Scores (MOS) and the subband cosine similarity. An accurate quality model is obtained by regression analysis. Experimental results show that the proposed no-reference method correlates highly with MOS measurements while having considerably lower computational complexity and shorter run time.
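The between-scale feature is an ordinary cosine similarity applied to coefficient vectors; a minimal sketch (the toy vectors stand in for wavelet coefficients of adjacent scales):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two coefficient vectors, e.g. wavelet
    coefficients of adjacent scales in the same orientation."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Parallel vectors give similarity 1; compression would lower it.
sim = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```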
2011, 33(6): 1501-1504.
doi: 10.3724/SP.J.1146.2010.00626
Abstract:
Catadioptric imaging may introduce a mirror effect between the omnidirectional image and the perspective image, and the scale-invariant feature transform algorithm is not invariant to image mirroring. This paper proposes flipping the perspective image horizontally, matching both the original and the flipped image against the omnidirectional image, and taking the better match as the final result to achieve mirror invariance. To handle the ring distortion of the omnidirectional image, the perspective image is transformed into a fan-shaped image before matching, and two methods for this transformation are provided. Experimental results on real images show that, after the perspective image is transformed into a fan-shaped image and matched with the omnidirectional image, the total number of matching points increases while the number of wrong matches decreases, yielding better matching results than without the transformation.
2011, 33(6): 1505-1509.
doi: 10.3724/SP.J.1146.2010.01177
Abstract:
In this paper, an objective evaluation method for synthetic aperture radar (SAR) jamming-effect assessment is proposed, which yields an index consistent with the perceptual properties of the Human Visual System (HVS). The method builds on the Contrast Sensitivity Function (CSF) of the HVS and on the wavelet transform, whose structure matches the multichannel model of human vision. First, wavelet decompositions of the jammed image and the original image are computed; then, in each subband, the correlation coefficient between the wavelet coefficients of the two images is calculated and weighted by the mean value of the CSF. Finally, the Wavelet Weighted correlation Coefficient (WWC) index is obtained by nonlinearly combining all the weighted correlation coefficients. Simulation results show that the WWC index not only reflects the degradation of the SAR image as the jamming energy increases, but also reflects subjective human perception better than existing indices.
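The per-subband step can be sketched as below. The subband lists and CSF weights are illustrative placeholders, and the final combination here is a simple weighted average rather than the paper's nonlinear combination.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two coefficient vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def weighted_correlation_index(ref_subbands, test_subbands, csf_weights):
    """Correlation per subband between reference and jammed coefficients,
    weighted by CSF-derived weights; combined here by weighted average
    purely for illustration."""
    corrs = [pearson(r, t) for r, t in zip(ref_subbands, test_subbands)]
    return sum(w * c for w, c in zip(csf_weights, corrs)) / sum(csf_weights)

# Identical images give a perfect index of 1.0.
ref = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
idx = weighted_correlation_index(ref, ref, [0.6, 0.4])
```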
2011, 33(6): 1510-1514.
doi: 10.3724/SP.J.1146.2010.01157
Abstract:
This paper studies the principle of realizing a wide swath for spaceborne SAR by using range Digital Beam-Forming (DBF). The effect of time-domain DBF processing on the amplitude and resolution of the spaceborne SAR image is then summarized based on theoretical analysis and simulation. Four range DBF processing methods corresponding to scan-on-receive are explored, each suited to a different transmit-signal duration. Simulation results show that, under its respective preconditions, each method can effectively realize wide-swath signal reception.
2011, 33(6): 1515-1519.
doi: 10.3724/SP.J.1146.2010.01180
Abstract:
Detecting the presence of the decoy is the foundation for countering the Towed Radar Active Decoy (TRAD) and for assessing the effect of ECCM. Based on the difference in the amplitude characteristic depending on whether the decoy is present, the conditional probability density functions (pdfs) of the measured amplitude are given. A Generalized Maximum Likelihood (GML) detector of the presence of the TRAD is developed from this amplitude characteristic. Simulation results under different conditions illustrate the performance of the method.
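The generalized-likelihood idea, replacing unknown parameters with their maximum-likelihood estimates before forming the test statistic, can be sketched with a generic textbook example (Gaussian samples with unknown mean); this illustrates the GML principle only, not the paper's amplitude pdfs.

```python
def glrt_mean_detect(samples, sigma, threshold):
    """Generic GLRT sketch: under H1 the samples are Gaussian with unknown
    mean mu, under H0 the mean is zero. The unknown mu is replaced by its
    ML estimate (the sample mean), giving statistic N * mean^2 / sigma^2."""
    n = len(samples)
    mu_hat = sum(samples) / n          # ML estimate of the unknown mean
    stat = n * mu_hat ** 2 / sigma ** 2
    return stat > threshold, stat

# Samples clustered near 1.0 exceed the threshold; noise-like samples do not.
detected, stat = glrt_mean_detect([1.1, 0.9, 1.0, 1.2], sigma=1.0, threshold=2.0)
```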
2011, 33(6): 1520-1524.
doi: 10.3724/SP.J.1146.2011.00011
Abstract:
To realize multi-fault diagnosis for wide-deviation analog circuits, this paper designs a classification model based on a Self-Organizing Map-Learning Vector Quantization (SOM-LVQ) network and presents an Enhanced LVQ (ELVQ) algorithm, in which the win probabilities of the neurons are balanced and the density of neurons around the Bayesian decision surfaces is reduced. Simulation results indicate that the proposed algorithm converges rapidly and achieves a low classification error.
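For reference, the basic LVQ1 learning step that ELVQ refines can be sketched as follows: the winning (nearest) prototype moves toward the input if their labels match and away otherwise. The data and learning rate are illustrative, and the ELVQ balancing mechanism is not shown.

```python
import math

def lvq_update(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 step: find the nearest prototype to x, then move it toward
    x if its label matches y, away from x otherwise. Returns the winner."""
    dists = [math.dist(p, x) for p in prototypes]
    i = dists.index(min(dists))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = [p + sign * lr * (xi - p) for p, xi in zip(prototypes[i], x)]
    return i

# Prototype 0 (class 0) wins for a class-0 sample and moves toward it.
protos = [[0.0, 0.0], [1.0, 1.0]]
winner = lvq_update(protos, labels=[0, 1], x=[0.1, 0.1], y=0)
```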