2015 Vol. 37, No. 7
2015, 37(7): 1525-1530.
doi: 10.11999/JEIT141540
Abstract:
Efficient and accurate spectrum sensing is a necessary part of cognitive radio networks. This paper focuses on spectrum sensing in a MIMO environment. Based on the fact that the correlation structure of the received signals differs between the signal-present and signal-absent cases, a new concept of local variance is presented and the test statistic is constructed from the local variance. The theoretical threshold is derived from the asymptotic distribution theorem. Finally, the detection performance is compared with that of other methods by simulation in AWGN and Rayleigh channels. The results show that the proposed method outperforms the other algorithms and needs fewer samples; it therefore offers higher sensing accuracy and efficiency.
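The paper's local-variance statistic is not reproduced above; as a hedged illustration of the underlying idea that the covariance structure of multi-antenna samples separates the signal-present and signal-absent cases, the following sketch uses a generic covariance-ratio statistic (total covariance energy over diagonal energy). The statistic, antenna count, and sample sizes are stand-in assumptions, not the authors' method:

```python
import numpy as np

def covariance_ratio_statistic(X):
    """X: (antennas, samples) matrix of received samples.

    Under noise only, the sample covariance is nearly diagonal and the
    ratio tends to 1; correlated signals push it well above 1.
    """
    R = X @ X.conj().T / X.shape[1]
    total = np.abs(R).sum()
    diag = np.abs(np.diag(R)).sum()
    return total / diag

rng = np.random.default_rng(1)
noise_only = rng.standard_normal((4, 2000))          # hypothetical 4-antenna receiver
s = rng.standard_normal(2000)                        # common signal seen by all antennas
signal_case = 0.7 * s[None, :] + noise_only
t0 = covariance_ratio_statistic(noise_only)
t1 = covariance_ratio_statistic(signal_case)
```

A detector would compare the statistic against a threshold chosen for a target false-alarm rate, as the derived theoretical threshold does in the paper.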
2015, 37(7): 1538-1543.
doi: 10.11999/JEIT141275
Abstract:
In the context of multi-tap self-interference cancellation over multipath channels in Co-time Co-frequency Full Duplex (CCFD) systems, current studies focus on experimental verification of RF-domain self-interference cancellation. The lack of analysis of how tap settings, amplitude, and phase affect self-interference cancellation hinders the selection of engineering parameters. For a given tap number and delay, this study derives the amplitude and phase of each tap, as well as the influence of amplitude and phase errors on the self-interference cancellation. Both analysis and simulation show that: first, for a given number of taps, when the maximum tap delay is less than the delay of the main self-interference path, the cancellation value increases with the maximum tap delay, whereas once the maximum tap delay is about twice the main-path delay, the cancellation value decreases as the maximum tap delay grows; second, for a given tap delay coverage, the cancellation value increases when the tap number is increased or the bandwidth of the self-interference signal is reduced; third, for a given tap number and delay setting, the cancellation value shrinks steadily as the amplitude or phase error grows.
2015, 37(7): 1544-1549.
doi: 10.11999/JEIT141382
Abstract:
In this paper, a new distributed beamforming technique for relay networks over frequency selective channels is proposed. The relay network consists of one transmitter, multiple relays, and one receiver. To equalize the transmitter-to-relay and relay-to-receiver frequency selective channels, a Finite Impulse Response (FIR) filter at the receiver is used in addition to the Filter-and-Forward (FF) relaying strategy at the relays. The FIR filters at the relays and the receiver are designed jointly to maximize the received quality of service subject to a constraint on the total relay transmit power. Simulation results demonstrate that the proposed technique outperforms both amplify-and-forward based beamforming and FF based beamforming without receiver filtering in frequency selective fading environments.
2015, 37(7): 1550-1555.
doi: 10.11999/JEIT141455
Abstract:
A two-way relay cooperative communication system demonstrates a significant spectral-efficiency gain when two user nodes exchange information via a relay node. To improve the Bit Error Rate (BER) performance, a joint precoding and detection algorithm based on Lattice Reduction (LR) is proposed for the case where all nodes are equipped with multiple antennas. The algorithm transforms the channel gain matrix into a more nearly orthogonal matrix by complex lattice reduction, and the precoding and detection algorithms are combined to exploit the transformed matrix. The relay node processes the received signal with only a simple modulo operation and amplification, so the main complexity of the algorithm resides in the two user nodes. Simulations show that the additional computational complexity comes only from the lattice reduction of the channel gain matrix, while the proposed joint precoding and detection algorithm significantly improves the BER performance and achieves full diversity; it is therefore suitable for engineering applications.
2015, 37(7): 1556-1561.
doi: 10.11999/JEIT141322
Abstract:
In this paper, an approach based on eigen-decomposition and the Chirp-Z Transform (CZT) is proposed to estimate the PN sequence of DSSS signals with unknown carrier frequency under narrow-band interference; the period and chip rate of the PN sequence are assumed known. Firstly, the received signal is divided into vectors. Then, eigenvalue decomposition is applied to the correlation matrix of the vectors. Finally, the narrow-band interference, the carrier frequency, and the PN sequence are estimated by applying the improved Minimum Description Length (MDL) criterion and the CZT method to the eigenvectors. Simulation experiments show the performance curves at different SNRs for different PN-sequence periods under different narrow-band interferences. Theoretical analysis and simulation results show that the approach works effectively at low SNR.
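The eigen-decomposition step above can be sketched in a simplified baseband setting: segment the received signal into one vector per PN period and take the principal eigenvector of the sample correlation matrix as the chip estimate. Carrier, interference, and the MDL/CZT stages are omitted; the sequence length, SNR, and symbol model below are illustrative assumptions:

```python
import numpy as np

def estimate_pn(received, period):
    """Estimate a +/-1 PN chip sequence from a baseband DSSS signal."""
    n = (len(received) // period) * period
    X = received[:n].reshape(-1, period)          # one row per PN period
    R = X.T @ X / X.shape[0]                      # sample correlation matrix
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    v = V[:, -1]                                  # principal eigenvector ~ +/- PN pattern
    return np.sign(v)

rng = np.random.default_rng(0)
pn = np.sign(rng.standard_normal(31))             # hypothetical +/-1 chip sequence
symbols = np.sign(rng.standard_normal(200))       # random BPSK data, one per period
signal = (symbols[:, None] * pn[None, :]).ravel()
noisy = signal + 0.5 * rng.standard_normal(signal.size)
est = estimate_pn(noisy, 31)
# The eigenvector leaves a global sign ambiguity, so check both polarities
match = max(np.mean(est == pn), np.mean(est == -pn))
```

The random BPSK symbols flip the sign of each period, which is why the correlation matrix (rather than a plain average of periods) is needed to recover the chip pattern.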
2015, 37(7): 1569-1574.
doi: 10.11999/JEIT141364
Abstract:
Most existing research on Physical-layer Network Coding (PNC) assumes that the symbol timing at the relay is ideally synchronized and rarely discusses symbol synchronization. In practice, however, symbol timing is indispensable in PNC systems. To tackle this problem, this paper proposes a novel symbol timing estimation scheme based on orthogonal training sequences for PNC in two-way relay channels. Following the maximum-likelihood estimation criterion, a Discrete Fourier Transform (DFT) based interpolation algorithm is applied to improve the estimation accuracy. Analysis and simulation show that the proposed DFT-based symbol timing estimator exhibits superior performance: its Mean Square Error (MSE) is one order of magnitude better than that of the conventional optimum-sample algorithm for Signal-to-Noise Ratios (SNRs) above 10 dB, and is very close to the Modified Cramer-Rao Bound (MCRB).
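A common form of DFT-based interpolation refines a coarse correlation peak by zero-padding the spectrum of the correlation sequence. The sketch below is a generic single-user illustration with an assumed training sequence and delay, not the authors' two-way relay estimator:

```python
import numpy as np

def dft_upsample(x, factor):
    """Band-limited interpolation by zero-padding the DFT spectrum."""
    N = len(x)
    X = np.fft.fft(x)
    Y = np.zeros(N * factor, dtype=complex)
    half = (N + 1) // 2
    Y[:half] = X[:half]                    # positive frequencies
    Y[-(N - half):] = X[half:]             # negative frequencies
    return np.fft.ifft(Y).real * factor

def estimate_timing(rx, train, factor=8):
    """Coarse correlation peak, refined to 1/factor sample resolution."""
    corr = np.correlate(rx, train, mode="valid")
    fine = dft_upsample(corr, factor)
    return np.argmax(fine) / factor

rng = np.random.default_rng(2)
train = np.sign(rng.standard_normal(63))   # hypothetical +/-1 training sequence
delay = 7
rx = np.concatenate([0.1 * rng.standard_normal(delay), train,
                     0.1 * rng.standard_normal(30)])
tau = estimate_timing(rx, train)
```

The interpolation factor trades computation for timing resolution; the paper's estimator additionally exploits the orthogonality of the two users' training sequences.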
2015, 37(7): 1575-1579.
doi: 10.11999/JEIT141459
Abstract:
For a long time, the circularity of the tail-biting trellis has been ignored in conventional decoding algorithms for Tail-Biting Convolutional Codes (TBCC). Such algorithms start decoding from a fixed location and consequently exhibit relatively low decoding efficiency. This paper proves, for the first time, that the decoding result of tail-biting convolutional codes is independent of the starting location: the Maximum Likelihood (ML) tail-biting path found from any location on the tail-biting trellis is the global ML tail-biting path. Based on this observation, a new ML decoding algorithm is proposed. The new algorithm first ranks the belief value of each location on the trellis and then selects the location with the highest belief value as the decoding starting location. Compared with existing ML decoders, the new decoder converges faster.
2015, 37(7): 1580-1585.
doi: 10.11999/JEIT141294
Abstract:
Efficient Error Control Coding (ECC) can be used to enhance the stability and energy efficiency of Wireless Sensor Networks (WSNs). To cope with the high bit error probability caused by poor channel conditions in WSNs, this paper studies error control coding based on full-diversity root-check Low Density Parity Check (LDPC) codes, which exploit the diversity resources of a wireless sensor network. Firstly, an encoding scheme based on root-check full-diversity LDPC codes is put forward for clustered WSNs, and its diversity gain is analyzed. Secondly, the structure of rate-compatible full-diversity LDPC codes for the proposed scheme is presented. Finally, the energy efficiency of the coded WSN system is derived. Simulation results show that the proposed coding scheme significantly improves the energy efficiency of wireless sensor networks under poor channel conditions (in the simulation, channel noise greater than 4×10⁻⁴ mW).
2015, 37(7): 1586-1590.
doi: 10.11999/JEIT141219
Abstract:
To address the low accuracy and high energy cost of existing abnormal event detection algorithms in Wireless Sensor Networks (WSNs), this paper proposes an abnormal event detection algorithm based on Compressive Sensing (CS) and the Grey Model GM(1,1). Firstly, the network is divided into clusters, and the data are sampled by compressive sensing and forwarded to the Sink. Considering that the data sparsity in a WSN is unknown, this paper proposes a block-sparse signal reconstruction algorithm with an adaptive step size. The abnormal event is then predicted at the Sink node with GM(1,1), and the working status of the nodes is adjusted adaptively. Simulation results show that, compared with other anomaly detection algorithms, the proposed algorithm has lower false-detection and missed-detection probabilities and effectively saves node energy while ensuring the reliability of abnormal event detection.
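The GM(1,1) prediction step above follows a standard recipe: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulation to forecast. A minimal sketch (the sensor readings below are hypothetical smooth data, and the CS sampling stage is omitted):

```python
import numpy as np

def gm11_predict(x, steps):
    """Fit a GM(1,1) grey model to series x and forecast `steps` values ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])               # mean-generated background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # x(k) + a*z(k) = b
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulation
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])        # back to the original series
    x_hat[0] = x[0]
    return x_hat[len(x):]

history = 2.0 * 1.05 ** np.arange(8)           # hypothetical near-exponential readings
pred = gm11_predict(history, steps=2)
```

GM(1,1) is exact for geometric growth and works from very few samples, which is what makes it attractive for low-power prediction at the Sink.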
2015, 37(7): 1598-1605.
doi: 10.11999/JEIT141336
Abstract:
As the conflict between limited measurement resources and diverse measurement requirements becomes more and more serious, this paper models the measurement task deployment problem and proposes a new deployment algorithm based on a network measurement reconfiguration model. By exploiting the reuse and composition of measurement components, the proposed algorithm not only allocates measurement resources effectively but also supports various concurrent measurement tasks. Simulation results show that the proposed algorithm outperforms the Task-execution Scheduling scheme based on Graph Coloring (GCTS) in success ratio and average waiting time; its success ratio exceeds 90%.
2015, 37(7): 1606-1611.
doi: 10.11999/JEIT141379
Abstract:
Network traffic measurement and anomaly detection for high-speed IP networks have become a research hotspot in the network measurement field. Because current measurement algorithms suffer from large estimation errors for mice flows and poor performance in sampling anomalous traffic, an Adaptive Flow sampling algorithm based on the sampled Packets and a force sampling Threshold S (AFPT) is proposed. Guided by the force sampling threshold S, AFPT is able to sample the mice flows to which anomalous traffic is sensitive, while adaptively adjusting the sampling probability according to the packets already sampled. Simulation and experimental results show that the estimation error of AFPT is consistent with its theoretical upper bound and that it performs better in sampling anomalous traffic. The proposed algorithm can effectively improve the accuracy of anomaly detection algorithms.
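The abstract does not specify AFPT's exact probability rule, so the sketch below is a hypothetical interpretation of the two stated ingredients: packets are force-sampled until S have been taken from a flow (protecting mice flows), after which the probability backs off with the number already sampled. The flow sizes and back-off rule are illustrative assumptions only:

```python
import numpy as np

def afpt_sample(flows, S, rng):
    """flows: mapping flow-id -> packet count. Returns sampled-packet counts.

    First S packets of each flow are always sampled (force threshold);
    afterwards the per-packet probability decays as S / (sampled + 1).
    """
    sampled = {}
    for fid, pkts in flows.items():
        c = 0
        for _ in range(pkts):
            p = 1.0 if c < S else S / (c + 1)
            if rng.random() < p:
                c += 1
        sampled[fid] = c
    return sampled

rng = np.random.default_rng(3)
flows = {"mouse": 3, "elephant": 1000}     # hypothetical flow sizes in packets
sampled = afpt_sample(flows, S=5, rng=rng)
```

Under this rule every mouse flow is captured completely, while an elephant flow's sample grows only on the order of the square root of its size, which is the kind of bias toward small, anomaly-sensitive flows the paper describes.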
2015, 37(7): 1620-1625.
doi: 10.11999/JEIT141415
Abstract:
Data stream clustering is challenging because of limited time and space. To address this problem, a new fuzzy clustering algorithm, called Weight Decay Streaming Micro Clustering (WDSMC), is presented in this paper. The algorithm uses a reformed weighted Fuzzy C-Means (FCM) process and improves clustering quality through micro-cluster structures and weight decay. Experimental results show that the algorithm achieves better accuracy than the Stream Weighted Fuzzy C-Means (SWFCM) and StreamKM++ algorithms.
2015, 37(7): 1626-1632.
doi: 10.11999/JEIT141433
Abstract:
Classification is a supervised learning task: it determines the class label of an unlabeled instance by a model learned from the training data set. Unlike traditional classification, this paper views the classification problem from another perspective, that of influence functions; that is, the class label of an unlabeled instance is determined by the influence of the training data set. Firstly, the idea of classification based on influence functions is introduced. Secondly, the influence function is defined and three influence functions are designed. Finally, the paper proposes a k-nearest neighbor classification method based on these three influence functions and applies it to the classification of imbalanced data sets. Experimental results on 18 UCI data sets show that the proposed method effectively improves the generalization ability of the k-nearest neighbor classifier and is effective for imbalanced classification.
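The general shape of an influence-based k-NN classifier can be sketched as follows. The paper's three influence functions are not given in the abstract, so an inverse-distance weight stands in as a hypothetical example; the toy data are illustrative:

```python
import numpy as np

def influence_knn(X_train, y_train, x, k=3, influence=None):
    """k-NN vote where each neighbor contributes via an influence function.

    `influence` maps a distance to a non-negative weight; inverse distance
    is a hypothetical default, not one of the paper's three functions.
    """
    if influence is None:
        influence = lambda d: 1.0 / (d + 1e-9)
    dists = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x, float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + influence(dists[i])
    return max(votes, key=votes.get)

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
label = influence_knn(X, y, [0.2, 0.2], k=3)
```

Because each neighbor's contribution is graded rather than a flat vote, a suitable influence function can damp the dominance of the majority class, which is what makes this framing useful for imbalanced data.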
2015, 37(7): 1633-1638.
doi: 10.11999/JEIT141429
Abstract:
To remedy the deficiencies in prefetch distance control of most threaded data prefetching methods for pointer applications, a prefetch distance control strategy based on cache behavior characteristics is proposed. The prefetch distance control model is built from the runtime data cache features of pointer applications to reduce cache pollution and contention for system resources. By skipping data accesses that carry no loop dependence, the workload between the main thread and the helper thread is balanced and the timeliness of threaded prefetching is improved. Experimental results show that the proposed approach optimizes the performance of the threaded prefetching mechanism.
2015, 37(7): 1639-1645.
doi: 10.11999/JEIT141324
Abstract:
Current research on the application of compressive measurements still focuses on image recovery, yet in many applications the ultimate purpose is target detection and tracking, and performing detection and tracking directly on compressive measurements remains unsolved. This paper first exploits a mapping model to locate the target in the spatial domain from measurements in the compressive domain. Further, a method is proposed for tracking point targets by decoding target locations from low-dimensional compressive measurements without image reconstruction, aimed at possible applications in space-based infrared detection. The method uses the Hadamard matrix to design an infrared compressive imaging system, and separates the background and foreground images from the low-dimensional compressive measurements by adaptive compressive background subtraction. With the mapping from the compressive domain to the spatial domain, the target location can be decoded, and point target tracking in a cluttered environment is then accomplished by data association and Kalman filtering. Theoretical analysis and numerical simulations demonstrate that the proposed approach accomplishes target tracking with fewer compressive measurements, reducing detector scale, computational complexity, and storage cost.
2015, 37(7): 1646-1653.
doi: 10.11999/JEIT141362
Abstract:
To address the drift and failure that tracking algorithms based on appearance models and traditional machine learning often suffer, a tracking algorithm based on an enhanced Flock of Trackers (FoT) and deep learning is proposed under the Tracking-Learning-Detection (TLD) framework. The target is predicted and tracked by the FoT; a cascaded predictor based on the spatio-temporal context is added to improve the precision of the local trackers, and the global motion model is evaluated by a sped-up random sample consensus algorithm to improve accuracy. A deep detector, composed of a stacked denoising autoencoder and a Support Vector Machine (SVM), is combined with a multi-scale scanning window and a global search strategy to detect possible targets. Each sample is weighted by weighted P-N learning to improve the precision of the deep detector. Experiments on various challenging image sequences in complex environments show that, compared with state-of-the-art trackers, the proposed algorithm is more accurate and more robust, especially under occlusion and background clutter.
2015, 37(7): 1654-1659.
doi: 10.11999/JEIT141325
Abstract:
Existing subspace tracking methods handle appearance changes and occlusions well, but they are weakly robust to complex backgrounds. To deal with this problem, this paper first proposes an online discriminative dictionary learning model based on the Fisher criterion, and designs an online discriminative dictionary learning algorithm for template updating in visual tracking using block coordinate descent and replacement operations. Secondly, the distance between the coding coefficient of a target candidate and the mean coding coefficient of the target samples is defined as the coefficient error. Robust visual tracking is achieved by taking the combination of the reconstruction error and the coefficient error as the observation likelihood in a particle filter framework. Experimental results show that the proposed method offers better robustness and accuracy than state-of-the-art trackers.
2015, 37(7): 1660-1666.
doi: 10.11999/JEIT141321
Abstract:
To avoid the loss of background and spatial information in the mean shift tracker, a dual-kernel tracking approach based on the second-order spatiogram is proposed. In this method, the second-order spatiogram is employed to represent the target; similarity and contrast are considered simultaneously when evaluating a target candidate, and they are adaptively integrated into a novel objective function. By performing a multivariate Taylor series expansion and maximizing the objective function, a dual-kernel target location-shift formula is derived. Finally, the optimal target location is obtained recursively by applying the mean shift procedure. Experimental evaluations on several image sequences demonstrate the effectiveness of the proposed algorithm.
2015, 37(7): 1667-1673.
doi: 10.11999/JEIT141271
Abstract:
For multiple copy-move forgery detection in digital images, and to avoid missing matching feature points when the generalized 2 Nearest-Neighbor (g2NN) algorithm is applied, the Reversed generalized 2 Nearest-Neighbor (Rg2NN) algorithm is proposed. Reverse order is used in feature point matching, so that all feature points matching the detected point can be found accurately. Experimental results show that the matching feature points calculated by Rg2NN are more accurate than those of g2NN, and the ability to detect multiple copy-move forgeries is improved. When one patch in the image is copied and pasted multiple times, or two or more patches are copied and pasted, the copy-move map can be localized precisely by the Rg2NN algorithm.
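The difference between the forward and reversed ratio tests can be illustrated with a toy sketch in Python. The threshold 0.5 follows common practice for the g2NN test; the distance values below are made up, and a real implementation would match feature descriptors and return keypoint indices rather than bare distances:

```python
def g2nn_matches(dists, t=0.5):
    """Forward g2NN: walk the sorted distances and stop at the first
    ratio d[i]/d[i+1] above t; everything before the stop is a match."""
    d = sorted(dists)
    k = 0
    while k < len(d) - 1 and d[k] / d[k + 1] <= t:
        k += 1
    return d[:k]

def rg2nn_matches(dists, t=0.5):
    """Reversed g2NN: scan the ratios from the far end; the last big gap
    (ratio <= t) separates all cloned copies from the unrelated points."""
    d = sorted(dists)
    k = 0
    for i in range(len(d) - 2, -1, -1):
        if d[i] / d[i + 1] <= t:
            k = i + 1
            break
    return d[:k]

# Three cloned copies give three near-equal small distances; their
# consecutive ratios are close to 1, so the forward test stops too early.
copies = [0.10, 0.11, 0.12, 5.0, 6.0]
single = [0.10, 5.0, 6.0]
```

On `copies`, the forward test returns no matches (0.10/0.11 already exceeds the threshold) while the reversed test recovers all three small distances; on `single`, both tests agree.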
2015, 37(7): 1674-1680.
doi: 10.11999/JEIT141501
Abstract:
The Iteration-based Variable Step-Size LMS (IVSSLMS) algorithm is proposed to overcome the compromise between convergence speed and steady-state misadjustment error that cannot be resolved in the Fixed Step-Size LMS (FXSSLMS) algorithm. Unlike other Variable Step-Size LMS (VSSLMS) algorithms, its step size is controlled not by the error signal but by the iteration time. In other words, a modified logistic-function nonlinear relationship is constructed between the iteration time and the step size, overcoming both the slow convergence of FXSSLMS and the noise sensitivity of existing VSSLMS algorithms. Finally, the performance and parameter settings of the proposed algorithm are analyzed. Theoretical analysis and simulations verify that the proposed algorithm achieves both faster convergence and smaller misadjustment error; under colored-noise interference, its misadjustment error is 7 dB lower than that of existing methods.
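The idea of driving the step size by the iteration index rather than the error can be sketched as follows. The particular logistic schedule and its parameters (`mu_max`, `mu_min`, `beta`, `n0`) are illustrative assumptions, not the paper's exact formula, and the channel taps are made up:

```python
import math
import random

def logistic_step(n, mu_max=0.05, mu_min=0.001, beta=0.05, n0=100):
    """Step size driven by the iteration index alone: starts near mu_max
    for fast initial convergence, decays smoothly toward mu_min for low
    steady-state misadjustment."""
    return mu_min + (mu_max - mu_min) / (1.0 + math.exp(beta * (n - n0)))

def lms_identify(x, d, taps, step_fn):
    """LMS system identification with an iteration-dependent step size."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # newest sample first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        mu = step_fn(n)
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
    return w

random.seed(0)
h = [0.6, -0.3, 0.1]                         # hypothetical unknown FIR channel
x = [random.gauss(0, 1) for _ in range(3000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n >= k) for n in range(len(x))]
w = lms_identify(x, d, taps=3, step_fn=logistic_step)
```

The large early step gives the fast transient of a big fixed step, while the small late step gives the low steady-state error of a small one, with no dependence on the noisy instantaneous error.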
2015, 37(7): 1681-1687.
doi: 10.11999/JEIT141450
Abstract:
Considering the influence of the projection matrix on Compressed Sensing (CS), a novel method is proposed to optimize the projection matrix. To improve signal reconstruction precision and the stability of the projection matrix optimization algorithm, the proposed method adopts a differentiable threshold function to shrink the off-diagonal entries of the Gram matrix that represent the mutual coherence between the projection matrix and the sparse dictionary, and introduces a gradient descent approach based on the Wolfe conditions to solve for the optimized projection matrix. The Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP) algorithms are applied to solve the minimum l0-norm optimization problem, and compressed sensing is used to sense and reconstruct random vectors, noisy wavelet test signals, and images. Simulation results show that the proposed projection matrix optimization improves reconstruction quality.
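The core loop (shrink the large off-diagonal Gram entries, then map back to a projection matrix) can be sketched as below. This sketch substitutes a classical eigen-decomposition update for the paper's Wolfe-condition gradient solver, and the dimensions, threshold, and shrink factor are all made up for illustration:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute normalized inner product between distinct columns."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def optimize_projection(P, Psi, t=0.25, gamma=0.7, iters=50):
    """Iteratively shrink off-diagonal Gram entries above t, then recover a
    rank-m projection by eigen-truncation and least squares."""
    m = P.shape[0]
    for _ in range(iters):
        D = P @ Psi
        D = D / np.linalg.norm(D, axis=0, keepdims=True)
        G = D.T @ D
        shrunk = np.where(np.abs(G) > t, gamma * G, G)   # threshold shrinkage
        np.fill_diagonal(shrunk, 1.0)
        w, V = np.linalg.eigh(shrunk)                    # ascending eigenvalues
        Dm = np.diag(np.sqrt(np.clip(w[-m:], 0, None))) @ V[:, -m:].T
        P = Dm @ np.linalg.pinv(Psi)                     # back to projection matrix
    return P

rng = np.random.default_rng(0)
Psi = rng.standard_normal((64, 128))   # hypothetical sparse dictionary
P0 = rng.standard_normal((20, 64))     # initial random projection
P1 = optimize_projection(P0, Psi)
mu0 = mutual_coherence(P0 @ Psi)
mu1 = mutual_coherence(P1 @ Psi)
```

Lower mutual coherence of the equivalent dictionary `P @ Psi` is what makes BP and OMP recover sparse signals more reliably.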
2015, 37(7): 1688-1694.
doi: 10.11999/JEIT141513
Abstract:
The conventional magnitude-constrained beamformer is a non-convex problem; here it is reformulated as a linear programming problem. A robust beamformer with Phase response Fixed and Magnitude response Constrained (PFMC) is proposed for the Uniform Linear Array (ULA). Using the property that the transfer function of the reversed weight vector differs from the array response function only by a phase factor, the phase response is set to be linear and the real-valued magnitude response is constrained. A convex cost function is thus established whose optimal solution can be found by the highly efficient interior point method. To improve robustness against covariance matrix errors, a PFMC-WC method is further proposed based on Worst-Case (WC) performance optimization. Compared with the conventional magnitude response constrained beamformer, the proposed method reduces the number of constraints and omits the step of recovering the weight vector, thereby reducing computational cost. In addition, because the phase response is guaranteed, the proposed beamformer outperforms the traditional magnitude-constrained beamformer. Simulation results demonstrate its effectiveness.
2015, 37(7): 1710-1715.
doi: 10.11999/JEIT141339
Abstract:
For distributed aperture coherent radar with a general signal processing architecture, this paper studies the estimation performance of time delay differences and phase differences, and the output Signal-to-Noise Ratio gain (oSNRg), based on cross-correlation. First, signal models are presented for the Multiple-Input Multiple-Output (MIMO) mode and the Fully Coherent (FC) mode. Then the phase ambiguity arising when phase differences are estimated by cross-correlation is analyzed in depth, and an effective ambiguity resolution method is proposed for robust phase difference estimation. Numerical examples demonstrate that when the input SNR is high enough, the coherence parameters can be estimated robustly by the cross-correlation method and the oSNRg approaches its ideal value.
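The basic cross-correlation estimate of the delay and phase differences between two apertures can be sketched as follows. The signal length, delay, phase, and noise level are arbitrary illustration values, and a noise-like wideband reference is used so the correlation peak is unambiguous:

```python
import numpy as np

def estimate_delay_phase(s1, s2):
    """Peak of the complex cross-correlation gives the integer delay
    difference; the argument of the peak gives the phase difference."""
    corr = np.correlate(s2, s1, mode="full")   # c[k] ~ sum s2[n+k] * conj(s1[n])
    k = int(np.argmax(np.abs(corr)))
    delay = k - (len(s1) - 1)
    phase = float(np.angle(corr[k]))
    return delay, phase

rng = np.random.default_rng(1)
n = 512
s1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # wideband reference
true_delay, true_phase = 7, 0.8
s2 = np.roll(s1, true_delay) * np.exp(1j * true_phase)      # delayed, phase-shifted copy
s2 = s2 + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
d, p = estimate_delay_phase(s1, s2)
```

Note that the phase read from the peak is only known modulo 2π, which is exactly the ambiguity the abstract's resolution method addresses.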
2015, 37(7): 1723-1728.
doi: 10.11999/JEIT141024
Abstract:
To analyze how different Ground-Based Radar (GBR) deployment schemes influence detection performance against near-space hypersonic targets, a near-space hypersonic target model and a GBR detection model are established. Based on the time variation of target Radar Cross Section (RCS), detection range, and aspect angle, three evaluation indicators of radar detection performance are put forward: detection probability, tracking coefficient, and resource redundancy rate. The effects of forward deployment, relay deployment, and reclaiming deployment on detection performance are then analyzed by simulation. The results show that combining forward deployment with relay deployment yields good detection performance: forward deployment detects the target at the longest range, providing more warning time, while reclaiming deployment has a short tracking time and a high resource redundancy rate. The work has practical engineering significance and can provide a theoretical basis and technical support for GBR deployment in near-space early warning systems.
2015, 37(7): 1736-1742.
doi: 10.11999/JEIT141491
Abstract:
To address the complex range processing and strong azimuth variance in GEOsynchronous Orbit SAR (GEO SAR) imaging, an imaging algorithm based on the chirp-z transform and azimuth scaling is proposed. First, the stop-go-stop phase error is included in the echo model, and the slant range expression is expanded with a Taylor series. A series inversion method is then used to obtain the signal spectrum expression. To simplify the range processing, the chirp-z transform is used to correct the range walk. Next, an improved scaling factor is used to correct the linearly time-varying first-, second-, and third-order coefficients of the signal expansion, and the phase error introduced by the scaling operation is compensated in the frequency domain. Finally, simulation results show that the algorithm achieves geosynchronous orbit SAR imaging.
2015, 37(7): 1743-1750.
doi: 10.11999/JEIT141383
Abstract:
Because the Region Of Support (ROS) of the two-dimensional wavenumber spectrum is skewed for high squint SAR data, the conventional Omega-K algorithm cannot exploit the ROS efficiently and degrades resolution when a square region is chosen for processing. A modified Omega-K algorithm is therefore proposed for sub-aperture imaging of high squint SAR data. Maximum usage of the ROS is obtained by rotating the coordinate axes. To handle the resulting azimuth dependence, azimuth resampling is adopted to achieve uniform focusing. Compared with the traditional Omega-K method, the modified algorithm focuses in the azimuth wavenumber domain, where the limited azimuth sub-aperture ROS avoids the zero-padding operation and increases efficiency. Simulation results and raw data processing validate the effectiveness of the proposed algorithm.
2015, 37(7): 1751-1756.
doi: 10.11999/JEIT141375
Abstract:
When the azimuth angle between an interference source and the detected target is small, the received target echo may suffer a large energy loss when the interference is cancelled in an omni-directional VHF radar with a single circular array. To resolve this problem, an improved algorithm based on double circular arrays is proposed. The algorithm first uses the excited phase modes -1, 0, and 1 of the small array to obtain a rough estimate of the interference azimuth, and then refines it using the symmetric high-order phase modes of the large array. Finally, interference cancellation and estimation of the target azimuth are performed via the asymmetric phase modes of the large array. Compared with the single-array algorithm, the proposed algorithm reduces the energy loss of the useful signal and increases the estimation precision of the target azimuth. Simulation results demonstrate the effectiveness of the proposed algorithm.
2015, 37(7): 1757-1762.
doi: 10.11999/JEIT141228
Abstract:
A modified model-based method is proposed to obtain sufficient prior templates and reduce computational complexity in Synthetic Aperture Sonar (SAS) automatic target recognition. First, a quick method based on convex hull construction is proposed to estimate the target pose and the SAS imaging geometry for the specified target. Second, an improved method based on the Hidden Point Removal (HPR) algorithm is proposed to generate the simulated SAS image of the target efficiently. Target recognition is then performed via the correlation between the test image and the simulated image. Finally, the effectiveness of the proposed method is verified by simulation experiments, which show that it achieves higher computational efficiency than the conventional direct template-based method while maintaining the same high recognition rate.
2015, 37(7): 1763-1768.
doi: 10.11999/JEIT141347
Abstract:
Exploiting the feature that underwater target radiated noise contains intense and stable line spectra, a weighted line spectrum detection method based on instantaneous phase variance is proposed for the problem of detecting unknown line spectra. The method uses the fact that the instantaneous phase of a target line-spectrum frequency bin is stable while that of a noise bin is random, weighting each frequency bin by its instantaneous phase variance. This further suppresses background noise energy fluctuations, enhances the Signal-to-Noise Ratio (SNR) gain of line spectrum detection, and enables the detection of unknown line spectra in underwater target radiated noise. Theoretical analysis and experimental results both show that the method enhances the target line-spectrum energy while suppressing the background noise energy, improving the SNR. It has good application prospects for detection and identification under complex channel conditions.
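The stable-phase-versus-random-phase idea can be illustrated with a toy sketch. The paper's exact weighting differs; here the circular mean resultant length (near 1 for a stable phase, near 0 for a random one) stands in for an inverse-phase-variance weight, a plain DFT keeps the sketch dependency-free, and the tone frequency, segment length, and noise level are made up:

```python
import cmath
import math
import random

def dft_bin(x, k):
    """Single DFT bin of a real segment x."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

def phase_weighted_spectrum(x, N, nbins):
    """Average per-bin power over non-overlapping segments, weighted by a
    factor that is large only when the segment-to-segment phase is stable."""
    segs = [x[i:i + N] for i in range(0, len(x) - N + 1, N)]
    spec = []
    for k in range(nbins):
        vals = [dft_bin(s, k) for s in segs]
        power = sum(abs(v) ** 2 for v in vals) / len(vals)
        # circular mean resultant length: ~1 for a tone, ~0 for noise
        R = abs(sum(v / abs(v) for v in vals)) / len(vals)
        spec.append(R * power)       # suppresses random-phase (noise) bins
    return spec

random.seed(2)
N, nseg = 64, 32
# bin-centered tone (bin 5) buried in unit-variance noise
x = [math.sin(2 * math.pi * 5 * n / N) + random.gauss(0, 1.0)
     for n in range(N * nseg)]
spec = phase_weighted_spectrum(x, N, N // 2)
```

Because the tone lands exactly on a bin and the hop equals the segment length, its phase repeats across segments, so its weight stays near 1 while every noise bin is attenuated.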
2015, 37(7): 1769-1773.
doi: 10.11999/JEIT141403
Abstract:
Exploring new logic elements for the Field Programmable Gate Array (FPGA) is a key area of FPGA research, and And-Inverter Cones (AIC) are among the most promising candidates. Implementing a highly efficient and flexible mapping tool is also an important part of exploring new FPGA architectures. In this paper, a new mapper for AIC-based FPGAs is implemented. Compared with an existing mapper, the new mapper is much more flexible and supports adjustment of AIC architectural parameters to assist the design space exploration of AICs. Meanwhile, the new mapper produces area results 33%~36% better than the original mapper.
2015, 37(7): 1774-1778.
doi: 10.11999/JEIT141371
Abstract:
The parameters of Ultra-High-Frequency Radio Frequency IDentification (UHF RFID) tags and their mutual coupling have a great impact on antenna gain, and a corresponding theoretical model is developed. First, from the perspective of the radiation field, the tag is modeled as a dipole antenna with a lumped load. Then, based on dipole array theory, a simplified gain model for two closely spaced tag antennas is derived and extended to multiple tags. Simulation results show that the model is effective. Finally, directivity and radiation efficiency are studied. The results provide theoretical guidance for research on densely packed tag performance.
2015, 37(7): 1531-1537.
doi: 10.11999/JEIT141283
Abstract:
In Cognitive Radio (CR) networks, detection performance is poor for Primary Users (PU) that arrive randomly during the sensing period. An improved spectrum sensing method exploiting the cyclostationary feature is therefore proposed, which adds the second half of the sampled signal to the first half. The method improves detection performance by enhancing the test statistic without increasing the sensing time. The detection probability, false-alarm probability, and throughput are then analyzed theoretically. Simulation results show that the proposed method outperforms both conventional spectrum detection and conventional cyclostationary spectrum sensing in detection performance and throughput.
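The folding step can be sketched as follows; a simple average-power statistic stands in for the paper's cyclostationary test statistic, and the signal period, record length, and noise level are made up. A periodic PU signal whose period divides half the record repeats identically in both halves, so the halves add coherently while the noise adds incoherently:

```python
import math
import random

def folded_statistic(x):
    """Add the second half of the samples to the first half, then take the
    average power of the folded vector as the test statistic."""
    h = len(x) // 2
    y = [a + b for a, b in zip(x[:h], x[h:2 * h])]
    return sum(v * v for v in y) / h

random.seed(3)
N, period = 1024, 16          # period divides N/2, so the halves align
noise = [random.gauss(0, 1) for _ in range(N)]
signal = [math.sin(2 * math.pi * n / period) for n in range(N)]

t_h0 = folded_statistic(noise)                                   # channel idle
t_h1 = folded_statistic([s + w for s, w in zip(signal, noise)])  # PU present
```

Under H0 the statistic stays near twice the noise power, while under H1 the coherent doubling of the signal amplitude pushes it markedly higher, widening the gap between the two hypotheses at the same sensing time.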
2015, 37(7): 1562-1568.
doi: 10.11999/JEIT141204
Abstract:
To address the reduction of secrecy rate and energy caused by the non-cooperation of selfish relays, this paper presents a security coalition method based on game theory. First, a benefit-sharing mechanism from cooperative game theory is introduced for the adaptive security coalition method, and the transmitter coalition is modeled. The increase in total network secrecy rate over the initial state is then treated as a transferable benefit and allocated to all transmitters in the coalition. The transmitters iterate through the average utility under all coalitions and find the coalition with the largest benefit. Finally, this transmitter coalition forms autonomously: the transmitter most in need of sending a signal, or the one with the worst eavesdropping channel conditions under equal need, transmits, while the rest cooperate by sending artificial noise. Simulations and analyses show the fairness and effectiveness of this method; when the transmit power is 20 mW, the average network secrecy rate under the Gaussian channel is improved by 1.8 over the initial state.
2015, 37(7): 1591-1597.
doi: 10.11999/JEIT141198
Abstract:
Packets in an Intermittently Connected Wireless Network (ICWN) are delivered cooperatively between nodes, so the existence of malicious nodes degrades network performance. A malicious-node-tolerant message forwarding mechanism is proposed in this paper. By exploiting historical node behavior, directly observed information is combined with recommendations from neighbor nodes to perceive malicious behavior according to a dynamic reputation threshold. Evidence theory is then used to quantify node relations to detect malicious nodes, after which the optimal relay nodes can be selected. Results show that, under the attack of colluding malicious nodes, the proposed mechanism accurately identifies malicious nodes, notably improving the packet delivery ratio and average delay.
2015, 37(7): 1612-1619.
doi: 10.11999/JEIT141255
Abstract:
To tackle the deficiencies of existing Role-Based Access Control (RBAC) research, namely weak adaptability due to a single role-establishment method, role or privilege redundancy, and little attention to resource management, a Scalable Access Control model Based on Double-Tier Role and Organization (SDTR-OBAC) is proposed. Through a double role partition, a double-tier role architecture of function roles and task roles is presented, solving the problem that a traditional role cannot cover the requirements of the organizational level and the application level at the same time. The concept of organization is introduced and integrated with the double-tier role to form an organization-role pair that is assigned to the user, instead of a role alone as in RBAC, making the model suitable for cross-domain access as well as a single domain. By extending privileges into operation and resource-type pairs, the model and its constraints, including separation of duty and cardinality constraints, are defined formally. A discussion of expressive power and complexity indicates that SDTR-OBAC retains all the advantages of RBAC and effectively reduces administration complexity, with better scalability and universality.
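A minimal sketch of the organization-role pair idea: a user holds (organization, function-role) pairs, a function role aggregates task roles, and a task role holds privileges as (operation, resource-type) pairs. All names and tables here are illustrative, not the paper's formal model:

```python
# user -> set of (organization, function role) assignments
user_assign = {"alice": {("hospital_a", "physician")}}
# function role -> task roles it aggregates
function_roles = {"physician": {"ward_tasks"}}
# task role -> privileges as (operation, resource type) pairs
task_privs = {"ward_tasks": {("read", "medical_record"),
                             ("write", "prescription")}}

def check_access(user, org, op, res_type):
    """Grant access iff some (org, function role) pair of the user leads,
    via one of its task roles, to the privilege (op, res_type)."""
    for (o, frole) in user_assign.get(user, ()):
        if o != org:
            continue  # organization must match: enforces cross-domain isolation
        for trole in function_roles.get(frole, ()):
            if (op, res_type) in task_privs.get(trole, ()):
                return True
    return False
```

Because the organization is part of the assignment itself, the same function role carries no authority in a foreign domain unless explicitly assigned there, which is the scalability point of the organization-role pairing.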
2015, 37(7): 1695-1701.
doi: 10.11999/JEIT141315
Abstract:
With the purpose of sorting frequency-hopping networks and identifying and tracking signals, this paper performs a joint estimation of Direction Of Arrival (DOA) and polarization for multiple frequency-hopping signals under underdetermined conditions, based on spatial polarimetric time-frequency analysis and implemented with a polarization-sensitive array of orthogonal dipole elements. First, the observation model of the polarization-sensitive array for frequency-hopping signals is built. Then the spatial polarimetric time-frequency distribution matrix of every hop is generated with the aid of time-frequency analysis at a reference sensor. From this matrix, polarization and spatial characteristic information is extracted and used for the joint estimation of DOA and polarization, via linear and quadratic spatial polarimetric time-frequency analysis and polynomial-rooting methods respectively. Finally, Monte Carlo simulation results show the effectiveness of the proposed algorithm.
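The subspace step behind such estimators can be illustrated with conventional narrowband MUSIC on a scalar uniform linear array; the paper's method replaces the sample covariance below with a spatial polarimetric time-frequency distribution matrix built per hop, which is what enables the underdetermined case. This is a simplified analogue under that substitution, not the proposed algorithm:

```python
import numpy as np

def music_doa(snapshots, d_over_lambda, grid_deg, n_sources):
    """Grid-search MUSIC for a uniform linear array.

    snapshots: (M, N) complex array of N snapshots from M sensors.
    Returns the grid angle (degrees) maximizing the MUSIC pseudo-spectrum.
    """
    M = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : M - n_sources]          # noise subspace
    spectrum = []
    for theta in grid_deg:
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(M)
                   * np.sin(np.deg2rad(theta)))   # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return float(grid_deg[int(np.argmax(spectrum))])
```

The polynomial-rooting variant mentioned in the abstract replaces this grid search with the roots of the noise-subspace polynomial, trading the grid for off-grid accuracy.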
2015, 37(7): 1702-1709.
doi: 10.11999/JEIT141317
Abstract:
For several kinds of pseudo-random code modulated pulse trains, namely the Pseudo-Random Binary Code (PRBC) pulse train, the Pulse Position Modulation (PPM) pulse train, and the combined PRBC-PPM pulse train, and in order to solve the problem of single-channel source separation and parameter estimation for multi-component signals, an estimation method for the carrier frequency and pseudo-random code based on the Singular Value Ratio (SVR) spectrum and cycle accumulation is proposed. The generalized period is first estimated from the SVR spectrum, after which cycle accumulation reduces the interference of noise. The Fast Fourier Transform (FFT) is applied to the squared signal, and the exact carrier frequency and pulse positions are obtained by measuring the sum of the real part of the signal after carrier removal. Finally, the amplitude and initial phase are determined by computing inner products. Simulation results show that the proposed method is effective at different SNRs.
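The SVR-spectrum step for estimating the generalized period can be sketched as follows, assuming the standard construction: for each candidate period the signal is stacked row-wise into a matrix, and the ratio of the two largest singular values peaks when the candidate matches the true repetition period (the rows then become nearly proportional):

```python
import numpy as np

def svr_spectrum(x, periods):
    """Singular Value Ratio spectrum of a real or complex 1-D signal.

    For each candidate period P (in samples), reshape the signal into an
    (L, P) matrix of consecutive segments and return s1/s2, the ratio of
    its two largest singular values. The ratio peaks at the true period.
    """
    out = []
    for P in periods:
        L = len(x) // P
        if L < 2:
            out.append(0.0)           # not enough rows to form a ratio
            continue
        M = np.reshape(x[: L * P], (L, P))
        s = np.linalg.svd(M, compute_uv=False)
        out.append(s[0] / s[1] if s[1] > 1e-12 else np.inf)
    return np.array(out)
```

Note that integer multiples of the true period also score highly, so in practice the smallest strong peak is taken as the generalized period.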
2015, 37(7): 1716-1722.
doi: 10.11999/JEIT141220
Abstract:
The accuracy of the radial velocity estimated by airborne Along-Track Interferometric SAR (ATI-SAR) is affected by the accuracy of various system parameters, such as the interferometric phase bias and the baseline component errors. These factors must therefore be calibrated if higher radial velocity estimation accuracy is required. Calibration methods based on sensitivity equations are generally used in interferometric SAR calibration; however, their performance is limited by the condition number of the sensitivity matrix, which is determined by the placement strategy of the Ground Control Points (GCPs). This study analyses and simulates the condition number of the sensitivity matrix for different GCP distributions along the swath, as well as for different selections of the velocities and positions of moving ground control points.
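Comparing GCP layouts by the condition number of the stacked sensitivity matrix can be sketched directly; the three-parameter row structure below (a constant term plus sine and cosine of the incidence angle) is a hypothetical stand-in for the paper's actual sensitivity equations:

```python
import numpy as np

def condition_of_layout(sensitivity_rows):
    """Stack one sensitivity row per ground control point (columns are the
    calibration unknowns, e.g. phase bias and baseline-component errors)
    and return the 2-norm condition number of the resulting matrix.
    A smaller condition number means a better-posed calibration."""
    A = np.vstack(sensitivity_rows)
    return np.linalg.cond(A)

def layout_rows(angles_deg):
    """Illustrative sensitivity rows parameterized by incidence angle."""
    return [[1.0, np.sin(np.deg2rad(t)), np.cos(np.deg2rad(t))]
            for t in angles_deg]
```

GCPs spread across the swath produce nearly independent rows and a small condition number; GCPs clustered at similar incidence angles produce nearly collinear rows and an ill-conditioned system, which is the effect the paper's simulations quantify.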
2015, 37(7): 1729-1735.
doi: 10.11999/JEIT141245
Abstract:
In Synthetic Aperture Radar Ground Moving Target Indication (SAR-GMTI) processing, it is difficult to solve the Range Cell Migration (RCM) and azimuth defocusing problems simultaneously because the motion parameters of each moving target differ, which makes high-accuracy focusing of moving targets unattainable and thus degrades positioning accuracy. A high-accuracy moving-target focusing and positioning method based on instantaneous interferometry is proposed in this paper. First, each moving target is extracted individually. Then, by iteratively extracting the high-accuracy Instantaneous Interferometric Phase (IIP) of the moving target and resolving the across-track velocity ambiguity, an accurate across-track velocity is obtained and used to remove the RCM. High-accuracy focusing of the moving target is thus achieved, which improves positioning accuracy. Processing results on measured data illustrate the effectiveness of the proposed method.
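The core IIP-to-velocity conversion can be sketched as follows. It assumes the common monostatic ATI relation phi = 4*pi*B*v_r / (lambda*v_p) for effective along-track baseline B; the exact factor depends on the system geometry, and the wrapped phase only covers the unambiguous velocity interval, which is why the paper resolves the ambiguity iteratively:

```python
import numpy as np

def ati_radial_velocity(s1, s2, wavelength, baseline, v_platform):
    """Estimate across-track (radial) velocity from the interferometric
    phase of two along-track channels.

    s1, s2: complex samples of the target from the two phase centers.
    The conjugate product averages coherently to the interferometric
    phase, which is then scaled to velocity. Phase is wrapped to
    (-pi, pi], so only the unambiguous velocity interval is recovered.
    """
    phi = np.angle(np.mean(s1 * np.conj(s2)))   # averaged interferometric phase
    return phi * wavelength * v_platform / (4 * np.pi * baseline)
```

The recovered across-track velocity feeds the RCM correction; a residual phase after correction drives the next iteration of the IIP extraction.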