2015 Vol. 37, No. 6
2015, 37(6): 1271-1278.
doi: 10.11999/JEIT141259
Abstract:
In resource-constrained, intermittently connected Ad hoc networks, the wireless medium is shared by collaborating nodes for message forwarding, so the confidentiality of messages is particularly vulnerable. To protect the confidentiality of message content, a message forwarding mechanism for intermittently connected Ad hoc networks is proposed. In this mechanism, the original message is sliced into several pieces to conceal its content; then, taking advantage of the forwarding redundancy inherent in multi-copy routing and of node similarity, the forwarding paths of the pieces are kept disjoint. Ferry nodes collect the pieces, verify their reliability, restore the original message, and encrypt it so that only the destination node can decrypt the ciphertext. In this way, both the confidentiality and the integrity of messages are preserved during forwarding. Numerical analysis shows that, while maintaining network performance, the proposed mechanism effectively protects message confidentiality.
2015, 37(6): 1279-1284.
doi: 10.11999/JEIT141303
Abstract:
Autonomy, anonymity, and distribution make P2P systems vulnerable to malicious attacks and abuse. A feasible solution in such an open environment is to exploit a community-based trust model to build trust relationships between peers. However, existing models ignore the dynamic behavior, scope of activity, and influence of peers. After analyzing the P2P user model, a topological-potential-based recommendation trust model is proposed that integrates the influence, transactions, and reputation of nodes. In this model, the trust metrics are divided into intra-community and inter-community computing mechanisms, and an algorithm for selecting super nodes is presented. Simulation results show that the proposed trust model is effective and robust.
2015, 37(6): 1285-1290.
doi: 10.11999/JEIT140995
Abstract:
In Delay Tolerant Networks (DTNs), nodes must probe their surroundings continually to discover neighbors, which can be an extremely energy-consuming process. If nodes probe very frequently, they consume a great deal of energy and may be energy inefficient; on the other hand, infrequent contact probing can miss many contacts and thus lose opportunities to exchange data. There is therefore a trade-off between energy efficiency and contact opportunities in DTNs. To investigate this trade-off, this study first proposes a model, based on the Random Way-Point (RWP) mobility model, that quantifies the contact detection probability when the contact probing interval is constant. It also demonstrates that, among all contact probing strategies with the same average probing interval, the constant-interval strategy performs best. Based on the proposed model, the trade-off between energy efficiency and contact detection probability is then analyzed under different conditions. Finally, extensive simulations validate the correctness of the proposed model.
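The constant-interval intuition in this abstract can be checked with a small Monte-Carlo sketch (the function name and the uniform-offset contact model are illustrative assumptions, not the paper's RWP analysis): a contact of duration d starting at a random offset within a probe period T is detected exactly when a probe falls inside it, which tends to min(1, d/T).

```python
import random

def detection_probability(contact_duration, probe_interval, trials=10000, seed=0):
    """Monte-Carlo estimate of the chance that a constant-interval probe
    fires during a contact. Probes occur at multiples of `probe_interval`;
    a contact starting at a uniformly random offset within one probe period
    is detected if the next probe time falls inside the contact window.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        start = rng.uniform(0, probe_interval)   # contact start offset within a probe period
        if start + contact_duration >= probe_interval:
            hits += 1                            # next probe lands inside the contact
        # otherwise the contact ends before the next probe and is missed
    return hits / trials
```

With a contact half as long as the probe interval, roughly half the contacts are detected; shrinking the interval raises detection probability at the cost of more probes, which is exactly the energy/contact trade-off the paper analyzes.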
2015, 37(6): 1291-1297.
doi: 10.11999/JEIT141182
Abstract:
To address the limitation that flat routing can hardly be adapted to large-scale Underwater Sensor Networks (USNs), a new clustering routing algorithm, Dynamic Layered Clustering Routing (DLCR), is proposed that scales to larger networks. The algorithm divides the network into several layers from top to bottom and selects as cluster heads the nodes with more remaining energy and shorter distance to the sink, thereby reducing the communication energy consumed within clusters. To avoid the same nodes being elected cluster heads repeatedly, a dynamic layering mechanism is proposed in which the network is divided into different layers in each round of data gathering. Experiments show that DLCR not only has better stability but also reduces energy consumption and prolongs the lifetime of the whole network.
2015, 37(6): 1298-1303.
doi: 10.11999/JEIT141158
Abstract:
For the problem of downlink multiservice adaptive scheduling in Orthogonal Frequency-Division Multiple Access (OFDMA) systems, a universal model for multiservice adaptive resource allocation is built that maximizes system throughput under Quality of Service (QoS) constraints. To solve this optimization problem, a multiservice adaptive resource scheduling algorithm is proposed. In this algorithm, real-time services are allocated as little resource as possible to guarantee their QoS, with each user choosing its best channel, whereas non-real-time services are allocated the residual resources, with each channel choosing its best user, to increase system capacity. Simulation results show that the proposed algorithm guarantees the throughput of downlink OFDMA systems while offering advantages in the delay and packet dropping rate of real-time services.
2015, 37(6): 1304-1309.
doi: 10.11999/JEIT141340
Abstract:
In dense deployments, Ultra-High Frequency Radio-Frequency IDentification (UHF RFID) tags suffer interference from the mutual impedance of adjacent tags, which can lower the read rate. Based on two-port network theory and Yagi-Uda antenna array theory, this paper first analyzes the relationship between antenna structure and mutual impedance, then explains the resulting changes in the tag antenna's transmission coefficient and gain. Finally, a stacked-tag design that accounts for mutual impedance in dense deployments is proposed. Numerical simulation and measurement results show that the new tag not only performs better in the stand-alone scenario but also causes less mutual interference, making it suitable for RFID systems with densely packed tags.
2015, 37(6): 1310-1316.
doi: 10.11999/JEIT141037
Abstract:
The RaptorQ code is a novel and efficient digital fountain code, but its decoder is known to be complicated. Exploiting the structure of the systematic code, a very fast decoding algorithm can be obtained through matrix dimensionality reduction. The algorithm uses a pre-calculated inverse matrix to reduce the dimensionality of the received-code constraint matrix. As a result, the decoding complexity is reduced significantly while the failure-overhead curve remains identical to that of conventional approaches. Simulations show that the decoding speed of the proposed algorithm outperforms state-of-the-art algorithms when the channel erasure probability is relatively low (less than 0.2).
Research on the Blocked Ordered Vandermonde Matrix Used as Measurement Matrix for Compressed Sensing
2015, 37(6): 1317-1322.
doi: 10.11999/JEIT140860
Abstract:
The measurement matrix is an important part of Compressed Sensing (CS). Although a deterministic matrix is easy to implement in hardware, it does not perform as well as a random matrix in signal reconstruction. To address this problem, a new deterministic measurement matrix, called the blocked ordered Vandermonde matrix, is proposed. It is constructed from the Vandermonde matrix, whose row vectors are linearly independent; the matrix is then partitioned into blocks and its elements are sorted. The proposed measurement matrix realizes non-uniform sampling in the time domain and is particularly suitable for natural images, whose dimension is usually high. Simulation results show that the proposed matrix is clearly superior to the Gaussian matrix in image reconstruction and can be used in practice.
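The linear-independence property this abstract relies on can be illustrated with a plain Vandermonde construction (a generic sketch only; the paper's specific blocking and element-ordering steps are not reproduced, and the function name and node choice are assumptions):

```python
import numpy as np

def vandermonde_measurement(m, n, seed=0):
    """Build an m x n Vandermonde-type matrix V[i, j] = x_i ** j from m
    distinct nodes x_i. Distinct nodes guarantee that every m x m submatrix
    is invertible, i.e. the rows are linearly independent -- the property a
    Vandermonde-based CS measurement matrix builds on.
    """
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.1, 0.9, size=m))   # distinct nodes in (0, 1)
    V = np.vander(x, N=n, increasing=True)       # columns 1, x, x^2, ...
    return V
```

A deterministic node choice (e.g. equally spaced points) would make the matrix fully reproducible in hardware, which is the motivation for deterministic measurement matrices in the first place.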
Cauchy Distribution Based Maximum-likelihood Estimator for Symbol Rate of Phase Shift Keying Signals
2015, 37(6): 1323-1329.
doi: 10.11999/JEIT141180
Abstract:
To address the significant performance degradation of existing symbol-rate estimation algorithms for Phase Shift Keying (PSK) signals in Alpha-stable noise, a novel Cauchy distribution based Maximum-Likelihood Estimator (CMLE) for the symbol rate of PSK signals is proposed, which estimates the timing offset and the symbol rate simultaneously. The CMLE uses a windowed procedure: the noise-polluted PSK signal is divided into a timing-offset window and multiple non-overlapping, time-synchronized windows of a certain width, each containing exactly one code symbol. In the Alpha-stable noise environment, a likelihood function based on the Cauchy distribution is built from the symbols in the windows, and the maximum-likelihood estimates of the window width, and hence of the timing offset and the symbol rate, are obtained simultaneously. Simulation results show that the proposed method suppresses Alpha-stable noise efficiently and offers superior parameter estimation performance.
2015, 37(6): 1330-1335.
doi: 10.11999/JEIT141605
Abstract:
To denoise a class of nonlinear signals, an effective iterative method based on the Singular Value Decomposition (SVD) is proposed. By comparing results on two classes of nonlinear signals, it is shown that the current difference-spectrum method is not applicable when the signals have no obvious characteristic frequency and vary aperiodically, and the corresponding reason is analyzed. According to the signal features, the structure of the Hankel matrix is redefined and the valid singular values are determined; effective denoising is then achieved by repeated SVD-based iteration. Results on flight data demonstrate that the proposed method effectively reduces the noise and also improves computational efficiency.
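The Hankel/SVD iteration described above can be sketched as follows (a minimal textbook version: the window length, the fixed truncation rank standing in for the paper's "valid singular values", and the iteration count are all assumptions, not the paper's redefined structure):

```python
import numpy as np

def hankel_svd_denoise(signal, rank, n_iter=3):
    """Iterative SVD denoising: embed the 1-D signal in a Hankel matrix,
    keep only the `rank` largest singular values, average the anti-diagonals
    back into a signal, and repeat to strengthen the suppression.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    L = n // 2 + 1                  # Hankel window length (rows)
    K = n - L + 1                   # columns; L + K - 1 == n
    for _ in range(n_iter):
        H = np.array([x[i:i + K] for i in range(L)])       # L x K Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s[rank:] = 0.0                                     # truncate small singular values
        Hr = (U * s) @ Vt                                  # low-rank reconstruction
        # average over anti-diagonals to map the matrix back to a 1-D signal
        y = np.zeros(n)
        cnt = np.zeros(n)
        for i in range(L):
            y[i:i + K] += Hr[i]
            cnt[i:i + K] += 1
        x = y / cnt
    return x
```

A single sinusoid spans a rank-2 Hankel subspace, so `rank=2` recovers it from additive noise; broadband signals need a larger rank, which is where choosing the valid singular values becomes the hard part.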
2015, 37(6): 1336-1342.
doi: 10.11999/JEIT141113
Abstract:
Measurement-conversion techniques based on position measurements are widely used in tracking applications so that the Kalman filter can be applied in Cartesian coordinates. However, they have fundamental limitations that degrade filtering performance. Beyond position measurements, the Doppler measurement, which carries information about target velocity, has the potential to improve tracking performance. A filter is proposed that uses converted Doppler measurements (i.e., the product of the range and Doppler measurements) in Cartesian coordinates. The novel filter is theoretically optimal in the sense of best linear unbiased estimation among all linear unbiased filters in Cartesian coordinates, and it is free from the fundamental limitations of the measurement-conversion approach. In simulation experiments, an approximate recursive implementation of the novel filter is compared with four recent state-of-the-art conversion techniques; the results demonstrate the effectiveness of the proposed filter.
2015, 37(6): 1343-1349.
doi: 10.11999/JEIT141122
Abstract:
Traditional Voice Activity Detection (VAD) approaches cannot effectively detect consonants, especially noisy unvoiced consonants. To address this problem, this paper proposes a VAD approach, F-MFCC, that applies Fisher linear discriminant analysis to Mel Frequency Cepstrum Coefficients (MFCCs), treating consonant versus background noise as a two-class problem. The Fisher criterion is used to find the optimal projection vector, which minimizes the within-class scatter and maximizes the between-class scatter, thereby enhancing the separability between consonants and background noise. Extensive experiments evaluating F-MFCC demonstrate that, under different SNRs and noise conditions, the proposed approach achieves higher VAD accuracy.
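The Fisher projection step above has a closed form worth spelling out (a standard two-class sketch on hypothetical data; in the paper's setting the feature rows would be MFCC vectors for consonant frames versus noise frames):

```python
import numpy as np

def fisher_projection(X1, X2):
    """Fisher linear discriminant for two classes: the direction w that
    maximizes between-class scatter relative to within-class scatter is
    w proportional to Sw^{-1} (m1 - m2), where Sw is the pooled within-class
    scatter matrix and m1, m2 are the class means.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)                          # Sw^{-1} (m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting both classes onto `w` collapses each to a well-separated 1-D cluster, which is the separability gain the F-MFCC approach exploits before thresholding.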
2015, 37(6): 1350-1356.
doi: 10.11999/JEIT141264
Abstract:
The eigenphone speaker adaptation method performs well when the amount of adaptation data is sufficient, but it suffers from severe over-fitting when adaptation data are insufficient. A speaker adaptation method based on an eigenphone speaker subspace is proposed to overcome this problem. First, a brief overview of eigenphone speaker adaptation is presented for Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) based speech recognition systems. Second, a speaker subspace is introduced to model the correlation among different speakers' eigenphones. Third, the new speaker adaptation method is derived by estimating a speaker-dependent coordinate vector for each speaker. Finally, the new method is compared in detail with the traditional speaker-subspace-based method. Experimental results on a Mandarin Chinese continuous speech recognition task show that, compared with the original eigenphone method, the eigenphone speaker subspace method improves performance significantly when adaptation data are insufficient; compared with the eigenvoice method, it saves a great amount of storage space at the expense of only minor performance degradation.
2015, 37(6): 1357-1364.
doi: 10.11999/JEIT141134
Abstract:
To address appearance change, background distraction, and occlusion in object tracking, an efficient visual tracking algorithm based on a local patch model with model updating is proposed. The method combines rough search with precise search to enhance tracking precision. First, the local patch model is constructed from the initial tracking area, which includes some background. Second, the target is coarsely located by a local exhaustive search based on the integral histogram, and the final target position is then refined by local patch learning. Finally, the local patch model is updated with samples retained during tracking. The paper focuses on the search strategy, background suppression, and model updating; experimental results show that the proposed method copes distinctly better with appearance change, background distraction, and occlusion.
2015, 37(6): 1365-1371.
doi: 10.11999/JEIT140960
Abstract:
This paper proposes a novel shape representation method based on statistical features. Through a joint analysis of the Centroid-Contour Distance (CCD) and the chaincode, the silhouette is decomposed into several levels according to CCD; the chaincode distribution within each level is then analyzed to extract the Joint Statistics of Centroid-Contour Distance and Chaincode (JSCCDC) descriptor. The similarity between shapes is measured by the city-block distance. Experimental results show that the proposed descriptor captures both global and local features; compared with the traditional feature-weighting method, JSCCDC is more accurate and reliable for shape matching and retrieval.
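The CCD signature at the heart of this descriptor is simple to compute (a minimal sketch; the normalization by the maximum distance is a common convention for scale invariance and an assumption here, as is the function name):

```python
import numpy as np

def centroid_contour_distance(contour):
    """Centroid-Contour Distance (CCD): the distance of every contour point
    from the shape centroid, normalized by its maximum so the signature is
    scale invariant. Thresholding these values decomposes the silhouette
    into levels, as in the JSCCDC construction.
    """
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    return d / d.max()
```

For a circle every contour point is equidistant from the centroid, so the signature is flat; protrusions and concavities show up as peaks and valleys, which is what makes CCD useful for level decomposition.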
2015, 37(6): 1372-1377.
doi: 10.11999/JEIT141093
Abstract:
To overcome the curse of dimensionality caused by vectorizing image matrices, and to increase robustness to outliers, L1-norm based Two-Dimensional Linear Discriminant Analysis (2DLDA-L1) is proposed for dimensionality reduction. It exploits the strong robustness of the L1-norm to outliers and noise, and it performs dimensionality reduction directly on image matrices. A fast iterative optimization algorithm is given, together with a proof of its monotonic convergence to a local optimum. Experiments on several public image databases verify the robustness and effectiveness of the proposed method.
2015, 37(6): 1378-1383.
doi: 10.11999/JEIT141241
Abstract:
Spaceborne sensing instruments face strict weight and volume constraints. A single tripole antenna usually needs a three-channel receiver to estimate the parameters of electromagnetic waves; besides being heavy and bulky, such receivers suffer from channel inconsistency and gain imbalance. This paper puts forward two Time-Division (TD) based algorithms for estimating electromagnetic-wave parameters with a tripole antenna. They allow a single tripole antenna to operate with only a one-channel receiver, which not only decreases the receiver's volume and weight but also reduces cost and overcomes channel inconsistency and gain imbalance. Simulations prove the validity of the proposed algorithms.
2015, 37(6): 1384-1388.
doi: 10.11999/JEIT141390
Abstract:
The measurement of blood-oxygen saturation is based on the pulse wave signal, but many factors affect the measurement accuracy, such as high-frequency noise caused by instrument thermal noise and baseline drift caused by breathing. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Permutation Entropy (PE) is proposed to suppress both high-frequency noise and baseline drift. The pulse wave signal is decomposed by EEMD, the PE of each Intrinsic Mode Function (IMF) is calculated, and a PE threshold is chosen. The IMFs that represent high-frequency noise or baseline drift are then identified and removed. Finally, a signal free of high-frequency noise and baseline drift is obtained. A self-developed measurement device is used to acquire pulse waves for validation, and the signal spectrum and the AC-DC modulation ratio are adopted to evaluate the effect. The results show that this method can effectively remove high-frequency noise and baseline drift, which helps to improve the accuracy of blood-oxygen saturation measurement.
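The EEMD decomposition itself typically comes from a library routine, but the Permutation Entropy used to threshold the IMFs is compact. A minimal sketch of the normalized Bandt-Pompe permutation entropy (function name and parameters are illustrative):

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    # Normalized Bandt-Pompe permutation entropy: count the ordinal
    # patterns of embedded windows, then take the Shannon entropy of
    # their relative frequencies, scaled into [0, 1].
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    index = {p: i for i, p in enumerate(permutations(range(order)))}
    counts = np.zeros(len(index))
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[index[tuple(np.argsort(window))]] += 1
    p = counts[counts > 0] / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(order)))
```

An IMF dominated by high-frequency noise has PE close to 1, while a smooth baseline-drift IMF has PE close to 0, which is what makes a single threshold workable.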
2015, 37(6): 1389-1394.
doi: 10.11999/JEIT141254
Abstract:
Compared with the Back Projection Algorithm (BPA), the Fast Factorized Back Projection Algorithm (FFBPA) reduces the interpolation load. However, the FFBPA still requires 2D interpolation in the image domain, and the resulting computational burden limits its application in practice. This paper presents a geometric-correction-based FFBPA for spotlight SAR imaging. In this algorithm, sub-image registration is accomplished by a geometric correction method in which sub-image projection between different coordinate systems and sub-image fusion are carried out by a shift in the range dimension and a rotation in the angle dimension. The method thus avoids the individual interpolations and is more efficient than the FFBPA. Simulation results validate its imaging performance and efficiency.
2015, 37(6): 1395-1401.
doi: 10.11999/JEIT140900
Abstract:
In nonhomogeneous sea clutter, reference cells contaminated by abnormal cells constrain the performance of the Sample Covariance Matrix (SCM) and thus degrade the detection performance of the traditional Adaptive Matched Filter (AMF) detector, while censoring the abnormal cells may cause singularity of the covariance matrix when the reference cells are limited. Without changing the number of reference cells, this paper devises a median-and-normalized covariance matrix estimator and uses it in the AMF detection scheme. Compared with the traditional AMF, the newly devised AMF obtains better performance in both measured and simulated clutter.
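The abstract does not spell out the exact median-and-normalized estimator, so the sketch below contrasts the plain SCM with one plausible variant of the general idea: power-normalize each snapshot so abnormal cells cannot dominate, then take an element-wise median over the rank-one terms. All names and details here are assumptions, not the paper's construction:

```python
import numpy as np

def scm(X):
    # Plain sample covariance; X is (N, K): N-dim snapshots from K cells.
    return X @ X.conj().T / X.shape[1]

def median_normalized_cov(X):
    # Normalize each snapshot by its power so strong outlier cells are
    # bounded, then take an element-wise median (real and imaginary
    # parts separately) over the K rank-one terms.
    N, K = X.shape
    terms = np.empty((K, N, N), dtype=complex)
    for k in range(K):
        x = X[:, k:k + 1]
        terms[k] = (x @ x.conj().T) * (N / (x.conj().T @ x).real)
    return np.median(terms.real, axis=0) + 1j * np.median(terms.imag, axis=0)
```

Both normalization and the median are bounded-influence operations, which is why such estimators degrade more gracefully than the SCM when a few reference cells contain targets or spikes.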
2015, 37(6): 1402-1408.
doi: 10.11999/JEIT141012
Abstract:
Smart use of prior information is an effective approach to improve the performance of a Bayesian estimator. At the design stage of a Bayesian estimator, the prior model parameters must be specified, but these parameters may not match the parameters of the environment at the application stage. A mismatched prior model can result in performance degradation of the Bayesian estimator. In this paper, a general framework of prior model parameter cognition based on estimator performance is given first. Based on this framework, for a Bayesian estimator of a DC signal in White Gaussian Noise (WGN), the estimation performance is analyzed and an iterated cognition method for the prior model parameters is proposed. Computer simulation is used to analyze the sensitivity and robustness of the estimator under the mismatched prior model condition, as well as the iterated cognition procedure under different conditions. The simulation results show that the proposed cognitive method feeds the estimation performance back to the prior model parameters, and that the prior model can be matched with the current environment model after repeated interactions between the estimator and the environment.
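For the textbook case of estimating a DC level A in WGN under a Gaussian prior A ~ N(mu_A, var_A), the Bayesian (MMSE) estimator shrinks the sample mean toward the prior mean, and a mismatched prior mean visibly inflates the mean squared error. A sketch of that baseline (the simulation numbers are illustrative, not the paper's):

```python
import numpy as np

def bayes_dc_estimate(x, mu_A, var_A, noise_var):
    # MMSE estimate of a DC level in WGN under the prior A ~ N(mu_A, var_A):
    # shrink the sample mean toward the prior mean by alpha in (0, 1).
    alpha = var_A / (var_A + noise_var / len(x))
    return mu_A + alpha * (np.mean(x) - mu_A)

# Monte Carlo comparison of a matched vs. a mismatched prior mean.
rng = np.random.default_rng(0)
trials, N, noise_var = 5000, 10, 1.0
err_match, err_mismatch = [], []
for _ in range(trials):
    A = rng.normal(0.0, 1.0)                       # true prior: N(0, 1)
    x = A + rng.normal(0.0, np.sqrt(noise_var), N)
    err_match.append((bayes_dc_estimate(x, 0.0, 1.0, noise_var) - A) ** 2)
    err_mismatch.append((bayes_dc_estimate(x, 2.0, 1.0, noise_var) - A) ** 2)
mse_matched, mse_mismatched = np.mean(err_match), np.mean(err_mismatch)
```

The gap between the two MSEs is exactly the kind of performance feedback the paper's cognition loop would use to pull the assumed prior mean back toward the environment's.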
2015, 37(6): 1409-1415.
doi: 10.11999/JEIT141131
Abstract:
For an airborne spotlight SAR, the inertial navigation system cannot measure the motion errors with the accuracy required for high resolution SAR imaging, which may severely degrade the quality of a SAR image. In this paper, a novel autofocus algorithm that can be directly embedded in the Polar Format Algorithm (PFA) is proposed: a Hybrid Multistage Parameterized Minimum Entropy (HMPME) algorithm performs Range Cell Migration (RCM) correction, followed by a scaled-stepsize iterative phase correction method based on Contrast Enhancement (CE). The autofocus processing accurately compensates the effects of range cell migration and phase errors. The algorithm is also robust when processing images of low contrast and low signal-to-noise ratio. Finally, simulations and experiments with real spotlight SAR data validate the effectiveness of the proposed algorithm.
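Minimum-entropy autofocus methods score a candidate image by the Shannon entropy of its normalized intensity, which drops as the image sharpens. A minimal sketch of that focus metric (the toy images are illustrative):

```python
import numpy as np

def image_entropy(img):
    # Shannon entropy of the normalized intensity distribution;
    # a well-focused SAR image concentrates energy and scores lower.
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A point-like (focused) scene vs. the same energy smeared uniformly.
focused = np.zeros((32, 32)); focused[16, 16] = 1.0
defocused = np.full((32, 32), 1.0 / 1024)
```

Phase-error search in such algorithms amounts to descending this entropy over the phase correction parameters, which is where a scaled step size matters for convergence speed.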
2015, 37(6): 1416-1423.
doi: 10.11999/JEIT141179
Abstract:
In the scenario of multiple targets and multiple tasks, a radar should have multiple functions to realize different modes, such as search and tracking. A traditional radar can only implement one function at a time and its working mode is inflexible, which may result in inefficient use of system resources. In this paper, a MIMO radar waveform design method is proposed to realize multiple modes. Based on the criteria of beampattern matching and of power spectrum or frequency spectrum matching in the desired directions, a multi-objective optimization model for the waveform matrix with a constant modulus constraint is established and solved by the conjugate gradient method. The numerical results show that the optimized waveforms form multiple beams with different modes in the beam directions, which can simultaneously realize search, tracking, and other functions.
2015, 37(6): 1424-1430.
doi: 10.11999/JEIT141106
Abstract:
Array antenna SAR can realize three-dimensional imaging. To improve the imaging quality, measurement equipment is adopted to acquire the motion information of the array antenna platform for motion compensation. However, measurement errors may affect the quality of compensation and imaging, so an analysis of this impact is indispensable. This paper establishes the imaging model and the measurement error analysis model, analyzes the impact of position and angle measurement errors on the phase error, compares the impact of measurement errors in different directions, analyzes the impact on imaging indexes by simulation, and introduces the angle-error-to-array-length ratio to quantify the impact of angle errors. It concludes that the measurement errors in height and roll angle have the largest impact, and finally gives the error tolerances under certain conditions, providing theoretical guidance and a reference for the choice and design of the measurement equipment and the motion compensation method.
2015, 37(6): 1431-1436.
doi: 10.11999/JEIT141079
Abstract:
This paper addresses the simultaneous optimization problem of multi-objective waveform design for MIMO radar with collocated antennas. Inspired by the idea of alternating projection, a waveform design framework is presented based on an Arbitrary-Dimensional Iterative Spectral Approximation Algorithm (ADISAA). Multiple waveform design objectives, such as transmit beampattern matching, good correlation properties, and spectrum notching, can be controlled by adjustable weights. Finally, a constant modulus signal is designed. Simulation results show that the proposed algorithm improves the correlation performance of the waveform at specified lag intervals after transmit beampattern matching, that the spectrum notch can be designed to avoid bands polluted by active jamming and colored noise, and that the algorithm has lower computational complexity.
2015, 37(6): 1437-1442.
doi: 10.11999/JEIT141234
Abstract:
A new approach is proposed to compute spaceborne SAR range ambiguity using the covariance matrix. The causes of range ambiguity in spaceborne SAR and the difference in ambiguous energy between the like-polarized and cross-polarized channels are analyzed. In current studies, it is common to use the polarimetric scattering matrix to compute range ambiguity. Because of speckle, however, the scattering matrix of a single pixel cannot describe distributed targets precisely, so it is hard to determine the range ambiguity signal ratio of adjacent pixels. In this paper, the formulation for computing range ambiguity is first derived for distributed targets. The new method is then tested using Radarsat-2 data. The results show that the covariance matrix can effectively characterize distributed targets: the results are smooth and consistent, and the new method is rational for computing the range ambiguity of distributed targets.
2015, 37(6): 1443-1449.
doi: 10.11999/JEIT140948
Abstract:
Ionospheric scintillation can destroy the coherence of SAR echoes and correspondingly degrade SAR imaging performance. Previous studies are conducted under the hypothesis of given ionospheric electron density irregularities, which are unavailable with current measurement technologies. In this paper, the characteristics of ionospheric scintillations at low latitudes are analysed using observational data of Ultra High Frequency (UHF) band scintillations in years of high and moderate solar activity at the Haikou station. Based on phase screen theory, a method is proposed to quantify the effects of ionospheric scintillation on P-band spaceborne SAR using the scintillation index. The results show that scintillations at low latitudes occur mostly at night, especially around the equinoxes, and occur approximately 3.8% of the time during a typical year of high solar activity. For P-band SAR, weak scintillation widens the mainlobe of the azimuthal Impulse Response Function (IRF), increases the sidelobe intensity, and reduces the azimuthal resolution. Moderate scintillation disturbs the IRF seriously, raises the sidelobe intensity to the level of the mainlobe, and shifts the mainlobe peak in the azimuthal direction, which can cause SAR imaging to fail.
2015, 37(6): 1450-1456.
doi: 10.11999/JEIT141150
Abstract:
A scheme based on the Digital Delay Locked Loop (DDLL), Frequency Locked Loop (FLL), and Phase Locked Loop (PLL) is implemented in the microwave radar for spatial rendezvous and docking, and the delay, frequency, and Direction Of Arrival (DOA) estimates of the incident direct-sequence spread spectrum signal transmitted by the cooperative target are obtained. Yet the DDLL, FLL, and PLL (DFP) based scheme does not make full use of the received signal. For this reason, a novel Maximum Likelihood Estimation (MLE) Based Tracking (MLBT) algorithm with a low computational burden is proposed. The feature that the gradients of the cost function are proportional to the parameter errors is employed to design discriminators of the parameter errors, and three tracking loops are set up to provide the parameter estimates. The variance characteristics of the discriminators are then investigated, and the lower bounds on the Root Mean Square Errors (RMSEs) of the parameter estimates are given for the MLBT algorithm. Finally, simulations and a computational efficiency analysis are provided, and the lower bounds on the RMSEs are verified. It is also shown that the MLBT algorithm achieves better estimation accuracy than the DFP based scheme with only a limited increase in computational burden.
2015, 37(6): 1457-1462.
doi: 10.11999/JEIT141227
Abstract:
Sparse representation of signals via dictionary learning algorithms is widely used in the signal processing field. Since there is redundancy in the new space defined by overcomplete dictionary atoms, finding sparse representations may be uncertain and ambiguous in the presence of unknown amplitude perturbations, which is unfavorable for the radar High Resolution Range Profile (HRRP) target recognition task. To deal with this issue, this paper proposes a novel algorithm called Stable Dictionary Learning (SDL), which constructs a robust loss function by marginalizing dropout to learn a stable adaptive dictionary. The algorithm exploits the structural similarity among adjacent HRRPs without scatterer motion through range cells, enforcing the constraint that the sparse representations of adjacent HRRPs share the same supports. Moreover, SDL utilizes the structured sparse regularization learned in the training phase to automatically select the optimal sub-dictionary basis vectors used for classifying a test sample. Experimental results on a measured radar HRRP dataset validate the effectiveness of the proposed method.
2015, 37(6): 1463-1469.
doi: 10.11999/JEIT141022
Abstract:
Since Interferometric Circular SAR (InCSAR) has the advantage of all-directional observation, a method for Digital Elevation Model (DEM) extraction based on InCSAR is proposed to ensure the high accuracy of high resolution Circular SAR imaging. First, the signal model of InCSAR is presented when Back Projection (BP) processing is adopted for SAR imaging; second, DEM extraction based on InCSAR is proposed; third, the proposed method is validated by simulation.
2015, 37(6): 1470-1475.
doi: 10.11999/JEIT141042
Abstract:
The distribution of the multiplier of the 2-partitioned random multiplicative model is symmetric about 1/2, which fixes the shape of the multifractal function of the simulated data. Based on an analysis of the construction process of the 2-partitioned random multiplicative model, an N-partitioned random multiplicative model is proposed as a generalization of the 2-partitioned model; it breaks the limitation that the multiplier is symmetric about 1/2. The model makes it more convenient to simulate data with a desired shape of the multifractal function. It is proved theoretically and experimentally that the distribution of the multiplier determines the shape of the multifractal function.
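A random multiplicative cascade with N parts per split can be sketched in a few lines; here each interval's mass is redistributed by a random permutation of a fixed weight vector (the weight choice is illustrative — the paper's model draws multipliers from a general distribution):

```python
import numpy as np

def multiplicative_cascade(weights, levels, seed=0):
    # N-partitioned cascade: at each level, every interval splits into
    # len(weights) children and its mass is redistributed according to
    # a random permutation of `weights` (sum 1, so mass is conserved).
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0)
    mass = np.array([1.0])
    for _ in range(levels):
        perms = np.stack([rng.permutation(w) for _ in mass])
        mass = (mass[:, None] * perms).ravel()
    return mass
```

An asymmetric weight vector such as (0.5, 0.3, 0.2) is exactly what the 2-partitioned model's symmetry about 1/2 rules out, and it is this freedom that lets the N-partitioned model shape the multifractal function.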
2015, 37(6): 1476-1482.
doi: 10.11999/JEIT141504
Abstract:
With the development of anti-stealth technology, meter-wave radar has come back into the sight of the scientific community because of its natural superiority against stealth targets and anti-radiation missiles. However, strongly influenced by the multipath effect when detecting targets at low elevation angles, meter-wave radar may produce height measurements with deviations too large to meet actual needs. The development of data fusion technology in radar networks offers a solution to this problem. This paper uses radar network data fusion to realize three-dimensional positioning of a target using only the distance and azimuth information measured by meter-wave radars, so that the height measurement problem of meter-wave radar can be well solved. Taking the effect of earth curvature into account, the proposed height measurement algorithm for a meter-wave radar network uses geodetic coordinate transformation, coordinate system transformations, and data transformation to unite all radar data on one common work platform, namely a virtual plane, on which the height measurement is conducted. Azimuth information, which has low resolution but good stability, is used to determine the search zone of the algorithm so as to improve the minimum error method, while the high-resolution distance information is used to obtain the final longitude, latitude, and altitude estimates of the target. The target distance estimate may occasionally be inaccurate because of strong reflection from the earth's surface, so a confidence judgment criterion is established to verify the availability of the positioning. Simulation analysis verifies that the proposed algorithm achieves good height measurement accuracy and can be regarded as an effective height measurement method for radar networks.
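The geodetic coordinate transformation that unites the radars on a common platform starts from the standard geodetic-to-ECEF conversion; a sketch using WGS84 constants (the choice of ellipsoid is an assumption, as the abstract does not name one):

```python
import math

A = 6378137.0              # WGS84 semi-major axis, metres
F = 1.0 / 298.257223563    # WGS84 flattening
E2 = F * (2.0 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    # Convert geodetic latitude/longitude (degrees) and ellipsoidal
    # height (metres) to Earth-Centred Earth-Fixed coordinates.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

Once every radar's measurements are in a shared ECEF frame, rotating them into the paper's virtual plane is a further linear transformation.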
2015, 37(6): 1483-1489.
doi: 10.11999/JEIT140653
Abstract:
By taking the Multi-Carrier Phase Coded (MCPC) signal as the subpulse and replacing the linear frequency step with Costas frequency hopping, a new Inter-Pulse Costas frequency hopping and intra-pulse Multi-Carrier Chaotic Phase Coded (IPC-MCCPC) radar signal is designed on the basis of the stepped-frequency signal. The ambiguity function and autocorrelation performance of the designed signal are studied. Simulation results show that the designed signal retains the advantage of the stepped-frequency signal, namely a large operating bandwidth achieved by instantaneous bandwidth synthesis, while overcoming the range-velocity coupling caused by frequency stepping. The intra-pulse multi-carrier characteristic can decrease the number of frequency-stepped pulses while keeping the same total bandwidth as the stepped-frequency signal, thus increasing the data rate of signal processing. The designed signal has stronger secrecy owing to the chaotic phase modulation, and its ambiguity function has lower periodic sidelobes because of the Costas frequency hopping. Moreover, its flexible structure, numerous parameters, and complex modulation make it more difficult for reconnaissance receivers to identify, so the anti-intercept ability of the radar system is greatly improved.
2015, 37(6): 1490-1494.
doi: 10.11999/JEIT141232
Abstract:
To take full advantage of Doppler information for Multi-Target Tracking (MTT) in a cluttered environment under the emerging Random Finite Set (RFS) framework, an MTT algorithm based on the Gaussian Mixture Cardinalized Probability Hypothesis Density (GM-CPHD) filter is proposed for pulse-Doppler radar. Building on the standard GM-CPHD filter, the target states are first updated with position measurements and then sequentially updated with Doppler measurements, yielding a more accurate likelihood function and more accurate state estimates. Simulation results demonstrate the effectiveness of the proposed algorithm: the introduced Doppler information effectively suppresses clutter and evidently improves tracking performance.
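The sequential position-then-Doppler update at the heart of this idea can be sketched for a single Gaussian component rather than the full GM-CPHD filter. The state layout, the linearized (EKF-style) Doppler update, and all noise values below are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def sequential_update(x, P, z_pos, R_pos, z_dopp, r_dopp):
    """Update with the position measurement first, then with the
    Doppler (range-rate) measurement linearized at the current
    state. State layout (assumed): [px, py, vx, vy]."""
    H_pos = np.array([[1., 0., 0., 0.],
                      [0., 1., 0., 0.]])
    x, P = kalman_update(x, P, z_pos, H_pos, R_pos)

    # Range rate h(x) = (p . v) / |p| and its Jacobian.
    px, py, vx, vy = x
    r = np.hypot(px, py)
    h = (px * vx + py * vy) / r
    H_d = np.array([[vx / r - px * h / r**2,
                     vy / r - py * h / r**2,
                     px / r, py / r]])
    # Scalar linearized update with the Doppler residual.
    S = H_d @ P @ H_d.T + r_dopp
    K = P @ H_d.T / S
    x = x + (K * (z_dopp - h)).ravel()
    P = (np.eye(4) - K @ H_d) @ P
    return x, P
```

In the full GM-CPHD filter this two-stage update would be applied to every Gaussian component, with the Doppler likelihood also entering the component weights.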
2015, 37(6): 1495-1501.
doi: 10.11999/JEIT141059
Abstract:
Because Polarimetric SAR (PolSAR) data contain rich scattering information, they provide many available features, and how these features are used is crucial for PolSAR image classification; however, no specific rules exist for doing so. To address this problem, a supervised PolSAR image classification method using a weighted ensemble based on 0-1 matrix decomposition is proposed. The method applies matrix-decomposition ensemble learning on different feature subsets to obtain coefficients, and a weighted ensemble algorithm combines the predictive results to improve the final classification. First, features extracted from the PolSAR data form the initial feature set and are randomly divided into several feature subsets. Then, the ensemble algorithm assigns different weights according to the feature subsets, giving small coefficients to poor classification results so as to reduce the harmful impact of certain features. The final classification result is obtained by combining the individual results. Experimental results on L-band and C-band PolSAR data demonstrate that the proposed method effectively improves classification performance.
2015, 37(6): 1502-1506.
doi: 10.11999/JEIT141233
Abstract:
A computational method for the rotational loss in troposcatter propagation is presented, motivated by the fact that the transmitting and receiving antenna beams cannot stay on the great-circle path in trans-horizon passive detection. Since the transceiver beams are narrow in troposcatter propagation, a Gaussian-function antenna pattern is assumed. An azimuth term is derived from the scattered received power, and a path-loss formula accounting for beam rotation is given. Comparison with experimental data from the literature shows good consistency. The rotational loss is simulated for the case in which neither the transmitting nor the receiving antenna is oriented on the great-circle bearing. The proposed method can serve as a reference for designing passive location and detection systems for troposcatter trans-horizon propagation.
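Under the common Gaussian-beam approximation, gain drops by about 12(θ/θ₃dB)² dB at an angle θ off boresight, so the extra loss from rotating both beams off the great-circle bearing can be sketched as below. This is the textbook approximation for a narrow Gaussian pattern, not the paper's derived azimuth term.

```python
def gaussian_pattern_loss_db(offset_deg, beamwidth_deg):
    """Gain reduction (dB) of one antenna at a given angular
    offset from boresight, Gaussian-beam approximation:
    L = 12 * (offset / theta_3dB)^2."""
    return 12.0 * (offset_deg / beamwidth_deg) ** 2

def rotational_loss_db(tx_offset_deg, tx_bw_deg,
                       rx_offset_deg, rx_bw_deg):
    """Total extra troposcatter path loss when both the transmit
    and receive beams are rotated off the great-circle bearing;
    the two pattern losses add in dB."""
    return (gaussian_pattern_loss_db(tx_offset_deg, tx_bw_deg)
            + gaussian_pattern_loss_db(rx_offset_deg, rx_bw_deg))
```

For example, rotating a 2° beam by 1° off boresight at one end costs 3 dB (the half-power point), which is why even small pointing rotations matter for narrow troposcatter beams.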
2015, 37(6): 1507-1512.
doi: 10.11999/JEIT141195
Abstract:
Traditional radio-wave refraction correction algorithms generally adopt a spherically stratified atmosphere assumption. These algorithms achieve good correction accuracy for targets measured at high elevation angles, but for low-elevation, long-range targets the correction accuracy remains unsatisfactory. This paper proposes a refraction correction algorithm that describes the atmospheric distribution with a more accurate ellipsoidally stratified model and computes the corrected target position by iterative recursion. Compared with traditional refraction correction algorithms, the computational load increases somewhat, but the refraction correction accuracy for low-elevation, long-range target measurement data is improved, making the algorithm suitable for post-mission data processing.
2015, 37(6): 1513-1519.
doi: 10.11999/JEIT141146
Abstract:
To reduce test data volume and test time, a test data compression method for multiple scan chains based on mirror-symmetric reference slices is proposed. The method uses two mutually mirror-symmetric reference slices for compatibility comparison with each scan slice, which improves the compression ratio. If a scan slice is compatible with one of the reference slices, only a few bits are needed to encode it, and it can be loaded in parallel; otherwise, the scan slice replaces one of the reference slices. A longest-compatibility strategy is proposed for the case where a scan slice satisfies more than one compatibility relation with the reference slices, and determining the codeword according to the frequency statistics of the different compatibility cases further improves the compression ratio. Experimental results show that the average compression ratio of the proposed scheme reaches 69.13%.
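The reference-slice idea can be sketched minimally as follows. The 2-bit codewords, the don't-care fill, and the single reference slice here are hypothetical simplifications; the paper's scheme uses two reference slices, longest-compatibility selection, and frequency-based codeword assignment.

```python
def compatible(slice_bits, ref):
    """A scan slice is compatible with a reference if every
    specified bit ('0'/'1') matches; 'x' is a don't-care."""
    return all(s in ('x', r) for s, r in zip(slice_bits, ref))

def compress(slices, width):
    """Encode each scan slice against a reference slice and its
    mirror image. Compatible slices cost only a short codeword;
    an incompatible slice is emitted raw and becomes the new
    reference."""
    ref = '0' * width
    out = []
    for s in slices:
        if compatible(s, ref):
            out.append('00')              # matches reference
        elif compatible(s, ref[::-1]):
            out.append('01')              # matches mirror image
        else:
            raw = s.replace('x', '0')     # fill don't-cares
            out.append('1' + raw)         # prefix + raw slice
            ref = raw                     # update reference
    return ''.join(out)
```

Because many scan slices are compatible with a recent slice or its mirror, most slices compress to two bits instead of the full chain width, which is the source of the compression gain.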
2015, 37(6): 1520-1524.
doi: 10.11999/JEIT141248
Abstract:
Registration of point clouds with heavy noise, outliers, and missing data tends to fail because the correspondence between the point clouds is inaccurate. This paper proposes an information-theoretic point cloud registration method, called the KL-Reg algorithm, that requires no correspondence. The method represents each point cloud with a Gaussian mixture model and then computes the transformation by minimizing the KL divergence, without building explicit correspondences. Experimental results show that the KL-Reg algorithm is accurate and stable.
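The correspondence-free idea can be sketched in 2D: model each cloud as an equal-weight isotropic Gaussian mixture and search for the rigid transform minimizing a sample-based cross-entropy (the variable part of the KL divergence). The centroid alignment, grid search over rotation, and kernel width below are simplifying assumptions for illustration, not the paper's optimization method.

```python
import numpy as np

def kl_cost(src, dst, sigma=0.5):
    """Variable part of KL(src || dst) when both clouds are
    equal-weight isotropic Gaussian mixtures: the cross-entropy
    of the src points under dst's mixture (up to constants)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    ll = np.log(np.mean(np.exp(-d2 / (2 * sigma ** 2)), axis=1) + 1e-12)
    return -ll.mean()

def register_2d(src, dst, n_angles=360):
    """Correspondence-free 2D rigid registration sketch: align
    centroids, then grid-search the rotation minimizing the
    mixture cross-entropy cost."""
    t = dst.mean(0) - src.mean(0)
    src_c = src - src.mean(0)
    dst_c = dst - dst.mean(0)
    best_cost, best_theta = np.inf, 0.0
    for th in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        c = kl_cost(src_c @ R.T, dst_c)
        if c < best_cost:
            best_cost, best_theta = c, th
    return best_theta, t
```

Because the cost depends only on the two mixtures and never pairs individual points, it degrades gracefully under noise, outliers, and missing data, which is the motivation for the KL-divergence formulation.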