2019 Vol. 41, No. 7
2019, 41(7): 1525-1532.
doi: 10.11999/JEIT180722
Abstract:
In Cloud Radio Access Network (Cloud-RAN), most of the existing work assumes that Remote Radio Heads (RRHs) cannot cache content. To better adapt to the content-centric nature of next-generation communication networks, it is necessary to equip RRHs in Cloud-RAN with a caching function. Motivated by this, this paper designs suitable caching schemes and reduces the fronthaul link burden through resource allocation. The system is assumed to use the Orthogonal Frequency Division Multiple Access (OFDMA) technique. A joint optimization scheme of SubCarrier (SC) allocation, RRH selection, and transmission power is proposed to minimize the total downlink power consumption. To handle the original non-convex problem, Lagrange dual decomposition is utilized to design the optimal allocation scheme. The experimental results show that the proposed algorithms can effectively improve the energy efficiency of the system, which meets the requirements of future green communication.
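As a rough illustration of the dual-decomposition step, the sketch below solves a simplified version of the problem: minimize total downlink power over subcarrier assignment and power levels subject to per-user rate targets, with water-filling-style per-subcarrier power, greedy subcarrier assignment against the Lagrangian, and subgradient updates of the dual variables. The channel model, rate targets, and step sizes are illustrative assumptions, not the paper's exact system model.
```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 16                      # users, subcarriers (toy sizes, assumed)
g = rng.exponential(1.0, (K, N))  # illustrative channel gains per user/subcarrier
R_req = np.array([4.0, 3.0, 5.0]) # per-user rate targets (bits/symbol), assumed

lam = np.ones(K)                  # dual variables for the rate constraints
for it in range(500):
    # Power minimizing the per-(user, subcarrier) Lagrangian term is water-filling-like
    p = np.maximum(0.0, lam[:, None] / np.log(2) - 1.0 / g)      # K x N
    rate = np.log2(1.0 + g * p)                                   # K x N
    cost = p - lam[:, None] * rate                                # per-SC Lagrangian terms
    assign = np.argmin(cost, axis=0)                              # each SC goes to one user

    # Achieved rate per user under the current exclusive assignment
    mask = np.zeros((K, N), dtype=bool)
    mask[assign, np.arange(N)] = True
    achieved = np.where(mask, rate, 0.0).sum(axis=1)

    # Subgradient ascent on the duals: raise lambda for users missing their rate target
    step = 0.1 / np.sqrt(it + 1)
    lam = np.maximum(0.0, lam + step * (R_req - achieved))

print("per-user rate:", np.round(achieved, 2),
      "total power:", round(np.where(mask, p, 0.0).sum(), 2))
```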
2019, 41(7): 1533-1539.
doi: 10.11999/JEIT180771
Abstract:
To address the lack of efficient and dynamic resource allocation schemes for 5G Network Slicing (NS) in the Cloud Radio Access Network (C-RAN) scenario in existing research, a virtual resource allocation algorithm for NS in virtualized C-RAN is proposed. Firstly, a stochastic optimization model of the virtualized C-RAN network is established based on Constrained Markov Decision Process (CMDP) theory, which maximizes the average sum rate of all slices as its objective, subject to an average delay constraint for each slice as well as an average network backhaul link bandwidth consumption constraint. Secondly, to overcome the difficulty of acquiring accurate transition probabilities of the system states in the proposed CMDP optimization problem, the concept of the Post-Decision State (PDS) is introduced as an "intermediate state" that describes the system after the known dynamics have occurred but before the unknown dynamics occur, and that incorporates all of the known information about the system state transition. Finally, an online-learning-based virtual resource allocation algorithm is presented for NS in virtualized C-RAN, which in each discrete resource scheduling slot allocates appropriate Resource Blocks (RBs) and caching resources to each network slice according to the observed current system state. The simulation results reveal that the proposed algorithm can effectively satisfy the Quality of Service (QoS) demand of each individual network slice, reduce the backhaul link bandwidth consumption pressure, and improve the system throughput.
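A minimal sketch of the post-decision-state learning idea, on a toy single-slice queue rather than the paper's multi-slice C-RAN model: the known dynamics (serving packets with the allocated RBs) define the post-decision state, the unknown dynamics (random arrivals) are never modelled explicitly, and the post-decision value table is updated online. Queue size, reward, and arrival process are assumptions for illustration only.
```python
import numpy as np

rng = np.random.default_rng(1)
Q_MAX, A_MAX, GAMMA, ALPHA = 20, 4, 0.9, 0.1   # toy queue size, max RBs, discount, learning rate (assumed)
V = np.zeros(Q_MAX + 1)                         # value of the post-decision state (queue after known dynamics)

def known_step(q, a):
    """Known dynamics: a allocated RBs serve up to a packets; reward favours throughput, penalises backlog."""
    pds = q - min(q, a)                          # post-decision state
    return pds, min(q, a) - 0.1 * pds            # (pds, illustrative reward)

def greedy(q):
    """Best action judged only through the known dynamics and the learned post-decision values."""
    return max(range(A_MAX + 1), key=lambda a: known_step(q, a)[1] + GAMMA * V[known_step(q, a)[0]])

q = 0
for t in range(20000):
    a = greedy(q)
    pds, r = known_step(q, a)
    q_next = min(Q_MAX, pds + rng.poisson(2.0))  # unknown dynamics: random arrivals, never modelled explicitly
    a_next = greedy(q_next)
    target = known_step(q_next, a_next)[1] + GAMMA * V[known_step(q_next, a_next)[0]]
    V[pds] += ALPHA * (target - V[pds])          # PDS learning update, no transition probabilities required
    q = q_next

print("learned post-decision values V[0..5]:", np.round(V[:6], 2))
```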
2019, 41(7): 1540-1547.
doi: 10.11999/JEIT180812
Abstract:
For the problem of blind estimation of the Pseudo-Noise (PN) sequences and information sequences of Direct Sequence-Code Division Multiple Access (DS-CDMA) signals, traditionally treated in the asynchronous single-channel setting, a multi-channel synchronous and asynchronous method based on PARAllel FACtor (PARAFAC) analysis is proposed. Firstly, the signal is described with a multi-channel receiving model; then the observed data matrix is cast as a parallel factor model. Finally, an iterative least squares algorithm is applied to compute the parallel factor decomposition, from which the information sequences and PN sequences of the DS-CDMA signals are estimated. The simulation results show that the proposed method can effectively estimate the PN sequence and information sequence of short-code DS-CDMA signals, and that the PN sequences of 6 users can be estimated with 10 channels at a Signal-to-Noise Ratio (SNR) of –10 dB.
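The core numerical step, PARAFAC decomposition by alternating least squares, can be sketched as follows on a synthetic rank-R three-way tensor; the arrangement of the multi-channel DS-CDMA data into such a tensor and the mapping of the factors to PN/information sequences follow the paper and are not reproduced here.
```python
import numpy as np
from scipy.linalg import khatri_rao

rng = np.random.default_rng(0)
I, J, K, R = 10, 32, 64, 3                      # channels x chips x symbols x users (toy sizes, assumed)
A0, B0, C0 = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)       # noiseless rank-R tensor

# Mode unfoldings consistent with khatri_rao's column-wise Kronecker ordering
X1 = X.transpose(0, 2, 1).reshape(I, K * J)
X2 = X.transpose(1, 2, 0).reshape(J, K * I)
X3 = X.transpose(2, 1, 0).reshape(K, J * I)

A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
for _ in range(100):                             # alternating least squares updates
    A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
    B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
    C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))

err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print("relative reconstruction error:", err)     # near 0 => factors recovered up to scaling/permutation
```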
2019, 41(7): 1548-1554.
doi: 10.11999/JEIT180804
Abstract:
Existing research on Distributed Luby Transform (DLT) codes is restricted to networks with several sources and a single relay layer, so a Multiple Layers Distributed LT (MLDLT) code for networks with multiple relay layers is proposed. In MLDLT, sources are grouped and relays are layered so that a large number of sources can be connected to a single destination through the layered relays. With this scheme, distributed communication between the many sources and the destination can be performed. Through and-or tree analysis, linear procedures for optimizing the relays' degree distributions are derived. The asymptotic performance of MLDLT is analyzed and numerical simulations are carried out on both lossless and lossy links. The results demonstrate that MLDLT achieves satisfactory erasure floors on both lossless and lossy links, and that it is a feasible solution for networks with many sources and multiple relay layers.
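For context, the sketch below shows plain LT encoding driven by a degree distribution (here the standard robust soliton); in MLDLT the relays' optimized distributions would play this role. The packet sizes and parameters are illustrative assumptions, not the paper's construction.
```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k (standard LT choice, used here for illustration)."""
    s = c * np.log(k / delta) * np.sqrt(k)
    rho = np.zeros(k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(k + 1)
    for d in range(1, int(k / s)):
        tau[d] = s / (k * d)
    tau[int(k / s)] = s * np.log(s / delta) / k
    p = rho + tau
    return p[1:] / p[1:].sum()

rng = np.random.default_rng(0)
k = 100
source = rng.integers(0, 2, size=(k, 16))        # k source packets of 16 bits each (toy)
dist = robust_soliton(k)

def lt_encode():
    d = rng.choice(np.arange(1, k + 1), p=dist)   # draw a degree from the distribution
    idx = rng.choice(k, size=d, replace=False)    # pick d distinct source packets
    return idx, np.bitwise_xor.reduce(source[idx], axis=0)

idx, coded = lt_encode()
print("degree", len(idx), "coded packet:", coded)
```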
2019, 41(7): 1555-1564.
doi: 10.11999/JEIT180392
Abstract:
A Mann-Whitney rank sum test based Wireless Local Area Network (WLAN) indoor mapping and localization approach is proposed. Firstly, according to the localization accuracy requirement, the approach performs motion path segmentation in the target area and merges similar motion path segments based on the Mann-Whitney rank sum test. Then, a signal clustering algorithm based on similar Received Signal Strength (RSS) sequence segments is adopted to guarantee the physical adjacency of the RSS samples in the same cluster. Finally, backbone-node-based diffusion mapping is used to construct the mapping relations between the physical and signal spaces, and localization of the moving user is consequently achieved. The experimental results indicate that, compared with existing WLAN indoor mapping and localization approaches, the proposed one achieves higher mapping and localization accuracy without motion sensor assistance or location fingerprint database construction.
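The segment-merging test can be illustrated directly with SciPy's Mann-Whitney U implementation; the RSS samples below are synthetic stand-ins for measured sequence segments.
```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Two RSS sequence segments (dBm) from nearby path segments, one from a distant segment (toy data)
seg_a = rng.normal(-52, 2, 60)
seg_b = rng.normal(-52.5, 2, 60)
seg_c = rng.normal(-68, 2, 60)

def same_segment(x, y, alpha=0.05):
    """Merge two path segments if the rank-sum test cannot reject that their RSS samples share a distribution."""
    _, p = mannwhitneyu(x, y, alternative='two-sided')
    return p > alpha

print("merge a,b:", same_segment(seg_a, seg_b))   # likely True: similar RSS statistics
print("merge a,c:", same_segment(seg_a, seg_c))   # likely False: clearly different RSS level
```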
2019, 41(7): 1565-1571.
doi: 10.11999/JEIT181021
Abstract:
In complex indoor environments, the measured Received Signal Strength (RSS) values fluctuate to different degrees, which leads to inaccurate characterization of the wireless signal propagation model. To solve this problem, a universal coarse-grained localization method is proposed based on the Wi-Fi ranging location model. The method obtains the signal propagation model by fitting the measured RSS values. On this basis, the distance between the unknown node and the Access Point (AP) is calculated, and the location of the unknown node is then found by the beetle antennae search algorithm. The performance of the propagation model and the effectiveness of the optimization algorithm are verified by simulation.
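A minimal sketch of the ranging-plus-search pipeline, assuming a log-distance path-loss model, known AP positions, and a basic beetle antennae search over the position; all constants and the cost function are illustrative, not the paper's exact formulation.
```python
import numpy as np

rng = np.random.default_rng(0)
# Fit RSS(d) = P0 - 10 n log10(d) from calibration measurements (toy data)
d_cal = np.array([1, 2, 4, 8, 16], dtype=float)
rss_cal = -40 - 10 * 2.2 * np.log10(d_cal) + rng.normal(0, 1, d_cal.size)
design = np.column_stack([np.ones_like(d_cal), -10 * np.log10(d_cal)])
(P0, n), *_ = np.linalg.lstsq(design, rss_cal, rcond=None)

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])    # AP positions (assumed known)
true_pos = np.array([3.0, 4.0])
rss = P0 - 10 * n * np.log10(np.linalg.norm(aps - true_pos, axis=1)) + rng.normal(0, 1, 3)
ranges = 10 ** ((P0 - rss) / (10 * n))                      # invert the fitted model: RSS -> distance

def cost(p):                                                # range-residual objective
    return np.sum((np.linalg.norm(aps - p, axis=1) - ranges) ** 2)

# Beetle Antennae Search: probe left/right antennae along a random direction, step toward the better side
x, step, antenna = np.array([5.0, 5.0]), 2.0, 1.0
for _ in range(200):
    b = rng.standard_normal(2); b /= np.linalg.norm(b)
    x = x - step * b * np.sign(cost(x + antenna * b) - cost(x - antenna * b))
    step *= 0.97; antenna *= 0.97
print("estimated position:", np.round(x, 2), "true:", true_pos)
```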
2019, 41(7): 1572-1578.
doi: 10.11999/JEIT180716
Abstract:
For polar code Successive Cancellation List (SCL) decoding, retaining a large number of paths to obtain better performance leads to high complexity; the adaptive SCL decoding algorithm reduces a certain amount of computation at high Signal-to-Noise Ratio (SNR), but brings a higher decoding delay. According to the order of polar code decoding, an SCL decoding algorithm combining segmented Cyclic Redundancy Check (CRC) with adaptive selection of the number of reserved paths is proposed. The simulation results show that, compared with the traditional CRC-aided SCL decoding algorithm and the adaptive SCL algorithm, at a code rate of R=0.5 the complexity at low SNR (–1 dB) is reduced by about 21.6% and at high SNR (3 dB) by about 64%, while better decoding performance is obtained.
2019, 41(7): 1579-1586.
doi: 10.11999/JEIT180807
Abstract:
To satisfy the demand for high precision time and frequency synchronization in engineering applications, reduce system complexity, and support the construction of large-scale optical fiber networks for time and frequency transfer, a method of high precision integrated time and frequency transfer via optical fiber based on pseudo-code modulation is developed. The optical fiber time and frequency transfer system is designed and built, and unidirectional and bidirectional time and frequency transfer tests via optical fiber are completed. In the unidirectional time-frequency transfer test, the influence of temperature change on the transmission delay of the system is analyzed. In the bidirectional time-frequency transfer test, the additional time transfer jitter of the system is 0.28 ps/s and 0.82 ps/1000 s, and the additional frequency transfer instability is 4.94×10^–13/s and 6.39×10^–17/40000 s. The results show that the proposed method achieves high precision integrated time and frequency synchronization, and that the additional time transfer jitter of the system is better than that of current optical fiber time synchronization schemes.
2019, 41(7): 1587-1593.
doi: 10.11999/JEIT180737
Abstract:
To solve the problems that all virtual links are treated without discrimination, backup resource consumption is high, and network recovery after failures is slow in existing survivable virtual network link protection methods, a Core Link Aware Survivable Virtual Network Link Protection (CLA-SVNLP) method is proposed. At first, a core degree metric model of the virtual link is constructed by considering dynamic and static factors of the virtual link. According to the survivability needs of the virtual network, virtual links with high core degrees are protected by backup resources. Then the p-cycle is introduced into survivable virtual network link protection, and the p-cycle is constructed based on the characteristics of the virtual network to provide 1:N protection for core virtual links, so that each core virtual link consumes 1/N of the backup link bandwidth resources and the backup link resource consumption is reduced. This also transforms single physical link protection into single virtual link protection across multiple p-cycles. At last, network coding and p-cycles are combined to transform the 1:N protection into 1+N protection for core virtual links, which avoids fault location, fault detection, and data retransmission after failures. Simulation results show that the proposed method can improve the utilization of backup resources and shorten the network recovery delay after failures.
2019, 41(7): 1594-1600.
doi: 10.11999/JEIT180764
Abstract:
The Dissimilar Redundancy Structure (DRS) based cyberspace security technology is an active defense technology, which uses features such as dissimilarity and redundancy to block or disrupt network attacks and thus improve system reliability and security. By analyzing how heterogeneity can improve the security of the system, the importance of quantifying heterogeneity is pointed out, and the heterogeneity of a DRS is defined as the complexity and disparity of its execution set. A new method for quantifying heterogeneity is also proposed. The experimental results show that this method can divide 10 execution sets into 9 categories, while the Shannon-Wiener index, Simpson index, and Pielou index can only divide them into 4 categories. This paper provides a new theoretical method to quantify the heterogeneity of DRS, and provides guidance for engineering DRS systems.
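For reference, the three baseline diversity indices that the paper compares against can be computed as below; representing each execution set simply as variant-usage counts is an assumption made for illustration.
```python
import numpy as np

def shannon_wiener(counts):
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    p = np.asarray(counts, float) / np.sum(counts)
    return 1.0 - np.sum(p ** 2)

def pielou(counts):
    s = np.count_nonzero(counts)          # number of distinct variants present
    return shannon_wiener(counts) / np.log(s) if s > 1 else 0.0

# Two execution sets with the same number of variants but different balance (illustrative)
set_a = [3, 3, 3]      # three variants, evenly used
set_b = [7, 1, 1]      # one dominant variant
for name, c in [("balanced", set_a), ("skewed", set_b)]:
    print(name, round(shannon_wiener(c), 3), round(simpson(c), 3), round(pielou(c), 3))
```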
2019, 41(7): 1601-1609.
doi: 10.11999/JEIT180775
Abstract:
To overcome the vulnerability of Physical Unclonable Functions (PUFs) to modeling attacks, a controlled PUF architecture based on a sensitivity confusion mechanism is proposed. From the Boolean function definition of a PUF and Walsh spectrum theory, it is derived that each challenge bit has a different sensitivity, and the position selection rules related to the parity of the confusion value bit width are analyzed and summarized. This rule guides the design of the Multi-bit Wide Confusion Algorithm (MWCA) and the construction of a controlled PUF architecture with high security. Basic PUF structures are evaluated as the protected objects of the controlled PUF, and it is found that the response generated by the controlled PUF based on the sensitivity confusion mechanism has better randomness. A logistic regression algorithm is used to launch modeling attacks on the different PUFs. The experimental results show that, compared with the basic ROPUF, the arbiter PUF, and the OB-PUF based on a random confusion mechanism, the controlled PUF based on the sensitivity confusion mechanism significantly improves the resistance of the PUF to modeling attacks.
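The link between the Walsh spectrum and per-bit sensitivity can be illustrated on a toy Boolean function: the influence of an input bit equals the squared spectral mass on the masks that contain it. The function below is a stand-in, not the paper's PUF model or the MWCA itself.
```python
import numpy as np
from itertools import product

def walsh_spectrum(f, n):
    """Normalised Walsh-Hadamard spectrum of a Boolean function f: {0,1}^n -> {0,1}."""
    xs = np.array(list(product([0, 1], repeat=n)))
    fx = np.array([f(x) for x in xs])
    spec = {}
    for a in map(np.array, product([0, 1], repeat=n)):
        spec[tuple(a)] = np.mean((-1.0) ** (fx ^ (xs @ a % 2)))
    return spec

def bit_influence(spec, n):
    """Influence of each input bit = squared spectral mass on masks containing that bit (Parseval)."""
    return [sum(v ** 2 for a, v in spec.items() if a[i]) for i in range(n)]

# Toy 4-bit response function standing in for a PUF model (illustrative only)
f = lambda x: (x[0] & x[1]) ^ x[2]
spec = walsh_spectrum(f, 4)
print("per-bit influence:", np.round(bit_influence(spec, 4), 3))   # bit 3 has zero influence
```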
2019, 41(7): 1610-1617.
doi: 10.11999/JEIT180729
Abstract:
The LiCi algorithm is a new lightweight block cipher. Due to the new design ideas adopted by Patil et al., it has the advantages of a compact design, low energy consumption, and a small chip area, so it is especially suitable for resource-constrained environments. Its security has received extensive attention, and Patil et al. claimed that 16-round reduced LiCi can sufficiently resist both differential and linear attacks. In this paper, a new 10-round impossible differential distinguisher is constructed based on the differential characteristics of the S-box and the meet-in-the-middle technique. Moreover, on the basis of this distinguisher, a 16-round impossible differential attack on LiCi is proposed by extending 3 rounds forward and 3 rounds backward via the key scheduling scheme. This attack requires a time complexity of about 2^83.08 16-round encryptions, a data complexity of about 2^59.76 chosen plaintexts, and a memory complexity of 2^76.76 data blocks, which shows that 16-round LiCi cannot resist the impossible differential attack.
2019, 41(7): 1618-1624.
doi: 10.11999/JEIT180735
Abstract:
A sufficient condition for general quadratic polynomial systems to be topologically conjugate with the Tent map is proposed. Based on this condition, the probability density function of a class of quadratic polynomial systems is derived, and a transformation function that can homogenize this class of chaotic systems is further obtained. The performance of both the original system and the homogenized system is evaluated. Numerical simulations show that the information entropy of the uniformly distributed sequences is closer to the theoretical limit while the discrete entropy remains unchanged. In conclusion, with this homogenization method all the chaotic characteristics of the original system are inherited and better uniformity is achieved.
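A classical concrete instance of this construction: the quadratic (logistic) map x→4x(1−x) is conjugate to the Tent map via φ(u)=sin²(πu/2), and the transform y=(2/π)arcsin(√x) homogenizes its invariant density 1/(π√(x(1−x))). The numerical check below verifies both facts; it illustrates the idea rather than the paper's general sufficient condition.
```python
import numpy as np

rng = np.random.default_rng(0)
N = 200000
x = np.empty(N); x[0] = rng.random()
for t in range(N - 1):                 # iterate the quadratic (logistic) map x -> 4x(1-x)
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

y = (2.0 / np.pi) * np.arcsin(np.sqrt(x))   # homogenizing transform: pushes the invariant density to uniform

# The transformed sequence should follow the tent map y -> 1 - |1 - 2y| and be ~uniform on (0,1)
tent_pred = 1.0 - np.abs(1.0 - 2.0 * y[:-1])
print("max conjugacy error:", np.max(np.abs(tent_pred - y[1:])))
hist, _ = np.histogram(y, bins=10, range=(0, 1), density=True)
print("histogram of transformed sequence (close to 1 everywhere):", np.round(hist, 2))
```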
2019, 41(7): 1625-1632.
doi: 10.11999/JEIT180798
Abstract:
To encode the depth maps in 3D video more efficiently, Depth Modeling Modes (DMMs) are introduced in the 3D-High Efficiency Video Coding (3D-HEVC) standard; they increase the coding quality of the original algorithm but also increase the encoding complexity. The traditional DMM-1 encoder circuit architecture has a long coding period and can only meet real-time coding requirements at low resolutions and frame rates. To improve the performance of the DMM-1 encoder, the structure of the DMM-1 algorithm is analyzed and a five-stage pipeline architecture for the DMM-1 encoder is proposed, which reduces the number of coding cycles. The architecture is implemented in Verilog HDL. Experiments show that this architecture reduces the coding cycle count by at least 52.3%, at the cost of 1568 additional gates, compared with the previous work by Sanchez et al. (2017).
2019, 41(7): 1633-1640.
doi: 10.11999/JEIT180793
Abstract:
Ontology, as the superstructure of the knowledge graph, is of great significance in the knowledge graph domain. In general, structural redundancy may arise during ontology evolution. Most existing redundancy elimination algorithms focus on transitive redundancy while ignoring equivalence relations. Focusing on this problem, a redundancy elimination algorithm based on super-node theory is proposed. Firstly, nodes that are equivalent to each other are merged into a super-node to transform the ontology into a directed acyclic graph, so that the redundancies related to transitive relations can be eliminated by existing methods. Then the equivalence relations are restored, and the redundancies between equivalence and transitive relations are eliminated. Experiments on both synthetic dynamic networks and real networks indicate that the proposed algorithm can detect redundant relations precisely, with better performance and stability than the benchmarks.
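A rough sketch of the super-node idea with networkx: mutually equivalent concepts (encoded here as two-way subsumption edges) collapse into one super-node via condensation, and transitive reduction then exposes the transitively redundant edges; restoring the equivalence relations afterwards is omitted. The toy ontology is an assumption for illustration.
```python
import networkx as nx

# Toy ontology: subsumption edges a->b meaning "a subsumes b"; c and d are equivalent (edges both ways)
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "c"),
                ("a", "c"),            # transitively redundant via b
                ("a", "d")])           # redundant once c and d are merged into one super-node

C = nx.condensation(G)                 # equivalent (mutually reachable) nodes collapse into super-nodes -> DAG
R = nx.transitive_reduction(C)         # remove edges implied by transitivity

redundant = [e for e in C.edges if e not in R.edges]
print("super-nodes:", dict(C.nodes(data="members")))
print("redundant super-node edges:", redundant)
```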
2019, 41(7): 1641-1649.
doi: 10.11999/JEIT180792
Abstract:
To address the problems that labeled speech data for the diagnosis of Parkinson's Disease (PD) are scarce and that the training and test data are differently distributed, dimension reduction and sample augmentation are considered together. A novel transfer learning algorithm is proposed based on noise-weighted sparse coding combined with parallel selection of speech samples and features. The algorithm learns structural information from the source domain to express effective PD features, and achieves dimension reduction and sample augmentation simultaneously; by considering the relationship between samples and features, higher quality features can be extracted. Firstly, features are extracted from a public data set and the resulting feature data set is used as the source domain. Then the training and test data of the target domain are sparsely represented based on the source domain, using both traditional Sparse Coding (SC) and Convolutional Sparse Coding (CSC). Next, the sparsely represented data are screened by simultaneous sample and feature selection, so as to improve the accuracy of PD classification. Finally, a Support Vector Machine (SVM) classifier is adopted. Experiments show that the method achieves a highest classification accuracy of 95.0% and an average classification accuracy of 86.0%, an obvious improvement over the related algorithms. Besides, compared with sparse coding, convolutional sparse coding is more beneficial for extracting high-level features from the PD data set; moreover, the effectiveness of transfer learning is demonstrated.
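A stripped-down sketch of the transfer setup with scikit-learn: a dictionary is learned on source-domain features only, the target data are sparse-coded on it, and an SVM is trained on the codes. The synthetic data, the lasso-based encoder, and the omission of the noise weighting and parallel sample/feature selection are all simplifying assumptions.
```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
d, n_src, n_tgt = 20, 300, 80                 # feature dim / sample counts (toy)
X_src = rng.standard_normal((n_src, d))       # source-domain speech features (synthetic stand-in)

# Target domain: two classes separated along a random direction, plus noise (synthetic stand-in)
w = rng.standard_normal(d)
y_tgt = rng.integers(0, 2, n_tgt)
X_tgt = rng.standard_normal((n_tgt, d)) + 1.5 * np.outer(2 * y_tgt - 1, w / np.linalg.norm(w))

# Learn the dictionary from the source domain only, then sparse-code the target data on it
dico = DictionaryLearning(n_components=30, alpha=0.5, max_iter=200, random_state=0).fit(X_src)
codes = sparse_encode(X_tgt, dico.components_, algorithm="lasso_lars", alpha=0.5)

# Train/test split on the sparse codes and classify with an SVM
tr = rng.random(n_tgt) < 0.7
clf = SVC(kernel="linear").fit(codes[tr], y_tgt[tr])
print("target-domain accuracy:", round(accuracy_score(y_tgt[~tr], clf.predict(codes[~tr])), 2))
```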
2019, 41(7): 1650-1657.
doi: 10.11999/JEIT180780
Abstract:
To address the problem that the target is prone to drift in complex backgrounds, a robust tracking algorithm based on a spatial reliability constraint is proposed. Firstly, a pre-trained Convolutional Neural Network (CNN) model is used to extract multi-layer deep features of the target, correlation filters are trained on each layer, and the resulting response maps are fused with weights. Then, the reliability region information of the target is extracted through the high-level feature map, and a binary matrix is obtained. Finally, the binary matrix is used to constrain the search area of the response map, and the maximum response value in that area gives the target position. In addition, to deal with long-term occlusion, a random-selection model update strategy using the first-frame template information is proposed. The experimental results show that the proposed algorithm performs well in scenes with similar background interference, occlusion, and other challenges.
2019, 41(7): 1658-1665.
doi: 10.11999/JEIT180777
Abstract:
As machine learning is widely applied to various domains, its security vulnerabilities are also increasingly highlighted. A Particle Swarm Optimization (PSO) based adversarial example generation algorithm is proposed to reveal the potential security risks of the Support Vector Machine (SVM). Adversarial examples, generated by slightly perturbing legitimate samples, can mislead the SVM classifier into giving wrong classification results. Using the linear separability of the SVM in the high-dimensional feature space, PSO is used to find the salient features, and an averaging method is then used to map them back to the original input space to construct the adversarial example. This method makes full use of the ease of finding salient features of linear models in the feature space and of the interpretability of the original input space. Experimental results show that the proposed method can fool the SVM classifier with adversarial examples generated by small perturbations of less than 7%, thus proving that the SVM has an obvious security vulnerability.
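A toy version of the attack loop: a plain global-best PSO searches for a small perturbation that drives an sklearn SVM's decision function across the boundary for one sample. The fitness function, penalty weight, and data are illustrative and do not reproduce the paper's feature-space salience mapping.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

x0 = X[0]; y0 = clf.predict([x0])[0]            # legitimate sample to perturb

def fitness(delta):
    """Reward pushing the decision function toward the opposite class while keeping the perturbation small."""
    f = clf.decision_function([x0 + delta])[0]
    margin = f if y0 == 0 else -f               # grows as the sample moves toward the other class
    return margin - 0.5 * np.linalg.norm(delta)

# Plain global-best PSO over the perturbation vector
P, D, iters = 30, x0.size, 200
pos = rng.normal(0, 0.1, (P, D)); vel = np.zeros((P, D))
pbest = pos.copy(); pbest_val = np.array([fitness(p) for p in pos])
g = pbest[np.argmax(pbest_val)].copy()
for _ in range(iters):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    g = pbest[np.argmax(pbest_val)].copy()

adv = x0 + g
print("original label:", y0, "-> adversarial label:", clf.predict([adv])[0],
      "relative perturbation:", round(np.linalg.norm(g) / np.linalg.norm(x0), 3))
```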
2019, 41(7): 1666-1673.
doi: 10.11999/JEIT180751
Abstract:
To overcome the shortcomings of the wormhole and white hole selection mechanisms in the Multi-Verse Optimizer (MVO), an Improved Multi-Verse Optimization (IMVO) algorithm is proposed. To strengthen global exploration and speed up iteration, a wormhole existence mechanism with fixed probability is designed, together with a Travel Distance Rate (TDR) whose convergence changes from smooth in the early stage to fast in the later stage. A random white hole selection mechanism is proposed, in which black holes revolve around the selected white hole stars; it is modelled to solve the problem of information communication between inter-generational universes. The performance of IMVO is verified by comparison experiments in low and middle dimensions. Three benchmark test functions that are difficult to optimize are selected for large-scale comparison, and the experimental results show that IMVO has good applicability and robustness, with higher solution accuracy and success rate on large-scale optimization problems.
2019, 41(7): 1674-1681.
doi: 10.11999/JEIT180720
Abstract:
To solve the problem of incomplete semantic structure that occurs when the Abstract Meaning Representation (AMR) graph is used to predict the summary subgraph, a semantic summarization algorithm based on an Integer Linear Programming (ILP) reconstructed AMR graph structure is proposed. Firstly, the text data are preprocessed to generate an overall AMR graph. Then the important node information of the summary subgraph is extracted from the overall AMR graph based on statistical features. Finally, the ILP method is applied to reconstruct the node relationships in the summary subgraph, which is then used to generate a semantic summary. The experimental results show that, compared with other semantic summarization methods, the ROUGE and Smatch indices of the proposed algorithm are significantly improved, by up to 9% and 14% respectively. The method significantly improves the quality of semantic summarization.
2019, 41(7): 1682-1689.
doi: 10.11999/JEIT180796
Abstract:
For the passive detection of an underwater line-spectrum target, information such as the azimuth, frequency, and number of the line-spectrum signals is usually unknown, and line-spectrum detection performance is affected by broadband interference and noise. For this issue, a Space-Time Joint Detection (STJD) method that detects the unknown line-spectrum target by space-time domain processing is proposed. Firstly, a space-time filter that autonomously matches the unknown line-spectrum signals is constructed to filter out broadband interference and noise. Secondly, conventional frequency-domain beamforming is performed on the filtered signals, yielding a space-time two-dimensional beam output with relatively pure line-spectrum peaks. The line-spectrum signals are extracted from this beam output, the spatial spectrum is calculated from the extracted line-spectrum information, and the detection of the line-spectrum target is thereby realized. Theoretical derivation and simulation results verify that the proposed method performs space-time filtering of the unknown line-spectrum signals in the minimum mean square error sense and fully utilizes the line-spectrum information for passive detection of the underwater line-spectrum target. Compared with existing line-spectrum target detection methods that utilize line-spectrum features, the proposed method requires a lower Signal-to-Noise Ratio (SNR) and has better detection performance under complex multi-target, multi-line-spectrum conditions.
2019, 41(7): 1690-1697.
doi: 10.11999/JEIT180723
Abstract:
To realize underdetermined wideband Direction Of Arrival (DOA) estimation with a sparse array, an algorithm based on Distributed Compressive Sensing (DCS) is proposed. Firstly, the wideband signal processing model based on the sparse array is derived and the underdetermined wideband DOA estimation is formulated as a DCS problem. Then, the DCS-Simultaneous Orthogonal Matching Pursuit (DCS-SOMP) algorithm is utilized to solve this problem. Finally, the off-grid problem is considered and a joint DCS model containing off-grid parameters is established; estimates of the DOAs and the off-grid parameters are obtained through an iterative solution. Simulation results show that the proposed algorithm is effective and has advantages in resolution and computational complexity.
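The SOMP step can be sketched as plain Simultaneous OMP on a multiple-measurement-vector model, where the columns share one support in the way the DOAs are shared across frequency bins; the random dictionary and the omission of the off-grid refinement are simplifying assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
m, n, L, k = 20, 60, 8, 3                         # measurements, grid size, snapshots/bins, sparsity (toy)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
support_true = rng.choice(n, k, replace=False)
S = np.zeros((n, L), complex)
S[support_true] = rng.standard_normal((k, L)) + 1j * rng.standard_normal((k, L))
Y = A @ S + 0.01 * (rng.standard_normal((m, L)) + 1j * rng.standard_normal((m, L)))

def somp(Y, A, k):
    """Simultaneous OMP: pick the atom with the largest summed correlation across all snapshots."""
    support, R = [], Y.copy()
    for _ in range(k):
        corr = np.sum(np.abs(A.conj().T @ R), axis=1)
        corr[support] = 0                          # do not reselect atoms
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        X = np.linalg.lstsq(As, Y, rcond=None)[0]  # joint least squares on the current support
        R = Y - As @ X
    return sorted(support)

print("true support: ", sorted(support_true.tolist()))
print("SOMP estimate:", somp(Y, A, k))
```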
2019, 41(7): 1698-1704.
doi: 10.11999/JEIT180719
Abstract:
A grating lobe suppression method for the wideband true time delay pattern based on the Particle Swarm Optimization (PSO) algorithm is proposed to solve the problem of grating lobes arising when the inter-element spacing is larger than the wavelength. Firstly, the array energy pattern based on wideband true time delay is defined. Then, a fitness function is constructed from the maximum sidelobe level of the array energy pattern. Finally, the grating lobes are further suppressed by optimizing the element position distribution using the PSO algorithm. The simulation results show that the proposed grating lobe suppression method is more effective than using the particle swarm optimization method or the wideband true time delay method individually. Furthermore, the influences of the element spacing, the number of elements, the time width, and the center frequency of the signal on grating lobe suppression performance are studied.
2019, 41(7): 1705-1711.
doi: 10.11999/JEIT180332
Abstract:
Since the performance of adaptive beamforming algorithms for coherent signals degrades when the estimation error in the Direction Of Arrival (DOA) of the desired signal is large, a new multistage-blocking-based beamforming algorithm for coherent interference suppression is proposed. Firstly, the blocking matrix is constructed and the principle of multistage blocking is derived, with which the received signal is processed to remove the desired signal component. Then the mapping between the array manifold of the sub-aperture array and that of the full-aperture array is analyzed for the case where only the desired signal exists in the space. On this basis, the extension transformation is derived and its effectiveness in the presence of interference signals is proved. At last, the optimal weight vector of the adaptive beamformer for coherent interference is obtained by the extension transformation. Requiring no prior information about the DOAs of the interference signals, the new method is robust to DOA estimation errors and avoids the loss of array aperture. The effectiveness and superiority of the new algorithm are verified by simulation analysis.
2019, 41(7): 1712-1720.
doi: 10.11999/JEIT180851
Abstract:
Benefiting from the rapid development of digital frequency storage technology, Intermittent Sampling Repeater Jamming (ISRJ) is widely used, and existing radar anti-jamming measures can hardly counter this jamming effectively. Based on an analysis of the principles of ISRJ and exploiting its discontinuity in the time domain, an anti-ISRJ method based on LFM segmented pulse compression is proposed. The method utilizes the orthogonality between LFM segmented signals, combines it with the cover waveform concept, distinguishes jamming from targets through a narrowband filter bank, suppresses the jamming, and finally accumulates the signals within and across pulses. Theoretical analysis and experimental results show that the anti-ISRJ method can effectively resist intermittent sampling jamming combined with different styles of multiple jammers.
2019, 41(7): 1721-1727.
doi: 10.11999/JEIT180766
Abstract:
A joint real-valued beamspace-based method for angle estimation in bistatic Multiple-Input Multiple-Output (MIMO) radar is proposed. Instead of using the traditional Discrete Fourier Transform (DFT) beamspace filter, the proposed beamspace filter is designed through convex optimization, which can flexibly control the bandwidth and limit the sidelobe level. Thanks to this property, the mainlobe-to-sidelobe ratio of the proposed beamspace filter can be greatly improved, which results in improved estimation performance. More importantly, the structure of the proposed beamspace matrix can be properly designed, which is indispensable for constructing the real-valued signal model. Finally, the mapping relationship that compensates the interpolation error is established. Simulation results verify the effectiveness of the proposed method.
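A minimal example of designing one beamspace weight vector by convex optimization with cvxpy: unit response at the look direction and minimax sidelobe level elsewhere, for an assumed half-wavelength uniform linear array. It only illustrates the design principle; the paper's full beamspace matrix structure and real-valued transformation are not reproduced.
```python
import numpy as np
import cvxpy as cp

M = 12                                              # array elements (toy ULA, half-wavelength spacing)
def steer(theta_deg):
    t = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(t))

look = 0.0                                          # beam center (deg)
sidelobe_grid = np.concatenate([np.arange(-90, -12, 1), np.arange(12, 91, 1)])  # outside the mainlobe
A_sl = np.array([steer(t) for t in sidelobe_grid])  # rows: sidelobe steering vectors

w = cp.Variable(M, complex=True)
objective = cp.Minimize(cp.max(cp.abs(A_sl @ w)))   # minimax sidelobe level
constraints = [steer(look).conj() @ w == 1]         # distortionless response at the look direction
cp.Problem(objective, constraints).solve()

print("peak sidelobe level (dB):", round(20 * np.log10(np.max(np.abs(A_sl @ w.value))), 1))
```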
2019, 41(7): 1728-1734.
doi: 10.11999/JEIT180758
Abstract:
This paper presents a method of low-altitude wind-shear wind speed estimation based on Generalized adjacent Multi-Beam (GMB) adaptive processing under aircraft yawing. A clutter range-dependence compensation method based on the echo data is first used to correct the range dependence of the clutter so that the clutter covariance matrix can be estimated. Then the dimension-reducing transform matrix is obtained by combining adjacent beams in the spatial domain with adjacent Doppler channels in the time domain, the radar echo data of the range bin under test are reduced in dimension, and the optimal weight vector of the GMB adaptive processor is constructed to filter the dimension-reduced data adaptively. Finally, an accurate estimate of the wind speed under aircraft yawing is obtained. The simulation results show that the proposed method can effectively estimate the wind speed under aircraft yawing.
2019, 41(7): 1735-1742.
doi: 10.11999/JEIT180747
Abstract:
Since the imaging quality of sparse ISAR imaging methods is limited by inaccurate sparse representation of the scene to be imaged, the Dictionary Learning (DL) technique is introduced into ISAR sparse imaging to obtain a better sparse representation of the scene. An off-line DL based imaging method and an on-line DL based imaging method are proposed. The off-line method obtains a better sparse representation through a dictionary learned from previously available ISAR images, while the on-line method obtains the sparse representation from the data currently being processed by jointly optimizing the imaging and DL processes. Results on both simulated and measured ISAR data show that both methods sparsely represent the target scene better and therefore yield better imaging results, and that the off-line DL based imaging method outperforms the on-line one in both imaging quality and computational efficiency.
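The off-line step can be illustrated with standard patch-based dictionary learning; the sketch below uses scikit-learn on placeholder images standing in for archived ISAR magnitude images, and the patch size, dictionary size, and sparsity weight are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Placeholder "previously available ISAR images" (magnitude only)
rng = np.random.default_rng(0)
training_images = [np.abs(rng.standard_normal((128, 128))) for _ in range(4)]

# Collect and center 8x8 patches from the training images
patches = np.vstack([extract_patches_2d(img, (8, 8), max_patches=500, random_state=0).reshape(-1, 64)
                     for img in training_images])
patches -= patches.mean(axis=1, keepdims=True)

# Learn an off-line dictionary, then sparse-code new patches with it
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, batch_size=64, random_state=0)
dico.fit(patches)
D = dico.components_                      # learned dictionary (rows are atoms)
codes = dico.transform(patches[:10])      # sparse codes for new patches
```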
2019, 41(7): 1743-1750.
doi: 10.11999/JEIT180707
Abstract:
The detection performance of skywave Over-The-Horizon Radar (OTHR) for ship targets is seriously affected by sea clutter, so accurate and adaptive sea-clutter suppression is important for improving ship-target detection. To overcome the lack of adaptivity of the sea-clutter suppression algorithm based on High-Order Singular Value Decomposition (HOSVD), a modified adaptive algorithm based on Peak Signal-to-Noise Ratio (PSNR)-HOSVD is proposed by introducing the PSNR. The modified algorithm has lower computational complexity than the HOSVD-based one, since only one projection matrix is built from the left singular vectors of the third-mode unfolding matrix, and it performs better because the sea-clutter components are aggregated only in the column space of that unfolding matrix. Experimental results on two sets of measured data, collected under ideal and non-ideal conditions respectively, show that the modified adaptive PSNR-HOSVD algorithm outperforms the peer algorithms.
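A minimal sketch of the subspace-projection step is given below, with assumed tensor sizes and clutter rank: the mode-3 unfolding is decomposed by SVD and the data are projected onto the orthogonal complement of the dominant clutter subspace (the PSNR-based adaptive choice of the clutter rank is omitted).

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K = 16, 32, 64                                   # assumed tensor sizes
X = rng.standard_normal((I, J, K)) + 1j*rng.standard_normal((I, J, K))   # placeholder OTHR data tensor

X3 = X.reshape(I*J, K).T                               # mode-3 unfolding: rows indexed by the third mode
U, s, Vh = np.linalg.svd(X3, full_matrices=False)      # left singular vectors span the mode-3 column space
r = 2                                                  # assumed number of dominant sea-clutter components
P = np.eye(K) - U[:, :r] @ U[:, :r].conj().T           # projector rejecting the clutter subspace
X3_clean = P @ X3                                      # clutter-suppressed unfolding
X_clean = X3_clean.T.reshape(I, J, K)                  # fold back to the original tensor shape
```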
2019, 41(7): 1751-1757.
doi: 10.11999/JEIT180520
Abstract:
Appropriate structural modeling of a warhead is the basis for warhead parameter estimation. In this paper, the warhead is modeled as a blunt-nosed chamfered cone, in which the spherical-nose and chamfer scattering centers are treated as sliding scattering centers and the influence of the cone's lateral surface is taken into account; the general form of the scattering-center positions is given based on this model. The micro-motion of the scattering centers in the blunt-nosed chamfered cone model is then derived. On this basis, a nonlinear optimization method is proposed to estimate the target's motion parameters and structural parameters. Finally, simulation results verify the correctness of the model and the effectiveness of the parameter estimation method.
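The final estimation step amounts to nonlinear least-squares fitting of a micro-motion model to observed scattering-center range histories; the single-sinusoid model below is only a stand-in for the paper's blunt-nosed chamfered cone model, and all parameter names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthesize a placeholder range history of one sliding scattering center:
# r(t) = r0 + a * cos(2*pi*f*t), with (f, a, r0) the parameters to recover.
t = np.linspace(0.0, 1.0, 200)
true_f, true_a, true_r0 = 2.0, 0.3, 5.0
obs = true_r0 + true_a * np.cos(2*np.pi*true_f*t) + 0.01*np.random.randn(t.size)

def residual(p):
    f, a, r0 = p
    return r0 + a*np.cos(2*np.pi*f*t) - obs          # model minus observation

fit = least_squares(residual, x0=[1.5, 0.2, 4.0])    # nonlinear optimization of the parameters
print("estimated (f, a, r0):", fit.x)
```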
2019, 41(7): 1758-1765.
doi: 10.11999/JEIT181061
Abstract:
With the development of earth remote sensing technology, SAR systems are required to achieve high resolution and wide swath simultaneously, and spaceborne array SAR combined with Digital Beam Forming (DBF) provides a good solution to this problem. However, phase errors between channels degrade the quality of DBF, and traditional compensation methods suffer from large errors or limited applicability. In this paper, a compensation method based on the antenna pattern and the Doppler correlation coefficient is proposed: the two sources of information are combined into a single cost function, and the inter-channel phase errors are estimated by minimizing it. Simulation results using RADARSAT data validate the effectiveness of the proposed method.
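The idea of estimating inter-channel phase errors by minimizing a correlation-based cost can be sketched as follows; the cost function, reference-channel choice, and data are assumptions and do not reproduce the paper's combined antenna-pattern/Doppler criterion.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_ch, n_samp = 4, 1024
base = rng.standard_normal(n_samp) + 1j*rng.standard_normal(n_samp)     # common signal seen by all channels
true_phi = np.array([0.0, 0.4, -0.7, 1.1])                              # inter-channel phase errors to recover
data = np.stack([base * np.exp(1j*p) for p in true_phi]) + 0.05*rng.standard_normal((n_ch, n_samp))

def cost(phi):
    phi = np.concatenate(([0.0], phi))                 # first channel is the phase reference
    corrected = data * np.exp(-1j*phi)[:, None]
    c = 0.0
    for k in range(n_ch - 1):                          # penalize residual misalignment between adjacent channels
        num = np.vdot(corrected[k], corrected[k+1])
        den = np.linalg.norm(corrected[k]) * np.linalg.norm(corrected[k+1])
        c += 1.0 - np.real(num) / den
    return c

res = minimize(cost, x0=np.zeros(n_ch - 1), method="Nelder-Mead")
print("estimated phase errors (rad):", res.x)          # should be close to [0.4, -0.7, 1.1]
```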
2019, 41(7): 1766-1773.
doi: 10.11999/JEIT181171
Abstract:
To address the divergence of velocity and position errors in an integrated navigation system that combines a MicroElectroMechanical Systems (MEMS) inertial device with GPS, an improved Adaptive Unscented Kalman Filter (AUKF) enhanced by a Radial Basis Function (RBF) neural network trained with the Artificial Bee Colony (ABC) algorithm is proposed. When the GPS signal is out of lock, the trained network outputs predicted information to correct the errors of the Strapdown Inertial Navigation System (SINS). The performance of the method is verified by vehicle-mounted semi-physical simulation experiments, whose results show that the proposed method significantly suppresses the error divergence of the strapdown inertial navigation system during GPS outages.
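A toy sketch of the RBF-aiding idea only: while GPS is valid, a map from SINS outputs to the filter's correction is learned; during an outage, the RBF prediction is used instead. The AUKF itself and the ABC optimization of the RBF centers and widths are omitted, and all features and targets are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (500, 3))                     # placeholder SINS-derived features (assumed)
y = np.sin(X[:, 0]) + 0.5*X[:, 1]**2                 # placeholder "correction" target learned while GPS is valid

centers = X[rng.choice(len(X), 20, replace=False)]   # RBF centers (in the paper, ABC would tune these)
sigma = 0.5                                          # RBF width (assumed)

def rbf_features(Z):
    d2 = ((Z[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2*sigma**2))

W, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)   # output-layer weights by least squares

X_outage = rng.uniform(-1, 1, (5, 3))                # features observed while GPS is out of lock
print("predicted corrections during GPS outage:", rbf_features(X_outage) @ W)
```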
2019, 41(7): 1774-1778.
doi: 10.11999/JEIT180761
Abstract:
The Discriminant Neighborhood Embedding (DNE) algorithm is introduced into the speaker recognition system. DNE is a manifold learning approach that preserves the local neighborhood structure on the data manifold while gaining additional discriminative power by fully exploiting between-class discriminant information. Experimental results on the telephone-telephone core condition of the NIST 2010 Speaker Recognition Evaluation (SRE) dataset demonstrate the effectiveness of the DNE algorithm.
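A didactic sketch of the signed-neighborhood construction commonly described for DNE is given below; the neighbor rule, sign convention, and eigenvalue selection follow general accounts of the algorithm and may differ in detail from the paper, and the speaker features are placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 20))                 # placeholder speaker features (rows are samples)
labels = rng.integers(0, 5, 100)
k, d_out = 5, 8                                    # neighborhood size and output dimension (assumed)

# Signed adjacency: +1 for same-class neighbors, -1 for different-class neighbors
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)
F = np.zeros((len(X), len(X)))
for i, neigh in enumerate(idx[:, 1:]):             # skip the point itself
    for j in neigh:
        F[i, j] = F[j, i] = 1.0 if labels[i] == labels[j] else -1.0

S = np.diag(F.sum(axis=1))
M = X.T @ (S - F) @ X                              # symmetric matrix defining the embedding
eigval, eigvec = np.linalg.eigh(M)
P = eigvec[:, :d_out]                              # directions with the smallest eigenvalues
X_emb = X @ P                                      # low-dimensional discriminative embedding
```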