2018 Vol. 40, No. 10
2018, 40(10): 2287-2293.
doi: 10.11999/JEIT180043
Abstract:
The uplink resource allocation problem in Device-to-Device (D2D) communications underlaying LTE-A networks is analyzed. First, the problem is modeled as a Mixed Integer NonLinear Programming (MINLP) problem. The algorithm then computes each waiting user's preference list over channels to form coalitions. On the premise of guaranteeing the Quality of Service (QoS) of users in the system, a suitable resource and reuse partner are assigned to each user through Maximum Weighted Bipartite Matching (MWBM). Simulation results show that the algorithm removes the constraint that D2D pairs must stay in either dedicated or reuse mode during data transmission, expands the range of resources available to D2D users, and effectively increases the system sum-rate compared with existing algorithms.
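As an illustration of the MWBM assignment step described above, the following is a minimal sketch using SciPy's Hungarian-algorithm solver; the rate matrix and the QoS-infeasibility mask are random stand-ins, not the paper's model.

```python
# Hypothetical sketch: assign D2D pairs to channels via maximum weighted
# bipartite matching (the MWBM step). Rates are random stand-in data.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_d2d, n_channels = 4, 6
# weight[i, j]: hypothetical sum-rate if D2D pair i reuses channel j,
# set to -inf where the reuse would violate a QoS (SINR) constraint.
weight = rng.uniform(1.0, 5.0, size=(n_d2d, n_channels))
weight[rng.random(weight.shape) < 0.2] = -np.inf  # QoS-infeasible pairs

# linear_sum_assignment minimizes cost, so negate the weights; replace
# -inf by a large finite cost so the solver never picks infeasible pairs.
cost = np.where(np.isfinite(weight), -weight, 1e9)
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    if np.isfinite(weight[i, j]):
        print(f"D2D pair {i} -> channel {j}, rate {weight[i, j]:.2f}")
```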
2018, 40(10): 2294-2300.
doi: 10.11999/JEIT180088
Abstract:
The achievable rate of the massive MIMO-OFDM system is investigated, where each antenna is equipped with low-resolution Analog-to-Digital Converters (ADCs) and a Maximum Ratio Combining (MRC) receiver is employed. A closed-form expression for the uplink achievable rate is first derived using the Additive Quantization Noise Model (AQNM), which recasts the nonlinear quantization function as a linear one. Based on this expression, the low-resolution quantization system is compared with a conventional system using infinite-resolution ADCs. Simulation results verify the analytical results. In addition, it is shown that the performance loss caused by low-resolution ADCs can be compensated for by deploying more antennas at the base station.
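A minimal Monte Carlo illustration of the AQNM idea follows: the quantizer output is decomposed as y = αx + q with α the LMMSE coefficient and q uncorrelated with x. The uniform quantizer, clip range, and bit depth are illustrative choices, not the paper's setup.

```python
# Minimal Monte Carlo illustration of AQNM: a nonlinear b-bit quantizer
# Q(x) is replaced by the linear model y = alpha*x + q, where alpha is
# the LMMSE gain and q is (sample-)uncorrelated with x by construction.
import numpy as np

rng = np.random.default_rng(1)
b = 3                               # ADC resolution in bits (assumed)
x = rng.standard_normal(200_000)    # unit-variance input signal
step = 2 * 3.0 / 2**b               # clip range [-3, 3], hypothetical
y = np.clip(np.round(x / step) * step, -3.0, 3.0)

alpha = np.mean(x * y) / np.mean(x * x)   # LMMSE linearization gain
q = y - alpha * x                          # additive quantization noise
print(f"alpha = {alpha:.4f}")
print(f"corr(x, q) = {np.corrcoef(x, q)[0, 1]:+.4f}  (approximately 0)")
print(f"quantization noise power = {np.var(q):.4f}")
```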
2018, 40(10): 2301-2308.
doi: 10.11999/JEIT170929
Abstract:
To describe the near-field effect and the non-stationary characteristics of the massive MIMO channel, a non-stationary 3D spatial channel model based on stochastic scattering clusters is proposed for massive MIMO systems. A parabolic wavefront, instead of a spherical one, is used to model the near-field effect, and the channel capacity of the model is analyzed under the parabolic wavefront condition. For the non-stationary properties of the massive MIMO channel, the set of scattering clusters effective for the transmitting and receiving antenna elements is determined from the effective probability of the scattering clusters, and the stochastic evolution of scattering clusters along the antenna array axis is modeled to properly describe their appearance and disappearance. Simulation results demonstrate that the parabolic wavefront and the stochastic evolution of effective scattering clusters are good candidates for modeling massive MIMO channel characteristics.
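The sketch below illustrates the parabolic (second-order Fresnel) wavefront approximation for a uniform linear array against the exact spherical model; the carrier, aperture, and source geometry are hypothetical values chosen only to show the size of the residual phase error.

```python
# Illustrative near-field steering phases for a uniform linear array:
# exact spherical wavefront vs the second-order (parabolic) Fresnel
# approximation r_n ~= r - d_n*sin(theta) + d_n^2*cos(theta)^2/(2r).
import numpy as np

c, fc = 3e8, 3.5e9                    # speed of light, carrier (assumed)
lam = c / fc
M, d = 64, lam / 2                    # 64-element ULA, half-wavelength
dn = (np.arange(M) - (M - 1) / 2) * d # element positions on the axis
r, theta = 20.0, np.deg2rad(30)       # source range/angle (assumed)

r_exact = np.sqrt(r**2 + dn**2 - 2 * r * dn * np.sin(theta))
r_parab = r - dn * np.sin(theta) + dn**2 * np.cos(theta)**2 / (2 * r)

k = 2 * np.pi / lam
phase_err = k * (r_exact - r_parab)   # residual of the approximation
print(f"max phase error vs exact spherical: {np.abs(phase_err).max():.3e} rad")
```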
2018, 40(10): 2309-2315.
doi: 10.11999/JEIT171073
Abstract:
Sparse Code Multiple Access (SCMA), which relies on the Message Passing Algorithm (MPA) for multiuser detection, is a Non-Orthogonal Multiple Access (NOMA) scheme proposed to meet the demands of future 5G communication. To address the high computational complexity of MPA, statistical results for the Probability Density Function (PDF) of the received signal at various Signal-to-Noise Ratios (SNRs) are first derived. Then, exploiting the non-orthogonality of SCMA, the data mapping between resource nodes and user nodes is fully considered, and a Partial Codeword Searching MPA (PCS-MPA) with a PDF-based threshold decision scheme is proposed. Simulation results show that PCS-MPA reduces complexity without changing the Bit Error Ratio (BER), especially at high SNR.
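A hedged sketch of the pruning principle only (not a full MPA): at one resource node, candidate superimposed codewords are scored by their Gaussian likelihood, and candidates below a PDF threshold are discarded before message passing. The toy two-user QPSK superposition and the threshold value are illustrative assumptions.

```python
# Hedged sketch of PDF-threshold codeword pruning at one resource node;
# the surviving candidates are what a subsequent MPA would search.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
noise_var = 0.1
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
# toy superimposed candidates (two users overlapping on this resource)
candidates = np.array([a + b for a, b in product(qpsk, qpsk)])

y = candidates[5] + np.sqrt(noise_var / 2) * (
    rng.standard_normal() + 1j * rng.standard_normal())

pdf = np.exp(-np.abs(y - candidates)**2 / noise_var) / (np.pi * noise_var)
survivors = np.flatnonzero(pdf >= 0.05 * pdf.max())  # illustrative threshold
print(f"kept {survivors.size} of {candidates.size} candidates:", survivors)
```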
2018, 40(10): 2316-2322.
doi: 10.11999/JEIT171222
Abstract:
Considering the time- and data-synchronization requirements of multi-BS (Base Station) positioning in current outdoor cellular networks, and the difficulty of detecting signals in areas without a serving BS under Non-Line-Of-Sight (NLOS) conditions, a single-base-station localization algorithm is proposed based on a B-LM (Broyden-Fletcher-Goldfarb-Shanno-Levenberg-Marquardt) ring-of-scattering model that exploits NLOS information. First, the localization objective equation is constructed from the geometric positions of the scatterers, the target, and the base station, together with the NLOS multipath information. Then, the localization equation is transformed into a least-squares optimization problem. Finally, the B-LM algorithm is proposed, which combines the Hessian-matrix modification of the LM algorithm with the quasi-Newton construction of second-order derivative information; it ensures that the localization algorithm converges to the optimal solution and yields the target's location. Simulation results show that the proposed single-base-station localization algorithm achieves high positioning accuracy in NLOS macrocell environments.
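The following is a minimal sketch of the B-LM idea under stated assumptions: a Levenberg-Marquardt damped step applied to a BFGS-updated Hessian approximation, so only first derivatives are needed. A simple quadratic stands in for the paper's geometric least-squares objective.

```python
# Hedged B-LM sketch: LM damping on a BFGS Hessian approximation.
# The quadratic test objective is a stand-in for the localization equations.
import numpy as np

def f(x):            # stand-in objective (sum of squared residuals)
    return 0.5 * ((x[0] - 1)**2 + 10 * (x[1] + 2)**2)

def grad(x):
    return np.array([x[0] - 1, 10 * (x[1] + 2)])

x, B, lam = np.zeros(2), np.eye(2), 1e-2   # start, Hessian approx, damping
for _ in range(50):
    g = grad(x)
    step = np.linalg.solve(B + lam * np.eye(2), -g)   # damped Newton step
    x_new = x + step
    if f(x_new) < f(x):                    # accept: relax damping
        s, ydiff = step, grad(x_new) - g
        if s @ ydiff > 1e-12:              # BFGS update keeps B pos. definite
            B = (B - np.outer(B @ s, B @ s) / (s @ B @ s)
                 + np.outer(ydiff, ydiff) / (s @ ydiff))
        x, lam = x_new, lam * 0.5
    else:                                  # reject: increase damping
        lam *= 4.0
print("estimate:", x, "objective:", f(x))
```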
2018, 40(10): 2323-2330.
doi: 10.11999/JEIT180003
Abstract:
To ensure secure transmission in dense heterogeneous cellular networks with imperfect Channel State Information (CSI), the influence of Artificial Noise (AN) on secure and reliable communication is analyzed, and a power-split-factor optimization model is presented to obtain the optimal value under different channel estimation accuracies. First, the connection outage probability and the secrecy outage probability are derived by considering the influence of channel estimation error on signal transmission and AN leakage. Then, a power-split-factor optimization model is formulated that maximizes the secrecy throughput subject to security and reliability requirements, and a K-dimensional search method is employed to solve for the optimal power split factor of each tier. Finally, numerical results verify that the AN transmission scheme with the optimal power split factor can increase secrecy throughput by about 15%.
2018, 40(10): 2331-2336.
doi: 10.11999/JEIT180023
Abstract:
To address the noticeable degradation in bit error rate when Code Index Modulation (CIM) is used to improve spectrum utilization, a novel Non-orthogonal Code Index Modulation (N-CIM) scheme is proposed. The transmitter bit stream is divided into a Pseudo-Noise (PN) code mapping block and a modulation information block, which are mapped to the PN-code index and the modulation symbol, respectively. The real and imaginary parts of the modulation symbol are spread with the same active PN code. Simulation and analysis results show that, at the same spectral efficiency, N-CIM outperforms CIM by about 2~3 dB in the additive white Gaussian noise channel at a bit error rate of $10^{-5}$, and by about 2 dB in the Rayleigh fading channel at a bit error rate of $10^{-2}$.
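A hedged transmitter-side sketch of this mapping follows, assuming a QPSK modulation block and random bipolar sequences standing in for the paper's PN code set; block sizes are illustrative.

```python
# Hedged N-CIM transmitter sketch: split bits into a PN-index block and
# a modulation block; the selected PN code spreads both I and Q parts.
import numpy as np

rng = np.random.default_rng(3)
n_codes, spread_len = 4, 16
pn_codes = rng.choice([-1.0, 1.0], size=(n_codes, spread_len))  # stand-in set

bits = np.array([1, 0, 1, 1])              # one block: 2 index + 2 data bits
idx = bits[0] * 2 + bits[1]                # PN-code index from mapping block
symbol = ((1 - 2 * bits[2]) + 1j * (1 - 2 * bits[3])) / np.sqrt(2)  # QPSK

code = pn_codes[idx]                       # same active PN code for I and Q
tx = symbol.real * code + 1j * symbol.imag * code
print(f"index {idx}, symbol {symbol:.3f}, first chips {tx[:4]}")
```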
2018, 40(10): 2337-2344.
doi: 10.11999/JEIT171232
Abstract:
In conventional cooperative spectrum sensing, the signal model is usually simplified to a single-stage channel environment in which the Secondary Users (SUs) collect their spectrum data and report to the Fusion Center (FC) with the same transmit power. This prevents the FC from efficiently exploiting the spatial diversity gain underlying the data of different users. To solve this problem and control the users' transmit power when reporting their data, three Optimal Power Control (OPC) schemes are proposed. When the Channel Statistics (CS) of the sensing and reporting channels are perfectly known at the FC, a CS-Aided OPC (CSA-OPC) scheme is derived in closed form; when the CS is unavailable in practice, Principal EigenVector aided OPC (PEV-OPC) and Blindly Weighted Multiple-EigenVector aided OPC (BWMEV-OPC) schemes are developed. Theoretical analysis and computer simulation verify that the proposed OPC schemes greatly improve spectrum sensing performance compared with cooperative spectrum sensing schemes without OPC.
2018, 40(10): 2345-2351.
doi: 10.11999/JEIT171208
Abstract:
To address the high bandwidth blocking probability and the unbalanced resource consumption of the physical network during virtual optical network mapping, a Fragmentation-Aware Virtual Network Mapping (FA-VNM) algorithm based on the time and spectrum domains is proposed. The FA-VNM algorithm considers fragmentation in both the time domain and the spectrum domain, and devises a fragmentation formula that jointly accounts for time and spectrum fragments so as to minimize spectrum fragmentation. Further, to balance network resource consumption, a Load-Balancing Virtual Network Mapping (LB-VNM) algorithm is proposed on the basis of FA-VNM. In the node-mapping stage, the average resource carrying capacity of each physical node is introduced, and nodes with larger average carrying capacity are mapped first. In the link-mapping stage, a weight is computed for each physical path to balance resource consumption across paths, and virtual links are then mapped according to these weights to achieve load balancing and reduce the blocking rate. Simulation results show that the proposed algorithms effectively reduce the blocking rate and improve resource utilization.
2018, 40(10): 2352-2357.
doi: 10.11999/JEIT170965
Abstract:
Plateaued functions play a significant role in cryptography, coding theory, and related areas. In this paper, a new primary construction of plateaued functions is given, and some cryptographic properties of the constructed functions are studied. It is shown that the existing primary constructions of plateaued functions can be reduced to the proposed construction.
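For reference, plateaued functions are characterized by their Walsh spectrum (this is the standard definition, not the paper's construction): a Boolean function $f:\mathbb{F}_2^n\to\mathbb{F}_2$ is called $s$-plateaued ($0\le s\le n$, with $n+s$ even) if its Walsh transform $W_f(a)=\sum_{x\in\mathbb{F}_2^n}(-1)^{f(x)\oplus a\cdot x}$ takes only the values $0$ and $\pm 2^{(n+s)/2}$ for all $a\in\mathbb{F}_2^n$; bent functions are the $0$-plateaued case.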
2018, 40(10): 2358-2364.
doi: 10.11999/JEIT171207
Abstract:
The detection and localization of outlier nodes in Wireless Sensor Networks (WSNs) is a crucial step in ensuring the accuracy and reliability of network data acquisition. Based on the theory of graph signal processing, a novel algorithm for outlier detection and localization in WSNs is presented. The algorithm first builds a graph signal model of the network and then locates outliers through joint vertex-domain and graph-frequency-domain analysis. Specifically, the first step extracts the high-frequency component of the signal using a high-pass graph filter. In the second step, the network is decomposed into a set of sub-graphs, and specific frequency components of the output signal in each sub-graph are filtered out. The third step locates the suspected outlier center nodes of the sub-graphs by thresholding the filtered sub-graph signals. Finally, the outlier nodes are detected and located by comparing the node set of each sub-graph with the set of suspected outlier nodes. Experimental results show that, compared with existing network outlier detection methods, the proposed method achieves both a higher detection probability and a higher localization rate for outlier nodes.
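A minimal sketch of the high-pass filtering step follows, on a synthetic random geometric graph with one injected outlier; applying the combinatorial Laplacian acts as a simple high-pass graph filter. Note that the outlier's neighbors may also respond, which is why the paper refines localization with sub-graphs.

```python
# Minimal sketch: high-pass filter a sensor field with the graph
# Laplacian and flag nodes with large residual. Graph and data synthetic.
import numpy as np

rng = np.random.default_rng(4)
n = 40
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
A = ((dist < 0.3) & (dist > 0)).astype(float)   # radius-graph adjacency
L = np.diag(A.sum(1)) - A                        # combinatorial Laplacian

x = pos @ np.array([1.0, 2.0])                   # smooth field over nodes
x[7] += 5.0                                      # inject one outlier reading

residual = np.abs(L @ x)                         # L*x: small for smooth signals
flagged = np.flatnonzero(residual > 3 * np.median(residual))
print("flagged nodes:", flagged)
```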
2018, 40(10): 2365-2372.
doi: 10.11999/JEIT180016
Abstract:
The temporal behavior of Internet users is a hot topic in user behavior analysis, and clustering users is a common way to discover behavioral features. Existing research on clustering user time series suffers from poor computational performance or inaccurate distance metrics and cannot handle large-scale data. To solve this problem, a method for clustering user-behavior time series based on the symmetric Kullback-Leibler (KL) distance is proposed. Time series data are first transformed into probability models, and the symmetric KL distance is then introduced as the metric in a partition-based clustering method to capture the different time distributions of different users. Given the large scale of real network data, each step of the clustering is optimized using the properties of the KL distance, and an efficient solution for finding the cluster centroids is proved. Experimental results show that the method improves accuracy by 4% compared with clustering using the Euclidean or DTW metric, and that its computation time is an order of magnitude lower than that of medoid-based clustering. The method is applied to user traffic data collected from a real network, demonstrating its practical value.
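A minimal helper for the symmetric KL distance between two users' discretized activity distributions is shown below; the hourly binning and the smoothing constant are illustrative choices.

```python
# Symmetric KL distance D(p,q) = KL(p||q) + KL(q||p) between two
# normalized activity histograms; eps avoids log(0) on empty bins.
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Two hypothetical 24-bin (hourly) activity profiles:
day_user = np.r_[np.zeros(8), np.ones(12), np.zeros(4)]
night_user = np.r_[np.ones(6), np.zeros(14), np.ones(4)]
print(f"D_sKL = {symmetric_kl(day_user, night_user):.3f}")
```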
2018, 40(10): 2373-2380.
doi: 10.11999/JEIT171128
Abstract:
Object tracking is easily affected by illumination, occlusion, scale, background clutter, and fast motion, and it demands high real-time performance. Tracking algorithms based on compressive sensing offer good real-time performance but track poorly when the object's appearance changes greatly. Within the compressive sensing framework, a Multi-Model real-time Compressive Tracking (MMCT) algorithm is proposed, which uses compressive sensing to reduce the dimensionality of features in the tracking process and thus meet real-time requirements. The MMCT algorithm selects the most suitable classifier by examining the difference between the maximum classification scores of the classifiers in the previous two frames, improving localization accuracy. MMCT also presents a new model update strategy that applies a fixed or dynamic learning rate according to the decision differences between classifiers, improving classification precision. The multiple models introduced by MMCT add no computational burden, and the algorithm shows excellent real-time performance. Experimental results indicate that MMCT adapts well to illumination changes, occlusion, background clutter, and in-plane rotation.
2018, 40(10): 2381-2387.
doi: 10.11999/JEIT180184
Abstract:
The accuracy of pedestrian re-identification depends mainly on the similarity measure and the feature learning model. Existing measures are translation invariant, which makes training the network parameters difficult. Several existing feature learning models emphasize only the absolute distance between sample pairs and ignore the relative distance between positive and negative pairs, so the features learned by the network are weakly discriminative. To overcome the shortcomings of existing measures, a translation-variant distance metric is presented that effectively measures the similarity between images. To overcome the shortcomings of the feature learning model, a new large-margin logistic regression model based on the proposed translation-variant metric is proposed; by enlarging the relative distance between positive and negative sample pairs, the network learns more discriminative features. The validity of the proposed metric and feature learning model is verified on the Market1501 and CUHK03 databases. Experimental results show that the proposed metric outperforms the Mahalanobis distance metric by 6.59%, the proposed feature learning algorithm also performs well, and the average precision improves significantly compared with existing state-of-the-art algorithms.
2018, 40(10): 2388-2394.
doi: 10.11999/JEIT171032
Abstract:
Hazy image enhancement has important practical significance. Since existing haze removal algorithms are limited in improving the global contrast of images, a novel hazy image enhancement algorithm is presented that combines the advantages of haze removal and histogram equalization. First, the hazy image is processed separately by a guided-filtering-based dark channel prior algorithm and an HSV-space histogram equalization algorithm. Then, the output image is obtained by fusing the two results with a weighting factor constructed from the revised transmission map. Simulation results show that the algorithm yields higher standard deviation, average gradient, and information entropy than existing haze removal algorithms, with better global and local contrast. The running time is dominated by the haze removal step and meets real-time requirements for images of normal size.
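A hedged sketch of the fusion step only: the dehazed and histogram-equalized images are blended with a per-pixel weight derived from the transmission map. The inputs are assumed precomputed, and the particular weight mapping (thin haze favors the dehazed result) is an illustrative choice, not the paper's exact factor.

```python
# Hedged fusion sketch: per-pixel blend of dehazed and equalized images
# weighted by the transmission map (near 1 = thin haze).
import numpy as np

def fuse(dehazed, equalized, transmission):
    """All inputs float arrays in [0, 1]; transmission has shape (H, W)."""
    w = np.clip(transmission, 0.0, 1.0)[..., None]   # broadcast over RGB
    return w * dehazed + (1.0 - w) * equalized

# Toy 2x2 RGB example with a synthetic transmission map:
dehazed = np.full((2, 2, 3), 0.8)
equalized = np.full((2, 2, 3), 0.4)
t = np.array([[0.9, 0.2], [0.5, 0.7]])
print(fuse(dehazed, equalized, t)[..., 0])
```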
2018, 40(10): 2395-2401.
doi: 10.11999/JEIT171116
Abstract:
Inspired by the mechanism of human visual perception, an action recognition approach integrating a dual spatio-temporal network stream with visual attention is proposed in a deep learning framework. First, optical flow features describing body motion are extracted frame by frame from the video with coarse-to-fine Lucas-Kanade flow estimation. Then, a GoogLeNet neural network with a fine-tuned pre-trained model convolves layer by layer and aggregates, respectively, the appearance images and the related optical flow features within the selected time window. Next, multi-layered Long Short-Term Memory (LSTM) networks cross-recursively perceive the high-level, well-structured spatio-temporal semantic feature sequences; the interdependent hidden states are decoded within the given time window, and an attention-salient feature sequence is obtained from the temporal stream using the visual feature descriptor of the spatial stream and the per-frame label probabilities. The temporal attention confidence of each frame with respect to human actions is then computed with a relative entropy measure and fused with the action-category probability distributions from the spatial perception stream. Finally, a softmax classifier identifies the category of human action in the video sequence. Experimental results show that the presented approach has significant advantages in classification accuracy compared with other methods.
2018, 40(10): 2402-2407.
doi: 10.11999/JEIT171125
Abstract:
The q-gradient is a generalized gradient based on the q-derivative concept. To improve the filtering performance of the Affine Projection Algorithm (APA), the q-gradient is applied to APA based on minimizing the recent mean square errors, yielding a novel q-Affine Projection Algorithm (q-APA). With an appropriate setting of q, q-APA achieves desirable filtering performance in the presence of Gaussian noise. A sufficient condition guaranteeing convergence of the proposed q-APA is presented, and its steady-state Excess Mean Square Error (EMSE) is obtained theoretically to evaluate the filtering performance. In addition, a Variable q-APA (V-q-APA) is developed to further improve the filtering performance. Simulations in the context of system identification demonstrate the superior filtering performance of the proposed algorithms compared with APA and the Variable q-Least Mean Square (V-q-LMS) algorithm in the presence of Gaussian noise.
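For reference, the following is a standard APA update that q-APA builds on; the q-gradient reweighting itself is paper-specific, so it is only indicated through a hypothetical `q_weights` hook rather than implemented.

```python
# Standard Affine Projection Algorithm update; `q_weights` is a
# hypothetical hook marking where a q-gradient reweighting would act.
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4, q_weights=None):
    """One APA iteration.
    X: (K, M) matrix of the K most recent input vectors (projection order K)
    d: (K,) desired outputs; w: (M,) filter weights."""
    e = d - X @ w                                   # a-priori errors
    if q_weights is not None:
        e = q_weights * e                           # hypothetical q-gradient hook
    G = X @ X.T + delta * np.eye(X.shape[0])        # regularized Gram matrix
    return w + mu * X.T @ np.linalg.solve(G, e)

# System identification toy run:
rng = np.random.default_rng(5)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(500):
    X = rng.standard_normal((4, 8))
    d = X @ w_true + 0.01 * rng.standard_normal(4)
    w = apa_update(w, X, d)
print("weight error:", np.linalg.norm(w - w_true))
```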
2018, 40(10): 2408-2414.
doi: 10.11999/JEIT170813
Abstract:
The Smoothed l0 norm (SL0) algorithm is a compressive sensing reconstruction algorithm based on an approximation of the l0 norm; it uses the steepest descent method and gradient projection, selecting a decreasing sequence of smoothing parameters to reach the optimal solution. It offers high matching accuracy and low computational complexity and does not require knowledge of the signal sparsity. However, the steepest descent iteration follows the negative gradient direction, which leads to a "sawtooth phenomenon" and slow convergence near the optimal solution. Newton's method converges quickly but is sensitive to the initial value and requires computing the Hessian matrix. The quasi-Newton method overcomes these shortcomings by using the BFGS formula to build an approximation of the Hessian from first-derivative information only. Based on the SL0 algorithm and the BFGS quasi-Newton method, an improved reconstruction algorithm for Compressed Sensing (CS) signals is proposed: the steepest descent method first produces an estimate, which is then taken as the initial value of the quasi-Newton stage, where the BFGS formula updates the iteration direction until the optimal solution is obtained. Simulation results show that the proposed algorithm markedly improves reconstruction accuracy, peak signal-to-noise ratio, and reconstruction matching degree.
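A hedged sketch of the two-stage idea under stated assumptions: a few SL0 iterations (gradient step on the Gaussian-smoothed l0 surrogate followed by projection onto {s : As = y}) provide the initial point, after which SciPy's BFGS refines a fixed-sigma penalized surrogate. Problem sizes, the sigma schedule, the step size, and the penalty weight are illustrative.

```python
# Hedged two-stage sketch: SL0 (steepest descent + projection) to get an
# initial estimate, then BFGS quasi-Newton refinement of the surrogate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, m, k = 64, 32, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ s_true

A_pinv = np.linalg.pinv(A)
s = A_pinv @ y                                   # minimum-l2 initial solution
for sigma in [1.0, 0.5, 0.2, 0.1, 0.05]:         # decreasing sequence
    for _ in range(10):
        step = s * np.exp(-s**2 / (2 * sigma**2))    # SL0 descent direction
        s = s - 0.5 * step
        s = s - A_pinv @ (A @ s - y)                 # project onto As = y

def smoothed_obj(s, sigma=0.05):                 # penalized l0 surrogate
    return -np.sum(np.exp(-s**2 / (2 * sigma**2))) + 50 * np.sum((A @ s - y)**2)

res = minimize(smoothed_obj, s, method="BFGS")   # quasi-Newton refinement
print("reconstruction error:", np.linalg.norm(res.x - s_true))
```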
2018, 40(10): 2415-2422.
doi: 10.11999/JEIT180032
Abstract:
To solve the joint problem of near-field source localization and array gain-phase error calibration, a calibration method based on the symmetry of a uniform array is proposed. The distance parameter is separated by reconstructing a virtual array, and decoupling between azimuth and error is achieved by transforming the steering vector of the virtual array. By transforming the steering vector of the real array, the distance is decoupled from the gain-phase error, yielding a cascaded estimation of the azimuth and distance of the near-field source and of the array gain-phase error coefficients. Simulation results show that, compared with existing algorithms, the proposed algorithm has lower computational complexity, estimates the azimuth and distance parameters more accurately, and calibrates the gain and phase errors with higher accuracy.
2018, 40(10): 2423-2429.
doi: 10.11999/JEIT180022
Abstract:
A robust beamformer suffers performance degradation when the shape of a towed array is distorted by the maneuvering of the tow platform. To address this problem, a low-complexity robust Capon beamforming method based on time-varying array focusing and dimension reduction is proposed. First, the array shape is estimated sequentially from the array heading data using the water-pulley model. The Sample Covariance Matrix (SCM) at each recording time is focused onto a reference array model via the STeered Covariance Matrix (STCM) technique to eliminate the array model error. Then, a dimension-reducing transform matrix is formed from the conjugate gradient direction vectors of the focused SCM, and the reduced-dimension Capon beamformer is derived to compute the spatial spectrum. Simulation results show that the proposed method improves the Signal-to-Interference-plus-Noise Ratio (SINR) of the beamformer while the towed array maneuvers. Sea-trial data processing shows that the method improves the output Signal-to-Noise Ratio (SNR) of the target, as well as the performance in detecting weak targets and resolving the left-right ambiguity during maneuvering.
2018, 40(10): 2430-2437.
doi: 10.11999/JEIT170759
Abstract:
Considering that single-layer signal processing or data processing alone suppresses blanket-deception compound jamming poorly, a suppression algorithm for blanket plus range-deception compound jamming based on joint signal-data processing is proposed. First, FRactional Fourier Transform (FRFT) domain narrowband filtering and LFM signal reconstruction are used to suppress the blanket jamming at the signal level and reduce the probability of missing the real target. Then, target tracks and deception tracks are screened using the M/N logic method. Finally, according to the different angle-variance characteristics of false and true targets, the false targets are eliminated by a $\chi^2$ test combined with a clustering algorithm. Simulation results verify the effectiveness of the proposed algorithm.
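A hedged sketch of the final discrimination step: the sample variance of a track's angle measurements is compared against a chi-square threshold, flagging tracks whose angular spread exceeds that of a true target. The noise level, sample size, and significance level are illustrative assumptions.

```python
# Hedged chi-square variance test for rejecting deception (false) tracks
# based on the spread of their angle measurements. Synthetic data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
sigma0 = 0.2          # assumed angle-noise std (deg) of a true track
n = 30                # angle samples per track

def is_false_track(angles, alpha=0.01):
    # H0: var(angles) == sigma0^2; a large statistic rejects H0 (false target)
    stat = (len(angles) - 1) * np.var(angles, ddof=1) / sigma0**2
    return stat > chi2.ppf(1 - alpha, df=len(angles) - 1)

true_track = 10.0 + sigma0 * rng.standard_normal(n)
false_track = 10.0 + 1.0 * rng.standard_normal(n)   # deception: larger spread
print("true track flagged: ", is_false_track(true_track))
print("false track flagged:", is_false_track(false_track))
```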
Strobe Pulse Design for Quadrature Multiplexed Binary Offset Carrier Modulation in BeiDou B1C Signal
2018, 40(10): 2438-2446.
doi: 10.11999/JEIT180109
Abstract:
The third generation of the BeiDou satellite navigation system employs Quadrature Multiplexed Binary Offset Carrier (QMBOC) modulation for the B1C signal. To improve the anti-multipath performance of code tracking loops and resolve the code tracking ambiguity of the BeiDou system, a double-strobe code tracking loop structure for QMBOC(6, 1, 4/33) modulation is proposed. Based on the ideal phase discrimination function and the auto-correlation function of the BOC signal, two strobe pulses are designed for BOC(1, 1) and BOC(6, 1), respectively. These two strobe pulse waveforms are correlated with the input signal in the code tracking loop, and the two correlation functions are then combined with weights for phase discrimination. Computer simulation results show that the proposed method not only eliminates the code tracking ambiguity of QMBOC(6, 1, 4/33) but also improves the anti-multipath performance dramatically: the multipath error envelope area is reduced by about 33% compared with the existing method.
2018, 40(10): 2447-2452.
doi: 10.11999/JEIT180068
Abstract:
To overcome the curse of dimensionality when solving air combat maneuvering decisions by dynamic programming, a swarm-intelligence maneuvering decision method based on approximate dynamic programming is proposed. First, the Unmanned Aerial Vehicle (UAV) dynamic model and the situational advantage functions are established. On this basis, the air combat process is divided into several stages following the dynamic programming approach. To reduce the search space, an Artificial Potential Field (APF) guided Ant Lion Optimizer (ALO) approximates the optimal control amount in each programming stage. Finally, comparison with an expert system shows that the proposed method can effectively solve highly dynamic, real-time air combat maneuvering decisions.
2018, 40(10): 2453-2460.
doi: 10.11999/JEIT170876
Abstract:
Indoor vision positioning based on object detection is a novel indoor positioning solution that determines the user's position through object detection, position matching, and the solution of localization equations. However, limited by the field of view of a monocular camera and by object detection accuracy, the localization equations constructed from the detected objects' range information are seriously ill-conditioned. Therefore, this paper proposes a localization method based on an improved robust ridge regression estimator, which reduces the influence of less accurate observations through iterative weight selection. Experimental results show that, compared with Ordinary Least Squares (OLS), Levenberg-Marquardt (LM), and Ridge Regression (RR), the proposed improved robust ridge regression estimator effectively improves the positioning success rate and accuracy of object-detection-based indoor navigation.
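The following is a minimal sketch of a robust ridge estimator of this kind: iteratively reweighted ridge regression with Huber-type weights that down-weight inaccurate observations. The data, ridge parameter, and Huber constant are illustrative stand-ins for the paper's localization equations.

```python
# Hedged sketch: iteratively reweighted ridge regression with Huber
# weights (IRLS), down-weighting gross outliers in the observations.
import numpy as np

def robust_ridge(X, y, lam=1.0, c=1.345, iters=10):
    n, p = X.shape
    w, beta = np.ones(n), np.zeros(p)
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weights
    return beta

rng = np.random.default_rng(8)
X = rng.standard_normal((50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(50)
y[::10] += 5.0                                      # gross outlier observations
print("estimate:", robust_ridge(X, y).round(3))
```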
2018, 40(10): 2461-2469.
doi: 10.11999/JEIT180035
Abstract:
Coherent Change Detection (CCD) detects changed areas in a scene through their decorrelation, yet vegetated areas with volume scattering and low signal-to-noise-ratio areas also appear as low coherence, interfering with the change areas to be detected. A polarimetric SAR CCD method is proposed. First, the polarimetric coherence between the two SAR images acquired before and after the change is used to construct a weighted-trace coherence statistic. Second, the polarimetric coherence between the channels of each SAR image is used to build a volume scattering constraint by fitting a GEV mixture distribution model whose component parameters are solved with an improved EM algorithm. Finally, a scattering power change constraint is combined to form the final polarimetric CCD test statistic. With this method, the interference can be eliminated without degrading detection performance. The method is validated on two L-band fully polarimetric SAR images acquired before and after a change; the results and index parameters demonstrate its correctness and validity.
2018, 40(10): 2470-2477.
doi: 10.11999/JEIT180049
Abstract:
The high squint spotlight mode of spaceborne SAR can achieve high resolution and wide swath and can acquire target information from multiple azimuths. However, the considerable range migration reduces data acquisition efficiency and creates a dilemma in system design. This problem can be solved by Pulse Repetition Interval (PRI) variation, which tracks the slant range variation of the target during data acquisition. In this paper, the principle of PRI variation is studied, and methods for iterative PRI sequence design and system parameter selection are proposed. Two approaches to reconstructing the non-uniformly and aperiodically sampled azimuth data are compared. Finally, the first results of a PRI-variation airborne SAR experiment in high squint spotlight mode are presented.
2018, 40(10): 2478-2483.
doi: 10.11999/JEIT180055
Abstract:
Hyperspectral remote sensing images contain rich spectral information and enormous volumes of data. To utilize hyperspectral image data effectively and promote the development of hyperspectral remote sensing technology, a hyperspectral image compression algorithm based on adaptive band clustering, Principal Component Analysis (PCA), and a Back Propagation (BP) neural network is proposed. The Affinity Propagation (AP) clustering algorithm performs adaptive band clustering, PCA is applied to each band group after clustering, and all principal components are finally encoded and compressed by the BP neural network. The novelty lies in the BP network's training step for image compression: the backpropagated error is the difference between the original image and the output image, which is used to adjust the weights and thresholds of each layer in the reverse direction. Band clustering of hyperspectral images not only exploits the spectral correlation effectively and improves compression performance, but also reduces the computational complexity of PCA. Experimental results show that the proposed algorithm achieves better Signal-to-Noise Ratio (SNR) and spectral angle than other algorithms at the same compression ratio.
2018, 40(10): 2484-2490.
doi: 10.11999/JEIT180081
Abstract:
To address the range-resolution degradation caused by the bistatic angle in Bistatic Inverse Synthetic Aperture Radar (B-ISAR) imaging, a B-ISAR range-profile resolution enhancement algorithm is put forward based on Multiple Measurement Vector (MMV) Complex Approximate Message Passing (MCAMP). A range-domain joint sparse model is established. Through a vectorization operation, the joint sparse problem is converted into a block complex basis-pursuit denoising problem. To obtain a range profile immune to the bistatic-angle influence, the MCAMP algorithm is derived using the Kronecker product. The Fast Fourier Transform (FFT) is introduced to replace matrix-matrix multiplication, which improves the efficiency of the proposed algorithm by further reducing its computational complexity. Simulation imaging results verify the effectiveness and efficiency of the proposed method.
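The FFT substitution rests on the measurement matrix having (partial) Fourier structure, in which case dense products with A and its adjoint reduce to FFTs. A small NumPy check of this equivalence, with illustrative sizes and a random row subset (not the paper's dictionary), is given below:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                          # signal length
    m_idx = np.sort(rng.choice(N, 64, replace=False))  # kept DFT rows
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)           # unitary DFT matrix
    A = F[m_idx]                                     # explicit partial-DFT matrix

    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

    # forward product A @ x: O(MN) dense vs O(N log N) via FFT
    y_mat = A @ x
    y_fft = np.fft.fft(x)[m_idx] / np.sqrt(N)
    assert np.allclose(y_mat, y_fft)

    # adjoint A^H y, used in each message-passing update, is an inverse FFT
    # of the zero-filled measurements
    z = np.zeros(N, complex)
    z[m_idx] = y_fft
    assert np.allclose(A.conj().T @ y_fft, np.fft.ifft(z) * np.sqrt(N))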
2018, 40(10): 2491-2497.
doi: 10.11999/JEIT171174
Abstract:
To solve the angle tracking problem of bistatic MIMO radar when the number of targets is unknown, a joint tracking algorithm for the target number and the angles is proposed. The Adaptive Asymmetric Joint Diagonalization (AAJD) algorithm has no variable that directly represents the eigenvalues. Therefore, the idea of sequential principal component estimation is introduced into an improved AAJD algorithm, and the eigenvalues are evaluated iteratively. The number of targets is then estimated using an improved information-theoretic criterion. Secondly, an anti-jitter algorithm for the target number is proposed, which improves the robustness of the method. Finally, the ESPRIT algorithm is improved to realize automatic pairing and association of the DODs and DOAs. Simulation results show that the improved AAJD algorithm successfully tracks both the number of targets and the angle trajectories, verifying the effectiveness of the proposed method.
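As a point of reference for the target-number step, the classic MDL criterion estimates the source count from covariance eigenvalues; the sketch below is this textbook criterion, used here as a stand-in for the paper's improved information-theoretic method:

    import numpy as np

    def mdl_source_number(eigvals, n_snapshots):
        """Classic MDL estimate of the number of sources.

        eigvals     : eigenvalues of the spatial covariance matrix, sorted
                      in descending order (all positive).
        n_snapshots : number of snapshots used to estimate the covariance.
        """
        p = len(eigvals)
        mdl = np.empty(p)
        for k in range(p):
            tail = eigvals[k:]                       # assumed noise eigenvalues
            geo = np.exp(np.mean(np.log(tail)))      # geometric mean
            ari = np.mean(tail)                      # arithmetic mean
            mdl[k] = (-n_snapshots * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
        return int(np.argmin(mdl))                   # estimated source count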
2018, 40(10): 2498-2505.
doi: 10.11999/JEIT180019
Abstract:
For passive radar based on the Long Term Evolution (LTE) signal, the ambiguity function is first analyzed and the mechanisms producing the different side peaks are explained. Then, corresponding suppression algorithms are proposed for the two types of side peaks that degrade detection performance: for the side peaks caused by the cyclic prefix, a fast ambiguity algorithm based on non-continuous chunking of the data is proposed; for the side peaks caused by the non-continuous spectrum, a suppression algorithm based on bandwidth synthesis and frequency-domain windowing is proposed. Finally, a thumbtack ambiguity function of the LTE signal is obtained by integrating the two suppression algorithms. This work provides a new method for side-peak suppression in LTE-based passive radar.
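For orientation, a textbook delay-Doppler cross-ambiguity map, with the Doppler dimension evaluated by a fold-and-FFT (batch) trick, can be sketched as follows; the paper's side-peak suppression would operate on top of such a computation, and the function and parameter names are illustrative:

    import numpy as np

    def cross_ambiguity(ref, surv, max_delay, n_doppler):
        """Delay-Doppler cross-ambiguity map between the reference
        (direct-path) and surveillance channels.

        ref, surv : equal-length complex baseband recordings
        Returns the magnitude map, shape (max_delay, n_doppler).
        """
        n = len(ref)
        caf = np.empty((max_delay, n_doppler), complex)
        for d in range(max_delay):
            prod = surv[d:] * np.conj(ref[:n - d])
            # fold consecutive blocks (lowpass + decimate), then a short FFT
            # gives the Doppler bins at a fraction of the full-FFT cost
            m = (len(prod) // n_doppler) * n_doppler
            folded = prod[:m].reshape(n_doppler, -1).sum(axis=1)
            caf[d] = np.fft.fft(folded)
        return np.abs(caf)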
2018, 40(10): 2506-2512.
doi: 10.11999/JEIT180031
Abstract:
This paper proposes a clutter suppression method based on phase encoding and subspace projection for detecting close, slow-moving targets in a strong clutter environment. In this framework, the periodic detection signal is modulated with phase encoding, and the clutter is whitened through echo decoding in the slow-time dimension to reduce the correlation between clutter and target echo. Furthermore, the interference subspace is constructed on the basis of the autocorrelation differences between the whitened clutter and the useful signal components. The received signal is projected onto the signal subspace orthogonal to the clutter subspace to suppress the clutter. Since constructing the clutter subspace requires no assumed clutter model, the method avoids the mismatch between model hypothesis and actual environment. Simulation results and real-data processing results show that this method outperforms conventional methods at low signal-to-clutter ratios.
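A minimal sketch of the projection step, assuming the clutter subspace is taken from the leading left singular vectors of clutter-dominated snapshots (the paper instead builds it from autocorrelation differences after decoding/whitening):

    import numpy as np

    def project_out_clutter(X, clutter_snapshots, rank):
        """Suppress clutter by projecting the data onto the orthogonal
        complement of an estimated clutter subspace.

        X                 : (dim, n) data snapshots
        clutter_snapshots : (dim, m) snapshots dominated by clutter
        rank              : assumed clutter-subspace dimension
        """
        U, _, _ = np.linalg.svd(clutter_snapshots, full_matrices=False)
        Uc = U[:, :rank]                              # clutter subspace basis
        P = np.eye(X.shape[0]) - Uc @ Uc.conj().T     # orthogonal projector
        return P @ X                                  # clutter-suppressed data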
2018, 40(10): 2513-2520.
doi: 10.11999/JEIT180181
Abstract:
Scheduling staff who service foreign airlines aims to produce task-person assignments that cover the required skills while minimizing total employee working hours and balancing staff workload. In essence, this is a personnel scheduling problem constrained by multiple task types, hierarchical skills, and day-night alternation. Existing algorithms do not consider the day-night alternation constraint, so an algorithm is proposed to address this issue. The proposed algorithm first designs a data-copy trick to quickly model staff scheduling under the day-night alternation constraint. A novel block Gibbs sampling technique with replacement is then designed to optimize the formulated problem efficiently. Theoretical analysis indicates that the computational complexity of the proposed algorithm is on the same scale as that of the baselines, while it achieves higher sampling efficiency. Experimental results on a real dataset show that the proposed algorithm improves on existing methods by at least 0.62% in terms of the evaluation measures.
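To show the sampling pattern only, here is a toy blocked Gibbs sampler over task-person assignments with a workload-balance energy; the one-task blocks, Boltzmann weights, and makespan objective are simplifying assumptions, not the paper's block structure or constraint set:

    import numpy as np

    rng = np.random.default_rng(0)

    def block_gibbs_schedule(cost, n_sweeps=200, beta=2.0):
        """Toy blocked Gibbs sampler for a task-to-person assignment.

        cost : (n_tasks, n_staff) nonnegative matrix, e.g. working hours
               implied by giving task i to person j.
        Resamples one task per block from a Boltzmann distribution whose
        energy is the resulting maximum staff load (workload balance).
        """
        n_tasks, n_staff = cost.shape
        assign = rng.integers(n_staff, size=n_tasks)
        load = np.bincount(assign, weights=cost[np.arange(n_tasks), assign],
                           minlength=n_staff)
        for _ in range(n_sweeps):
            for i in rng.permutation(n_tasks):
                load[assign[i]] -= cost[i, assign[i]]   # remove task i
                # energy of each candidate assignment = resulting max load
                energy = np.array([max(load.max(), load[j] + cost[i, j])
                                   for j in range(n_staff)])
                w = np.exp(-beta * (energy - energy.min()))
                assign[i] = rng.choice(n_staff, p=w / w.sum())
                load[assign[i]] += cost[i, assign[i]]   # reinsert task i
        return assign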
2018, 40(10): 2521-2528.
doi: 10.11999/JEIT180215
Abstract:
To address communication conflicts and the running-time problem in task scheduling for heterogeneous computing systems, a cat swarm optimization task scheduling algorithm based on a double arbitration mechanism and the Taguchi orthogonal method is proposed. Firstly, the double arbitration mechanism manages the task resources and dynamically decides the task assignment, effectively avoiding communication conflicts. Then, the Taguchi orthogonal method is applied to the tracing mode of the cat swarm optimization process to reduce the running time and improve the quality of the solution. Experimental results show that the algorithm runs at least about 10% faster than the compared algorithms. It exhibits the best parallelism when handling large numbers of tasks and has considerable advantages in heterogeneous environments.
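For concreteness, one generic tracing-mode update of cat swarm optimization looks like the sketch below; the paper additionally drives this update with Taguchi orthogonal arrays and the double arbitration mechanism, both omitted here, and the coefficients are conventional choices rather than the paper's settings:

    import numpy as np

    rng = np.random.default_rng(1)

    def tracing_step(pos, vel, best, c=2.05, v_max=4.0):
        """One tracing-mode update: each cat chases the best solution so far.

        pos, vel : (n_cats, dim) positions and velocities
        best     : (dim,) best position found so far
        """
        r = rng.random(pos.shape)                           # per-dimension randomness
        vel = np.clip(vel + r * c * (best - pos), -v_max, v_max)
        return pos + vel, vel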
2018, 40(10): 2529-2534.
doi: 10.11999/JEIT171020
Abstract:
A double-layer model is proposed to reduce the computational load of the Linear Ship Map (LSM) model. The proposed model can be used for rapid and accurate calculation of electromagnetic propagation characteristics in complicated atmospheric environments over the sea. In the proposed model, the calculation region is divided into an upper layer and a lower layer: the upper layer is calculated with the Wide angle Parabolic Equation (WPE) model and the lower layer with the LSM model. By reducing the calculation height and optimizing the step length, the proposed model is both accurate and fast. In simulations, the proposed model is compared with the LSM model under smooth and rough sea-surface conditions. The results show that the proposed model can cut the calculation time to about 1/10 under the rough sea-surface condition.
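A single split-step Fourier march of a wide-angle parabolic equation, the kind of solver used for the upper layer, can be sketched as follows; this is the textbook form under an assumed refractive-index profile, not the paper's exact discretization or boundary treatment:

    import numpy as np

    def wpe_step(u, dx, dz, k0, n_refr):
        """One split-step Fourier range march of a wide-angle parabolic equation.

        u      : complex field on the vertical grid at the current range
        dx, dz : range step and height step (m)
        k0     : free-space wavenumber
        n_refr : refractive-index profile on the same grid (models the duct)
        """
        kz = 2 * np.pi * np.fft.fftfreq(len(u), d=dz)
        # diffraction step in the spectral domain (wide-angle propagator);
        # complex sqrt makes evanescent components decay rather than grow
        kx = np.sqrt((k0**2 - kz**2).astype(complex))
        u = np.fft.ifft(np.exp(1j * (kx - k0) * dx) * np.fft.fft(u))
        # refraction step in the spatial domain
        return u * np.exp(1j * k0 * (n_refr - 1.0) * dx)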
2018, 40(10): 2535-2540.
doi: 10.11999/JEIT171000
Abstract:
The effect of thin-film coatings on the electron emission of impregnated cathodes is an important research topic in the field of thermionic cathodes. A cathode sample is evenly divided into two halves, an uncoated impregnated dispenser cathode and a film-coated impregnated dispenser cathode. The sample is activated at 1150 ℃ for 2 hours in a Deep UltraViolet laser Photo- and Thermal-Emission Electron Microscope (DUV-PEEM/TEEM) system, in which the microscale electron emission of the two cathodes and its variation with temperature are compared and studied directly. The results show that the electron emission of both the uncoated and the coated impregnated cathode is mainly located at the pores and at the edges of adjacent particles. As the cathode temperature rises, the emission of the uncoated cathode remains concentrated at the pores and the nearby narrow regions of the cathode surface, so its emission area changes little. For the coated impregnated dispenser cathode, however, the effective emission area extends from the pores and their edges to regions far from the pores. These results give, for the first time, the electron emission characteristics of an impregnated cathode coated with a film, and they provide a useful reference for understanding the emission mechanism of this cathode.