2014 Vol. 36, No. 12
2014, 36(12): 2795-2801.
doi: 10.3724/SP.J.1146.2014.00114
Abstract:
How to efficiently utilize the finite storage space of the content store to cache content chunks poses a challenge to the caching policy in Named Data Networking (NDN). A collaborative caching algorithm based on request correlation is proposed, using a differentiated caching strategy. In the scheme, subsequent correlated content chunks are requested in advance to increase the hit ratio of content requests. For the caching decision, a two-dimensional differentiated caching policy combining caching location and cache-resident time is proposed. According to changes in content activity, the caching location is pushed downstream hop by hop in the spatial dimension, spreading popular contents gradually to the network edge, while the cache-resident time is adjusted dynamically in the time dimension. The simulation results show that the proposed algorithm can efficiently decrease the request latency, reduce cache redundancy, and achieve a higher cache hit ratio than other caching strategies.
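The abstract gives no pseudocode; the following is a minimal, hypothetical sketch of the two-dimensional decision it describes (hop-by-hop pushdown in space, activity-scaled residence time), with all names and thresholds (`Node`, `threshold`, `base_ttl`) invented for illustration:

```python
# Hypothetical sketch, not the paper's implementation: a node pushes a chunk
# one hop toward the edge once its request activity crosses a threshold
# (spatial dimension) and scales cache-resident time with activity
# (time dimension).
class Node:
    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream   # next hop toward the network edge
        self.store = {}                # chunk -> remaining cache-resident time
        self.activity = {}             # chunk -> request count observed here

    def on_data(self, chunk, base_ttl=10, threshold=3):
        self.activity[chunk] = self.activity.get(chunk, 0) + 1
        if self.downstream is not None and self.activity[chunk] >= threshold:
            self.downstream.store[chunk] = base_ttl      # push downstream
        else:
            self.store[chunk] = base_ttl * self.activity[chunk]

edge = Node("edge")
core = Node("core", downstream=edge)
for _ in range(4):
    core.on_data("/video/seg1")
print(sorted(edge.store))   # the chunk has migrated to the edge node
```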
2014, 36(12): 2802-2808.
doi: 10.3724/SP.J.1146.2014.00211
Abstract:
Most controller placement schemes in Software-Defined Networking (SDN) take the Propagation Delay (PD) as the primary consideration, ignoring the influence of the Transmission Delay (TD) on network performance. This paper provides a delay-aware controller placement for fast response. First, the controller placement is formulated as an optimization problem based on PD and TD. The average-delay and maximum-delay minimization models are updated, and the processes of finding their optimal solutions are detailed. Further, a delay optimization model is derived using fuzzy set theory. Finally, according to whether TD is considered, two placement algorithms, the Transmission and Propagation Algorithm (TPA) and the Propagation Algorithm (PA), are presented. To measure the performance of the solutions, a real network topology is chosen; the simulation results show that TPA is superior to PA in terms of response speed and network stability, and the total delay of the delay optimization model is lower than that of the others.
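As a concrete illustration of the placement objective, here is a minimal brute-force sketch that minimizes the average combined PD + TD from each switch to its nearest controller; the delay matrix, the value of k, and the exhaustive search are assumptions for a toy example, not the paper's TPA/PA algorithms:

```python
# Toy delay-aware placement: pick k controller sites minimizing the average
# (propagation + transmission) delay from every switch to its nearest
# controller. Exhaustive search is used only because the example is tiny.
from itertools import combinations

def place_controllers(delay, k):
    """delay[i][j]: combined PD + TD between node i and candidate site j."""
    n = len(delay)
    best_sites, best_avg = None, float("inf")
    for sites in combinations(range(n), k):
        avg = sum(min(delay[i][j] for j in sites) for i in range(n)) / n
        if avg < best_avg:
            best_sites, best_avg = sites, avg
    return best_sites, best_avg

delay = [[0, 2, 5, 9],
         [2, 0, 4, 7],
         [5, 4, 0, 3],
         [9, 7, 3, 0]]
print(place_controllers(delay, 2))   # ((0, 2), 1.25) for this toy matrix
```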
2014, 36(12): 2809-2815.
doi: 10.3724/SP.J.1146.2013.01955
Abstract:
As social networks become increasingly large and complex, mining the global community structure of large networks is extremely difficult. Therefore, local community detection has important application significance for studying and understanding the community structure of complex networks. Existing algorithms often have defects such as low accuracy and stability, and preset thresholds that are difficult to obtain. In this paper, a local community detection algorithm based on boundary node identification is proposed, which comprehensively considers the external and internal link similarity of neighborhood nodes for community clustering. Meanwhile, the method can effectively control the scale and scope of the local community based on boundary node identification, so that the complete structural information of the local community is obtained. Experiments on both computer-generated and real-world networks show that the proposed algorithm can automatically mine the local community structure from a given node without predefined parameters, and improves the stability and accuracy of local community detection.
2014, 36(12): 2816-2821.
doi: 10.3724/SP.J.1146.2014.00042
Abstract:
Considering the secure resource allocation problem in two-way relay networks with an eavesdropper, a security secrecy ratio scheme under subchannel allocation and power constraints is studied to improve the security of the relay. Compared with the traditional secrecy capacity scheme, the security secrecy ratio scheme pays more attention to reflecting each user's own security level. Based on the proposed scheme, the security Quality of Service (QoS) requirements of different users and network fairness are further considered. Besides, power allocation, subchannel allocation, and subchannel pairing are jointly considered. The optimal solution is then obtained through the Constraint Particle Swarm Optimization (CPSO) algorithm, the Binary CPSO (B_CPSO) algorithm, and the Classic Hungarian Algorithm (CHA), respectively. Finally, the network resources are allocated in an optimal manner and the secrecy ratio of legitimate users is improved. Simulation results show the effectiveness of the proposed algorithm.
2014, 36(12): 2822-2827.
doi: 10.3724/SP.J.1146.2014.00056
Abstract:
Uplink resource allocation in Device-to-Device (D2D) enabled cellular systems is studied. The sum-rate maximization problem is transformed into a concise Binary Integer Programming (BIP) problem, which is NP-hard. Then, based on canonical duality theory, a dual problem is obtained. The dual problem is convex in a continuous domain. Under appropriate conditions, the dual method attains the global optimal solution of the primal problem with zero duality gap. An algorithm based on the barrier method is proposed to solve the dual problem. Simulation results show that the proposed algorithm performs close to optimal and outperforms the existing algorithm.
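For readers unfamiliar with the formulation, a generic sum-rate maximization BIP for uplink D2D resource reuse has the following textbook assignment form (an assumption here; the paper's exact constraints may differ):

```latex
\max_{x_{c,d}\in\{0,1\}} \; \sum_{c=1}^{C}\sum_{d=1}^{D} x_{c,d}\,R_{c,d}
\qquad \text{s.t.} \quad
\sum_{d=1}^{D} x_{c,d} \le 1 \;\; \forall c, \qquad
\sum_{c=1}^{C} x_{c,d} \le 1 \;\; \forall d,
```

where x_{c,d} = 1 if D2D pair d reuses the uplink resource of cellular user c, and R_{c,d} is the resulting sum rate of that pairing.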
2014, 36(12): 2828-2834.
doi: 10.3724/SP.J.1146.2014.00028
Abstract:
This paper proposes a novel wideband compressive spectrum sensing scheme based on the Generalized Likelihood Ratio Test (GLRT), in which the GLRT statistic and the decision threshold are derived according to Random Matrix Theory (RMT). The proposed scheme exploits only compressive measurements to detect the occupancy status of each sub-band in a wide spectral range, without requiring signal reconstruction or a priori information. In addition, to alleviate the communication and data acquisition overhead of Secondary Users (SUs), a Sensor Node (SN)-assisted cooperative sensing framework is also addressed, in which the sensor nodes perform the sub-Nyquist compressive sampling instead of the SUs. Both theoretical analysis and simulation results show that, compared with the traditional GLRT algorithm with signal reconstruction and the Roy's Largest Root Test (RLRT) algorithm, the proposed scheme not only has lower computational complexity and cost and more robust sensing performance, but also achieves better detection performance with fewer SNs.
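To make the eigenvalue-based test concrete, here is a sketch of a GLRT-style statistic computed directly from compressive measurements (the statistic form and the toy signal are illustrative assumptions; the paper derives its particular statistic and threshold from random matrix theory):

```python
# Eigenvalue-based GLRT sketch: largest eigenvalue of the sample covariance
# over the average of the remaining ones, computed from the compressive
# measurements without any signal reconstruction.
import numpy as np

def glrt_statistic(Y):
    """Y: M x N matrix holding N compressive measurement vectors of length M."""
    S = Y @ Y.conj().T / Y.shape[1]            # sample covariance matrix
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]
    return eig[0] / eig[1:].mean()             # lambda_max / mean of the rest

rng = np.random.default_rng(0)
M, N = 8, 200
noise = rng.standard_normal((M, N))
signal = np.outer(rng.standard_normal(M), rng.standard_normal(N))
print(glrt_statistic(noise))            # moderate value under H0 (noise only)
print(glrt_statistic(noise + signal))   # much larger under H1 (band occupied)
```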
2014, 36(12): 2835-2841.
doi: 10.3724/SP.J.1146.2014.00013
Abstract:
To resolve the conflict between the security and reliability of physical-layer security codes and improve the secrecy rate, a security coding method based on punctured polar codes is proposed. To preserve both security and reliability, the confidential information is mapped, based on channel polarization theory, to the specific input positions that can be decoded by the legitimate receiver but remain equivocal to the eavesdropper. By analyzing the check trees of polar codes, the puncturing pattern is designed according to the influence of the outputs on the confidential information, described by three parameters. Theoretical analysis and simulation results verify that the proposed method guarantees security and reliability simultaneously while improving the efficiency of the confidential information.
2014, 36(12): 2842-2847.
doi: 10.3724/SP.J.1146.2013.01759
Abstract:
Multiple-tag collisions are an important factor blocking the popularization of Radio Frequency IDentification (RFID). To improve the identification efficiency and reduce the communication overhead, a novel anti-collision algorithm, the Collided Bits Indicator Algorithm (CBIA), is proposed based on the Bits Indicator Algorithm (BIA). Using collision-bit tracking and collided-bit coding, the reader splits the tags into smaller subsets according to the identified collided bits, and this process is repeated until all collided bits are resolved. CBIA groups the tags into determinate subsets to avoid generating idle slots. The analysis and simulation results show that the average throughput of CBIA is 0.7 tags per slot, better than that of other algorithms such as the Optimal Query Tracking Tree protocol (OQTT) and the Collision Tracking Tree Algorithm (CTTA).
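The splitting step can be illustrated with a toy simulation (illustrative only: it shows collided-bit splitting and the absence of idle slots, but omits the collided-bit coding that lifts CBIA's throughput to 0.7 tags per slot):

```python
# Toy collision-bit splitting: tags answering a query are split on their first
# collided bit; both subsets are non-empty by construction, so no idle slots.
def identify(tags, counter):
    """tags: equal-length ID bit strings that responded to the current query."""
    counter["slots"] += 1
    if len(tags) <= 1:
        return list(tags)                       # readable slot
    # first bit position where the responding tags disagree (a collided bit)
    pos = next(i for i in range(len(tags[0]))
               if len({t[i] for t in tags}) > 1)
    zeros = [t for t in tags if t[pos] == "0"]
    ones  = [t for t in tags if t[pos] == "1"]
    return identify(zeros, counter) + identify(ones, counter)

counter = {"slots": 0}
ids = identify(["0010", "0111", "1010", "1100"], counter)
print(ids, "slots:", counter["slots"])          # 4 tags identified in 7 slots
```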
2014, 36(12): 2848-2854.
doi: 10.3724/SP.J.1146.2014.00684
Abstract:
At present, most identity-based authenticated key agreement protocols are built on a security infrastructure that contains a single Private Key Generator (PKG) as the only trusted third party of the whole system; however, such an infrastructure cannot satisfy the requirements of hierarchical identity registration and authentication. On the basis of the Hierarchical Identity Based Encryption (HIBE) system, this paper reconstructs the private key and proposes a new hierarchical identity-based authenticated key agreement protocol using the bilinear map on a multiplicative cyclic group, providing a secure session key exchange mechanism for cloud entities on different hierarchical levels. Based on the Computational Diffie-Hellman (CDH) and Gap Diffie-Hellman (GDH) assumptions, this paper proves that the new protocol not only achieves known-key security, forward secrecy, and PKG forward secrecy, but also resists key-compromise impersonation attacks in the eCK model.
2014, 36(12): 2855-2860.
doi: 10.3724/SP.J.1146.2014.00080
Abstract:
Since existing algorithms cannot meet the requirements of real-time compression and transmission of UAV (Unmanned Aerial Vehicle) videos, a new low-complexity real-time compression algorithm for UAV videos is proposed. Considering the planar background and unified motion of UAV videos, the proposed method establishes an affine model for global motion estimation and compression. The experimental results demonstrate that the proposed algorithm reduces the total encoding time while maintaining performance comparable with H.264, so that the quality and real-time requirements of UAV video transmission can be satisfied in most cases.
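The global motion model referred to here is presumably the standard six-parameter affine form (stated as an assumption, since the abstract does not spell it out):

```latex
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} a_1 & a_2 \\ a_4 & a_5 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} +
\begin{pmatrix} a_3 \\ a_6 \end{pmatrix},
```

where (x, y) and (x', y') are corresponding pixel positions in consecutive frames; the six parameters are typically fitted by least squares over matched points and then used to compensate the global motion before coding the residual.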
2014, 36(12): 2861-2868.
doi: 10.3724/SP.J.1146.2014.00318
Abstract:
A Discrete Cosine Transform (DCT) based Modulation Transfer Function (MTF) is used to improve the intra quantization matrix for the High Efficiency Video Coding (HEVC) standard. A new method is used to calculate the spatial frequency in the calculation process. The integer DCT in HEVC is obtained by scaling and hand-tuning the DCT matrix; owing to the difference between these two transforms, the quantization matrices are optimized accordingly. The experimental results show that the proposed visual-perception-based HEVC intra quantization matrix reduces the bit rate at similar video quality, as evaluated with a Structural SIMilarity (SSIM) based Bjontegaard Delta Bit Rate (BDBR) measure. Since only the quantization matrices are changed in the encoding process, the proposed algorithm affects neither the structure of the encoding algorithm nor the coding complexity.
2014, 36(12): 2869-2875.
doi: 10.3724/SP.J.1146.2013.02001
Abstract:
Recently, extensive research has focused on early stopping criteria for Belief-Propagation (BP) decoding of LDPC codes. However, there is little study on the design of early stopping criteria suitable for Weighted Bit Flipping (WBF) decoding. Based on a new understanding of the WBF algorithm, this paper presents a low-complexity, highly adaptable stopping criterion that detects most of the undecodable blocks at an early stage of the decoding process. The simulation results show that the proposed method can significantly reduce the average number of required iterations with negligible performance loss, achieving an appealing tradeoff between complexity and performance.
2014, 36(12): 2876-2881.
doi: 10.3724/SP.J.1146.2013.02014
Abstract:
This paper proposes an Eigenvalue Reconstruction method in the Noise Subspace (ERNS) for high-resolution Direction Of Arrival (DOA) estimation when the powers of the sources are different. The noise subspace eigenvalues of the covariance matrix of the received signals, obtained by EigenValue Decomposition (EVD), are modified to construct a new covariance matrix with respect to a virtual source. The noise subspace eigenvalues of the new covariance matrix remain the same as before the modification, and this invariance of the noise subspace is utilized to estimate the DOAs of the emitters. The theory and procedure of the ERNS algorithm are provided, and its performance is validated by computer simulations. The simulation results show that the ERNS algorithm achieves a higher success probability in weak-signal estimation compared with other subspace methods and the MUSIC algorithm.
2014, 36(12): 2882-2888.
doi: 10.3724/SP.J.1146.2013.02018
Abstract:
Since the adaptive beamformer suffers output performance degradation when the position of an interference moves, a new null broadening technique is proposed. The algorithm applies a projection transformation to the array received data and combines it with diagonal loading to obtain a new covariance matrix. The original covariance matrix is replaced by this new covariance matrix, and a null-broadened beam is then obtained using the adaptive beamforming technique. The simulation results show that this method can effectively broaden the null width and enhance the null depth, so the new algorithm can suppress strong, rapidly moving interference. The algorithm is simple to implement and still works effectively even with few snapshots, enhancing the robustness of the beamformer.
2014, 36(12): 2889-2895.
doi: 10.3724/SP.J.1146.2014.00106
Abstract:
The radiated noise sources of an underwater vehicle can be localized by analyzing Doppler information, after which the noise can be effectively suppressed. However, traditional time-frequency methods can hardly distinguish the Doppler shifts of noise sources with a single frequency. In this paper, a method for multi-Doppler signal analysis based on the chirp-Fourier transform is presented. The Doppler signal is decomposed into a sum of Linear Frequency Modulated (LFM) components and transformed into the two-dimensional frequency and modulation-factor domain. The locations of multiple noise sources can then be estimated from the extracted Doppler information, and the interference caused by single-frequency sources can be suppressed. Computer simulation and experimental results at sea demonstrate the validity of the proposed method.
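For reference, the chirp-Fourier transform underlying the method takes the standard form

```latex
X(f, k) = \int_{-\infty}^{+\infty} x(t)\, e^{-j2\pi f t - j\pi k t^{2}}\, dt,
```

so an LFM component with center frequency f0 and chirp rate k0 concentrates at the peak (f, k) = (f0, k0) of the two-dimensional frequency/modulation-factor plane, which is what allows the Doppler components of several sources to be separated.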
2014, 36(12): 2896-2901.
doi: 10.3724/SP.J.1146.2014.00131
Abstract:
Traditional echo control techniques such as the Partitioned Block Frequency Domain Adaptive Filter (PBFDAF) with stochastic gradient adaptation usually suffer from slow convergence and insufficient echo suppression in reverberant rooms when the echo is speech and the echo path is unstable. An algorithm based on frequency-domain stage-wise regression is proposed for acoustic echo control to achieve faster convergence of the system estimation with insignificant bias. The commonly used additional double-talk detector and inter-channel-coherence-based residual echo suppressor are not needed, since short-time coherence analysis is performed in each stage. By further making mild assumptions on the quasi-stationarity of the near-end background noise, both fast convergence and stability of the estimation can be achieved simultaneously with a non-stationarity-controlled smoothing factor. Experiments show the superiority of the proposed approach in terms of convergence speed and steady-state error in distant-talking mode in ordinary room environments with various common levels of background noise.
2014, 36(12): 2902-2908.
doi: 10.3724/SP.J.1146.2013.02022
Abstract:
The power line, ubiquitous in indoor environments, is introduced as the antenna: wideband high-frequency signals are injected into it to obtain location fingerprints and thereby achieve precise indoor positioning. First, the realization of wideband high-frequency signal injection is analyzed, and the construction of the indoor location fingerprint is described in detail. Meanwhile, the indoor positioning technique based on the naive Bayes classification algorithm is discussed in detail. Finally, the experimental analysis shows that, given multiple training samples, positioning based on the naive Bayes classification algorithm has higher accuracy and better adaptability to time migration than positioning based on the K-Nearest Neighbor (KNN) classification algorithm.
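A minimal sketch of the naive Bayes fingerprint-matching stage follows (the feature vectors and room labels are fabricated placeholders for the measured power-line signatures, and scikit-learn's GaussianNB stands in for whatever Bayes variant the paper uses):

```python
# Fingerprint positioning sketch: train Gaussian naive Bayes on per-location
# signature vectors, then classify a new measurement to the most likely room.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
rooms = ["kitchen", "hall", "office"]
centers = rng.uniform(-60, -20, size=(3, 4))     # mean signature per room
X = np.vstack([c + rng.normal(0, 2, size=(30, 4)) for c in centers])
y = np.repeat(rooms, 30)                         # 30 training samples per room

clf = GaussianNB().fit(X, y)
probe = centers[1] + rng.normal(0, 2, size=4)    # one new measurement
print(clf.predict([probe])[0])                   # most likely room: "hall"
```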
2014, 36(12): 2909-2914.
doi: 10.3724/SP.J.1146.2014.00039
Abstract:
A common way of combining conflicting evidence is to modify the basic probability mass assignments of the evidence bodies by an indicator that reflects the information uncertainty of the conflicting evidence. Existing conflicting-evidence combination methods use indicators such as the distance of evidence and ambiguity; however, these indicators reflect only one or a few aspects of the uncertainty of the conflicting information. A novel method of conflicting evidence combination is proposed based on the total uncertainty degree of information. The concept of the combined total uncertainty of information is defined based on the Cartesian product, and an approach is presented for predicting the range of the combined total uncertainty degree of the fused information from the total uncertainty degree of each body of evidence before fusion. Weights for each evidence body are obtained according to its total uncertainty degree and the combined total uncertainty on the Cartesian product. The bodies of conflicting evidence are then combined by the weighted average according to Dempster's rule. Results of numerical information fusion examples show that, compared with existing approaches, the total uncertainty degree of the combined information obtained by the proposed method is smaller, which means the combined information is more helpful to subsequent decision analysis and data applications.
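The combination pipeline can be sketched as follows (the weights here are placeholders; the paper derives them from the total uncertainty degrees, while the Dempster step itself is standard):

```python
# Weighted-average evidence combination: average the mass functions with the
# uncertainty-derived weights, then fuse the average with itself by
# Dempster's rule (shown for two bodies of evidence over {A, B}).
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions given as dicts: frozenset -> mass."""
    raw, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                  # mass on empty intersections
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.9, B: 0.1}
m2 = {A: 0.2, B: 0.8}
w = (0.6, 0.4)                                   # assumed uncertainty weights
avg = {s: w[0] * m1.get(s, 0) + w[1] * m2.get(s, 0) for s in (A, B)}
print(dempster(avg, avg))                        # fused masses on A and B
```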
2014, 36(12): 2915-2922.
doi: 10.3724/SP.J.1146.2013.01915
Abstract:
A novel algorithm called Dual Latent Variable Spaces Local Particle Search (DLVSLPS) is proposed; it can estimate 3D human motion sequences more accurately from silhouettes of multi-view image sequences. Gaussian Process Dynamical Models (GPDM) are used for dimensionality reduction to build the dual latent variable spaces and the mapping from low-dimensional latent data to high-dimensional data. Low-dimensional particles are then searched in these spaces by a method called Neighbor Weight Prior Condition Search (NWPCS), and better high-dimensional data are generated from the mapping to estimate the 3D human motion of the corresponding frame. The proposed algorithm addresses the problem of traditional particle filters, namely that sampling in a high-dimensional data space cannot obtain valid data for estimating 3D human motion. Simulation experiments show that the proposed algorithm outperforms traditional particle filters: it supports multi-view and discontinuous-frame estimation, overcomes silhouette ambiguity, and reduces the estimation error.
2014, 36(12): 2923-2928.
doi: 10.3724/SP.J.1146.2014.00422
Abstract:
The process of generating the function dictionary in Kernel Matching Pursuit (KMP) often uses a greedy algorithm for global optimal search, so the dictionary learning time of KMP is too long. To overcome this drawback, a novel classification algorithm (AP-KMP) based on Affinity Propagation (AP) and KMP is proposed. The method utilizes clustering to optimize the dictionary division process in KMP; the KMP algorithm then searches within these local dictionary spaces, reducing the computation time. Finally, AP-KMP and four other algorithms are evaluated on several UCI datasets and remote sensing image datasets; the results demonstrate that AP-KMP is superior to the other four algorithms in computation time and classification performance.
2014, 36(12): 2929-2934.
doi: 10.3724/SP.J.1146.2014.00123
Abstract:
Aiming at the problems that a Continuous Wave (CW) is easily suppressed by frequency-domain filtering and that the efficiency of broadband continuous blanket jamming is low, this paper presents a new broadband comb-spectrum jamming type called single-pulse CW. The frequency-domain range of the single-pulse CW jamming is set according to the power spectral density characteristics of the C/A code, P(Y) code, and M code signals. Taking the code tracking error as the evaluation index of the jamming effect, the code tracking performance of a GPS receiver using a narrowband non-coherent delay lock loop is simulated and analyzed under different jamming circumstances. The simulation results show that the influence of single-pulse CW jamming on C/A code and M code differs with the Pseudo Random Noise (PRN) code and the phase of the modulating sub-carrier; under the same Jamming-to-Signal Ratio (JSR), the jamming effect of single-pulse CW is better than that of broadband Gaussian noise and matched-spectrum jamming.
2014, 36(12): 2935-2941.
doi: 10.3724/SP.J.1146.2013.01931
Abstract:
For the problem of two-Dimensional Direction Of Arrival (2-D DOA) estimation, a broadband 2-D DOA algorithm based on Compressed Sensing (CS) is proposed. It can obtain the center frequency, azimuth angle, and pitch angle of multiple narrowband signals. First, an overcomplete sparse dictionary is established using the spatial frequencies in azimuth and pitch. Then, high-resolution estimation of the spatial frequency is achieved from the compressively sampled array data. Finally, spatial filtering is used to match the center frequency, azimuth angle, and pitch angle. Theoretical analysis shows that the proposed algorithm achieves higher estimation precision and a lower SNR threshold without a multidimensional search, and reduces the computation through compressed sampling; the simulation results verify its effectiveness and correctness.
2014, 36(12): 2942-2948.
doi: 10.3724/SP.J.1146.2014.00566
Abstract:
Compressed Sensing (CS) reconstruction of hyperspectral images is an effective mechanism to remedy the high redundancy and vast data volume of the traditional hyperspectral imaging pattern. This paper presents a new multiple-measurement-vector model for compressed sensing reconstruction of hyperspectral data, in consideration of its multi-channel character. On the encoding side, a random convolution operator is used to rapidly obtain the measurement vector of each channel, which is subsequently reorganized into a measurement-vector matrix. On the decoding side, a joint reconstruction model is proposed to reconstruct the hyperspectral data from the multiple measurement vectors. The model decomposes the hyperspectral data into inter-channel correlated and differenced components in the sparsifying transform domain, where the correlated component, with high spatial and spectral correlation, is constrained to be graph-structured sparse, and the differenced component is constrained to be sparse. A numerical optimization algorithm is also proposed to solve the reconstruction model by the alternating direction method of multipliers. Every sub-problem in the iteration admits an analytical solution through the introduction of auxiliary variables and linearization, which reduces the complexity of the numerical optimization. The experimental results demonstrate the effectiveness of the proposed algorithm.
2014, 36(12): 2949-2955.
doi: 10.3724/SP.J.1146.2014.00808
Abstract:
Feature extraction is the key technique for Radar Automatic Target Recognition (RATR) based on High Resolution Range Profiles (HRRPs). Traditional feature extraction algorithms usually use shallow models, which ignore the inherent structure of the target and are thus disadvantageous for learning effective features. To address this issue, a deep framework for radar HRRP target recognition is proposed, which adopts multi-layered nonlinear networks for feature learning. Grounded on the stable physical properties of the average profile in each HRRP frame without migration through resolution cells, Stacked Robust Auto-Encoders (SRAEs), built by stacking a series of RAEs, are further developed. SRAEs not only reconstruct the original HRRP samples, but also constrain the HRRPs in a frame to be close to the average profile. The top-level output of the networks is then used as the input to the classifier. Experimental results on a measured radar HRRP dataset validate the effectiveness of the proposed method.
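One plausible reading of the dual constraint on each RAE layer is a per-sample loss of the form (an assumption; the abstract does not state the exact objective):

```latex
L(x) = \| x - \hat{x} \|_2^2 + \lambda \, \| \hat{x} - \bar{x} \|_2^2,
```

where \hat{x} is the auto-encoder's reconstruction of HRRP sample x, \bar{x} is the average profile of the frame containing x, and \lambda trades reconstruction fidelity against closeness to the frame's stable average profile.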
2014, 36(12): 2956-2962.
doi: 10.3724/SP.J.1146.2013.02037
Abstract:
The micro-Doppler modulation caused by precession is considered an important signature for the discrimination of space cone-shaped targets. A novel feature extraction method for precessing cone-shaped targets with narrow-band radar networks is proposed in this paper. Based on an analysis of the scattering properties of the cone-shaped target, this paper first derives the scattering centers' theoretical Instantaneous Frequency (IF) variations induced by precession, and the IF variations obtained from multiple radar aspects are matched according to their spectrum entropy. Then, according to the properties of the IF variations of the top and bottom scattering centers under different radar aspects, a precession and geometry feature extraction method is proposed for estimating the target's parameters, such as height, bottom radius, location of the barycenter, and precession angle. Experiments based on electromagnetic computation data verify the validity and accuracy of the proposed method.
2014, 36(12): 2963-2968.
doi: 10.3724/SP.J.1146.2014.00072
Abstract:
The detection performance of the Moving Target Detection (MTD) method degrades severely against a non-Gaussian correlated clutter background. Therefore, a radar target detection method for non-Gaussian correlated clutter is proposed, based on the alpha-stable distribution clutter model and the eigenfilter. The proposed method suppresses the non-Gaussian clutter by the signed power and whitens the correlated clutter with the fractional lower-order correlation matrix, both grounded in the alpha-stable distribution clutter model. Finally, the eigenfilter is used to obtain a higher signal-to-clutter ratio. Simulation and real-data results show that the proposed method clearly outperforms the MTD method in a non-Gaussian correlated clutter background.
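As an illustration, one common phase-preserving definition of the signed power nonlinearity is sketched below (an assumption; the paper's exact preprocessing may differ):

```python
# Signed power y = |x|^(p-1) * x with 0 < p < alpha: shrinks the heavy-tailed
# amplitudes of alpha-stable clutter while preserving the phase of each sample.
import numpy as np

def signed_power(x, p):
    mag = np.abs(x)
    mag[mag == 0] = 1.0             # keep zero samples at zero safely
    return mag ** (p - 1) * x

x = np.array([0.1 + 0.2j, 5.0 - 3.0j, -40.0 + 0.0j])
print(np.abs(signed_power(x, 0.5)))   # the impulsive sample is compressed most
```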
2014, 36(12): 2969-2974.
doi: 10.3724/SP.J.1146.2014.00563
Abstract:
The Adaptive Coherence Estimator (ACE) often suffers considerable performance degradation in the presence of steering vector errors. In this paper, a robust ACE detector based on an ellipsoidal uncertainty set constraint is proposed. A detailed analysis of the ACE detector is first conducted, yielding the interesting observation that scaling of the steering vector does not affect the ACE statistical test. Exploiting this property, a model for designing the robust ACE detector is constructed and subsequently converted into a convex optimization problem, which is then solved with the Newton-Raphson method. Simulation results show that the robustness of the proposed detector against steering vector errors is improved significantly compared with the standard ACE.
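For reference, the standard ACE statistic is

```latex
T_{\mathrm{ACE}}(x) =
\frac{\bigl| s^{H} R^{-1} x \bigr|^{2}}
     {\bigl( s^{H} R^{-1} s \bigr)\bigl( x^{H} R^{-1} x \bigr)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta,
```

where s is the nominal steering vector, x the test data, and R the (estimated) clutter covariance. Replacing s by \beta s for any nonzero scalar \beta leaves T_ACE unchanged, which is exactly the scaling-invariance property the design exploits.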
2014, 36(12): 2975-2979.
doi: 10.3724/SP.J.1146.2014.00045
Abstract:
Traditional methods for automatic recognition of sidelobe tracking in radar angle measurement systems are based on the magnitude difference between the sum-channel and difference-channel signals. By studying the principle of sidelobe tracking and analyzing it through computer simulations and experimental results, a new conclusion about the features of sidelobe tracking is presented: at the sidelobe tracking position, the phases of the sum-channel and difference-channel signals are orthogonal and the difference-channel signal has a high magnitude. Based on this conclusion, a method using these magnitude-phase characteristics is proposed: an eigenvector revealing the magnitude-phase characteristics of sidelobe tracking is constructed, and a Support Vector Machine (SVM) is employed for the subsequent classification. Experiments demonstrate that the recognition accuracy is significantly improved by the new approach, which offers strong robustness and high real-time performance.
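A minimal sketch of the classification stage follows (the two features and the fabricated training data merely mimic the stated signatures, a strong difference channel and near-orthogonal phases at sidelobe tracking; they are not the paper's eigenvector construction):

```python
# SVM on magnitude-phase features: [ |difference|/|sum| magnitude ratio,
# cos(phase difference between sum and difference channels) ].
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 100
# main-lobe tracking: weak difference signal, nearly in-phase channels
main = np.column_stack([rng.uniform(0.0, 0.3, n),
                        np.cos(rng.normal(0.0, 0.2, n))])
# sidelobe tracking: strong difference signal, near-orthogonal phases
side = np.column_stack([rng.uniform(0.8, 2.0, n),
                        np.cos(rng.normal(np.pi / 2, 0.2, n))])
X = np.vstack([main, side])
y = np.array([0] * n + [1] * n)         # 0 = main lobe, 1 = sidelobe

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[1.5, 0.05], [0.1, 0.98]]))   # expected: [1 0]
```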
2014, 36(12): 2980-2985.
doi: 10.3724/SP.J.1146.2014.00018
Abstract:
Through-the-wall radar imaging by estimating the wall thickness and dielectric constant has been a hot research field in recent years. To lift the strict restriction in traditional through-wall imaging models that the antenna must be parallel to the wall, and to address the low computational efficiency and poor robustness of existing environmental parameter estimation algorithms, a novel linear MIMO array through-wall imaging model is proposed that adapts to an unknown positional relationship between the array and the wall. Furthermore, based on an analysis of the echo paths of the front and rear wall surfaces, this paper presents a novel environmental parameter estimation algorithm with high robustness and low computational complexity. Compared with conventional environmental parameter estimation algorithms, the proposed algorithm needs neither extra operations nor special assisting targets. Results from Finite Difference Time Domain (FDTD) simulations verify the effectiveness of the proposed imaging model and environmental parameter estimation algorithm.
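As a worked example of how wall thickness and dielectric constant enter the front- and rear-wall echo timing, the sketch below computes the two-way delays for a simplified normal-incidence geometry; this simplified model is an assumption for illustration, not the paper's imaging model.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def wall_echo_delays(standoff, thickness, eps_r):
    """Two-way delays of the front- and rear-wall echoes at normal incidence."""
    t_front = 2 * standoff / C                              # air path to the front face
    t_rear = t_front + 2 * thickness * np.sqrt(eps_r) / C   # extra path inside the wall
    return t_front, t_rear

t_f, t_r = wall_echo_delays(standoff=1.0, thickness=0.2, eps_r=6.0)
# The delay difference depends only on thickness and permittivity, which is
# why the two echoes carry enough information to estimate both parameters.
print((t_r - t_f) * 1e9, "ns")   # ~3.27 ns for 20 cm of eps_r = 6 wall
```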
2014, 36(12): 2986-2993.
doi: 10.3724/SP.J.1146.2013.01831
Abstract:
SAR imaging algorithms based on Compressed Sensing (CS) can achieve high-resolution imaging of sparse targets from data sampled below the Nyquist rate. However, the Single Measurement Vector (SMV) model used for range-profile reconstruction in existing algorithms is time-consuming and sensitive to noise. This paper proposes to recover the jointly sparse target signals, which share the same sparsity structure, using the Multiple Measurement Vectors (MMV) model, and analyzes the range-profile imaging performance theoretically and experimentally. A 2-D SAR imaging algorithm is then proposed in which range imaging is performed with the MMV model and azimuth imaging with the SMV model. The proposed algorithm outperforms the SMV-based CS algorithm in both running time and reconstruction precision. Processing of simulated data and measured radar data verifies its effectiveness.
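A minimal sketch of joint-sparse recovery under the MMV model, using Simultaneous Orthogonal Matching Pursuit (SOMP) as a stand-in MMV solver; the paper's own recovery algorithm may differ.

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: recover a row-sparse X from Y = Phi @ X (MMV model)."""
    m, n = Phi.shape
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the residual across all snapshots.
        scores = np.linalg.norm(Phi.conj().T @ residual, axis=1)
        support.append(int(np.argmax(scores)))
        Phi_s = Phi[:, support]
        # Joint least-squares fit on the current support.
        X_s, *_ = np.linalg.lstsq(Phi_s, Y, rcond=None)
        residual = Y - Phi_s @ X_s
    X = np.zeros((n, Y.shape[1]), dtype=Y.dtype)
    X[support, :] = X_s
    return X

# Toy example: 3 nonzero rows shared by 8 measurement vectors.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(32, 64)) / np.sqrt(32)
X_true = np.zeros((64, 8))
X_true[[5, 20, 41], :] = rng.normal(size=(3, 8))
Y = Phi @ X_true
print(np.allclose(somp(Phi, Y, 3), X_true, atol=1e-8))
```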
2014, 36(12): 2994-3000.
doi: 10.3724/SP.J.1146.2013.02031
Abstract:
The Mosaic mode is a hybrid of spotlight and ScanSAR that obtains SAR images with both large coverage and high resolution. In this paper, a new Mosaic mode is proposed: the imaged scene is divided into several sub-blocks along the range and azimuth directions, each block is imaged in sliding-spotlight mode, and the blocks are then mosaicked together. The short transmit sub-aperture significantly reduces the scanning angle, which eases system design and also reduces the range migration within each block. Azimuth multichannel reception helps improve the system Signal-to-Noise Ratio (SNR), and the Pulse Repetition Frequency (PRF) can be reduced through spatial sampling, which facilitates selecting a suitable PRF in the timing diagram.
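A small numeric illustration of the PRF relief provided by azimuth multichannel reception: with N along-track receive channels, each pulse yields N spatial samples, so the PRF can be lowered by roughly a factor of N. This is the textbook relation, assumed here rather than quoted from the paper.

```python
# Effective azimuth sampling with N along-track channels (standard relation).
doppler_bandwidth = 2400.0   # Hz, assumed antenna/velocity combination
n_channels = 4

prf_single = 1.2 * doppler_bandwidth   # single channel: PRF > B_d (with margin)
prf_multi = prf_single / n_channels    # N spatial samples per pulse
print(prf_single, prf_multi)           # 2880.0 Hz -> 720.0 Hz
```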
2014, 36(12): 3001-3007.
doi: 10.3724/SP.J.1146.2013.01861
Abstract:
This paper addresses multitarget localization for bistatic MIMO radar in the presence of Symmetric α-Stable (SαS) impulsive noise. Since second-order moments do not exist under SαS noise, which degrades the estimation performance of subspace-based algorithms, a preprocessing method is proposed that normalizes the received data by the maximum 2-norm of its rows. Theoretical analysis proves that the covariance matrix of the normalized data is finite. A sparse linear model is then constructed by vectorizing the covariance matrix, and the Covariance Matrix Smoothed L0 norm (CMSL0) method is proposed to estimate the target angles. Finally, a Fractional Lower-Order Moments (FLOM) based maximum-likelihood method is used to obtain the target locations. Simulation results show that, after the row-norm normalization, both the MUSIC and CMSL0 algorithms estimate the target angles effectively, and that CMSL0 achieves better estimation performance and stronger robustness against impulsive noise than MUSIC. In addition, unlike MUSIC, CMSL0 requires neither an estimate of the actual number of targets nor an interelement spacing within half a wavelength.
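A minimal sketch of the Smoothed-L0 (SL0) idea underlying CMSL0: approximate the l0 norm with a shrinking Gaussian surrogate, take gradient steps on it, and project back onto the measurement constraint. This is the generic single-vector SL0, shown for illustration only.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decay=0.7, mu=2.0, inner=3):
    """Generic Smoothed-L0: sparse x from y = A @ x via a shrinking Gaussian surrogate."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))
            x = x - mu * delta            # push small entries toward zero
            x = x - A_pinv @ (A @ x - y)  # project back onto {x : A x = y}
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 44]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = sl0(A, y)
print(np.max(np.abs(x_hat - x_true)))     # near zero on this easy instance
```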
2014, 36(12): 3008-3013.
doi: 10.3724/SP.J.1146.2013.01984
Abstract:
For passive radar, a maneuvering target causes range-cell migration and Doppler-cell migration, leading to a loss of detection performance. A signal model is established and a novel detection algorithm is proposed to solve this problem. First, the fast-time and slow-time domains are separated with an overlapping-segment method, and the Keystone transform is applied to correct the range-cell migration caused by the differential velocity. The signal is then segmented a second time in the slow-time domain. Finally, the range-cell and Doppler-cell migration induced by the differential acceleration are corrected with a segmented implementation of the Fourier transform to achieve long-time coherent integration. Experiments on simulated and real signals verify the effectiveness of the proposed algorithm.
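A minimal sketch of the Keystone transform step: in the range-frequency/slow-time domain, each range-frequency row is resampled onto the rescaled slow-time axis t = fc/(fc + fr) · τ, which removes the velocity-induced linear range walk. The simple linear interpolator (which clamps at the record edges) is an implementation choice for the sketch, not the paper's.

```python
import numpy as np

def keystone(data, fc, fs, prf):
    """Keystone transform of pulse-compressed data (fast time x slow time)."""
    n_r, n_a = data.shape
    D = np.fft.fft(data, axis=0)              # range-frequency / slow-time domain
    fr = np.fft.fftfreq(n_r, d=1.0 / fs)      # range frequencies (|fr| << fc assumed)
    tm = np.arange(n_a) / prf                 # slow-time axis
    out = np.empty_like(D)
    for i in range(n_r):
        # Evaluate each row at the rescaled slow times fc/(fc + fr) * tm.
        t_query = (fc / (fc + fr[i])) * tm
        out[i] = (np.interp(t_query, tm, D[i].real)
                  + 1j * np.interp(t_query, tm, D[i].imag))
    return np.fft.ifft(out, axis=0)

# Example call (shapes only): rd_corrected = keystone(rd, fc=1.5e9, fs=5e6, prf=250.0)
```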
2014, 36(12): 3014-3020.
doi: 10.3724/SP.J.1146.2013.02011
Abstract:
Because a uniform rectangular planar array requires an enormous number of elements when applied to skywave Over-The-Horizon Radar (OTHR), a sparse optimization approach for uniform 2-D arrays using an improved genetic algorithm is proposed. An optimization model for the sparse rectangular planar array is established from the perspective of beam shaping; the initial population of the genetic algorithm is determined from the elevation beam based on the Directions Of Arrival (DOA) of multi-mode propagation echoes; the fitness function is revised to avoid premature convergence and random walk; and both the crossover and mutation operators are improved to achieve precise control of the sparsity ratio. Simulation results indicate that the improved genetic algorithm provides precise control of the sparsity ratio together with good optimized performance. Finally, the paper analyzes the feasibility of applying 2-D arrays to OTHR engineering, identifies the application conditions and remaining technical challenges, and presents corresponding solutions.
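A minimal sketch of sparsity-preserving genetic operators of the kind the abstract describes: chromosomes are binary element on/off masks with a fixed number of active elements, mutation swaps an active and an inactive element, and crossover repairs its offspring back to the target count. The fitness function is a placeholder, not the paper's beam-shaping objective.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 64, 24            # total elements, active elements (sparsity ratio fixed)

def random_mask():
    mask = np.zeros(N, dtype=bool)
    mask[rng.choice(N, K, replace=False)] = True
    return mask

def mutate(mask):
    """Swap one active and one inactive element: the count of ones is preserved."""
    child = mask.copy()
    child[rng.choice(np.flatnonzero(child))] = False
    child[rng.choice(np.flatnonzero(~child))] = True
    return child

def crossover(a, b):
    """Uniform crossover followed by a repair step restoring exactly K ones."""
    child = np.where(rng.random(N) < 0.5, a, b)
    on = np.flatnonzero(child)
    if len(on) > K:      # too many active elements: switch some off
        child[rng.choice(on, len(on) - K, replace=False)] = False
    elif len(on) < K:    # too few: switch some on
        off = np.flatnonzero(~child)
        child[rng.choice(off, K - len(on), replace=False)] = True
    return child

def fitness(mask):
    # Placeholder objective; the paper builds its fitness from beam shaping.
    return -np.var(np.diff(np.flatnonzero(mask)))   # e.g. prefer even spacing

pop = [random_mask() for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 30:
        i, j = rng.choice(10, 2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    pop = parents + children
best = max(pop, key=fitness)
print(best.sum(), fitness(best))   # sparsity stays exactly K throughout
```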
2014, 36(12): 3021-3026.
doi: 10.3724/SP.J.1146.2013.01826
Abstract:
In antenna design and optimization, classical optimization methods often require hundreds or even thousands of trials over different parameter combinations, which makes them inefficient for multi-parameter, large-scale problems. In this paper, a fast approximate computation of the ElectroMagnetic (EM) response is realized through a Kriging model created by fitting simulation results to the corresponding structural parameters. The number of EM simulations required is reduced by Latin Hypercube Sampling for MultiDimensional Uniformity (LHS-MDU) at the initial stage and by a candidate-selection method in the subsequent optimization loops. To optimize the resonant frequency and impedance bandwidth, the feed position of a rectangular patch antenna and the dipole lengths of a dual-band monopole antenna are tuned with the proposed method; compared with genetic optimization, the numbers of EM simulations are reduced by 75% and 84%, respectively.
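A minimal sketch of the surrogate loop, assuming Kriging is realized as Gaussian-process regression (its usual statistical formulation) and using SciPy's plain Latin-Hypercube sampler in place of the paper's LHS-MDU variant; the EM solver is mocked by an analytic test function.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def em_simulation(x):
    """Mock EM response (stand-in for an expensive full-wave solver call)."""
    return np.sin(3 * x[:, 0]) + (x[:, 1] - 0.6) ** 2

# Initial designs from a Latin hypercube (two structural parameters in [0,1]^2).
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(12)
y = em_simulation(X)

gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
for _ in range(15):                       # optimization loop
    gp.fit(X, y)
    # Candidate selection: evaluate the cheap surrogate on many candidates
    # and run the expensive simulation only on the most promising one.
    cand = sampler.random(256)
    mu, sd = gp.predict(cand, return_std=True)
    best = cand[np.argmin(mu - sd)]       # simple lower-confidence-bound rule
    X = np.vstack([X, best])
    y = np.append(y, em_simulation(best[None, :]))

print(X[np.argmin(y)], y.min())           # best design after ~27 simulations
```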
2014, 36(12): 3027-3034.
doi: 10.3724/SP.J.1146.2014.00023
Abstract:
Reconfigurable cipher processing architectures are a recently proposed approach to secure information processing, but they suffer from low throughput and low utilization. To address this problem, this paper proposes the Stream-based Reconfigurable Clustered block Cipher Processing Array (S-RCCPA) architecture, which builds on the stream processor architecture. S-RCCPA incorporates coarse-grained reconfigurable function units, a hierarchical crossbar interconnection network and distributed key storage, and it supports combined static-dynamic reconfiguration and variable virtual pipeline partitioning. Experimental results show that classical block ciphers achieve speedups of 5.28 to 47.84 times when mapped onto the S-RCCPA.
2014, 36(12): 3035-3041.
doi: 10.3724/SP.J.1146.2013.02025
Abstract:
Embedded memories are susceptible to Single-Event Effects (SEE). A model is proposed to calculate the SEE failure rate of an embedded memory, taking into account the likelihood that a single-event upset or single-event transient becomes an error in different types of circuits. The model can also be used for the quantitative analysis of SEE mitigation techniques for a variety of memories. Experimental investigations are performed with heavy-ion accelerators on an experimental embedded programmable memory designed by the Institute of Electronics, Chinese Academy of Sciences. An average modeling error of 10.5% verifies the effectiveness of the proposed model.
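A minimal sketch of the kind of weighted failure-rate roll-up the abstract describes: each circuit type contributes its raw event rate scaled by the probability that an upset or transient actually becomes an error. All numbers below are made-up placeholders, not measured values.

```python
# SEE failure-rate roll-up: rate_i = flux * cross_section_i * count_i * P(event -> error)_i.
flux = 1.0e5                     # particles / (cm^2 * s), assumed environment

circuit_types = {
    #                cross-section (cm^2/bit)  bits      P(event becomes error)
    "sram_cell":     (2.0e-14,                 1 << 20,  0.8),   # SEU, mostly latched
    "config_latch":  (5.0e-14,                 4096,     1.0),   # SEU, always matters
    "comb_logic":    (1.0e-15,                 50000,    0.1),   # SET, usually masked
}

total = sum(flux * xs * n * p for xs, n, p in circuit_types.values())
print(f"estimated SEE failure rate: {total:.3e} errors/s")
```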
2014, 36(12): 3042-3045.
doi: 10.3724/SP.J.1146.2014.00008
Abstract:
This paper discusses the relationship between group delay and correlation delay, and analyzes the influence of dispersion on system calibration when a modulated signal passes through a dispersive narrow-band system. The envelope delay of a modulated signal arises from the distortion of its rising and falling edges as it passes through the system. Studies of several modulated signals, including rectangular-pulse, triangular-pulse, cosine-pulse and chirp modulation, show that the correlation delay differs from the group delay of the system at the carrier frequency. The correlation delay is approximately a weighted average of the group delay, with a weighting factor given by the product of the signal spectrum and the amplitude response of the system. When the group delay is used to calibrate the correlation delay, the more linear the phase response of the system, the higher the calibration accuracy.
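The abstract's central relation can be written out explicitly. With S(f) the signal spectrum, H(f) the system frequency response with phase φ(f), and τg(f) the group delay, the weighted-average statement reads as follows (a reconstruction from the abstract's wording, not a formula quoted from the paper):

```latex
% Correlation delay as a weighted average of the group delay:
\tau_g(f) = \frac{1}{2\pi}\,\frac{d\varphi(f)}{df}, \qquad
\tau_c \;\approx\; \frac{\displaystyle\int W(f)\,\tau_g(f)\,\mathrm{d}f}
                        {\displaystyle\int W(f)\,\mathrm{d}f}, \qquad
W(f) = |S(f)|\,|H(f)|
```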
2014, 36(12): 3046-3050.
doi: 10.3724/SP.J.1146.2014.00545
Abstract:
To improve the modeling accuracy of the Augmented Hammerstein (AH) model, this paper introduces lagging and leading envelope terms into the weakly nonlinear memory module of the AH model to simulate the lagging and leading envelope effects of a broadband RF Power Amplifier (RFPA). In this way, a Generalized Augmented Hammerstein (GAH) model is obtained. The modeling accuracy and computational complexity of the GAH, AH, memory polynomial, fractional-order memory polynomial and generalized memory polynomial models are compared experimentally. The results illustrate that the GAH model characterizes the memory effects of the RFPA well with low computational complexity.
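A minimal sketch of how leading and lagging envelope terms extend a Hammerstein-type model: the regression matrix gains columns x(n)·|x(n−m)|^(k−1) for negative as well as positive m, and the coefficients are identified by least squares. The orders, shifts and the toy amplifier below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gah_regressors(x, K=5, M=2):
    """Columns x(n) * |x(n-m)|**(k-1) for m in [-M, M] (lagging and leading envelopes)."""
    N = len(x)
    cols = []
    src = np.abs(x)
    for m in range(-M, M + 1):             # m > 0: lagging, m < 0: leading envelope
        env = np.zeros(N)
        if m >= 0:
            env[m:] = src[:N - m]
        else:
            env[:m] = src[-m:]
        for k in range(1, K + 1, 2):       # odd-order nonlinearities only
            cols.append(x * env ** (k - 1))
    return np.column_stack(cols)

# Identify coefficients from input/output records by least squares.
rng = np.random.default_rng(4)
x = rng.normal(size=2000) + 1j * rng.normal(size=2000)
y_meas = 1.0 * x - 0.05 * x * np.abs(x) ** 2   # toy PA with mild compression
Phi = gah_regressors(x)
coeffs, *_ = np.linalg.lstsq(Phi, y_meas, rcond=None)
print(np.linalg.norm(y_meas - Phi @ coeffs) / np.linalg.norm(y_meas))  # ~0
```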