2014 Vol. 36, No. 1
2014, 36(1): 1-7.
doi: 10.3724/SP.J.1146.2013.00503
Abstract:
In-network caching is one of the key aspects of Content Centric Networking (CCN) and has attracted wide attention recently. However, the ALWAYS caching scheme in CCN (caching everywhere on the delivery path) produces a great deal of redundancy, while under the Betw scheme the nodes with larger betweenness centrality suffer more frequent cache replacement, which decreases content availability. In this paper, a novel in-network caching scheme named BetwRep is proposed, based on a metric combining a node's betweenness centrality and replacement rate, to address the problem of where to cache along the delivery path. Simulation experiments based on ndnSIM demonstrate that the BetwRep caching scheme achieves a lower load on the source server and fewer average hops than the Betw and ALWAYS schemes.
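For readers who want to experiment with the idea, the sketch below illustrates one way a BetwRep-style metric could be computed: each on-path node is scored by its betweenness centrality penalized by its observed cache-replacement rate, and the best-scoring node is chosen as the cache location. The weighting factor `alpha`, the `replacement_rate` dictionary and the toy topology are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch (not the authors' code): choose one caching node on a CCN
# delivery path by trading off betweenness centrality against the node's
# cache-replacement rate, in the spirit of the BetwRep idea. The weighting
# and the replacement-rate estimate are illustrative assumptions.
import networkx as nx

def pick_cache_node(graph, delivery_path, replacement_rate, alpha=0.5):
    """Return the on-path node with the best combined score.

    graph            : networkx.Graph of the CCN topology
    delivery_path    : list of node ids from content source to requester
    replacement_rate : dict node -> observed cache replacements per unit time
    alpha            : weight between centrality (benefit) and churn (cost)
    """
    betw = nx.betweenness_centrality(graph, normalized=True)
    max_rep = max(replacement_rate.values()) or 1.0
    def score(v):
        # high betweenness is good, high replacement rate is bad
        return alpha * betw[v] - (1 - alpha) * replacement_rate[v] / max_rep
    # exclude the source itself (first entry of the path) from the candidates
    candidates = delivery_path[1:]
    return max(candidates, key=score)

if __name__ == "__main__":
    g = nx.path_graph(5)                       # toy 5-node line topology
    rates = {v: 0.1 * v for v in g.nodes}      # made-up replacement rates
    print(pick_cache_node(g, [0, 1, 2, 3, 4], rates))
```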
2014, 36(1): 8-14.
doi: 10.3724/SP.J.1146.2012.00933
Abstract:
A Cluster-based Data Aggregation Algorithm (CDAA) for Wireless Multimedia Sensor Networks (WMSNs) is proposed. CDAA effectively extends the network lifetime through a new clustering method and a data aggregation scheme built on it. The new clustering method is based on the orientation feature and the residual energy of each multimedia sensor node, and a data aggregation scheme is then applied on top of the resulting clusters. Simulation results show that, compared with the LEACH, PEGASIS, and AntSensNet protocols, CDAA decreases the number of data transmissions in WMSNs, balances energy consumption, and prolongs the network lifetime.
2014, 36(1): 15-21.
doi: 10.3724/SP.J.1146.2013.00427
Abstract:
Considering the revenue optimization problem of Service Providers (SP), a two-stage multi-virtual-machine resource provisioning mechanism is proposed in this paper. First, a capacity planning model is proposed and a particle swarm algorithm is used to determine the resource purchase set that maximizes the SP's profit. Then, a utility function measuring customer satisfaction is proposed to optimize the SP's profit in the long run. Simulation results show that the proposed method effectively improves the SP's profit and achieves better user satisfaction.
2014, 36(1): 22-26.
doi: 10.3724/SP.J.1146.2013.00466
Abstract:
The divisible E-cash system based on the standard model proposed by Izabachene et al. (2012) has some defects, such as low efficiency in its spending and deposit protocols. Using the Groth-Sahai (GS) proof system and an accumulator, this paper proposes a reverse binary tree algorithm and designs an efficient divisible E-cash system under the standard model. The new system computes the serial numbers of the leaf nodes while the binary tree is being constructed. A user can prove the correctness of the spending path directly, so the user's computational load in the spending protocol is constant. The new system achieves both weak exculpability and strong exculpability. Finally, the security proof of the system is given in the standard model, covering unforgeability, anonymity, identification of double spenders, and exculpability.
2014, 36(1): 27-33.
doi: 10.3724/SP.J.1146.2013.00392
Abstract:
Sparse representation and compressive measurement of local sensor information, performed by Analog-to-Information Converters (AIC) at each sensor in Cognitive Wireless Sensor Networks (C-WSN), are investigated, and a Gradient Projection Sparse Reconstruction (GPSR) scheme based on energy-efficient measurement is proposed. According to the spatial-temporal correlation structure of the non-stationary signals perceived by massive cognitive sensors in the Event Region (ER), these signals are mapped onto concatenated wavelet orthogonal-basis dictionaries for sparse representation. Adaptive measurement is implemented via a weighted energy subset function, which obtains proper observations in an energy-efficient way, and the corresponding measurement matrix is constructed by orthogonalizing the selected measurement vectors. Adaptive compressive reconstruction is performed at the sink via the GPSR algorithm and compared with the conventional Orthogonal Matching Pursuit (OMP) algorithm. Simulation results indicate that the reconstruction performance of the energy-efficient-measurement GPSR adaptive compression is superior to Gaussian random measurement when the compression ratio is below 0.2. With the same number of sensors, the proposed GPSR adaptive compression approach achieves a smaller reconstruction Mean Square Error (MSE) in the low Signal-to-Noise Ratio (SNR) region and requires fewer measurements than Gaussian random measurement, which effectively guarantees the sensors' energy balance.
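As a point of reference, the baseline the abstract compares against, Orthogonal Matching Pursuit (OMP), can be sketched in a few lines. The sizes, sparsity level `k` and the Gaussian sensing matrix below are illustrative assumptions; the paper's energy-efficient GPSR measurement design is not reproduced here.

```python
# Hedged sketch: plain Orthogonal Matching Pursuit (OMP), the baseline the
# abstract compares GPSR against. Matrix sizes and sparsity are toy values;
# this is not the paper's energy-efficient measurement design.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x using OMP."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = omp(A, A @ x_true, k)
    print("missed support indices:",
          np.setdiff1d(np.flatnonzero(x_true), np.flatnonzero(np.round(x_hat, 6))))
```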
2014, 36(1): 34-40.
doi: 10.3724/SP.J.1146.2013.00155
Abstract:
Since closed-form analysis of end-to-end delay in mobile ad hoc networks is limited, this paper develops an effective delay modeling scheme for networks with out-of-order delivery under the two-hop relay algorithm with a single copy, and presents a rigorous theoretical upper bound. First, for various random mobility models, it is proved that the inter-meeting time between mobile nodes can be expressed in a unified form. Furthermore, taking medium competition, traffic competition, and queuing delay into consideration, the critical time period of the delay is defined accurately and the queuing service is modeled. Finally, an exact upper bound on the end-to-end delay is derived in closed form. Simulation results validate that the theoretical delay matches the experimental data closely.
2014, 36(1): 41-47.
doi: 10.3724/SP.J.1146.2013.00214
Abstract:
A fault location mechanism based on lightpath status awareness with cluster allocation is proposed to address the issues of long fault location time and high service dependence. According to the constraints of network clustering, a two-layer network model is established using minimum dominating set theory. In addition, a new operation called matrix AND is defined in the proposed mechanism. When a link failure occurs, the cluster head and the sink node achieve fast and accurate fault location via the matrix AND operation. Simulations show that the fault location rate and fault location time are significantly improved, with lower complexity and resource cost.
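The flavor of the matrix-AND idea can be illustrated with a toy example: intersecting, via an elementwise logical AND, the link sets of all lightpaths that raise an alarm narrows the suspects down to the shared (faulty) link. The incidence matrix and alarm vector below are hypothetical, and the paper's exact operator and the division of work between cluster heads and the sink are not reproduced.

```python
# Hedged illustration: localize a failed link by AND-ing the rows of a binary
# lightpath-over-link incidence matrix that correspond to alarmed lightpaths.
# This only captures the general flavor of the "matrix AND" idea; the paper's
# exact operator and the cluster-head/sink split are not reproduced here.
import numpy as np

# rows = lightpaths, columns = links; True means the lightpath uses the link
incidence = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 1, 0, 1]], dtype=bool)

alarmed = np.array([True, True, True])   # all three lightpaths report failure

# candidate faulty links = links shared by every alarmed lightpath
candidates = np.logical_and.reduce(incidence[alarmed], axis=0)
print("suspected link indices:", np.flatnonzero(candidates))   # -> [1]
```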
2014, 36(1): 48-54.
doi: 10.3724/SP.J.1146.2013.00382
Abstract:
Based on group theory, the symmetries of mappings in Bit-Interleaved Coded Modulation systems, with or without Iterative Decoding (BICM(-ID)), are studied. First, the definitions of the symmetries of mappings for BICM(-ID) are given. The symmetries of the binary labels of a mapping are intrinsic properties of BICM(-ID) and are isomorphic to the symmetry group of a hypercube of order m. Based on these symmetries, an Improved Binary Switch Algorithm (IBSA) is proposed whose search space is a transversal of the symmetry group of mappings; consequently, its efficiency is improved compared with the traditional BSA. Finally, simulation results for a 16-ary two-dimensional constellation show that the search efficiency of IBSA is improved by about 4% over 40 000 trials. For 32-PSK, the results show that the search efficiency is improved by at least 3.5% and that the time required to evaluate a single mapping is shortened by a factor of about 4000.
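For context, a plain Binary Switching Algorithm can be sketched as a greedy pairwise label-swap search; the paper's IBSA additionally restricts this search to a transversal of the mapping symmetry group, which is omitted here. The cost function `toy_cost` and the 8-PSK constellation are placeholders, not the BICM-ID design metric.

```python
# Hedged sketch of a plain Binary Switching Algorithm (BSA): repeatedly swap
# the labels of two constellation points whenever the swap lowers a cost
# function. The symmetry-group reduction of the paper's IBSA and any
# BICM-ID-specific cost are not reproduced; `toy_cost` below is a placeholder.
import itertools
import numpy as np

def bsa(points, labels, cost):
    """Greedy pairwise label switching until no swap improves `cost`."""
    labels = labels.copy()
    best = cost(points, labels)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = cost(points, labels)
            if c < best:
                best, improved = c, True
            else:
                labels[i], labels[j] = labels[j], labels[i]   # undo the swap
    return labels, best

def toy_cost(points, labels):
    # placeholder cost: Hamming distance weighted by inverse Euclidean distance
    total = 0.0
    for i, j in itertools.combinations(range(len(labels)), 2):
        hamming = bin(labels[i] ^ labels[j]).count("1")
        total += hamming / np.abs(points[i] - points[j])
    return total

if __name__ == "__main__":
    pts = np.exp(2j * np.pi * np.arange(8) / 8)     # 8-PSK constellation
    lbls, c = bsa(pts, list(range(8)), toy_cost)
    print(lbls, c)
```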
2014, 36(1): 55-60.
doi: 10.3724/SP.J.1146.2013.00340
Abstract:
This paper studies the mathematical model of Multiband Joint Detection (MJD). The MJD problem is formulated as a constrained optimization problem whose goal is to maximize the aggregate opportunistic throughput of a cognitive radio system under constraints on the interference to the Primary User (PU). An Immune Clone Algorithm (ICA) is proposed to solve this problem. The performance of the proposed method is analyzed and compared with a Genetic Algorithm (GA) based technique through computer simulations. Experimental results show that, under the same interference to the PU, the proposed method offers a higher aggregate opportunistic throughput than GA, and they demonstrate the stability and effectiveness of the method.
2014, 36(1): 61-66.
doi: 10.3724/SP.J.1146.2013.00461
Abstract:
Covariance-matrix-based spectrum sensing suffers performance degradation when the antenna correlation is low. To overcome this drawback, a nonparametric cooperative spectrum sensing algorithm based on the Friedman test is proposed. Distributed sensors provide spatial diversity, so the signal powers observed at different sensors at the same time are not completely equal. Based on this feature, spectrum sensing is realized by comparing the signal powers among the sensors. Because a nonparametric approach is adopted, the proposed algorithm is robust to noise uncertainty and is suitable for noise of any statistical distribution. The theoretical expression of the decision threshold is also derived, which shows that the threshold is independent of the sample number; as a result, the threshold does not need to be reset when the sample number changes. Simulation results demonstrate the effectiveness of the algorithm.
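A minimal sketch of this kind of detection rule, assuming per-sensor power measurements arranged in time blocks and SciPy's standard Friedman test, is given below; the significance level and the blocking of the powers are illustrative choices rather than the paper's threshold derivation.

```python
# Hedged sketch: decide "primary user present" when the Friedman test rejects
# the hypothesis that all sensors observe the same power distribution over
# time blocks. The false-alarm level and the way powers are blocked here are
# illustrative choices, not the paper's exact construction.
import numpy as np
from scipy.stats import friedmanchisquare

def friedman_sense(powers, alpha=0.05):
    """powers: array of shape (n_sensors, n_blocks) of measured signal powers."""
    stat, p_value = friedmanchisquare(*powers)   # one sample per sensor
    return p_value < alpha                       # True -> signal detected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise_only = rng.exponential(1.0, size=(4, 30))
    # sensors closer to the primary user see systematically larger powers
    with_signal = noise_only + np.array([[2.0], [1.0], [0.5], [0.1]])
    print("H0 (noise):  detected =", friedman_sense(noise_only))
    print("H1 (signal): detected =", friedman_sense(with_signal))
```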
2014, 36(1): 67-73.
doi: 10.3724/SP.J.1146.2013.00046
Abstract:
In this paper, a proactive scheduling algorithm based on the associative interference of spatial subchannels is proposed for the MU-MIMO downlink (broadcast) channel. The strategy converts user scheduling into a subchannel selection problem. By comprehensively considering the transmission gain of each candidate subchannel, together with the mutual interference among candidate subchannels, already selected subchannels, and those potentially to be selected, a set of subchannels with low mutual interference is obtained. Simulation results show that, by choosing proper associative interference parameters, the proposed algorithm achieves a good trade-off between computational complexity and transmission performance and effectively improves the system sum rate.
2014, 36(1): 74-81.
doi: 10.3724/SP.J.1146.2013.00416
Abstract:
This paper studies dynamic S-boxes constructed from the combination of the inversion mapping and an affine transformation over a finite field. First, a definition of differential probability for dynamic S-boxes is provided, and necessary and sufficient conditions for impossible differentials of a dynamic S-box, together with the number of impossible differentials, are presented. Then, an upper bound on the maximum differential probability of a dynamic S-box is proved and its attainability is shown. Finally, the differential properties of dynamic S-boxes consisting of randomly chosen S-boxes are investigated by simulation experiments. The theoretical and experimental analyses show that a dynamic S-box has better differential properties than a single S-box.
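The differential quantities discussed here are straightforward to compute for any concrete S-box table. The sketch below builds the differential distribution table, reads off the maximum differential probability and counts impossible differentials; a random 4-bit permutation stands in for the inversion-plus-affine S-boxes analysed in the paper.

```python
# Hedged sketch: compute the Differential Distribution Table (DDT) and the
# maximum differential probability of an n-bit S-box. A random 4-bit
# permutation stands in for the inversion-plus-affine S-boxes the paper
# analyses; swap in any concrete S-box table to reproduce its DDT.
import numpy as np

def ddt(sbox):
    n = len(sbox)
    table = np.zeros((n, n), dtype=int)
    for dx in range(n):
        for x in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx, dy] += 1
    return table

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sbox = rng.permutation(16)                 # toy 4-bit bijective S-box
    t = ddt(sbox)
    # ignore the trivial dx = 0 row when reading off the maximum
    print("max differential probability:", t[1:].max() / 16)
    print("number of impossible differentials (dx != 0):", int((t[1:] == 0).sum()))
```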
2014, 36(1): 82-87.
doi: 10.3724/SP.J.1146.2013.00283
Abstract:
The SNOW family represents a main trend in stream cipher design. Because of the security vulnerabilities of the SNOW family, this paper selects the SNOW 2.0 algorithm, the most representative member of the family, as the research object. Three core components of SNOW 2.0, namely modular addition over multiple domains, the nonlinear S-box, and the Linear Feedback Shift Register (LFSR), are analyzed using statistical tests. Several improved algorithms are proposed based on an improved random S-box and an improved high-performance LFSR. The results effectively enhance the security and performance of the SNOW family.
2014, 36(1): 88-93.
doi: 10.3724/SP.J.1146.2013.00332
Abstract:
A novel chaotic circuit with a single bifurcation parameter is presented in this paper. The circuit is composed of a Wien-bridge oscillator and a piecewise-linear memristor. By adjusting the system parameter, the proposed circuit evolves from period-doubling into chaotic and hyper-chaotic behaviors. The dynamic properties of the new circuit are demonstrated via standard dynamics analysis methods such as equilibrium stability, Lyapunov exponent spectra, and bifurcation diagrams. An equivalent circuit realizing a three-segment piecewise-linear flux-controlled memristor is proposed and employed in the chaotic circuit. The PSpice simulation results of the resulting circuit are consistent with the theoretical analysis.
2014, 36(1): 94-100.
doi: 10.3724/SP.J.1146.2013.00342
Abstract:
This paper presents an efficient anonymous message authentication scheme for vehicular ad hoc networks. Using an identity-based signcryption technique, a vehicular user first authenticates with a region center to obtain group signature key material, where the group is managed by the region center. The user can then employ the key material to sign a message and broadcast it into the network. Other vehicular users can check the signature directly without revocation verification. In addition, the group signature in use supports batch verification, which significantly reduces the verification overhead. Compared with existing schemes, the proposed scheme achieves backward-secure revocation.
2014, 36(1): 101-107.
doi: 10.3724/SP.J.1146.2013.00193
Abstract:
To evaluate Piccolo's security against Power Analysis Attacks (PAA), a ciphertext attack model is proposed and Correlation Power Analysis (CPA) is conducted on a hardware implementation of the cipher with power traces measured on the Side-channel Attack Standard Evaluation BOard (SASEBO). Owing to the whitening keys in the final round of Piccolo, the attacked keys, including RK24L, RK24R, WK2, and WK3, are divided into four sub-keys, which are disclosed one by one. This approach reduces the 80-bit primary key search space from 2^80 to (2×2^20 + 2×2^12 + 2^16) and makes it possible to recover the primary key. The attack results show that 3000 measured power traces are enough to recover Piccolo's 80-bit primary key, which proves the feasibility of the attack model and Piccolo's vulnerability to CPA on its hardware implementation. Therefore, countermeasures should be applied to Piccolo's hardware implementation.
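A generic CPA loop, stripped of all Piccolo specifics, looks roughly as follows: for every guess of a small key chunk, a Hamming-weight leakage hypothesis is correlated with the measured traces, and the guess with the largest correlation wins. The 4-bit stand-in S-box, the simulated traces and the noise level are assumptions; the paper's last-round Piccolo model is not reproduced.

```python
# Hedged sketch of generic Correlation Power Analysis: correlate a
# Hamming-weight leakage hypothesis with (here: simulated) power traces for
# every guess of one key nibble. This is not the paper's Piccolo last-round
# attack; the S-box, key size and noise level are illustrative stand-ins.
import numpy as np

RNG = np.random.default_rng(3)
SBOX = RNG.permutation(16)                       # stand-in 4-bit S-box, not Piccolo's
HW = np.array([bin(v).count("1") for v in range(16)])

def cpa_recover_nibble(ciphertexts, traces):
    """Return the 4-bit key guess whose leakage hypothesis best fits the traces."""
    best_guess, best_corr = 0, -1.0
    for guess in range(16):
        hyp = HW[SBOX[ciphertexts ^ guess]]      # hypothetical Hamming-weight leakage
        # Pearson correlation of the hypothesis with every trace sample
        corr = np.abs(np.corrcoef(hyp, traces.T)[0, 1:]).max()
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess

if __name__ == "__main__":
    true_key = 0xA
    cts = RNG.integers(0, 16, size=3000)
    leak = HW[SBOX[cts ^ true_key]].astype(float)
    traces = leak[:, None] + RNG.normal(0.0, 2.0, size=(3000, 20))   # simulated traces
    print("recovered nibble:", hex(cpa_recover_nibble(cts, traces)))
```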
2014, 36(1): 108-113.
doi: 10.3724/SP.J.1146.2012.01491
Abstract:
Decision-tree-based script virus detection can make full use of the information in training samples. However, complex sample features and a large number of samples produce a large number of nodes, which results in high time complexity and, because of the pruning process, affects classification accuracy. To improve classification performance, a fusion algorithm using fuzzy pattern information is designed on top of the decision tree classification algorithm. Three important closeness-degree characteristics of the fuzzy pattern are taken as the three attributes of the sample information vector, and the decision tree is built from them through training. The stability and accuracy of the algorithm are verified by experiments. The experimental results show that the proposed algorithm increases the discrimination of attributes and reduces the number of decision tree branches.
2014, 36(1): 114-120.
doi: 10.3724/SP.J.1146.2013.00443
Abstract:
A new steganalytic scheme for color JPEG images is proposed based on the YCbCr color space. The features of the proposed scheme include intra-channel features and inter-channel features. The intra-channel features consist of Markov features, extended DCT features, and co-occurrence matrix features, and effectively capture the dependency among DCT coefficients in the Y channel. The inter-channel features are extracted from the difference planes between channels, which effectively capture the dependency between channels. In the classification process, the intra-channel and inter-channel features are used to train separate sub-classifiers, and by adjusting the proportion of the two kinds of sub-classifier, the final decision is synthesized by majority voting. Experimental results show that the proposed scheme is applicable to color JPEG images with low embedding rates and outperforms some state-of-the-art feature sets.
2014, 36(1): 121-127.
doi: 10.3724/SP.J.1146.2013.00303
Abstract:
A DNA sequence compression method based on a Collaborative Particle swarm optimization-based Memetic Algorithm (CPMA) is proposed. CPMA adopts Comprehensive Learning Particle Swarm Optimization (CLPSO) as the global search and a Dynamic Adjustive Chaotic Search Operator (DACSO) as the local search. CPMA searches for the globally optimal codebook based on the Extended Approximate Repeat Vector (EARV), with which the DNA sequence is compressed. Experimental results demonstrate that CPMA performs better than the other optimization algorithms and comes very close to the global optimum on most of the test functions adopted in this paper. The compression performance of the CPMA-based method is markedly better than that of many classical DNA sequence compression algorithms.
2014, 36(1): 128-134.
doi: 10.3724/SP.J.1146.2013.00297
Abstract:
Because the traditional quantized asynchronous randomized gossip consensus algorithm is based on a time model with uniform selection probabilities, the impact of the network topology on local information transfer is not fully considered. Therefore, an improved quantized asynchronous randomized gossip consensus algorithm with non-uniform selection probabilities is proposed in this paper. First, the asynchronous time model with non-uniform selection probabilities is formulated. Then the convergence of the algorithm with randomly quantized information is analyzed, and the impact of the quantization resolution and the second largest eigenvalue of the probabilistic weighted matrix on the convergence rate is discussed. Furthermore, this paper proposes a distributed optimization algorithm for the selection probabilities using the projected subgradient method. A numerical example indicates that the proposed algorithm improves the convergence rate by optimizing the selection probabilities of the agents.
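A minimal sketch of asynchronous randomized gossip with non-uniform wake-up probabilities is shown below; the quantization of exchanged values and the subgradient-based optimization of the probabilities are omitted, and the topology and probability vector are toy assumptions.

```python
# Hedged sketch: asynchronous randomized gossip averaging in which the node
# that wakes up is drawn with NON-uniform probabilities, as in the paper's
# time model. The quantization of exchanged values and the subgradient-based
# probability optimization are omitted; topology and probabilities are toys.
import numpy as np

def gossip(values, neighbors, wake_prob, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).copy()
    nodes = np.arange(len(x))
    for _ in range(iters):
        i = rng.choice(nodes, p=wake_prob)        # non-uniform wake-up
        j = rng.choice(neighbors[i])              # uniform neighbor pick
        avg = 0.5 * (x[i] + x[j])                 # pairwise averaging step
        x[i] = x[j] = avg
    return x

if __name__ == "__main__":
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # line graph
    p = np.array([0.1, 0.4, 0.4, 0.1])                   # favor central nodes
    out = gossip([1.0, 5.0, 9.0, 13.0], neighbors, p)
    print(out, "target:", np.mean([1.0, 5.0, 9.0, 13.0]))
```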
2014, 36(1): 135-139.
doi: 10.3724/SP.J.1146.2013.00756
Abstract:
To address the aperture loss that appears when localizing near-field sources, a novel near-field source localization algorithm is presented. The algorithm is based on a co-prime symmetric array, so the inter-sensor spacing need not be limited to a quarter wavelength. First, a special fourth-order cumulant matrix is constructed to estimate the azimuth angles of the sources with the MUSIC algorithm. Second, the range parameter of each source is obtained by searching the spectral peak for each estimated bearing angle. The algorithm transforms the two-dimensional localization problem into several one-dimensional searches, and the parameters are paired automatically. The co-prime symmetric structure extends the array aperture, and the algorithm improves the spatial resolution and parameter estimation performance. Simulation results verify the effectiveness of the proposed algorithm.
2014, 36(1): 140-146.
doi: 10.3724/SP.J.1146.2013.00422
Abstract:
Existing lattice generation algorithms provide no exact word end times because Weighted Finite State Transducer (WFST) decoding networks have no word-end nodes, and lattices without exact word end times cannot be used in keyword spotting systems. An algorithm is therefore proposed to generate standard speech recognition lattices within the WFST decoding framework. In this paper, the transformation relationship between WFST phone lattices and standard word lattices is first studied. Then a dynamic lexicon matching method is proposed to recover the word end times. Finally, a token passing method is proposed to transform the phone lattices into standard word lattices. A pruning strategy is also proposed to accelerate the token passing process, which reduces the transformation overhead to less than 3% additional computation time on top of one-pass decoding. The lattices generated by the proposed algorithm can be used not only for language model rescoring but also in keyword spotting systems. The experimental results show that the proposed algorithm is efficient enough for practical application and that its lattices contain more information than those generated by the comparative dynamic decoder; the algorithm performs well in both language model rescoring and keyword spotting.
2014, 36(1): 147-151.
doi: 10.3724/SP.J.1146.2013.00472
Abstract:
A multichannel method for joint frequency and Direction Of Arrival (DOA) estimation is proposed for wideband signals. Under sub-Nyquist spatio-temporal sampling, the method provides unambiguous frequency and DOA estimates. In the space-time two-dimensional unambiguous array presented in this paper, the dimension expanded by multiple sampling channels solves the temporal undersampling problem, while the dimension expanded by multiple snapshot channels overcomes the spatial undersampling problem. Using temporal filtering, the frequency and DOA estimates are obtained in cascade and paired automatically. In addition, the spatio-temporal cascaded method avoids 2-D spectral peak searching and high-dimensional eigenvalue decomposition, which reduces the computational complexity. Simulation results demonstrate the effectiveness of the method.
2014, 36(1): 152-157.
doi: 10.3724/SP.J.1146.2013.00476
Abstract:
To reduce the sampling rate in chaotic modulation, this paper proposes a multi-channel chaotic-modulation Analog-to-Information Conversion (AIC) structure. The proposed structure samples multiple state outputs of a parameter-modulated chaotic system to obtain compressed measurements, reducing the sampling rate of each channel while keeping the total sampling rate unchanged. Compared with the original chaotic modulation scheme, the new structure increases the number of sampling units but greatly enhances the reconstruction performance for high-sparsity signals. Based on chaotic impulsive synchronization theory, the reconstruction condition is developed and a method to select the sampled system states is supplied. The Lorenz system is taken as an example to study the reconstruction performance for frequency-sparse signals. Numerical simulations illustrate the effectiveness of the proposed AIC structure.
2014, 36(1): 158-163.
doi: 10.3724/SP.J.1146.2013.00463
Abstract:
A method for separating multiple signals from their superpositions recorded at several sensors is addressed. The method employs the polyspectra of the sensor data to extract the unknown signals and estimate the Finite Impulse Response (FIR) coupling systems via a linear-equation-based algorithm. The method is useful for multichannel blind deconvolution of colored input signals with (possibly) overlapping spectra. An extension of the main algorithm, applicable to the separation of non-stationary signals such as quasi-periodic signals, is also given. Moreover, the method is applied to electromagnetic radiation measurement. Simulation results verify the effectiveness of the algorithm.
2014, 36(1): 164-168.
doi: 10.3724/SP.J.1146.2013.00444
Abstract:
A novel sparse-representation-based method for parameter estimation of coherently distributed non-circular signals is proposed. The non-circular property is introduced into the distributed source model and fully exploited to combine the covariance and elliptic covariance matrices of the array output. By representing them on overcomplete dictionaries subject to a sparsity constraint and transforming DOA estimation into a sparse reconstruction problem, the method estimates the central DOA and the angular spread simultaneously. Simulation results show that the proposed method works for different non-circularity rates, with better performance at low SNR and better resolution, and that it can also effectively estimate the DOA when both circular and non-circular signals are present.
2014, 36(1): 169-174.
doi: 10.3724/SP.J.1146.2013.00023
Abstract:
Sparse random matrices have attractive properties, such as low storage requirements, low computational complexity in both encoding and recovery, and easy incremental updates, and they show great advantages in distributed applications. To ensure that sparse random matrices can serve as measurement matrices, their Restricted Isometry Property (RIP) is proved in this paper. First, it is shown that a measurement matrix satisfying the RIP is equivalent to the Gram matrices of its submatrices having all eigenvalues close to 1; then it is proved that sparse random matrices satisfy the RIP with high probability provided the number of measurements satisfies certain conditions. Simulation results show that sparse random matrices guarantee accurate reconstruction of the original signal while greatly reducing the measurement and reconstruction time.
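To get a feel for the claim, one can build such a column-sparse random matrix and empirically observe how tightly ||Ax||/||x|| concentrates around 1 over random sparse vectors. The construction below (d nonzero ±1/√d entries per column) and the test sizes are illustrative assumptions, not the matrices or bounds used in the proof.

```python
# Hedged sketch: build a column-sparse random measurement matrix (d nonzero
# +/- 1/sqrt(d) entries per column) and empirically check how far ||Ax||/||x||
# stays from 1 over random k-sparse vectors -- an illustration of the
# near-isometry the RIP formalizes, not the paper's proof.
import numpy as np

def sparse_random_matrix(m, n, d, rng):
    A = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, col] = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    m, n, d, k = 128, 512, 8, 10
    A = sparse_random_matrix(m, n, d, rng)
    ratios = []
    for _ in range(2000):
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
    print("min/max of ||Ax||/||x|| over k-sparse x:",
          round(min(ratios), 3), round(max(ratios), 3))
```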
2014, 36(1): 175-180.
doi: 10.3724/SP.J.1146.2013.00490
Abstract:
Keystone transform can be employed to eliminate the effects of linear range migration through resolution cells of moving targets during the coherent integration time of Pulse Doppler (PD) radar. However, with Doppler ambiguity, Half-Blind-Velocity Effect (HBVE) occurs when the conventional implementation method of keystone transform is applied. The causes of HBVE are analyzed, and the existing elimination methods of HBVE are introduced. To decrease the computation cost, a method to suppress HBVE is firstly presented, on the basis of which a new method of HBVE elimination is put forward. The proposed method can remove the linear range migration of moving targets with all possible velocities within a desired ambiguous Doppler interval, and its computation cost is reduced by almost 50% compared to the existing one. Theoretical analysis and simulation results validate the effectiveness of the presented methods.
2014, 36(1): 181-186.
doi: 10.3724/SP.J.1146.2013.00320
Abstract:
Prior information can be used to improve the detection performance of knowledge-aided detectors, but that performance may be degraded by mismatches between the prior information and the current clutter environment. In this paper, a knowledge-aided detector in compound Gaussian clutter is considered, where the inverse Gamma distribution is used as the prior distribution of the clutter texture component, and the detection performance of this detector is analyzed under different clutter texture model parameters. First, the false alarm rate and the detection probability for a Swerling I target are given under mismatched prior-information parameters. Second, the impact of the clutter texture distribution parameters on the detection performance is analyzed for given prior-information parameters. Theoretical analysis shows that when the distribution parameters of the clutter texture component lie in a certain region, the detection performance can be better than that obtained when the prior information matches the clutter environment. Computer simulations validate this conclusion.
2014, 36(1): 187-193.
doi: 10.3724/SP.J.1146.2012.01597
Abstract:
For ISAR imaging, the range and cross-range resolutions are constrained by the bandwidth of the transmitted signal and the Coherent Processing Interval (CPI). In this paper, a novel Two-Dimensional (2D) joint super-resolution ISAR imaging algorithm is proposed based on Compressive Sensing (CS) theory. The ISAR observation signal model is established and the 2D super-resolution dictionary is formed. By exploiting the sparse prior information of the ISAR image, 2D super-resolution imaging is mathematically converted into an l1-norm optimization, and the super-resolution ISAR image is obtained accurately via a fast optimization algorithm. In the proposed algorithm, the 2D coupling information of the echo is effectively utilized through the joint processing of the range and azimuth dimensions. Besides, the efficiency of the algorithm is improved by using the Conjugate Gradient (CG) algorithm, the Fast Fourier Transform (FFT), and Hadamard multiplication operations. Simulation and real-data experiments verify the effectiveness of the proposed algorithm.
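The l1-norm optimization at the core of such CS imaging can be illustrated with a generic (1-D) ISTA solver; the paper's 2D ISAR dictionary and its CG/FFT/Hadamard acceleration are not reproduced, and the matrix, regularization weight and sizes below are illustrative.

```python
# Hedged sketch: generic ISTA solver for min_x 0.5*||y - A x||^2 + lam*||x||_1,
# the kind of l1-regularized problem the abstract reduces imaging to. The 2-D
# ISAR dictionary and the CG/FFT/Hadamard acceleration of the paper are not
# reproduced; A, lam and the sizes below are illustrative.
import numpy as np

def ista(A, y, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    A = rng.standard_normal((60, 200)) / np.sqrt(60)
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [2.0, -1.5, 1.0]
    y = A @ x_true + 0.01 * rng.standard_normal(60)
    x_hat = ista(A, y, lam=0.02)
    print("largest recovered indices:", np.argsort(np.abs(x_hat))[-3:])
```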
2014, 36(1): 194-201.
doi: 10.3724/SP.J.1146.2013.00264
Abstract:
Based on an electromagnetic scattering model, a deception jamming algorithm against Inverse Synthetic Aperture Radar (ISAR) for aerial targets is proposed. The simulated data of the electromagnetic model are used to modulate the ISAR echoes, so that scattering characteristics of the target, such as shadowing and multiple scattering, as well as motion characteristics, such as translation and attitude, can be reproduced. These ensure the fidelity of the false target. The proposed method requires relatively little computation and is capable of real-time operation. Simulation results verify the effectiveness of the algorithm, and its computational complexity is also analyzed.
2014, 36(1): 202-208.
doi: 10.3724/SP.J.1146.2012.01699
Abstract:
The eigenvector method for maximum-likelihood phase error estimation achieves ideal estimation performance by using the eigenvector corresponding to the largest eigenvalue. Although the method is accurate and robust, it requires an eigendecomposition of the sample covariance matrix, which is computationally expensive and limits real-time applications. In this paper, a Weighted Maximum Norm Method (WMNM) for phase error estimation is proposed. The eigenvector of the maximum eigenvalue is obtained directly by solving an L2-norm maximization problem, which avoids the eigendecomposition of the sample covariance matrix and greatly reduces the computational cost. By assigning different weights to each range bin, the contribution of range cells with high SNR is enhanced. Experimental results on measured SAR and Inverse SAR (ISAR) data verify the validity of the proposed algorithm.
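The idea of extracting the dominant eigenvector without a full eigendecomposition can be illustrated with plain power iteration on a (weighted) sample covariance matrix, as sketched below; the per-range-bin weights, the toy data model and the update rule are assumptions and do not reproduce the exact WMNM derivation.

```python
# Hedged sketch: obtain the dominant eigenvector of a (weighted) sample
# covariance matrix by power iteration instead of a full eigendecomposition --
# the same "largest-eigenvector without EVD" idea the paper exploits. The
# SNR-based range-bin weights and the toy data model are illustrative.
import numpy as np

def dominant_eigvec(data, weights, iters=50):
    """data: (range_bins, pulses) complex array; weights: one weight per range bin."""
    # weighted sample covariance across pulses (no explicit eigendecomposition)
    R = (data.conj().T * weights) @ data
    v = np.ones(R.shape[0], dtype=complex)
    for _ in range(iters):
        v = R @ v
        v /= np.linalg.norm(v)          # power iteration step
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    pulses, bins = 32, 200
    phase_err = np.exp(1j * rng.uniform(-1.0, 1.0, pulses))      # unknown phase error
    refl = rng.standard_normal(bins) + 1j * rng.standard_normal(bins)
    noise = 0.1 * (rng.standard_normal((bins, pulses))
                   + 1j * rng.standard_normal((bins, pulses)))
    data = refl[:, None] * phase_err[None, :] + noise            # toy range profiles
    v = dominant_eigvec(data, np.ones(bins) / bins)              # flat weights here
    # the dominant eigenvector carries the conjugate phase error up to a global phase
    print("estimated:", np.round(np.angle(np.conj(v[:4]) * v[0]), 3))
    print("true:     ", np.round(np.angle(phase_err[:4] * np.conj(phase_err[0])), 3))
```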
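The property exploited above, that the dominant eigenvector can be found without a full eigen-decomposition, can be illustrated with a generic weighted power iteration. This is a sketch of that general idea only, not the authors' norm-maximization derivation; the data layout and the weighting choice are assumptions.

```python
# Generic sketch (not the WMNM algorithm itself): dominant eigenvector of a
# weighted sample covariance matrix obtained by power iteration, avoiding a
# full eigen-decomposition.
import numpy as np

def dominant_phase_error(range_profiles, weights, n_iter=30):
    """range_profiles: (num_range_bins, num_pulses) complex data;
       weights: per-range-bin weights, e.g. larger for high-SNR bins (assumption)."""
    X = range_profiles * np.sqrt(weights)[:, None]   # weight each range bin
    v = np.ones(X.shape[1], dtype=complex)
    for _ in range(n_iter):
        v = X.conj().T @ (X @ v)                     # apply R = X^H X without forming it
        v /= np.linalg.norm(v)
    # The phase of the dominant eigenvector serves as the phase-error estimate
    return np.angle(v * np.conj(v[0]))               # referenced to the first pulse
```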
2014, 36(1): 209-214.
doi: 10.3724/SP.J.1146.2013.00384
Abstract:
Passive radar faces a significant multipath clutter problem, and the Batch version of the Extensive Cancellation Algorithm (ECA-B) is an efficient method for clutter mitigation. However, the clutter and target echoes whose delays fall inside the cancelled range area are modulated by the block batch processing of the algorithm, which results in clutter residues and the emergence of false targets. This paper analyzes the modulation mechanism of ECA-B and proposes a corresponding compensation method. The basic principle is to compensate the clutter modulation through fine estimation of the carrier frequency offset, and then either to avoid the target modulation by selecting the block number properly or to compensate it by estimating the zero-frequency component of each block signal in the modulated range cells. Simulation and real-life data results confirm the correctness of the analysis of the modulation mechanism and the validity of the proposed method, which improves the algorithm's performance.
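For context, a minimal batch-processing (ECA-B style) clutter canceller is sketched below: the surveillance signal is split into batches, and within each batch the least-squares projection onto delayed copies of the reference is removed. Array names, batch and delay counts are illustrative, circular delays are used for brevity, and the compensation steps proposed in the paper are not reproduced.

```python
# Minimal ECA-B style sketch: batch-wise least-squares subtraction of delayed
# reference copies from the surveillance channel.
import numpy as np

def eca_b(surv, ref, num_delays=50, num_batches=10):
    n = len(surv) // num_batches
    out = surv.copy()                                  # tail samples past the last full batch stay untouched
    for b in range(num_batches):
        s = surv[b*n:(b+1)*n]
        r = ref[b*n:(b+1)*n]
        # Clutter subspace: reference delayed by 0..num_delays-1 samples
        # (circular delays via np.roll, for brevity only)
        C = np.column_stack([np.roll(r, d) for d in range(num_delays)])
        coef, *_ = np.linalg.lstsq(C, s, rcond=None)   # least-squares clutter fit
        out[b*n:(b+1)*n] = s - C @ coef                # subtract the fitted clutter
    return out
```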
2014, 36(1): 215-219.
doi: 10.3724/SP.J.1146.2013.00401
Abstract:
A 3-D space-time nonadaptive pre-filtering approach for airborne radar is proposed in this paper. It makes full use of predictable radar system and platform parameters, such as the aircraft crab angle, the Pulse Repetition Frequency (PRF), and the inter-element spacing, and the 3-D space-time nonadaptive filter is designed simply by analyzing the structure of the received samples of two adjacent pulses. Most of the clutter is filtered out in advance, so that the small amount of residual clutter can be sufficiently suppressed by reduced-dimension adaptive processing. Simulation results confirm that the proposed pre-filtering approach is compatible with aircraft crab, reduces the clutter Degrees of Freedom (DoF), and improves moving target detection performance over the whole clutter region, especially in the mainlobe region.
2014, 36(1): 220-227.
doi: 10.3724/SP.J.1146.2013.00228
Abstract:
In order to preserve the edge characteristics of SAR images and improve the suppression of multiplicative speckle noise, a new despeckling algorithm based on iterative direction filtering is proposed. First, the ratio Edge Strength Map (ESM) and direction information are estimated with Gaussian-Gamma-shaped bi-windows, and an anisotropic support domain aligned with the ESM direction is obtained from them to adaptively control the Anisotropic Gaussian Kernel (AGK) within a rectangular local window. Second, a decay factor is computed by combining several local statistics, and the negative-exponential weighting coefficients produced from this factor adapt to the regional distribution characteristics of the SAR image. Third, the direction filter is formed by combining the negative-exponential weighting coefficients with the local windows of anisotropic support domain and different directions. Finally, speckle suppression with edge preservation is achieved by iterating the direction filtering. Experimental results show that, compared with most existing despeckling algorithms, the proposed algorithm achieves better performance in both speckle suppression and edge preservation.
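One building block named in the abstract, the Anisotropic Gaussian Kernel aligned with a local edge direction, can be illustrated as follows; the kernel size, sigmas, and orientation are assumptions, and the full iterative despeckling pipeline is not reproduced.

```python
# Sketch of an Anisotropic Gaussian Kernel (AGK) whose support is stretched
# along a local edge direction theta and narrowed across it.
import numpy as np

def anisotropic_gaussian_kernel(size=11, theta=0.0, sigma_along=4.0, sigma_across=1.0):
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1].astype(float)
    # Rotate coordinates so u runs along the edge direction and v across it
    u =  x*np.cos(theta) + y*np.sin(theta)
    v = -x*np.sin(theta) + y*np.cos(theta)
    k = np.exp(-(u**2/(2*sigma_along**2) + v**2/(2*sigma_across**2)))
    return k / k.sum()                                 # normalized weights
```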
2014, 36(1): 228-233.
doi: 10.3724/SP.J.1146.2013.00486
Abstract:
Wideband equiangular spiral antennas are typically applied to detection and tracking in passive radar seekers. In order to reduce the antenna height, a low-profile slot equiangular spiral antenna fed by a microstrip-to-slotline transition and backed by a cavity is proposed. The microstrip-to-slotline balun transforms the unbalanced current distribution from the coaxial line into a balanced distribution that feeds the slot equiangular spiral antenna. Measured results indicate that a wide Voltage Standing Wave Ratio (VSWR) bandwidth (1:9), good radiation patterns and good circular polarization are realized. The height of the back cavity required for unidirectional radiation is 0.05λ, where λ is the wavelength at the lowest operating frequency. The lower edge of the working band is extended by filling the back cavity with a ring-shaped rectangular absorber. With the absorber-loaded cavity, a 1:6.4 bandwidth (VSWR below 2), antenna gain better than 4 dB, and good circular polarization and radiation patterns are achieved. The planar feed structure and shallow cavity yield a low-profile slot equiangular spiral antenna, and the measured results verify the effectiveness of the microstrip-to-slotline balun used to feed it.
2014, 36(1): 234-240.
doi: 10.3724/SP.J.1146.2013.00449
Abstract:
In order to effectively reduce the hardware and timing overhead of circuit soft-error tolerance, a hybrid hardening technique based on timing priority is proposed in this paper. A two-stage hardening strategy combining flip-flop replacement and gate duplication is adopted. In the first stage, based on the timing-priority principle, high-reliability temporal-redundancy flip-flops are used to harden paths with timing slack. In the second stage, the duplicated-gate method is applied to timing-sensitive paths. Compared with traditional techniques, the proposed technique not only masks Single Event Transients (SET) and protects against Single Event Upsets (SEU), but also reduces the area overhead. Experimental results on the ISCAS89 benchmark circuits in a 45 nm Nangate process show that the average circuit soft error rate is reduced by more than 99% with an average area overhead of 36.84%.
2014, 36(1): 241-245.
doi: 10.3724/SP.J.1146.2013.00885
Abstract:
To support frequency-selective scheduling in the uplink, the Long Term Evolution (LTE) system defines the Sounding Reference Signal (SRS) for channel quality estimation. This paper focuses on Signal-to-Noise Ratio (SNR) estimation of the SRS. To overcome the shortcomings of Boumard's method and the traditional DFT method, an improved DFT-based estimation method is proposed. By correcting the noise estimation interval in the time domain, the method reduces the influence of the useful signal's energy leakage at high SNR, and thus a more accurate SNR estimate is obtained. Simulation results show that the proposed method outperforms both Boumard's method and the traditional DFT method, with an average improvement of over 6 dB in the high-SNR region.
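A generic DFT-based SNR estimator of the kind the abstract improves upon is sketched below: the known reference sequence is removed, the channel energy is read from the leading IDFT taps, and a later tap interval is treated as noise. The function and parameter names, the constant-modulus reference assumption, and the particular noise-interval choice are illustrative; the paper's corrected interval is not reproduced.

```python
# Generic DFT-based SNR estimation sketch for a known frequency-domain
# reference signal (not the paper's corrected method).
import numpy as np

def dft_snr_estimate(rx_subcarriers, ref_sequence, cp_len=16, guard=8):
    """rx_subcarriers, ref_sequence: frequency-domain samples on the sounded band
       (assumed names); cp_len approximates the channel delay spread."""
    h_freq = rx_subcarriers * np.conj(ref_sequence)        # LS channel estimate (|ref| = 1 assumed)
    h_time = np.fft.ifft(h_freq)
    n = len(h_time)
    sig_energy = np.sum(np.abs(h_time[:cp_len])**2)        # channel taps
    noise_taps = np.abs(h_time[cp_len+guard:n-guard])**2   # taps taken as noise only
    noise_per_tap = np.mean(noise_taps)
    noise_power = noise_per_tap * n                        # per-subcarrier noise power
    snr = (sig_energy - cp_len*noise_per_tap) / max(noise_power, 1e-12)
    return 10*np.log10(max(snr, 1e-12))                    # SNR in dB
```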
2014, 36(1): 246-249.
doi: 10.3724/SP.J.1146.2013.00323
Abstract:
This paper proposes a low-complexity Peak-to-Average Power Ratio (PAPR) reduction method for Orthogonal Frequency Division Multiplexing (OFDM) systems based on the Fractional Fourier Transform (FrFT). The method reduces PAPR effectively by periodically extending a random phase sequence to the length of the FrFT-OFDM symbol, weighting it with phase factors, and multiplying it with the transmitted data vector. Only one Inverse Discrete Fractional Fourier Transform (IDFrFT) operation is required, and the signal candidates are computed in the time domain via a weighted summation of chirp-circularly-shifted FrFT-OFDM symbols. Simulation results show that, when all methods use 32 candidates, the proposed method achieves almost the same performance as SeLected Mapping (SLM) and even outperforms the Partial Transmit Sequence (PTS) method. More importantly, the proposed method has lower computational complexity than both SLM and PTS.
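To make the candidate-selection idea concrete, here is a generic SLM-style sketch for ordinary IFFT-based OFDM; the FrFT-domain candidate generation and the complexity savings described in the paper are not reproduced, and all names and parameters are assumptions.

```python
# Generic SLM-style PAPR reduction sketch for IFFT-based OFDM: generate several
# phase-rotated candidates and transmit the one with the lowest PAPR.
import numpy as np

def papr_db(x):
    p = np.abs(x)**2
    return 10*np.log10(p.max() / p.mean())

def slm_select(data_syms, num_candidates=32, rng=np.random.default_rng(0)):
    """data_syms: one block of frequency-domain symbols (assumed name)."""
    n = len(data_syms)
    best_x, best_papr, best_phase = None, np.inf, None
    for _ in range(num_candidates):
        phases = np.exp(1j*2*np.pi*rng.integers(0, 4, n)/4)   # random QPSK phase sequence
        x = np.fft.ifft(data_syms * phases)                   # time-domain candidate
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr, best_phase = x, p, phases
    return best_x, best_papr, best_phase   # side info (best_phase) must reach the receiver
```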
2014, 36(1): 250-254.
doi: 10.3724/SP.J.1146.2013.00348
Abstract:
Since the traditional frequency-sweeping based Bio-Impedance Spectroscopy (BIS) measurement method cannot accurately reflect the true impedance of a living organism, the development of a fast multi-frequency simultaneous BIS measurement method is of practical significance. A BIS Multi-Frequency Synchronized (MFS) measurement method is proposed based on MFS signal excitation and a windowed FFT algorithm. First, the synthesis of a seven-frequency synchronized excitation signal and its spectral characteristics are introduced. Then, a Nuttall self-convolution window function with excellent sidelobe performance is put forward, and an improved interpolated-FFT harmonic analysis algorithm based on this window is constructed. The study theoretically validates the feasibility of the proposed BIS multi-frequency synchronized fast measurement method and provides technical support for the development of practical BIS fast measurement systems.
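A hedged sketch of the multi-frequency measurement idea follows: several frequencies are excited simultaneously and each component's amplitude and phase are read from a windowed FFT. A standard 4-term Nuttall window is used here in place of the paper's Nuttall self-convolution window, no interpolation refinement is applied, and the sampling rate, record length, and frequency set are assumptions.

```python
# Sketch of multi-frequency synchronized excitation plus windowed-FFT recovery
# of each component's amplitude and phase (standard Nuttall window, exact-bin
# frequencies, no interpolation).
import numpy as np

def nuttall(n):
    a = [0.355768, 0.487396, 0.144232, 0.012604]       # 4-term Nuttall coefficients
    t = np.arange(n)
    return sum(((-1)**k)*a[k]*np.cos(2*np.pi*k*t/(n-1)) for k in range(4))

def measure_components(signal, fs, freqs):
    n = len(signal)
    w = nuttall(n)
    spec = np.fft.rfft(signal * w)
    gain = w.sum() / 2                                  # single-sided window gain
    results = {}
    for f in freqs:
        k = int(round(f * n / fs))                      # nearest FFT bin
        results[f] = (np.abs(spec[k]) / gain, np.angle(spec[k]))
    return results

# Example: a seven-frequency excitation from 1 kHz to 64 kHz in octave steps,
# with the record length chosen so every frequency falls on an FFT bin.
fs, n = 1_000_000, 100_000
t = np.arange(n) / fs
freqs = [1000*2**i for i in range(7)]
sig = sum(np.cos(2*np.pi*f*t + 0.1*i) for i, f in enumerate(freqs))
print(measure_components(sig, fs, freqs))
```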