2014 Vol. 36, No. 9
2014, 36(9): 2033-2040.
doi: 10.3724/SP.J.1146.2013.01562
Abstract:
A Generalized Discriminant Analysis (GDA) based method is proposed to find the optimal chroma-like channel for image splicing detection. The channel design is modeled as an optimization problem whose objective function is the discriminative power of GDA, subject to constraints on the coefficients of the chroma-like channel. The problem is solved by grid search combined with the gradient ascent algorithm. Experiments on the Columbia Image Splicing Detection Evaluation Dataset show that four mainstream detection methods achieve higher identification rates in the optimally designed color channel than in the existing color channels, which verifies the versatility and effectiveness of the proposed method.
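The channel-design pipeline the abstract describes (a coarse grid search over channel coefficients followed by gradient ascent on a discriminative objective) can be sketched on synthetic data. The Fisher-style score, the toy RGB samples, and the step size below are illustrative assumptions, not the paper's exact GDA criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: RGB features for "authentic" and "spliced" classes
X0 = rng.normal([0.5, 0.4, 0.3], 0.05, size=(200, 3))
X1 = rng.normal([0.5, 0.3, 0.4], 0.05, size=(200, 3))

def fisher_score(w):
    """Discriminative power of the 1-D channel c = w . [R, G, B] (Fisher ratio)."""
    c0, c1 = X0 @ w, X1 @ w
    return (c0.mean() - c1.mean()) ** 2 / (c0.var() + c1.var() + 1e-12)

def num_grad(f, w, h=1e-5):
    """Central-difference numerical gradient."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = h
        g[i] = (f(w + e) - f(w - e)) / (2 * h)
    return g

# Coarse grid search over coefficient directions, then gradient ascent refinement
best = max((np.array([a, b, c], dtype=float)
            for a in (-1, 0, 1) for b in (-1, 0, 1) for c in (-1, 0, 1)
            if (a, b, c) != (0, 0, 0)),
           key=lambda w: fisher_score(w / np.linalg.norm(w)))
w = best / np.linalg.norm(best)
for _ in range(200):
    w = w + 0.05 * num_grad(fisher_score, w)
    w = w / np.linalg.norm(w)   # keep the channel coefficients normalized

print(fisher_score(w))
```

The unit-norm constraint stands in for the paper's coefficient constraint; the grid seeds the ascent so it does not start in a poor basin.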
2014, 36(9): 2041-2046.
doi: 10.3724/SP.J.1146.2013.01598
Abstract:
Symmetry detection plays an important role in image analysis and pattern recognition. A new image symmetry detection method is proposed based on phase symmetry and Principal Component Analysis (PCA). Firstly, phase symmetry is computed at different scales and orientations, and the phase symmetry values of different scales are merged for each orientation. Then the main feature across orientations is extracted using PCA. Finally, the symmetry detection result is obtained by non-maximal suppression and adaptive hysteresis thresholding. Experiments show that the proposed method can be applied directly to original images, without segmentation or any other preprocessing, and that it is insensitive to rotation, brightness, and contrast. It can also detect mirror symmetry, rotational symmetry, and curve symmetry simultaneously, for both bright and dark objects.
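The PCA step (extracting the main feature from per-orientation symmetry maps) can be sketched as follows; the random maps, map sizes, and the simplified row-wise non-maximal suppression are illustrative assumptions, not the paper's phase-symmetry computation:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n_orient = 32, 32, 6
# Hypothetical per-orientation symmetry maps (scales already merged per orientation)
maps = rng.random((n_orient, h, w))

X = maps.reshape(n_orient, -1).T       # pixels x orientations
Xc = X - X.mean(axis=0)                # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = (Xc @ Vt[0]).reshape(h, w)       # first principal component: main symmetry feature

def nonmax_suppress_rows(img):
    """Keep only row-wise local maxima (a 1-D simplification of non-maximal suppression)."""
    out = np.zeros_like(img)
    mid = img[:, 1:-1]
    keep = (mid >= img[:, :-2]) & (mid >= img[:, 2:])
    out[:, 1:-1] = np.where(keep, mid, 0)
    return out

edges = nonmax_suppress_rows(np.abs(pc1))
print(edges.shape)
```

In the full method the suppression would follow the local symmetry orientation and be followed by hysteresis thresholding.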
2014, 36(9): 2047-2052.
doi: 10.3724/SP.J.1146.2013.01763
Abstract:
Color-based moving object detection performs poorly when illumination changes or shadows exist. Depth-based moving object detection suffers from the high level of depth-data noise at object boundaries, and it fails when foreground objects move close to the background. For these reasons, a novel approach that builds color and depth classifiers for each pixel is presented, making full use of the color information obtained by a CCD camera and the depth information obtained by a Time-Of-Flight (TOF) camera. To achieve effective detection, different weights are assigned adaptively to each classifier output based on the foreground detections in previous frames and the depth feature. Multiple video sequences are captured to verify the proposed method, and the experimental results show that the proposed approach effectively overcomes the limitations of purely color-based or depth-based detection.
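The adaptive weighted fusion of the two per-pixel classifiers can be sketched as below. The probability maps, the agreement-based weight update, and the learning rate are illustrative assumptions, not the paper's exact weighting rule:

```python
import numpy as np

# Hypothetical per-pixel foreground probabilities from the two classifiers
p_color = np.array([[0.9, 0.2], [0.1, 0.8]])
p_depth = np.array([[0.4, 0.3], [0.2, 0.9]])

def fuse(p_c, p_d, w_c, w_d):
    """Per-pixel weighted combination of the color and depth classifier outputs."""
    return (w_c * p_c + w_d * p_d) / (w_c + w_d)

def update_weight(w, agreement, lr=0.2):
    """Move a cue's weight toward its recent agreement rate with the final detection."""
    return (1 - lr) * w + lr * agreement

w_c, w_d = 0.5, 0.5
fg = fuse(p_color, p_depth, w_c, w_d) > 0.5        # fused foreground mask
# Suppose color agreed with the mask on 100% of pixels and depth on 75%:
w_c, w_d = update_weight(w_c, 1.0), update_weight(w_d, 0.75)
print(w_c, w_d)
```

Over successive frames the more reliable cue accumulates a larger weight, which is the behavior the abstract attributes to its adaptive assignment.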
2014, 36(9): 2053-2060.
doi: 10.3724/SP.J.1146.2013.01534
Abstract:
To address the difficult segmentation of cerebral vessels, which have many branches, small size, special positions, and complex patterns, this paper presents a novel statistical method that achieves accurate cerebrovascular segmentation. Firstly, Markov random field information is added to the statistical model, making full use of the spatial neighborhood information of each pixel, and a new Markov statistical model is proposed. Then the Stochastic Expectation Maximization (SEM) algorithm is used to estimate the parameters of the Markov model and find the optimal solution, completing the three-dimensional cerebrovascular segmentation. Experimental results show that the proposed method can not only segment the large vessel branches but also, by considering the neighborhood information of each pixel, segment small vessels well. The method is therefore also of significance for the clinical prevention and diagnosis of cerebrovascular diseases.
2014, 36(9): 2061-2067.
doi: 10.3724/SP.J.1146.2013.01506
Abstract:
In this paper, a novel salient image edge detection technique based on Ant Colony Optimization (ACO) is presented. Firstly, the proposed method designs a new edge saliency descriptor, called the Support Region Area (SRA), using a phase grouping algorithm. Then, two kinds of heuristic information, the SRA and the gradient magnitude, are introduced into ACO to guide the ants' movement. The quantity of pheromone laid by each ant on its newly arrived node is calculated from the SRA and the gradient magnitude at that node. Each ant's transition probability is calculated by a new method that linearly combines, with weights, the pheromone, the gradient magnitude, and the SRA in the ant's 8-connected neighborhood. A taboo table is created for each ant that records the nodes it has recently visited and is used to prevent the ant from visiting the same set of nodes repeatedly. Experimental results show the success of the technique in extracting salient edges from visible-light and infrared images.
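The transition rule (a weighted linear combination of pheromone, gradient magnitude, and SRA over the 8-connected neighborhood, with taboo nodes excluded) can be sketched as below; the neighborhood values and the weights alpha, beta, gamma are illustrative assumptions:

```python
import numpy as np

def transition_probs(pheromone, gradient, sra, alpha=1.0, beta=1.0, gamma=1.0, taboo=()):
    """Linearly weighted combination of pheromone, gradient magnitude, and SRA
    over the 8 neighbors; taboo (recently visited) nodes are excluded."""
    score = (alpha * pheromone + beta * gradient + gamma * sra).astype(float)
    for t in taboo:
        score[t] = 0.0
    total = score.sum()
    return score / total if total > 0 else np.full_like(score, 1 / len(score))

# Hypothetical values for the 8-connected neighborhood of the current ant position
ph  = np.array([0.1, 0.2, 0.1, 0.3, 0.1, 0.1, 0.05, 0.05])
gr  = np.array([0.0, 0.9, 0.1, 0.8, 0.0, 0.1, 0.0, 0.1])
sra = np.array([0.2, 0.7, 0.1, 0.9, 0.1, 0.0, 0.0, 0.0])

p = transition_probs(ph, gr, sra, taboo=(0,))
print(p)
```

Neighbors that score high on all three cues attract the ant, which is how the heuristic information steers the colony toward salient edges.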
2014, 36(9): 2068-2074.
doi: 10.3724/SP.J.1146.2013.01488
Abstract:
During manipulation, traces of digital tampering are usually left in an image; these traces break the consistency of natural images and provide clues for detecting image forgeries. This paper proposes a novel double compression probability model to describe the change of DCT coefficients after double compression, and combines it with Bayes' theorem to express the Double Quantization (DQ) effect and locate the tampered region with a posterior probability map. Experimental results show that the method can accurately locate the tampered region, and the accuracy improves remarkably, especially when the first compression quality factor is smaller than the second one. The method is also found to be robust to different kinds of forgery techniques, such as manual manipulation, inpainting algorithms, and Bayesian matting.
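The DQ effect behind the posterior map can be illustrated empirically: double quantization with steps q1 then q2 leaves periodic gaps and peaks in the DCT coefficient histogram, from which a Bayes posterior per histogram bin can be formed. The Laplacian coefficient model, the quantization steps, and the equal-prior posterior below are illustrative assumptions, not the paper's exact probability model:

```python
import numpy as np

rng = np.random.default_rng(2)
q1, q2 = 7, 3
x = rng.laplace(0, 20, size=20000)             # model DCT coefficients

single = np.round(x / q2)                      # one compression at step q2
double = np.round(np.round(x / q1) * q1 / q2)  # q1 then q2: the DQ effect

bins = np.arange(-30, 31)
edges = np.append(bins, 31) - 0.5
h_s, _ = np.histogram(single, bins=edges)
h_d, _ = np.histogram(double, bins=edges)

# Bayes with equal priors: posterior that a coefficient in bin v follows the
# double-quantized (twice-compressed, hence untampered) distribution
post_dq = h_d / np.maximum(h_d + h_s, 1)
print(post_dq[28:34])
```

Bins that double quantization can never populate (such as v = 1 for q1 = 7, q2 = 3) get posterior 0, so tampered blocks, whose coefficients fill those bins, stand out in the posterior map.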
2014, 36(9): 2075-2080.
doi: 10.3724/SP.J.1146.2013.01756
Abstract:
A parameter estimation algorithm based on the Discrete Cosine Transform (DCT) of the power spectrum is proposed to estimate the parameters of Binary Phase Shift Keying (BPSK) signals in low Signal-to-Noise Ratio (SNR) environments. Exploiting the energy-compaction property of the DCT, the BPSK signal's code length can be accurately estimated after applying the DCT to the power spectrum and thresholding the result. The carrier frequency and pulse width can then be calculated exactly after IDCT processing, which lowers the impact of noise on the estimation. Experiments demonstrate that the proposed algorithm achieves accurate parameter estimation in low SNR environments. When , the Success Rates (SR) of the carrier frequency and pulse width estimates are 22.1% and 28.3% higher than those of the compared algorithm, respectively.
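The DCT-threshold-IDCT denoising of the power spectrum can be sketched as below. The signal parameters, the 20% threshold, and the peak-picking carrier estimate are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def dct_mat(N):
    """Orthonormal DCT-II matrix built from its definition (C @ C.T = I)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

# Hypothetical noisy BPSK signal: 31 chips, 20 samples/chip, 100 Hz carrier
rng = np.random.default_rng(3)
fs, fc, n_chip, spc = 1000.0, 100.0, 31, 20
chips = rng.choice([-1.0, 1.0], n_chip).repeat(spc)
t = np.arange(chips.size) / fs
sig = chips * np.cos(2 * np.pi * fc * t) + 0.5 * rng.normal(size=chips.size)

spec = np.abs(np.fft.rfft(sig)) ** 2            # power spectrum
C = dct_mat(spec.size)
d = C @ spec                                    # DCT of the power spectrum
d[np.abs(d) < 0.2 * np.abs(d).max()] = 0.0      # threshold: keep compacted energy
smooth = C.T @ d                                # IDCT: denoised spectrum
fc_hat = np.fft.rfftfreq(sig.size, 1 / fs)[np.argmax(smooth)]
print(fc_hat)
```

Because most spectral energy compacts into a few DCT coefficients, thresholding suppresses the noise floor before the peak (carrier) and lobe width (pulse/chip parameters) are read off the smoothed spectrum.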
2014, 36(9): 2081-2085.
doi: 10.3724/SP.J.1146.2013.01697
Abstract:
New constructions of perfect Gaussian integer sequences with even period are proposed. According to the parity of the multilevel perfect sequences over the integers, different mappings and combining methods are used to construct perfect Gaussian integer sequences. The length of the resulting sequence is equal to, or twice, that of the multilevel perfect sequence. New perfect Gaussian integer sequences are generated by the proposed constructions, extending the number of known perfect Gaussian integer sequences.
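A sequence is perfect when all out-of-phase periodic autocorrelations vanish; this is easy to check numerically. The two-element sequence below is a small illustrative example of a perfect Gaussian integer sequence, not one of the paper's constructions:

```python
import numpy as np

def periodic_autocorrelation(s):
    """R(tau) = sum_n s[n] * conj(s[(n + tau) mod N])."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    return np.array([np.sum(s * np.conj(np.roll(s, -tau))) for tau in range(N)])

def is_perfect(s, tol=1e-9):
    """Perfect: every out-of-phase periodic autocorrelation is zero."""
    R = periodic_autocorrelation(s)
    return bool(np.all(np.abs(R[1:]) < tol))

s = [1 + 1j, 1 - 1j]   # a period-2 perfect Gaussian integer sequence
print(is_perfect(s), periodic_autocorrelation(s)[0])
```

Here R(1) = (1+i)(1+i) + (1-i)(1-i) = 2i - 2i = 0, and R(0) = 4 is the sequence energy; such a check is useful for validating any candidate construction.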
2014, 36(9): 2086-2092.
doi: 10.3724/SP.J.1146.2013.01609
Abstract:
Based on binary Zero Correlation Zone (ZCZ) periodic complementary sequence sets with even period, new quaternary ZCZ periodic complementary sequence sets are constructed using the inverse Gray mapping. The optimality of the sequence sets is improved: under certain conditions, the obtained sequence sets achieve the theoretical bound even though the binary ZCZ periodic complementary sequence sets employed do not reach it. At the same time, the proposed scheme can produce different quaternary ZCZ periodic complementary sequence sets by using different parameters. The results show that the method effectively increases the number of quaternary ZCZ periodic complementary sequence sets, which differ from previously obtained sets, and provides more choices for practical engineering applications.
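The inverse Gray mapping combines two binary sequences position-by-position into one quaternary sequence. The sketch below uses one common Gray labeling (0→(0,0), 1→(0,1), 2→(1,1), 3→(1,0)); the paper's exact labeling may differ:

```python
# Inverse Gray mapping: a pair of binary sequences -> one quaternary sequence.
INV_GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def inverse_gray(a, b):
    """Map bit pairs (a[n], b[n]) to quaternary symbols via the inverse Gray map."""
    return [INV_GRAY[(x, y)] for x, y in zip(a, b)]

a = [0, 0, 1, 1]
b = [0, 1, 1, 0]
q = inverse_gray(a, b)
print(q)   # [0, 1, 2, 3]
```

Correlation properties of the resulting quaternary set are then evaluated with symbols interpreted as powers of i (the quaternary phase alphabet).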
2014, 36(9): 2093-2097.
doi: 10.3724/SP.J.1146.2013.01622
Abstract:
Recently, three Weighted Bit Flipping (WBF) decoding algorithms for LDPC codes have been proposed, based respectively on the Belief-Propagation (BP), Min-Sum (MS), and Normalized MS (NMS) algorithms. However, both the strict physical significance and the inherent relationship of these WBF algorithms remain largely unknown. In this paper, a theoretical derivation is given and the inherent relationship among them is established from a new perspective. Furthermore, simulation results demonstrate the rationality and accuracy of this conclusion, which provides a useful reference for the design of new, improved WBF algorithms.
2014, 36(9): 2098-2103.
doi: 10.3724/SP.J.1146.2013.01692
Abstract:
Focusing on blind estimation of the Pseudo-Noise (PN) sequence of a Direct Sequence Spread Spectrum (DSSS) signal in non-cooperative spread spectrum communications, a blind estimation approach for the PN sequence and the information sequence is proposed based on Singular Value Decomposition (SVD). The chip rate and the period of the PN sequence need to be known. Firstly, SVD is applied to the observation matrix constructed from the received signal. Then, the estimate of the PN sequence is obtained from the left singular vector. At the same time, the information sequence can be estimated from the right singular vector, even with the signal unsynchronized and the PN sequence unknown. Simulation results verify that the proposed approach offers high stability, high accuracy, low computational complexity, and short observation time.
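The core idea (segment the received signal by PN period, stack the segments into a matrix, and read the PN and symbol estimates from the dominant singular vectors) can be sketched as below. The toy signal, noise level, and alignment are illustrative assumptions; which singular vector carries the PN depends on whether segments are stacked as rows or columns (here they are rows, so the right singular vector carries the PN):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 63, 80                             # PN period (chips) and number of symbols
pn = rng.choice([-1.0, 1.0], N)           # hypothetical PN sequence (one period)
symbols = rng.choice([-1.0, 1.0], M)      # information bits
rx = symbols[:, None] * pn[None, :] + 0.3 * rng.normal(size=(M, N))

# One PN period per row (chip rate and period assumed known, matrix period-aligned)
U, S, Vt = np.linalg.svd(rx, full_matrices=False)
pn_hat = np.sign(Vt[0])                   # dominant right singular vector -> PN
bits_hat = np.sign(U[:, 0])               # dominant left singular vector -> symbols

# Resolve the inherent sign ambiguity of the SVD before comparing
if np.sum(pn_hat != pn) > N / 2:
    pn_hat, bits_hat = -pn_hat, -bits_hat
print(np.mean(pn_hat == pn), np.mean(bits_hat == symbols))
```

The rank-1 signal component dominates the noise singular values, which is why a single SVD recovers both sequences jointly.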
2014, 36(9): 2104-2110.
doi: 10.3724/SP.J.1146.2013.01661
Abstract:
This paper focuses on the resource allocation problem in an OFDM-based Decode-and-Forward (DF) relay link to maximize Energy Efficiency (EE). Different from existing research, which minimizes the transmission power under a constant data rate or maximizes the EE without constraints, this paper accounts for the circuit power consumption and formulates the optimization problem as maximizing EE under constraints on the minimum rate requirement and the respective transmission powers of the Source node (S) and Relay node (R), while also considering subcarrier pairing; this is a joint subcarrier pairing and power allocation problem. It is proved that the optimal energy-efficient solution under these constraints is globally unique for the OFDM-based relay link. Furthermore, a low-complexity resource allocation scheme is proposed to solve the formulated joint optimization problem. Simulation results show that the proposed scheme can adaptively allocate power under the constraints of the minimum data rate and the maximum transmission powers of the S/R nodes, and achieve the optimal energy efficiency, while also reducing the outage probability of the transmission link.
2014, 36(9): 2111-2116.
doi: 10.3724/SP.J.1146.2013.01711
Abstract:
Conventionally, linear precoding methods in a Cognitive Radio (CR) Multiple Input Multiple Output (MIMO) network mainly aim to limit or completely cancel the interference between Secondary Users (SUs) while achieving superior performance of the CR system. In contrast to traditional precoding methods, this paper presents two novel precoding methods from the perspective of exploiting interference, namely Cognitive Radio Partial Linear Precoding (CR-PLP) and Cognitive Radio Phase Alignment Linear Precoding (CR-PALP). Theoretical analysis and simulation results show that both methods enhance the Signal to Interference and Noise Ratio (SINR) through constructive interference by taking advantage of the phase information of the constellation. Unlike CR-PLP, where the destructive interference is zeroed, CR-PALP rotates it and converts it into a constructive component. A lower Symbol Error Rate (SER) and a higher information transmission rate are thus achieved with lower computational complexity.
2014, 36(9): 2117-2123.
doi: 10.3724/SP.J.1146.2013.01492
Abstract:
This paper addresses the spatial correlation of the MIMO channel with application to indoor Visible Light Communication (VLC). A model for the indoor VLC MIMO channel is established based on the Lambertian radiation model, and the effect of the model's distance parameters on the spatial correlation of both the transmit and receive channels is analyzed. Mathematical expressions for the spatial correlation coefficients of both the transmit and receive channels are then derived, which can provide theoretical guidance for the layout of an indoor VLC MIMO system. Simulation results on the channel matrix condition number verify that the channel spatial correlation becomes stronger with decreasing Light Emitting Diode (LED) spacing, decreasing PhotoDetector (PD) spacing, and increasing vertical distance from LED to PD. In addition, channel capacity simulations show that the channel capacity can be enhanced by increasing the number of LEDs and PDs, whereas the capacity gain is reduced by stronger channel spatial correlation.
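The line-of-sight gain of the Lambertian radiation model underlying such channel matrices can be sketched directly; the Lambertian order, detector area, and geometry below are illustrative values, and optical concentrator/filter gains are omitted for simplicity:

```python
import numpy as np

def lambert_gain(m, area, d, phi, psi, fov):
    """LOS Lambertian channel gain:
    H = (m + 1) * A / (2 * pi * d^2) * cos(phi)^m * cos(psi), zero outside the FOV.
    m: Lambertian order; phi: LED irradiance angle; psi: PD incidence angle."""
    if psi > fov:
        return 0.0
    return (m + 1) * area / (2 * np.pi * d ** 2) * np.cos(phi) ** m * np.cos(psi)

m = 1          # Lambertian order for a 60-degree LED semi-angle
A = 1e-4       # detector area in m^2
h1 = lambert_gain(m, A, d=2.0, phi=0.0, psi=0.0, fov=np.radians(60))
h2 = lambert_gain(m, A, d=3.0, phi=0.0, psi=0.0, fov=np.radians(60))
print(h1, h2)
```

Because gains of closely spaced LEDs or PDs share nearly identical geometry, the channel matrix columns become nearly proportional, which is exactly the spatial correlation the paper quantifies.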
2014, 36(9): 2124-2130.
doi: 10.3724/SP.J.1146.2013.01666
Abstract:
Truncated differential cryptanalysis is a variant of differential cryptanalysis. To evaluate the resistance of a block cipher against truncated differential cryptanalysis, an upper bound on the probability of the truncated differential chain is needed. Masayuki Kanda et al. proposed a conjecture on the upper bound of the truncated differential probability when the S-boxes of the block cipher are the composition of the inverse function and a bijective affine transformation over GF(256). This paper gives an evaluation of the upper bound of the truncated differential probability assuming only that the S-boxes are bijective; Kanda's conjecture is a special case of the problem this evaluation considers. In some cases, the upper bound given by the evaluation approaches the conjectured bound. This conclusion can serve to bound the probability of the truncated differential chain, and the results provide further theoretical support for the provable security of a block cipher against truncated differential cryptanalysis.
2014, 36(9): 2131-2137.
doi: 10.3724/SP.J.1146.2014.00002
Abstract:
To achieve efficient resource utilization in wireless networks, this paper introduces a novel -Coverage and Capacity Proportional Fairness (-CCPF) scheduling algorithm for network Coverage and Capacity Optimization (CCO), which enhances the fairness of edge users' scheduling priority. The theoretical convergence of the algorithm is first proved, and a coverage-capacity optimization scheme based on this algorithm, traffic distribution, and power adjustment is then proposed. The scheme is simulated in an experimental regular scenario and a real non-regular scenario. The results show that, in the CCO use case, the novel algorithm ensures a reasonable Resource Occupation (RO) ratio and increases the mean user throughput by 19% and 33% over the -Proportional Fairness (-PF) algorithm in the experimental and real scenarios, respectively.
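The proportional-fairness family of schedulers behind such algorithms picks, each slot, the user maximizing the ratio of instantaneous rate to a power of past average throughput; raising the fairness exponent boosts low-throughput (edge) users. The rates below are illustrative values, and the exponent form is the generic alpha-PF metric, not necessarily the paper's exact CCPF metric:

```python
import numpy as np

def pf_schedule(inst_rate, avg_rate, alpha=1.0):
    """Generic alpha-proportional-fair rule: pick the user maximizing r / R^alpha.
    alpha = 0 is pure max-rate; larger alpha favors low-throughput edge users."""
    metric = inst_rate / np.power(avg_rate, alpha)
    return int(np.argmax(metric))

inst = np.array([10.0, 4.0])   # instantaneous achievable rates (Mbit/s)
avg  = np.array([8.0, 1.0])    # smoothed past throughputs

print(pf_schedule(inst, avg, alpha=0.0),   # max-rate picks the cell-center user
      pf_schedule(inst, avg, alpha=1.0))   # PF picks the starved edge user
```

With alpha = 1 the metrics are 10/8 = 1.25 versus 4/1 = 4, so the edge user wins the slot despite its lower instantaneous rate.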
2014, 36(9): 2138-2144.
doi: 10.3724/SP.J.1146.2013.01481
Abstract:
To comprehensively and effectively assess the vulnerability of the power communication network, an important supporting network of the smart grid, a cross-layer assessment method based on information entropy is proposed. Firstly, a method for calculating the importance of power businesses is presented, and the business importance is taken as a parameter to model the power communication network at the business layer; the importance of edges at the business layer is described by the Edge Business Importance (EBI). Secondly, taking the business layer, transport layer, and physical layer into account, the Edge Cross-layer Importance (ECI) is proposed, and the information entropy of the ECI over a network, called the Edge Cross-layer Entropy (ECE), is defined as the assessment index of network vulnerability. Finally, taking a real communication network as the simulation background, the validity of the method is demonstrated by comparing the network vulnerability curves and ECE values under different routing strategies. The proposed method is suitable not only for power communication networks but for any network carrying non-uniform businesses.
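The entropy step (Shannon entropy of the normalized edge importance values) can be sketched as follows; the ECI values below are illustrative, and how the entropy maps to a vulnerability score is defined in the paper, not here:

```python
import numpy as np

def cross_layer_entropy(eci):
    """Shannon entropy (bits) of the normalized edge cross-layer importance (ECI)
    distribution: a skewed distribution (a few dominant edges) has low entropy,
    a uniform one has maximal entropy."""
    p = np.asarray(eci, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # drop zero-importance edges (0*log0 = 0)
    return float(-(p * np.log2(p)).sum())

uniform = cross_layer_entropy([1, 1, 1, 1])    # every edge equally important
skewed  = cross_layer_entropy([10, 1, 1, 1])   # one critical edge dominates
print(uniform, skewed)
```

For four equally important edges the entropy is log2(4) = 2 bits; concentrating importance on one edge lowers it, which is the distributional signal the ECE index captures.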
2014, 36(9): 2145-2151.
doi: 10.3724/SP.J.1146.2013.01585
Abstract:
In ubiquitous stub environments, limited network resources and device capabilities frequently cause conflicts in allocating utility among multiple users. To deal with this problem, a Device Composition Mechanism of Multi-service Equilibrium (DCMME) is proposed to compose heterogeneous devices effectively. Firstly, based on an equilibrium index derived from relative entropy, an equilibrium service-quality utility function is designed, and the model of the Multiservice-Oriented Device Composition (MODC) problem is established. Then, dimensionality reduction of the MODC problem is used to derive an Equilibrium-based Device Composition (EDC) algorithm. Finally, simulations are implemented in C++ and MATLAB. The results show that the DCMME improves multi-user equilibrium by 0.5%~20% and performs well in balancing and ensuring each user's utility under multi-service conditions.
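A relative-entropy equilibrium index of the kind mentioned above can be sketched as the KL divergence between the normalized user-utility distribution and the uniform distribution (a hypothetical illustration; the paper's exact utility function is not reproduced here):

```python
import math

def equilibrium_index(utilities):
    """Relative entropy (KL divergence) between the normalized user-utility
    distribution and the uniform distribution; 0 means perfect equilibrium,
    and larger values mean a more unbalanced allocation."""
    n = len(utilities)
    total = sum(utilities)
    kl = 0.0
    for u in utilities:
        p = u / total
        if p > 0:
            kl += p * math.log(p * n)  # log(p / (1/n))
    return kl

balanced = equilibrium_index([2.0, 2.0, 2.0])  # perfectly equal: index 0
skewed = equilibrium_index([5.0, 1.0, 0.5])    # unbalanced: index > 0
```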
2014, 36(9): 2152-2157.
doi: 10.3724/SP.J.1146.2013.01777
Abstract:
The quality of load balancing among multiple cells has a large impact on network performance. Because of the trade-offs they make, existing methods have difficulty ensuring that key performance indicators (e.g., the call blocking rate) achieve their best values. To address this issue, load balancing is formulated as a multi-objective optimization: the objective function for Quality of Service (QoS) requirements jointly optimizes the load balancing index and the average network load, while the objective function for Best Effort (BE) users is the total utility of all BE users' throughput, with the available resources and the users' QoS requests as constraints. Additionally, in view of the computational complexity in practical system operation, a distributed load balancing algorithm is proposed that includes a resource scheduling policy, user handover conditions and call admission control. The simulation results show that the proposed method achieves a better load balancing index, effectively reducing the new-call blocking rate of QoS users and improving network resource utilization.
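The abstract does not define its load balancing index; a common choice in the literature, shown here purely as an assumed illustration, is Jain's fairness index over the per-cell loads:

```python
def jain_index(loads):
    """Jain's fairness index over per-cell loads: 1.0 when all cells are
    equally loaded, approaching 1/n when one cell carries everything.
    (An assumed stand-in for the paper's unspecified balancing index.)"""
    n = len(loads)
    s = sum(loads)
    sq = sum(x * x for x in loads)
    return (s * s) / (n * sq)
```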
2014, 36(9): 2158-2165.
doi: 10.3724/SP.J.1146.2013.01600
Abstract:
Two novel methods based on host behavior analysis are proposed for the classification of encrypted packet-based Internet traffic. Combining the traffic matrix with network structure entropy, new in-degree and out-degree entropy exponents are introduced to characterize connections and message transmission among network nodes. These exponents describe traffic features over different periods and time scales. A visibility graph is also used to convert a traffic sequence into a network, and features of the network structure are then used to analyze host behavior in the traffic sequence. The experimental results demonstrate that the trends of the entropy exponents and the network structure differ greatly for different kinds of traffic, and that the two proposed methods achieve effective traffic classification.
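The visibility-graph conversion mentioned above follows a standard construction: two samples of the series are connected when the straight line between them passes above every intermediate sample. A minimal sketch (quadratic-time, names hypothetical):

```python
def visibility_graph(series):
    """Convert a traffic time series into a natural-visibility graph.

    Nodes are sample indices; an edge (a, b) exists when every intermediate
    sample lies strictly below the line of sight between samples a and b.
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges
```

Graph-structure features (degree distribution, clustering, and so on) computed on this network are then what characterizes the host behavior.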
2014, 36(9): 2166-2172.
doi: 10.3724/SP.J.1146.2013.01645
Abstract:
A sparse-array SAR 3D imaging method for continuous scenes based on Compressed Sensing (CS) is proposed. It exploits the sparsity of the SAR image under a multi-aperture observation structure: by eliminating the random phase of each scattering cell, the SAR image becomes sparse in a transform domain, and CS theory is then applied to the signal processing in that domain. The proposed method achieves high-resolution 3D imaging with only a few samples, yielding nearly the same image quality as a full array. It relaxes the constraints on array design in the elevation direction and makes imaging possible when a full equivalent array cannot be realized. Simulation results verify the effectiveness of the proposed method.
2014, 36(9): 2173-2179.
doi: 10.3724/SP.J.1146.2013.01590
Abstract:
Since Inverse SAR (ISAR) imaging obtains azimuth resolution from the synthetic aperture formed by aspect changes relative to the Radar Line of Sight (RLOS), accurate estimation of the rotational velocity is pivotal for the geometric scaling of ISAR images used to measure the real size of a target. In contrast to current methods based on motion-parameter estimation and whole-image registration, this paper proposes a novel algorithm that extracts and registers interest points of ISAR images formed from sub-aperture data, whose coordinate locations are used to calculate the equivalent rotational velocity. First, sufficient interest points are extracted from two sub-aperture images by the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). These points are then matched and re-matched using the minimum-Euclidean-distance and RANdom SAmple Consensus (RANSAC) criteria, respectively. Finally, the rotational velocity, the premise for obtaining the cross-range resolution, is estimated to achieve precise target scaling. Simulated and real data validate the effectiveness and robustness of the proposed algorithm.
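The minimum-Euclidean-distance matching step can be sketched as plain nearest-neighbor matching of descriptor vectors, with Lowe's ratio test to reject ambiguous matches before RANSAC refinement (an assumed illustration; SIFT/SURF descriptors are represented as plain lists of floats):

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors (e.g. SIFT/SURF vectors) between two
    sub-aperture images by minimum Euclidean distance, keeping a match
    only when the best candidate clearly beats the second-best."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))  # unambiguous nearest neighbor
    return matches
```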
2014, 36(9): 2180-2186.
doi: 10.3724/SP.J.1146.2013.01558
Abstract:
To use the MIMO-ISAR space-time sampling signals jointly for imaging, a novel method based on 2D frequency estimation is proposed for the rearrangement of the MIMO-ISAR space-time echo. Firstly, the ratio of the frequencies of the spatial and temporal sampling signals is estimated, and the space-time sampling signals are then rearranged according to this ratio. After uniform interpolation, the azimuth image is retrieved by FFT processing. Compared with the existing approach, the precision of parameter estimation is improved by exploiting the super-resolution performance of modern spectral estimation algorithms. Meanwhile, since the frequency ratio is estimated via the Randomized Hough Transform (RHT), the proposed method remains applicable when the frequency of the spatial sampling signal is ambiguous. Simulation results confirm the validity of the proposed method.
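The RHT idea referred to above, estimating a ratio (a slope) robustly by voting over randomly sampled point pairs, can be sketched generically (a hypothetical stand-in, not the paper's exact parameterization):

```python
import random
from collections import Counter

def rht_slope(points, trials=2000, precision=2, seed=0):
    """Randomized Hough Transform sketch: estimate the dominant slope of a
    2D point set (standing in for the space/time frequency ratio) by
    sampling random point pairs and voting in a quantized accumulator."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair carries no slope information
        votes[round((y2 - y1) / (x2 - x1), precision)] += 1
    return votes.most_common(1)[0][0]
```

Because most random pairs come from the dominant linear structure, the mode of the accumulator survives a minority of outliers.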
2014, 36(9): 2187-2193.
doi: 10.3724/SP.J.1146.2013.01486
Abstract:
A common problem in Inverse Synthetic Aperture Radar (ISAR) imaging is estimating the rotation parameters of a target undergoing uniformly accelerated rotation. First, the radar echo model for such a target is introduced. Then, the relationship between the extracted polynomial phase parameters and the rotation parameters is derived when the Linear Frequency Modulation (LFM) model is used to extract prominent scattering centers from the profile sequences. On this basis, an algorithm is proposed that provides unbiased estimates of the rotation parameters for a uniformly accelerated rotating target. Simulation results verify the effectiveness of the proposed algorithm.
2014, 36(9): 2194-2200.
doi: 10.3724/SP.J.1146.2013.01451
Abstract:
To address the occlusion issue in SAR image target recognition, a new classification method based on non-negative sparse representation is proposed. The difference between L0-norm and L1-norm minimization in solving the non-negative sparse representation problem is analyzed, and it is proved that, under certain conditions, L1-norm regularization pursues not only the sparsity of the solution but also the similarity between the input signal and the selected atoms, making it well suited to classification. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that the non-negative sparse representation classification method with L1-norm regularization achieves much better recognition performance and is more robust for recognizing occluded targets than the traditional method.
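A non-negative L1-regularized sparse code of the kind described can be computed with projected gradient descent (ISTA with projection onto the non-negative orthant). A toy sketch under that assumption, not the paper's solver:

```python
def nn_ista(A, b, lam=0.1, step=0.01, iters=2000):
    """Non-negative L1-regularized sparse coding sketch:
        min_x 0.5*||A x - b||^2 + lam*sum(x),  subject to x >= 0,
    solved by projected gradient descent. A is a list of rows whose
    columns play the role of dictionary atoms (training samples)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the smooth part + lam, projected onto x >= 0
        x = [max(0.0, x[j] - step * (g[j] + lam)) for j in range(n)]
    return x
```

With an identity dictionary and b = [1, 0], the solution shrinks the active coefficient toward 0.9 (= 1 - lam) and keeps the inactive one at exactly zero, illustrating the sparsity the abstract relies on.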
2014, 36(9): 2201-2206.
doi: 10.3724/SP.J.1146.2013.01792
Abstract:
This paper proposes a new monopulse radar beam sharpening method based on multichannel L1 regularization. First, a multichannel L1 regularization model for monopulse radar beam sharpening is derived from the maximum a posteriori probability criterion. Then, an extended iterative shrinkage-thresholding algorithm is proposed to solve the multichannel L1 regularization problem. Theoretical analysis and simulation results show that the new method preserves beam sharpening performance while improving noise suppression, and that it efficiently mitigates the noise leakage caused by channel patterns that do not satisfy the required conditions in monopulse radar. The performance of the proposed algorithm is significantly better than that of existing monopulse beam sharpening methods.
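The core step of any iterative shrinkage-thresholding algorithm, applied coordinate-wise after each gradient step, is the soft-thresholding (shrinkage) operator:

```python
def soft_threshold(v, t):
    """Soft-thresholding / shrinkage operator: the proximal mapping of the
    L1 norm, which pulls each coefficient toward zero by the threshold t
    and sets small coefficients exactly to zero."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0
```

The "extended" multichannel variant in the paper generalizes this scalar step; the operator itself is standard.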
2014, 36(9): 2207-2213.
doi: 10.3724/SP.J.1146.2013.01582
Abstract:
Along-Track Interferometric Synthetic Aperture Radar (ATI-SAR) has the potential to measure the radial velocities of slowly moving targets. The accuracy of the radial velocity is limited by the accuracy of the interferometric parameters, and sensitivity equations provide an effective way to analyze the impact of system parameters on interferometric SAR. Available sensitivity analyses for ATI-SAR mainly focus on the zero-squint case. In this paper, the expression for the radial velocity in the presence of squint is first derived, and the sensitivity equations are then obtained. Moreover, the sensitivity to the interferometric parameters is simulated with airborne parameters, and the accuracy requirements on each parameter are given for different velocity accuracy requirements, providing a quantitative reference for real airborne ATI-SAR analysis and calibration requirement analysis.
2014, 36(9): 2214-2219.
doi: 10.3724/SP.J.1146.2013.01709
Abstract:
In non-side-looking airborne radar, the clutter signature varies with range, so conventional space-time adaptive processing suffers significant performance degradation. To deal with this problem, this paper presents an adaptive subspace method. Firstly, the geometry parameter is estimated by curve fitting. Then the clutter subspace is represented using the estimated geometry parameter. Finally, the data are projected onto the orthogonal complement of the clutter subspace to suppress the clutter. Simulation results show that the method obtains an accurate estimate of the parameters and good clutter suppression performance.
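The final projection step can be sketched for the simplest case of a one-dimensional clutter subspace (a toy illustration; the method's subspace is spanned by several estimated basis vectors and applied per range cell):

```python
def project_out(clutter, data):
    """Project the data vector onto the orthogonal complement of a
    one-dimensional clutter subspace spanned by `clutter`:
        y = x - (v.x / v.v) * v
    so the clutter component is removed and the rest is preserved."""
    scale = sum(v * x for v, x in zip(clutter, data)) / sum(v * v for v in clutter)
    return [x - scale * v for v, x in zip(clutter, data)]
```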
2014, 36(9): 2220-2226.
doi: 10.3724/SP.J.1146.2013.01670
Abstract:
In adaptive beamforming for a conformal array, the mainbeam is difficult to maintain and, worse, the sidelobes are very high. To alleviate these problems, an adaptive beamforming method based on response vector optimization is proposed. By adaptively adjusting the response vector under a mainbeam-preserving constraint, the optimal response vector is derived and a sub-optimal adaptive weight is obtained. The proposed method lifts the non-convex quadratically constrained quadratic program into a higher-dimensional space, transforms it into a convex optimization problem via SemiDefinite Relaxation (SDR), and then efficiently obtains a sub-optimal solution. The method not only maintains the desired mainbeam response but also overcomes the high sidelobes of the conventional Linearly Constrained Minimum Variance (LCMV) adaptive approach. Moreover, it is robust to the geometry of the array. Simulation results demonstrate the effectiveness of the proposed method.
2014, 36(9): 2227-2231.
doi: 10.3724/SP.J.1146.2013.01688
Abstract:
To expand the passband, two Antipodal Sinusoidally Tapered Slot Antennas (ASTSAs) with high gain and wide bandwidth, fed by Substrate Integrated Waveguide (SIW), are proposed based on the conventional ALTSA. The Half-Mode Substrate Integrated Waveguide (HMSIW) monopulse feed network is designed using the principles of a rectangular 3 dB directional coupler and a 90° phase shifter. Symmetrical and unsymmetrical monopulse ASTSAs are then realized, both with passbands over 3.0 GHz and sum-beam gains of about 9.0 dBi; the null depths of the difference beams at 10.0 GHz are both below -20 dB. Of the two, the unsymmetrical monopulse ASTSA has the wider bandwidth and more balanced gain, making it well suited for millimetre-wave direction-finding systems.
2014, 36(9): 2232-2237.
doi: 10.3724/SP.J.1146.2013.01807
Abstract:
In view of the weak energy of millimeter-wave radiation signals, a calibration method for a linear array in the presence of sensor gain, phase and position errors is presented. Based on the array distortion model at low Signal-to-Noise Ratio (SNR), and using a single signal source whose exact orientation is unknown, calibration data at multiple angles are obtained by rotating the array antenna; the sensor gain, phase and position error parameters are then estimated, accomplishing the joint calibration of gain, phase and position errors. The proposed method offers lower computational complexity, higher estimation precision and good calibration performance. Theoretical analysis and computer simulation results show the effectiveness of the method.
2014, 36(9): 2238-2243.
doi: 10.3724/SP.J.1146.2013.01737
Abstract:
Time Reversal (TR) is a new method for wave propagation and control that adapts to the electromagnetic environment, and it can simplify the pattern design of antenna arrays with complicated structures. In this paper, a time reversal method for determining the excitation of a microstrip antenna array is investigated, with both simulations and experiments. The results show that the time reversal method can exactly determine the array excitation for beam patterns with different desired main-lobes. Compared with traditional excitation-determining methods, time reversal has remarkable advantages: it requires no coupling compensation computation and places no limitations on the element geometry or array configuration.
2014, 36(9): 2244-2250.
doi: 10.3724/SP.J.1146.2013.01623
Abstract:
Spaceborne scatterometers in operation, with a typical resolution of 25 km, are already challenged by the higher resolution required by applications such as polar ice mapping, rainforest mapping and coastal wind research. Scatterometer image reconstruction can improve the resolution purely through data processing, without changing the system hardware. The Scatterometer Image Reconstruction (SIR) algorithm currently in use is based on an earlier algorithm, the Multiplicative Algebraic Reconstruction Technique (MART). In this paper, a new reconstruction method, total variation regularization, is applied to image reconstruction for the rotating fan-beam spaceborne scatterometer. Simulation experiments show that the new method improves the quality of the reconstructed image through resolution enhancement and noise reduction at the same time.
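The total-variation prior can be illustrated on a 1D signal with a crude subgradient-descent denoiser (a hypothetical sketch; the paper couples this prior with the scatterometer measurement model rather than plain denoising):

```python
def tv_denoise(y, lam=1.0, step=0.05, iters=500):
    """1D total-variation regularized reconstruction sketch:
        min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    solved by subgradient descent. TV penalizes oscillation between
    neighbors while allowing sharp jumps, hence noise reduction
    without smearing edges."""
    def sgn(v):
        return (v > 0) - (v < 0)
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]          # fidelity gradient
        for i in range(n - 1):
            s = sgn(x[i + 1] - x[i])                 # TV subgradient
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(n)]
    return x
```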
2014, 36(9): 2251-2257.
doi: 10.3724/SP.J.1146.2013.01646
Abstract:
The interconnection network plays an important role in Coarse-Grained Reconfigurable Arrays (CGRAs) and has a significant influence on performance, area cost and power consumption. To reduce the area cost and power consumption caused by the interconnection network and to improve CGRA performance, a self-routing, non-blocking interconnection network with a hierarchical topology is proposed. Through the proposed network, a connection and data exchange can be established between any pair of processing elements, and the connection establishment process is self-routing and non-blocking. Experimental results demonstrate that, compared with existing CGRAs, the overall performance of the proposed architecture improves by up to 46.2% at the expense of a 14.1% increase in area cost.
2014, 36(9): 2258-2264.
doi: 10.3724/SP.J.1146.2013.01536
Abstract:
To overcome the deficiency of published tabular-technique-based algorithms, which can only handle small functions when converting AND/OR forms to Fixed Polarity Reed-Muller (FPRM) forms, a novel parallel tabular technique using disjoint products is proposed. By using disjoint products, the proposed algorithm avoids the rapid growth in the number of minterms that prevents the reported tabular-technique-based algorithms from running efficiently, or at all. Furthermore, unlike the published algorithm for large-function conversion, the structure of the circuit being processed has little effect on the performance of the proposed algorithm. The algorithm is implemented in C and tested on the MCNC benchmarks. The experimental results show that it completes the polarity conversion quickly for larger circuits, and that its speed depends not on the number of circuit inputs but on the number of disjoint products.
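For context, an FPRM form expresses a Boolean function as an XOR of AND terms in which each variable appears either always complemented or always uncomplemented, chosen by a polarity vector. A small illustrative sketch (the textbook butterfly transform over a truth table, not the paper's parallel tabular algorithm) shows what the conversion computes:

```python
# Illustrative sketch: Fixed-Polarity Reed-Muller (FPRM) coefficients from a
# truth table via the standard butterfly transform over GF(2). This is NOT
# the paper's disjoint-product tabular method, which avoids enumerating
# minterms; it only shows the conversion's input/output relationship.

def fprm_coefficients(truth, n, polarity=0):
    """truth: list of 2**n values f(x); polarity bit j = 1 means variable
    x_j appears complemented in the FPRM expansion."""
    c = list(truth)
    # For each complemented variable, swap the two halves along that axis
    # so the transform is taken with respect to x_j' instead of x_j.
    for j in range(n):
        if polarity >> j & 1:
            for x in range(1 << n):
                if not (x >> j & 1):
                    y = x | (1 << j)
                    c[x], c[y] = c[y], c[x]
    # Butterfly Reed-Muller transform over GF(2).
    for j in range(n):
        for x in range(1 << n):
            if x >> j & 1:
                c[x] ^= c[x ^ (1 << j)]
    return c

# f(x1, x0) = x1 OR x0, truth table indexed as [f(00), f(01), f(10), f(11)]:
print(fprm_coefficients([0, 1, 1, 1], n=2))  # positive polarity
```

With positive polarity this yields `[0, 1, 1, 1]`, i.e. OR = x0 XOR x1 XOR x0x1. The cost of this naive transform grows with 2^n, which is exactly the input-count dependence the proposed algorithm avoids.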
2014, 36(9): 2265-2271.
doi: 10.3724/SP.J.1146.2013.01693
Abstract:
To address the large output-voltage ripple of Pulse-Skip-Modulation (PSM) controlled switching DC-DC converters, a novel control technique named Pulse Skip with Adaptive Duty ratio (ADPS) is proposed in this paper. In an ADPS converter under light load, the duty ratio of the control pulse in each switching cycle is approximately proportional to the square root of the error between the reference voltage and the output voltage at the beginning of that cycle; moreover, the lighter the load, the smaller the ripple. Experimental results show that the ADPS converter not only has lower output-voltage ripple than a PSM converter but also exhibits excellent control robustness and transient performance.
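The duty-ratio law described above can be sketched as follows; the gain `k`, the clamp `d_max`, and the pulse-skip condition are illustrative assumptions, not values or logic taken from the paper.

```python
import math

# Hypothetical sketch of the ADPS duty-ratio law: the duty ratio of each
# switching cycle is proportional to the square root of the voltage error
# at the start of the cycle. k and d_max are assumed, not from the paper.

def adps_duty_ratio(v_ref, v_out, k=0.5, d_max=0.9):
    error = v_ref - v_out
    if error <= 0:                       # output at/above reference:
        return 0.0                       # skip the pulse entirely
    return min(k * math.sqrt(error), d_max)

print(adps_duty_ratio(3.3, 3.2))   # small error -> small duty ratio
print(adps_duty_ratio(3.3, 2.3))   # large error -> larger duty ratio
```

The square-root law delivers small corrective pulses near the reference (hence the reduced ripple) while still responding strongly to large transients.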
2014, 36(9): 2278-2282.
doi: 10.3724/SP.J.1146.2013.00343
Abstract:
Based on the requirements of pulse-triggered flip-flops and the threshold-arithmetic algebraic system, a novel universal structure for current-mode CMOS pulse-triggered D flip-flops is proposed for designing both binary and multi-valued current-mode CMOS pulse-triggered D flip-flops. Based on this structure, a Binary Current-Mode CMOS Pulse-triggered D Flip-Flop (BCMPDFF), a Ternary Current-Mode CMOS Pulse-triggered D Flip-Flop (TCMPDFF), and a Quaternary Current-Mode CMOS Pulse-triggered D Flip-Flop (QCMPDFF) are designed; all three can easily be incorporated into single- and double-edge-triggered designs. HSPICE simulations using TSMC 180 nm CMOS technology show that the D flip-flops based on the proposed universal structure have correct logic function, and their setup and hold times are optimized. Compared with the published current-mode CMOS master-slave D flip-flops, the worst minimum D-Q delay of the BCMPDFF and QCMPDFF is reduced by 56.97% and 54.99%, respectively; compared with the published current-mode CMOS edge-triggered D flip-flops, the worst minimum D-Q delay is reduced by at least 4.26%. The designed flip-flops have fewer transistors, a relatively simpler structure, and higher performance.
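A hypothetical behavioral sketch (not a transistor-level or current-mode model) of the logic function shared by the three designs: on each trigger pulse the output latches a multi-valued input, where the radix is 2, 3, or 4 for the binary, ternary, and quaternary variants.

```python
# Hypothetical behavioral model of a multi-valued pulse-triggered D flip-flop.
# radix = 2, 3, or 4 corresponds to the BCMPDFF, TCMPDFF, and QCMPDFF; the
# output latches D only while the narrow trigger pulse is active, and holds
# its value otherwise.

class MultiValuedDFF:
    def __init__(self, radix):
        self.radix = radix
        self.q = 0                       # assumed reset state

    def clock(self, d, pulse):
        """Sample input `d` on an active trigger pulse; hold otherwise."""
        if not 0 <= d < self.radix:
            raise ValueError(f"d must be in 0..{self.radix - 1}")
        if pulse:                        # pulse-triggered: transparent only
            self.q = d                   # during the trigger pulse
        return self.q

ff = MultiValuedDFF(radix=4)             # quaternary variant
print(ff.clock(3, pulse=True))           # latches 3
print(ff.clock(1, pulse=False))          # holds 3
```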
2014, 36(9): 2283-2286.
doi: 10.3724/SP.J.1146.2013.00881
Abstract:
Sequential equivalence checking is generally performed by expanding the sequential circuits into combinational circuits for verification. When the flip-flops of the two sequential circuits under verification correspond, identifying and matching these corresponding flip-flops proves highly effective. This paper builds a new miter circuit for the Automatic Test Pattern Generation (ATPG) module and then uses a Boolean Satisfiability (SAT) solver on the Boolean function obtained by time-frame unrolling. The method also improves the SAT solver's information learning to accelerate the computation. Results on the industrial-sized ISCAS89 circuits show that these methods are both practical and efficient.
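The miter construction at the core of this flow can be sketched minimally (brute-force input enumeration rather than the paper's SAT-based, time-frame-unrolled flow): two combinational circuits are equivalent iff the XOR of their outputs, the miter, is unsatisfiable, i.e. never 1 for any input assignment.

```python
from itertools import product

# Illustrative sketch of a miter check on combinational circuits. The paper
# hands the miter to a SAT solver after time-frame unrolling; here we simply
# enumerate inputs. Equivalent iff miter = f(x) XOR g(x) is unsatisfiable.

def miter_equivalent(f, g, n_inputs):
    """Return (True, None) if f == g on all inputs, else (False, witness)."""
    for assignment in product([0, 1], repeat=n_inputs):
        if f(*assignment) ^ g(*assignment):      # miter output is 1
            return False, assignment             # counterexample found
    return True, None

f = lambda a, b, c: (a & b) | c
g = lambda a, b, c: (a | c) & (b | c)            # distributed form of f
print(miter_equivalent(f, g, 3))                 # equivalent
```

A SAT solver replaces this exponential enumeration with clause learning, which is exactly where the paper's improved information learning pays off.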
2014, 36(9): 2272-2277.
doi: 10.3724/SP.J.1146.2013.01783
Abstract:
The effects of different coupling factors on the resonant frequency and mode separation of magnetrons must be investigated to improve magnetron stability. The resonant frequency and mode separation of an unstrapped magnetron with capacitive and inductive coupling are derived using the equivalent-circuit method. The change in resonant frequency and the effects of capacitive and inductive coupling on mode separation are also computed numerically. Moreover, the resonant frequencies of magnetrons are simulated with CST-MWS and compared with the theoretical results. Theoretical analysis and computer simulations show that, as the mode number increases, the resonant frequency of the unstrapped magnetron rises under capacitive coupling and falls under inductive coupling. Both couplings enlarge the mode separation, but in opposite ways, and the mode spectrum of an unstrapped magnetron with capacitive and inductive coupling depends on the dominant coupling factor.
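A minimal numerical sketch of the equivalent-circuit idea (assumed component values, not the paper's derivation): each cavity is modeled as an LC resonator with f = 1/(2π√(LC)), so a coupling term that adds effective capacitance lowers the resonant frequency, while one that reduces effective inductance raises it.

```python
import math

# Illustrative equivalent-circuit sketch with hypothetical cavity values.
# Each magnetron cavity is modeled as an LC resonator:
#     f = 1 / (2 * pi * sqrt(L * C))
# Extra coupling capacitance lowers f; reduced effective inductance raises it.

def resonant_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L, C = 2e-9, 0.8e-12                          # hypothetical values (H, F)
f0 = resonant_frequency(L, C)
f_cap = resonant_frequency(L, C + 0.1e-12)    # extra coupling capacitance
print(f"f0    = {f0 / 1e9:.2f} GHz")
print(f"f_cap = {f_cap / 1e9:.2f} GHz")       # lower than f0
```

In the full equivalent-circuit model, both coupling terms vary with the mode number, which is what produces the opposite frequency trends and the enlarged mode separation reported above.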