2011 Vol. 33, No. 7
2011, 33(7): 1525-1531.
doi: 10.3724/SP.J.1146.2010.01312
Abstract:
The network lifetime optimization issue is investigated for bidirectional two-hop cooperative Orthogonal Frequency Division Multiplexing (OFDM) systems recruiting multiple relays. Since direct treatment of the network lifetime maximization formulation is not feasible, a suboptimal strategy is proposed that takes an energy-pricing concept for each node into account. Specifically, the power allocation for each subcarrier and the relay and source selection are optimized sequentially rather than jointly. By applying the standard Lagrangian technique, the optimal power assignment for each source/relay pair, which minimizes the total energy cost subject to limited transmission power and network throughput constraints, can be readily obtained. An optimum relay is then selected among all possible pairs, and finally the direction of the traffic flow is determined by choosing the link with the smaller price sacrifice. Two practical scenarios are considered, i.e., with and without a direct source-destination link. Moreover, when the direct link can be fully exploited, the impact of two diversity combining techniques, Maximal Ratio Combining (MRC) and Selective Combining (SC), on the power allocation optimization is derived theoretically. Simulation results indicate that the proposed algorithm significantly outperforms existing approaches in network lifetime.
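The Lagrangian power assignment described above is, in its simplest form, a water-filling problem. As an illustrative sketch only (a simplification of the paper's energy-cost formulation, with made-up function names), the following minimizes total transmit power over subcarriers subject to a sum-rate constraint:

```python
import numpy as np

def waterfill_min_power(gains, target_rate, tol=1e-9):
    """Minimize sum(p_k) subject to sum(log2(1 + g_k * p_k)) >= target_rate.
    The Lagrangian stationarity condition gives p_k = max(mu - 1/g_k, 0);
    the water level mu is found by bisection on the achieved rate."""
    g = np.asarray(gains, dtype=float)

    def rate(mu):
        p = np.maximum(mu - 1.0 / g, 0.0)
        return np.sum(np.log2(1.0 + g * p)), p

    lo, hi = 0.0, 1.0
    while rate(hi)[0] < target_rate:   # grow the bracket until feasible
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rate(mid)[0] < target_rate:
            lo = mid
        else:
            hi = mid
    r, p = rate(hi)
    return p, r
```

Note how the closed-form per-subcarrier solution reduces the whole constrained problem to a one-dimensional search for the multiplier, which is what makes per-pair optimization cheap enough to repeat for every candidate source/relay pair.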
2011, 33(7): 1532-1536.
doi: 10.3724/SP.J.1146.2011.00042
Abstract:
The detection time required to reach a decision in sequential detection is a random variable. Although sequential detection has a high average detection speed, a very long detection time may be required in some cases. To improve the speed of spectrum sensing and avoid excessively long sensing times in cognitive radio, a truncated sequential detection algorithm is proposed. First, the effect of truncation on the performance of conventional sequential detection is analyzed, and upper bounds on the false alarm and missed detection probabilities are derived. Then the detection thresholds of truncated sequential detection are derived from these upper bounds. Finally, the procedure of the truncated sequential detection algorithm is presented. Simulation results show that, under the constraint of limited sensing time, the proposed algorithm satisfies the performance requirements while retaining a shorter average sensing time than conventional energy detection.
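The truncation idea can be sketched on top of Wald's classical sequential probability ratio test: accumulate the log-likelihood ratio until it crosses a threshold, but force a single-threshold decision at the truncation point. This is a generic SPRT sketch, not the paper's specific thresholds or bounds; all names are illustrative:

```python
def truncated_sprt(samples, llr_fn, a, b, trunc_thresh=0.0):
    """Wald sequential test with truncation.
    Accumulates the log-likelihood ratio sample by sample and stops
    when it leaves the interval (a, b); if the last sample is reached
    without a decision, the test is truncated: the final statistic is
    compared against a single threshold instead of the pair."""
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += llr_fn(x)
        if llr >= b:
            return "H1", n   # declare signal present
        if llr <= a:
            return "H0", n   # declare signal absent
    # truncation: forced decision at the final sample
    return ("H1" if llr >= trunc_thresh else "H0"), len(samples)
```

For a Gaussian mean-shift test (mean mu vs. 0, unit variance), the per-sample increment would be `llr_fn = lambda x: mu * x - mu**2 / 2`.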
2011, 33(7): 1537-1543.
doi: 10.3724/SP.J.1146.2010.01378
Abstract:
In IEEE 802.16 networks, data transfer is connection-oriented and a two-phase activation model is employed to set up connections. Considering the case where free resources are exhausted but some resources remain reserved for admitted service flows, a novel Call Admission Control (CAC) algorithm is proposed that borrows these reserved resources to admit new active service flows. A 3-D Markov chain model is presented, the performance of the algorithm is analyzed theoretically, and an algorithm for searching the two thresholds is developed. Simulation results show that the proposed CAC algorithm reduces the blocking probability of new service flows and improves bandwidth utilization, while the successful activation ratio of admitted-but-not-yet-activated service flows declines only slightly.
2011, 33(7): 1544-1549.
doi: 10.3724/SP.J.1146.2010.01324
Abstract:
Noise-Normalization Combining (NNC) is applied in a Differential Frequency Hopping (DFH) receiver to improve the performance of the DFH system in rejecting Partial-Band Jamming (PBJ). The Symbol Error Rate (SER) performance of this receiver over a Nakagami fading channel with PBJ and background thermal noise is analyzed, and a closed-form SER expression is derived. A simplified expression for integer fading parameter m is derived using the moment generating function method. Simulation results show that for non-worst-case PBJ, the NNC-DFH receiver is superior to the Linear Combining (LC) DFH receiver except when channel fading is weak and the jamming power is dispersed; concentrating the jamming power leads to larger performance improvements. For worst-case PBJ, this superiority always holds and is unaffected by the channel fading parameter and the jamming bandwidth factor.
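The difference between the two combiners reduces to whether each diversity branch is scaled by its noise-plus-jamming power before the sum. A minimal sketch of that contrast (illustrative only, not the paper's receiver model):

```python
import numpy as np

def nnc_combine(branch_energies, noise_powers):
    """Noise-Normalization Combining: each branch's energy is divided
    by its estimated noise-plus-jamming power, so heavily jammed hops
    are de-emphasized before the sum."""
    e = np.asarray(branch_energies, dtype=float)
    s = np.asarray(noise_powers, dtype=float)
    return float(np.sum(e / s))

def lc_combine(branch_energies):
    """Linear Combining: a plain sum, so a strongly jammed branch
    dominates the decision statistic."""
    return float(np.sum(np.asarray(branch_energies, dtype=float)))
```

With one clean branch and one jammed branch, LC lets the jammed branch swamp the statistic, while NNC scales it back down, which is the mechanism behind the worst-case PBJ advantage reported above.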
2011, 33(7): 1550-1555.
doi: 10.3724/SP.J.1146.2010.01287
Abstract:
In this paper, Frequency Domain Equalization (FDE) algorithms for partial-response Continuous Phase Modulation (CPM) signals are studied. A new framework for CPM transmitted signals is designed, and a novel low-complexity iterative detection approach for CPM is proposed. The computational complexity and bit error rate of this iterative detection algorithm are analyzed. The complexity analysis and simulations show that this approach provides not only a significant reduction in overall computational complexity but also a performance improvement over the previously proposed double-Turbo FDE algorithm in multipath fading channels.
2011, 33(7): 1556-1560.
doi: 10.3724/SP.J.1146.2010.01239
Abstract:
Cooperative communication and cognitive radio are key candidates for future mobile communication technologies. Focusing on the resource allocation issue in OFDM-based cooperative cognitive radio networks, an efficient cross-layer scheduling scheme is proposed. In the scheme, while strict interference control is enforced to protect the primary user, the most appropriate subcarrier pairs for the two transmission phases in each frame, as well as the optimal power, are allocated to maximize the rate over a superframe transmission. The underlying optimization problem is solved effectively with a decomposition method. Simulation results indicate that the proposed cross-layer scheduling scheme brings a notable increase in transmission rate, confirming its feasibility and validity.
2011, 33(7): 1561-1567.
doi: 10.3724/SP.J.1146.2010.01127
Abstract:
A spectrum assignment scheme for cognitive radio networks is proposed by combining graph theory with an immune optimization algorithm. A binary matrix coding scheme is introduced to represent the antibody population. Two operators, the Random-Constraint Satisfaction Operator (RCSO) and the Fair-Constraint Satisfaction Operator (FCSO), are designed to guarantee efficiency and fairness, respectively. A novel spectrum assignment algorithm based on Immune Clonal Selection (ICS) is proposed as an improvement of the classical immune clonal selection algorithm. With the Constraint Satisfaction Operation (CSO) applied to the encoded populations, the constraints can be satisfied while achieving global optimization. The CSO is proved effective theoretically, and the computational complexity and applicability are analyzed. Simulation results show that, compared with the Color-Sensitive Graph Coloring (CSGC) algorithm, ICS significantly increases network utilization; especially when spectrum conflict is severe, the fairness reward is efficiently improved by using ICS with the FCSO. Its fast convergence is also validated by simulation.
2011, 33(7): 1568-1574.
doi: 10.3724/SP.J.1146.2010.01370
Abstract:
Considering that node energy is limited in wireless sensor networks, an energy-efficiency optimization model that accounts for both transmission energy consumption and its balance across nodes is built in this paper. Taking maximization of the total remaining energy and minimization of the remaining-energy variance as the objectives, the model optimizes the energy efficiency of the network by reasonably allocating traffic over multiple paths. A weighted evaluation function is used to solve this model, and a Multipath Flow Allocating Routing (MFAR) algorithm is then proposed based on it. Simulation results show that the MFAR algorithm can reasonably allocate flow over multiple paths, improve network energy efficiency significantly, and achieve the goals of reducing energy consumption and balancing its distribution simultaneously.
2011, 33(7): 1575-1581.
doi: 10.3724/SP.J.1146.2010.01364
Abstract:
Unlike traditional networks, Delay/Disruption Tolerant Networks (DTN) often lack a contemporaneous end-to-end link between source and destination. Traditional security mechanisms based on a central server are therefore unsuitable for DTN, and data dissemination in DTN faces the same challenge. This paper proposes a fully distributed secure data dissemination mechanism for DTN. The mechanism adopts a distributed identity-based authentication method that needs no central Private Key Generator (PKG). Moreover, relying on threshold cryptography and a mapping from category name to category key, a node only needs to communicate with a set of random neighbor nodes no smaller than a certain threshold to acquire the data category key. Analytical and simulation results show that this mechanism guarantees the security requirements and greatly improves the efficiency of key acquisition compared with methods based on a mobile key server, so it suits DTN very well.
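The "no fewer than a threshold of neighbors" step rests on standard (k, n) threshold cryptography. As a self-contained illustration of that building block only (not the paper's identity-based, pairing-based construction), here is a minimal Shamir secret-sharing sketch; the field prime and function names are chosen for illustration:

```python
import random

P = 2**127 - 1  # a Mersenne prime; shares live in GF(P)

def make_shares(secret, k, n, rng=random):
    """Split `secret` into n shares so that any k reconstruct it:
    the secret is the constant term of a random degree-(k-1)
    polynomial, and shares are points (x, f(x)) on it."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P); pow(d, P-2, P)
    is the modular inverse via Fermat's little theorem."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret, which is why a node contacting a sub-threshold set of neighbors learns nothing about the category key.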
2011, 33(7): 1582-1588.
doi: 10.3724/SP.J.1146.2010.01346
Abstract:
Signcryption is a cryptographic primitive that combines the functions of digital signature and encryption in a single logical step. However, in some situations there are conflicts of interest between the two entities, so concurrent signature was proposed to ensure fair exchange of signatures without a special trusted third party. In this paper, the notion of concurrent signcryption is defined and its security model is proposed. An identity-based concurrent signcryption scheme is then constructed from bilinear pairings within this framework. The scheme is proved secure assuming the Bilinear Diffie-Hellman problem and the Computational Co-Diffie-Hellman problem are hard in the bilinear setting.
2011, 33(7): 1589-1593.
doi: 10.3724/SP.J.1146.2010.01222
Abstract:
In this paper, a new source number detection method based on a Peak-to-Average Power Ratio Threshold (PAPRT) is proposed by combining the eigenvectors with binary hypothesis testing. The eigenvectors are employed to weight the received data, and the peak-to-average power ratio is then calculated. Based on the fact that both the eigenvalues and the peak-to-average power ratio carry valuable information for distinguishing signal from noise, the source number is detected by introducing a binary hypothesis testing process. Simulation results show that the PAPRT method is superior to the Eigen Threshold (ET) method at low SNR when two sources are of equal intensity. It also performs well when the sources are of unequal intensity, unaffected by the intensity difference between the targets.
2011, 33(7): 1594-1599.
doi: 10.3724/SP.J.1146.2010.01170
Abstract:
An improved fast algorithm is proposed for cyclic spectral estimation that decreases the required data quantity without reducing performance. The windowed overlapped data processing used in the time-smoothing method is introduced into the frequency-smoothing algorithm to reduce the variance of the original cyclic spectral estimate and improve estimation quality. The paper derives asymptotic expressions for the estimator's mean, variance, resolution, and computational complexity. Theoretical analysis and simulation results show that the improved algorithm outperforms the DFSM under the same conditions. The new method is an efficient cyclic spectrum estimator in environments with low SNR, high resolution requirements, and small data quantities.
2011, 33(7): 1600-1605.
doi: 10.3724/SP.J.1146.2010.01271
Abstract:
A novel blind detection algorithm for multi-valued square/non-square QAM signals using a complex Continuous Hopfield-type Neural Network (CHNN) is proposed. The blind detection of multi-valued QAM signals is first transformed into a quadratic optimization problem. A method for mapping the cost function of this optimization problem to the energy function of the CHNN is shown. A complex activation function suited to this problem is designed, and the energy function of the CHNN is analyzed. Meanwhile, a special connection matrix is constructed to ensure signals are detected correctly, and a general rule for choosing the number of neurons is illustrated. Finally, simulation results on square and non-square QAM signals demonstrate the effectiveness and robustness of the new algorithm.
2011, 33(7): 1606-1610.
doi: 10.3724/SP.J.1146.2010.01220
Abstract:
A new fast algorithm based on subset partitioning for the prime-length 2D Discrete Cosine Transform (DCT) is proposed. A rule for subset partitioning is put forward, and according to it the frequency-domain DCT outputs are separated into several disjoint subsets. The calculation of the frequency data is converted into 2(N-1) calculations of even- or odd-indexed length-N 1D-DCT coefficients. The computational complexity of the algorithm is presented. Compared with the Row-Column Method (RCM), the new fast algorithm halves the number of multiplications, eliminates data transposition, and retains the same number of additions.
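For reference, the Row-Column Method baseline exploits the separability of the 2D DCT: a 1D DCT over every row, then over every column of the result. A minimal sketch of that baseline (not the proposed subset-partition algorithm; an orthonormal DCT-II is assumed):

```python
import numpy as np

def dct1(x):
    """Orthonormal DCT-II of a length-N vector, C[k, n] =
    s_k * cos(pi * (2n + 1) * k / (2N)) with s_0 = sqrt(1/N),
    s_k = sqrt(2/N) otherwise."""
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)
    return C @ x

def dct2_rcm(img):
    """Row-Column Method: apply the 1D DCT to every row, then to
    every column of the intermediate result."""
    tmp = np.apply_along_axis(dct1, 1, img)   # rows
    return np.apply_along_axis(dct1, 0, tmp)  # columns
```

Because the transform is orthonormal in both dimensions it preserves energy, which is a convenient correctness check; the intermediate-result transposition this method implies is exactly the data movement the proposed algorithm eliminates.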
2011, 33(7): 1611-1617.
doi: 10.3724/SP.J.1146.2010.01182
Abstract:
Considering variations in illumination, expression, and pose, a new face recognition algorithm based on factor analysis and data mining is proposed. The consistency of the content-and-style factor analysis model with linear discriminant analysis in face recognition is analyzed. To improve robustness, a two-factor analysis of variance with an additive model is proposed to reduce the impact of style information on the observed face features. Experimental results show that this method achieves higher and more stable performance than the Fisherface method; in particular, it performs well in complex environments where the Fisherface method performs poorly.
2011, 33(7): 1618-1624.
doi: 10.3724/SP.J.1146.2010.01280
Abstract:
Recently, sparse representation theory has aroused widespread interest in pattern recognition. In this paper, sparse representation-based face recognition algorithms are studied. To make the representation coefficient vector sparser, a Gabor Sparse Representation Classification (GSRC) algorithm is presented, which uses Gabor local features to construct the dictionary and enhances robustness to external environment changes. GSRC treats all Gabor features equally; considering that different Gabor features contribute differently to face recognition, a Weighted Multi-Channel Gabor Sparse Representation Classification (WMC-GSRC) algorithm is further proposed. By introducing a Gabor multi-channel model, WMC-GSRC extracts Gabor features in different channels to construct dictionaries and sparse representation classifiers, and obtains the final result by weighted fusion of the classifiers. Experimental results on the ORL, AR, and FERET face databases show the feasibility and effectiveness of the proposed methods.
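The SRC decision rule underlying both algorithms is: sparsely code the probe over a dictionary of training samples, then assign it to the class whose atoms reconstruct it with the smallest residual. A minimal sketch using greedy Orthogonal Matching Pursuit as the sparse coder (the SRC literature typically uses l1 minimization; OMP is substituted here for brevity, and raw vectors stand in for Gabor features):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, refit by least squares, repeat."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=3):
    """SRC rule: keep only each class's coefficients and assign y
    to the class with the smallest reconstruction residual."""
    x = omp(D, y, k)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        r = np.linalg.norm(y - D[:, mask] @ x[mask])
        if r < best_r:
            best, best_r = c, r
    return best
```

The weighted multi-channel variant described above would run this classifier once per Gabor channel and fuse the per-channel decisions with learned weights.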
2011, 33(7): 1625-1631.
doi: 10.3724/SP.J.1146.2010.01111
Abstract:
Corresponding feature extraction is a key stage in infrared-visible image registration, fusion, change detection, etc. Considering the difficulty of correctly extracting related features between infrared and visible images of the same scene, an affine-invariant method based on the Maximally Stable Extremal Regions (MSER) algorithm is proposed. The approach includes three steps: (1) extract the maximally stable extremal regions in the infrared and visible images; (2) fit the feature regions to ellipses; and (3) regularize the elliptical regions to eliminate deformation disturbance. The final output is a set of coherent features convenient for description and matching. Experimental results show the effectiveness of the proposed method in corresponding feature extraction between infrared and visible images.
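Steps (2) and (3) are commonly implemented with region moments: the fitted ellipse shares the region's centroid and second-order moments, and whitening by that ellipse removes the affine deformation up to a rotation. A sketch under those standard assumptions (the MSER detection of step (1) is omitted; names are illustrative):

```python
import numpy as np

def region_to_ellipse(coords):
    """Step 2 sketch: fit an ellipse to a region via its moments.
    coords is an (N, 2) array of pixel (row, col) positions; the
    ellipse has the region's centroid and covariance."""
    pts = np.asarray(coords, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov(pts.T, bias=True)
    evals, evecs = np.linalg.eigh(cov)
    axes = 2.0 * np.sqrt(evals)   # semi-axes of the moment ellipse
    return mean, axes, evecs

def normalize_region(coords):
    """Step 3 sketch: map the elliptical region onto a circle,
    which cancels the affine deformation between the infrared and
    visible views up to an unknown rotation."""
    mean, axes, evecs = region_to_ellipse(coords)
    return (np.asarray(coords, dtype=float) - mean) @ evecs / axes
```

After normalization the region's pixel covariance is isotropic, so descriptors computed on it match across affinely deformed views.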
2011, 33(7): 1632-1638.
doi: 10.3724/SP.J.1146.2010.01237
Abstract:
A new kernel logistic regression model based on a two-phase sparsity-promoting prior is proposed to yield a sparse multi-class classifier and enhance run-time efficiency. To accelerate model building, a bottom-up training algorithm is adopted that controls the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization and faster computation. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed method.
2011, 33(7): 1639-1643.
doi: 10.3724/SP.J.1146.2010.01212
Abstract:
The main problems of the Particle Filter (PF) are the sample degeneracy and impoverishment phenomena. To address them, a new PF based on Differential Evolution (DE) is proposed. First, an Importance Distribution (ID) that incorporates the newest measurements is produced with the Unscented Kalman Filter (UKF). Second, the particles sampled from the ID are no longer resampled by the conventional algorithm; instead, they are regarded as the current population, with their weights serving as the fitness function. Finally, a process of mutation, recombination, and selection is repeated until the optimum particles are found. Simulation results show that the proposed method effectively relieves the sample degeneracy and impoverishment problems, improves particle efficiency, and achieves better estimation precision.
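The DE stage can be sketched as follows, treating the particles as the population and their importance weights as the fitness function. The fitness callable, the DE constants `F` and `CR`, and the fixed iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def de_refine_particles(particles, fitness_fn, F=0.5, CR=0.9, iters=40, rng=None):
    """Refine PF particles with DE: mutation, recombination (crossover),
    and selection, using importance weights as fitness."""
    rng = np.random.default_rng() if rng is None else rng
    pop = particles.copy()
    fit = fitness_fn(pop)
    n, d = pop.shape
    for _ in range(iters):
        for i in range(n):
            r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])       # mutation
            mask = rng.random(d) < CR
            mask[rng.integers(d)] = True                     # keep >=1 mutant gene
            trial = np.where(mask, mutant, pop[i])           # recombination
            f_trial = fitness_fn(trial[None, :])[0]
            if f_trial >= fit[i]:                            # selection
                pop[i], fit[i] = trial, f_trial
    return pop, fit
```

Because selection only ever replaces a particle with a fitter trial, the population drifts toward high-likelihood regions instead of collapsing onto a few copies, which is how DE counters impoverishment.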
2011, 33(7): 1644-1648.
doi: 10.3724/SP.J.1146.2010.00843
Abstract:
An effective adaptive imaging method using a cross-correlation weight of dual robust beamforming is proposed for UltraWideBand (UWB) through-the-wall imaging radar. Dual constrained Robust Capon Beamforming (DRCB) is applied to two subarrays obtained by dividing the array alternately. The energy of the sum of the two beamformer output signals is taken as the pixel value in order to achieve higher resolution and much better interference suppression. Because the two beamformers have well-correlated mainlobe responses but different, uncorrelated sidelobe responses, the Cross-correlation Coefficient (CC) of the two beamformer outputs is used to weight each pixel, suppressing sidelobes and significantly improving contrast. The excellent performance of the proposed method is demonstrated by FDTD numerical simulations and experimentally measured data.
2011, 33(7): 1649-1654.
doi: 10.3724/SP.J.1146.2011.00016
Abstract:
A method for joint estimation of angle and Doppler frequency for bistatic MIMO radar in spatially colored noise, based on temporal-spatial structure, is presented. Under the assumption of temporally white Gaussian noise, the cross-correlation of the matched-filter outputs at different time-delay samplings is used to eliminate the spatially colored noise. Then, the Direction Of Departure (DOD), Direction Of Arrival (DOA), and Doppler frequencies of the targets are estimated with ESPRIT, using the rotational factor produced by adjacent matched-filter outputs in the time domain. The method eliminates the effect of spatially colored noise and pairs the parameters automatically without array aperture loss, and it is applicable to sensor arrays without an invariance structure. Numerical results verify the effectiveness of the proposed method.
2011, 33(7): 1655-1660.
doi: 10.3724/SP.J.1146.2010.01211
Abstract:
A regularized constrained total least-squares localization algorithm for a near-space radar network is discussed. First, the nonlinear equations in range and angle are transformed into linear equations, and the influence of errors is analyzed by expanding the true range and angle in a first-order Taylor series. The localization problem is then cast as a regularized constrained total least-squares problem, which a Lagrange function converts into an unconstrained one. A proper weight is chosen by the minimum mean-square-error rule to obtain the location solution, and the location accuracy is analyzed. Simulation results show the effectiveness of the algorithm.
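Once the range/angle equations are linearized, the regularization step amounts to a damped least-squares solve. The sketch below shows only plain Tikhonov regularization on the linearized system; the paper's full regularized *constrained total* least-squares with Lagrange multipliers is more involved:

```python
import numpy as np

def tikhonov_solve(A, b, mu):
    """Regularized least squares on the linearized location equations:
    minimize ||A x - b||^2 + mu * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
```

The regularization weight `mu` trades bias against noise amplification when `A.T @ A` is ill-conditioned, which is the role the weight selection rule plays in the paper.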
2011, 33(7): 1661-1666.
doi: 10.3724/SP.J.1146.2010.00960
Abstract:
Synthetic Aperture Radar (SAR) target detection and identification is one of the bottlenecks for practical SAR applications, and extracting effective features is its key step. Attribute scattering center features reflect the position and type of target scattering centers, so accurately extracting them can improve detection and identification performance. A feature extraction method for SAR target attribute scattering centers based on the Improved Space-Wavenumber Distribution (ISWD) is proposed. The response of each scattering center with respect to frequency and aspect is computed using the ISWD, and the parameters of the attribute scattering center model are then estimated from this function. Computer simulation results show the validity of the method.
2011, 33(7): 1667-1670.
doi: 10.3724/SP.J.1146.2010.01320
Abstract:
The theory, characteristics, and shortcomings of traditional wind vector algorithms are first analyzed, and a new wind vector algorithm for scanning mode is then presented. The correlation of ocean waves between Synthetic Aperture Radar (SAR) images from two neighboring scanning periods is discussed, and the wind direction is determined from the displacement vector of the wind-induced streaks using gray-level cross-correlation. Finally, a Geophysical Model Function (GMF) is adopted to estimate the wind speed. Compared with the traditional wind vector algorithm and with buoy wind measurements, the new algorithm is more accurate and free of wind-direction ambiguity. Processing results on real airborne radar data prove the effectiveness of the algorithm.
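The streak-displacement step is essentially image cross-correlation between the two scan-period images. A minimal sketch using frequency-domain phase correlation on a synthetic circular shift (the paper applies gray-level cross-correlation to real SAR images; the synthetic shift here is only for illustration):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) displacement of img_b relative to img_a
    from the peak of the normalized frequency-domain cross-correlation."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    R = np.conj(Fa) * Fb
    R /= np.maximum(np.abs(R), 1e-12)   # whiten: keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    if dy > h // 2:                     # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Dividing the displacement vector by the time between the two scanning periods gives the streak drift, whose orientation yields the wind direction without the 180-degree ambiguity of streak-orientation-only methods.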
2011, 33(7): 1671-1677.
doi: 10.3724/SP.J.1146.2010.01196
Abstract:
The detection performance of four different network radar models on Rician targets with pulse-to-pulse fluctuation is analyzed when the total single-pulse transmit power is fixed. The simulation and analysis show that such targets can be divided into three classes with different detection performance. In the network radar, Class-Swerling II targets behave like Swerling II targets; standard Rician targets with pulse-to-pulse fluctuation differ from Swerling II targets; and mixed Rician targets with pulse-to-pulse fluctuation behave like Swerling II targets, except under the MIMO model, where they behave like standard Rician targets. The results are useful for designing network radar systems.
2011, 33(7): 1678-1683.
doi: 10.3724/SP.J.1146.2010.01281
Abstract:
A novel Compressive Sensing (CS) based high-resolution range imaging method for Frequency-Coded Pulse Radar (FCPR) is proposed. Considering the spatial sparsity of the target scene, an FCPR sparse signal model is derived and a coherent synthesis processing method for FCPR pulses is presented. The target frequency-domain response is sampled with only a few FCPR sub-pulses, from which high-resolution range information is reconstructed exactly. A reduced-dimension sensing matrix is created dynamically, based on target velocity pre-estimation using the FFT, which lowers the computational complexity of CS recovery and speeds up CS-based coherent synthesis processing. Computer simulations show that the presented method outperforms the traditional IFFT coherent synthesis algorithm, with smaller magnitude-estimation error for strong scattering centers and better robustness against velocity estimation error and noise.
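The core CS idea, reconstructing a sparse range profile from a few frequency-domain samples, can be sketched with generic Orthogonal Matching Pursuit on a partial-Fourier sensing matrix. This mirrors "sampling the frequency response with a few sub-pulses" but is not the paper's specific recovery algorithm or sensing matrix:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x
    by greedily selecting the atom most correlated with the residual."""
    residual, support = y.astype(complex), []
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on support
        residual = y - sub @ coef
    x[support] = coef
    return x
```

With a sparse scene, far fewer measurements than range cells suffice, which is why only a few sub-pulses are needed.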
2011, 33(7): 1684-1688.
doi: 10.3724/SP.J.1146.2010.01255
Abstract:
A fast angle estimation algorithm for coherently distributed targets based on bistatic MIMO radar is proposed. First, the signal model of coherently distributed targets for bistatic MIMO radar is established. Then, a Hadamard-product rotational invariance property of the steering vectors of coherently distributed targets is proved based on this model. Finally, the two-dimensional (2-D) transmit-receive central azimuths are estimated using this property. Analysis indicates that the proposed algorithm requires no spectral search and pairs the parameters simply, which reduces the computational cost efficiently. Because the algorithm makes no assumption about the angular signal distribution functions of the targets, it can handle distributed targets with different or unknown angular signal distribution functions, and it is robust. The correctness and efficiency of the proposed method are verified by computer simulation results.
2011, 33(7): 1689-1693.
doi: 10.3724/SP.J.1146.2010.01373
Abstract:
To achieve highly accurate measurement of object velocity, a laser Doppler velocity radar system is established and its frequency estimation algorithm is investigated. The frequency estimation method based on autocorrelation is improved and its best performance is obtained. A new synthetic frequency estimation algorithm, combining the Quinn and improved autocorrelation algorithms and adaptive to the Signal-to-Noise Ratio (SNR), is proposed. Monte Carlo simulations and a rotating-cylinder experiment indicate that the new algorithm outperforms previously reported methods: the root-mean-square error of the system is smaller than 2 mm/s and the relative error is less than 0.06%. The experimental results are consistent with the analytical and simulation results.
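The autocorrelation branch of such estimators has a compact form for a complex Doppler signal: the phase of the lag-one autocorrelation gives the frequency. A minimal sketch of that branch only (the Quinn interpolator, the "improved" autocorrelation refinement, and the SNR-adaptive switching between them are omitted):

```python
import numpy as np

def autocorr_freq(x, fs):
    """Lag-1 autocorrelation frequency estimator for a complex signal:
    f_hat = angle(sum conj(x[n]) * x[n+1]) * fs / (2*pi)."""
    r1 = np.vdot(x[:-1], x[1:])  # vdot conjugates its first argument
    return np.angle(r1) * fs / (2 * np.pi)
```

The estimator is unambiguous for |f| < fs/2 and cheap enough to run per Doppler burst, which makes it a natural low-SNR partner for an FFT-based interpolator like Quinn's.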
2011, 33(7): 1694-1699.
doi: 10.3724/SP.J.1146.2010.01259
Abstract:
Sliding spotlight SAR is a unique imaging mode between strip-map SAR and spotlight SAR. Considering the long illumination and accumulation times of high-resolution, wide-coverage space-borne sliding spotlight SAR, the precision and the azimuth-time-variant characteristic of the traditional imaging model are analyzed. Based on the theory of motion compensation for airborne SAR, a method is presented to correct the azimuth-time-variant error, using the range error caused by the nonlinear motion of the space-borne SAR relative to the virtual rotation point. In addition, a method to compensate the residual cubic error of the imaging model in the Doppler domain is given. Finally, a new DCS algorithm with imaging-model error correction is introduced, whose validity is verified by computer simulation.
2011, 33(7): 1700-1705.
doi: 10.3724/SP.J.1146.2010.01190
Abstract:
Considering the poor edge preservation and low directional resolution of SAR image segmentation in the conventional wavelet transform domain, a new segmentation method is proposed based on Gray-Level Co-occurrence Probability (GLCP) features in the overcomplete Brushlet domain. The method compresses, via compressed sensing, the redundant GLCP features extracted by adaptive-window Gabor filtering in different directional coefficient blocks; Fuzzy C-Means (FCM) clustering then completes the segmentation. Experimental results show that the new method has advantages in edge preservation and direction extraction and obtains better segmentation results than other methods.
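The clustering stage is standard Fuzzy C-Means. A minimal sketch of its alternating center/membership updates (the GLCP feature extraction and the compressed-sensing compression that precede it are not reproduced here; the fuzzifier m = 2 and iteration count are conventional defaults):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain Fuzzy C-Means: alternate fuzzy membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)            # membership update
    return centers, U
```

In the segmentation method, each pixel's compressed GLCP feature vector is a row of `X`, and the fuzzy memberships give the final label assignment.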
2011, 33(7): 1706-1712.
doi: 10.3724/SP.J.1146.2010.01341
Abstract:
An automatic method for detecting and interpreting bridges over water in high-resolution space-borne synthetic aperture radar imagery is proposed. First, textural features for image classification are computed, including Gabor filter responses, tree-structured wavelet coefficients, and gray-level co-occurrence matrix statistics. The SAR imagery is then classified into low-reflection, vegetation-covered, and built-up areas using a support vector machine classifier. By analyzing the spatial distribution, shape, and gray-level characteristics of targets in the low-reflection area, the Regions Of Interest (ROI) are detected. For each ROI, key parameters of the bridge are estimated from the radar imaging model, including direction, length over water, width, elevation over water, body thickness, and the true orthographic-projection position. Experiments with TerraSAR-X imagery indicate that the method is effective.
2011, 33(7): 1713-1717.
doi: 10.3724/SP.J.1146.2010.01163
Abstract:
Multicorrelator-based detection of evil waveforms in navigation signals is an important issue in GNSS integrity monitoring. This paper defines the effect of the 2nd-order step threat model of evil waveforms on the code tracking loop and analyzes the multicorrelator detection technique based on the Local/Wide Area Augmentation System (LAAS/WAAS). To simplify the ground integrity channel and shorten the detection time, a new technique based on Satellite Autonomous Integrity Monitoring (SAIM) is put forward. Simulations of detection capability and efficiency show that the SAIM-based technique reduces the detection deviation by 0.3 Tc and gains 20 dB in SNR over the LAAS-based technique, which is of great significance for integrity monitoring.
2011, 33(7): 1718-1721.
doi: 10.3724/SP.J.1146.2010.01230
Abstract:
The Method of Moments (MoM) based on the Impedance Boundary Condition (IBC) is presented to analyze the electromagnetic scattering characteristics of three-dimensional targets coated with anisotropic materials. According to the surface equivalence principle, the Galerkin method is used with the electric and magnetic currents expanded in three-dimensional Rao-Wilton-Glisson (RWG) vector basis functions. The electromagnetic simulation of coated targets is performed with the material parameters characterized by a surface impedance matrix, and the numerical results agree well with exact results such as the Mie series solution. Analyses of the electromagnetic scattering properties of complex targets coated with anisotropic materials are presented, providing theoretical support for radar stealth and anti-stealth.
2011, 33(7): 1722-1726.
doi: 10.3724/SP.J.1146.2010.01219
Abstract:
To study the electron emission of cathode evaporant, a newly designed test device is used to collect the electron emission curve of evaporant deposited on a polycrystalline tungsten surface. An electron emission microscope and SEM are used to analyze the electron emission image, surface morphology, and composition. The results show that the emission curve can be divided into three stages, namely a sharp-rise stage, a fast-rise stage, and a slow-rise stage, corresponding sequentially to electron emission from grain boundaries and scratches, grain surfaces, and three-dimensional islands. It is shown that the electron emission performance of M-type cathodes can be greatly improved by building uniformly dispersed island-shaped crystal emission spots.
2011, 33(7): 1727-1732.
doi: 10.3724/SP.J.1146.2010.01260
Abstract:
This paper presents the design considerations, simulation results, and test results for a new S-band high-average-power broadband klystron. A method for verifying the quality of the electron optics system of such a klystron is proposed. A coordinated method using the 2.5-D Arsenal-MSN code and the KLY6 code is also described, which eliminates the potential output-power sag and optimizes the parameters of the RF interaction region for broadband klystrons. Hot-test results prove that both methods are effective.
2011, 33(7): 1733-1737.
doi: 10.3724/SP.J.1146.2010.01208
Abstract:
Regarding the connectivity-domain constraint in nanometer circuit architectures, this paper proposes a circuit equivalent-transformation method based on logic replication for reducing mapping complexity. The fanout degrees of all gates in a circuit are recorded and sorted to select a high-fanout reference value. A quadratic equation is then formulated to evaluate whether the mapping complexity of a gate is reduced. Finally, a gate whose fanout degree exceeds the reference value is replicated if its complexity is thereby reduced. The proposed method not only makes circuits easier to map but also achieves better timing than buffer insertion.
2011, 33(7): 1738-1742.
doi: 10.3724/SP.J.1146.2010.01244
Abstract:
Kernel-based Support Vector Machines (SVM) are widely used in many fields (e.g., image classification) for their good generalization, and the key factor is designing effective kernel functions. Since traditional kernel functions incorporate little a priori knowledge, a data-driven kernel building method is proposed to construct a new histogram kernel function, combined with the Bag Of Words (BOW) model and based on a TF-IDF Weighted Quadratic Chi-squared (WQC) distance. When computing distances between histograms, the distinct discriminative power of each histogram bin is fully taken into consideration to boost the classification performance of the kernel. Experiments on several classic image data sets (Caltech101/256, etc.) show the better classification performance of the proposed method.
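The per-bin weighting idea can be sketched as follows. This is a simplified IDF-weighted chi-squared distance turned into a kernel, not the paper's exact TF-IDF WQC formulation; the smoothing constant `eps` and the `+ 1.0` in the IDF are assumed details:

```python
import math

def idf_weights(histograms):
    """IDF weight per bin: bins that occur in few images are treated as
    more discriminative (a simplification of the paper's TF-IDF weighting)."""
    n = len(histograms)
    bins = len(histograms[0])
    df = [sum(1 for h in histograms if h[i] > 0) for i in range(bins)]
    return [math.log(n / (1 + df[i])) + 1.0 for i in range(bins)]

def weighted_chi2(p, q, w, eps=1e-10):
    """Per-bin weighted chi-squared distance between two histograms."""
    return sum(wi * (pi - qi) ** 2 / (pi + qi + eps)
               for pi, qi, wi in zip(p, q, w))

def wchi2_kernel(p, q, w, gamma=1.0):
    """Exponentiate the distance into a kernel value usable by an SVM."""
    return math.exp(-gamma * weighted_chi2(p, q, w))
```

An identical pair of histograms yields distance 0 and kernel value 1, as required for a similarity kernel.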
2011, 33(7): 1743-1747.
doi: 10.3724/SP.J.1146.2010.01295
Abstract:
To improve the performance of salt-and-pepper noise removal, a two-stage scheme is proposed. In the first stage, an improved adaptive median filter is used to identify pixels that are likely to be contaminated by noise (noise candidates). In the second stage, the image is restored using a variational method applied only to those selected noise candidates. The proposed method can remove salt-and-pepper noise at a noise level as high as 80%. Simulation results indicate that this algorithm is effective and outperforms the traditional variational method based on the adaptive median filter.
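The first stage above can be sketched in a few lines. This is a minimal pure-Python illustration of adaptive-median noise-candidate detection; the paper's improved filter and the window cap `w_max` here are assumed details, and the variational second stage is omitted:

```python
def detect_noise_candidates(img, w_max=7):
    """Stage 1: flag pixels likely corrupted by salt-and-pepper noise.

    img   -- 2-D list of grey levels in [0, 255]
    w_max -- assumed cap on the adaptive window width
    Returns a same-sized boolean mask of noise candidates.
    """
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if img[i][j] not in (0, 255):
                continue                # only extreme values can be impulses
            half = 1
            while half * 2 + 1 <= w_max:
                win = [img[a][b]
                       for a in range(max(0, i - half), min(h, i + half + 1))
                       for b in range(max(0, j - half), min(w, j + half + 1))]
                win.sort()
                med = win[len(win) // 2]
                if 0 < med < 255:       # window median is not an impulse,
                    mask[i][j] = True   # so the extreme centre is a candidate
                    break
                half += 1               # median still extreme: grow the window
    return mask
```

The second stage would then restore only the flagged pixels, leaving uncorrupted pixels untouched.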
2011, 33(7): 1748-1751.
doi: 10.3724/SP.J.1146.2010.01236
Abstract:
In this paper, a blind despreading algorithm is proposed for synchronous multi-user long-code Direct Sequence Spread Spectrum (DS-SS) signals in low Signal-to-Noise Ratio (SNR) scenarios. The synchronous multi-user long-code DS-SS signals are represented as short-code signals with missing data, and the users' spreading-waveform subspace is then estimated by the Singular Value Thresholding (SVT) algorithm. Finally, the Expectation Maximization (EM) algorithm is used to blindly despread the signals. Simulations show that in low-SNR scenarios the proposed algorithm performs nearly as well as cooperative despreading.
2011, 33(7): 1752-1755.
doi: 10.3724/SP.J.1146.2010.00958
Abstract:
The M-estimator, presented by Hao Cheng-peng in 2005 and enhanced in 2007, possesses higher accuracy in estimating the shape parameter of the K-distribution. In this paper, a further enhanced M-estimator is proposed based on the 2007 version, in which some intermediate procedures are omitted. Simulation results show that both estimation accuracy and efficiency increase dramatically.
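For context, the classical moment-based shape estimator that such M-estimators are typically compared against can be sketched as follows; this is a baseline only, not the M-estimator discussed in the abstract, and the sample generator models K-distributed intensity as an exponential speckle term times a unit-mean gamma texture:

```python
import random

def moment_shape_estimate(intensity):
    """Classical moment-based estimate of the K-distribution shape
    parameter nu from intensity samples, inverting
    E[I^2] / E[I]^2 = 2 * (1 + 1/nu)."""
    n = len(intensity)
    m1 = sum(intensity) / n
    m2 = sum(x * x for x in intensity) / n
    ratio = m2 / (m1 * m1)
    return 2.0 / (ratio - 2.0)

def k_intensity_samples(nu, n, seed=1):
    """K-distributed intensity: unit-mean exponential speckle multiplied
    by a unit-mean gamma texture with shape nu."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0) * rng.gammavariate(nu, 1.0 / nu)
            for _ in range(n)]
```

The moment estimator is simple but has high variance for spiky clutter, which is precisely the weakness that more refined estimators target.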
2011, 33(7): 1756-1760.
doi: 10.3724/SP.J.1146.2010.00798
Abstract:
Because of its capability for long-range imaging with a simple hardware architecture, the stepped-frequency modulated signal has an obvious advantage in modern radar target recognition. The principle of synthesizing the range profile from a stepped-frequency modulated signal is derived. However, the radar echo is sensitive to target motion, which blurs the range profile. To solve this problem, a novel stepped-frequency modulated signal with a varying pulse repetition time is put forward. Target velocity-frequency coupling is avoided by predesigning the pulse repetition time, and the influence of acceleration is eliminated by phase cancellation. The precision required of the pulse repetition time is discussed. Simulations with synthetic data confirm the effectiveness of the proposed method.
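The underlying range-profile synthesis can be sketched for the stationary-target, fixed-PRT baseline that the paper improves on: the burst of per-frequency echoes is inverse-DFT'd, and a point target lands in the bin set by its range. The carrier, step size, and burst length below are assumed example values:

```python
import cmath, math

C = 3e8  # propagation speed (m/s)

def range_profile(echo):
    """Synthesise a coarse range profile from one burst of
    stepped-frequency echoes via an inverse DFT."""
    n = len(echo)
    return [abs(sum(echo[m] * cmath.exp(2j * math.pi * m * k / n)
                    for m in range(n)) / n)
            for k in range(n)]

def point_target_echo(r, f0, df, n):
    """Ideal echoes of a stationary point target at range r, for n
    pulses stepped by df starting at carrier f0: each pulse carries
    the two-way phase -4*pi*f_m*r/C."""
    return [cmath.exp(-4j * math.pi * (f0 + m * df) * r / C)
            for m in range(n)]
```

The range-bin size is C / (2 * n * df), so a target placed exactly k bins out peaks at index k; a moving target would smear this peak, which is the motion sensitivity the abstract describes.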
2011, 33(7): 1761-1764.
doi: 10.3724/SP.J.1146.2010.01110
Abstract:
This paper presents the notions of sorted-attack security and generalized sorted-attack security for tweakable enciphering schemes under chosen-plaintext and chosen-ciphertext attacks. First, the two notions are proved to be equivalent. Second, it is proved that basic distinguishing-attack security and left-or-right distinguishing-attack security each guarantee sorted-attack security and generalized sorted-attack security, thereby revealing that a strong tweakable enciphering scheme possesses both cryptographic properties.
2011, 33(7): 1765-1769.
doi: 10.3724/SP.J.1146.2010.00853
Abstract:
The k-error linear complexity of the output sequences of a single-cycle T-function is investigated, with polynomial theory and the Games-Chan algorithm as the main tools. All linear complexity drop points, and the k-error linear complexity at each drop point, are given when n = 2^t. The distribution of the k-error linear complexity and the k-error linear complexity profile of the output sequences of a single-cycle T-function are given.
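The Games-Chan algorithm used in the analysis computes the linear complexity of a 2^n-periodic binary sequence in linear time; a compact sketch of the standard recursion (the k-error extension studied in the paper builds on this k = 0 case):

```python
def games_chan(s):
    """Linear complexity of one period of a binary sequence whose
    period is a power of two (Games-Chan algorithm)."""
    s = list(s)
    assert len(s) & (len(s) - 1) == 0, "period must be a power of two"
    lc = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        if left == right:
            s = left                      # halves agree: recurse on one half
        else:
            lc += half                    # halves differ: complexity gains N/2
            s = [a ^ b for a, b in zip(left, right)]
    return lc + s[0]                      # final 1-bit sequence contributes 0 or 1
```

For example, the all-ones sequence has linear complexity 1, while a period containing a single 1 attains the maximum complexity equal to the period length.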
2011, 33(7): 1770-1774.
doi: 10.3724/SP.J.1146.2010.01292
Abstract:
The 4-round ARIA differential property is given, and a differential enumeration attack on 7-round and 8-round ARIA-256 is presented in this paper. The attacks need 2^56 chosen plaintexts. The attack on 7-round ARIA has a time complexity of 2^238.2 7-round ARIA encryptions in the preprocessing phase and 2^124.2 7-round ARIA encryptions in the processing phase. The attack on 8-round ARIA has a time complexity of 2^238 8-round ARIA encryptions in the preprocessing phase and 2^253.6 8-round ARIA encryptions in the processing phase.
2011, 33(7): 1775-1778.
doi: 10.3724/SP.J.1146.2010.01199
Abstract:
Threshold Logic Gates (TLGs) are receiving much attention because of their logic versatility and functional completeness. For circuit design based on TLGs, a method is described to determine whether a function is a threshold function using spectral techniques; the weights and threshold can be calculated from the spectral coefficients. For non-threshold functions, a novel logic synthesis algorithm is proposed that transforms a non-threshold function into a sum of threshold functions. Thus, any Boolean logic function can be realized by a collection of TLGs using the method in this paper. The proposed algorithm provides an approach to circuit design with resonant tunneling diodes.
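What it means for a function to be "threshold" can be illustrated with a brute-force check over small integer weights. This is only an illustration for tiny n under an assumed weight bound `w_max`; the paper instead derives the weights and threshold directly from spectral coefficients:

```python
from itertools import product

def is_threshold(truth_table, n, w_max=3):
    """Brute-force test of whether an n-variable Boolean function is a
    threshold function, i.e. f(x) = [w . x >= t] for some integer
    weights w in [-w_max, w_max] and integer threshold t.
    truth_table lists f over inputs in lexicographic (0,1)-order."""
    inputs = list(product((0, 1), repeat=n))
    for ws in product(range(-w_max, w_max + 1), repeat=n):
        for t in range(-n * w_max, n * w_max + 1):
            if all((sum(w * x for w, x in zip(ws, xs)) >= t) == bool(f)
                   for xs, f in zip(inputs, truth_table)):
                return True                # found a separating weight vector
    return False
```

AND and OR pass the test, while XOR fails; XOR is the classic non-threshold function that a synthesis algorithm must decompose into a sum of threshold functions.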