2009 Vol. 31, No. 5
2009, 31(5): 1017-1021.
doi: 10.3724/SP.J.1146.2008.00563
Abstract:
The application of the Internet is shifting from information exchange to information sharing, driven by the sustained rapid growth of Internet content and demand. However, network measurement studies indicate that the current network architecture is becoming overburdened by the massive redundant data transmission generated by information-sharing services. This paper proposes a new asymmetric network architecture for ubiquitous information sharing, based on small-world theory and distributed caching techniques. The proactive service mode and several key techniques of the new architecture for mainstream Internet information sharing are then presented. Experiments on Tunet show that the new architecture offers high coverage and efficient indexing, and has remarkable advantages in delay and bandwidth.
2009, 31(5): 1022-1025.
doi: 10.3724/SP.J.1146.2008.00184
Abstract:
A load-balancing approach for a distributed QoS registry for Web services is proposed. Different load-information dissemination policies are applied according to peer load status, which significantly reduces network overhead. A load-balancing approach based on a simple negotiation protocol is also proposed: the replication initiator makes a rational proposal to the replication receiver, which then replicates based on the opponent's offer. The approach improves load-balancing efficiency and reduces unnecessary replication attempts. It is tested on a prototype of the distributed QoS registry and validated by experimental results.
2009, 31(5): 1026-1030.
doi: 10.3724/SP.J.1146.2008.00483
Abstract:
A novel scalable Multiple-Plane and Multiple-Stage (MPMS) packet switching fabric is proposed in this paper. First, the graph model of the MPMS fabric is built, and the neighbouring connectivity and reachability of MPMS are described. Vertices are classified as balanced or competitive, and the non-blocking condition, which determines switching performance, is proved. The MPMS fabric is shown to achieve P times the maximum port rate and the square of the maximum port count with only linearly increasing structural complexity, and is therefore much more scalable.
2009, 31(5): 1031-1034.
doi: 10.3724/SP.J.1146.2008.00455
Abstract:
Automated negotiation is a key form of interaction in agent-based systems. First, a flexible one-to-many negotiation model is proposed; it supports continuous, open and dynamic negotiation. A coordination strategy based on relative utility is then discussed. Finally, experiments show that agreement quality, negotiation time and cost are optimized by the proposed system. The model proves to be effective and practical.
2009, 31(5): 1035-1039.
doi: 10.3724/SP.J.1146.2008.00002
Abstract:
Sensor deployment directly affects the cost and performance of underwater sensor networks. Exploiting the strong cooperative capability of sensor nodes, this paper presents a deployment strategy based on detection fusion. The Neyman-Pearson criterion is adopted to fuse the detection information from all sensor nodes in one unit grid, which achieves highly efficient coverage for square and triangular unit grids. Grid-partition methods of the detection field are presented for the two types of unit grid, and the number of sensor nodes needed to monitor the detection field and their locations are determined accordingly. The effectiveness of the proposed deployment strategy is verified by simulation experiments. The results indicate that sensor-node redundancy is reduced compared with a deployment strategy without detection fusion; with a fixed number of sensor nodes, the new strategy covers a larger area with the desired detection accuracy.
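As a hedged illustration of the fusion step (not the paper's exact derivation), the sketch below fuses Gaussian sensor readings from one unit grid and sets the decision threshold from a desired false-alarm rate in the Neyman-Pearson manner; the Gaussian noise model and parameter names are assumptions:

```python
from statistics import NormalDist

def np_threshold(n_sensors, sigma, pfa):
    # Under H0 each reading is N(0, sigma^2) noise, so the fused sum
    # is N(0, n*sigma^2); pick the threshold that yields the desired
    # false-alarm probability pfa (Neyman-Pearson style).
    return sigma * (n_sensors ** 0.5) * NormalDist().inv_cdf(1.0 - pfa)

def fused_detect(readings, sigma, pfa):
    # Fuse all readings from the nodes of one unit grid and compare
    # the sum against the Neyman-Pearson threshold.
    return sum(readings) > np_threshold(len(readings), sigma, pfa)
```

Raising the number of cooperating nodes lowers the per-node signal level needed for the same detection accuracy, which is what allows the fused grids to be larger.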
2009, 31(5): 1040-1044.
doi: 10.3724/SP.J.1146.2008.00358
Abstract:
Cluster-based topology control is an energy-efficient method in large-scale wireless sensor networks. A cluster head depletes its energy faster than cluster members, so cluster-head rotation is needed to balance energy consumption across the whole network. In this paper, ACRA is presented, in which the rotation energy threshold is estimated from the cluster head's real-time energy load. Simulation results show that, compared with LEACH and EDAC, ACRA minimizes the number of rotations and prolongs the network lifetime.
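The rotation rule can be sketched as follows; the threshold form (reserving a few rounds of the head's measured per-round load) is an illustrative assumption, not ACRA's exact estimator:

```python
def rotation_threshold(per_round_load, safety_rounds=2):
    # Adaptive threshold from the head's real-time energy load:
    # keep enough residual energy for `safety_rounds` more rounds.
    return safety_rounds * per_round_load

def should_rotate(residual_energy, per_round_load, safety_rounds=2):
    # Rotate the cluster head only when its residual energy falls
    # below the adaptive threshold; a load-aware threshold avoids
    # rotating on a fixed schedule and so minimizes rotation count.
    return residual_energy < rotation_threshold(per_round_load, safety_rounds)
```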
2009, 31(5): 1045-1048.
doi: 10.3724/SP.J.1146.2008.00441
Abstract:
The security of wireless sensor networks has attracted much attention in recent years, and key management is a critical issue. EBS-based dynamic key management is a new approach for wireless sensor networks; its major advantages are enhanced network survivability, high dynamic performance and better support for network expansion. However, it suffers from the collusion problem: it is prone to coordinated attacks by compromised nodes. In this paper, a special kind of polynomial, the common trivariate polynomial, is presented, which guarantees that all nodes holding the same polynomial can derive the same key. Common trivariate polynomial keys are used instead of the normal keys in the EBS system, and a new dynamic key management scheme is designed for clustered wireless sensor networks. Analytical and simulation results show that, compared with previous work, the proposed scheme greatly improves network resilience to attacks by compromised nodes and decreases the energy consumed in updating the administration and session keys.
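A minimal sketch of the common-polynomial idea, assuming keys are derived by evaluating a shared trivariate polynomial over GF(p) at an agreed point (the coefficient layout and parameters are illustrative, not the paper's construction):

```python
import random

def make_trivariate_poly(degree, p, seed):
    # Random trivariate polynomial over GF(p): coeffs[i][j][k] is the
    # coefficient of x^i * y^j * z^k.  Nodes provisioned with the same
    # seed hold the same polynomial.
    rng = random.Random(seed)
    d = degree + 1
    return [[[rng.randrange(p) for _ in range(d)] for _ in range(d)]
            for _ in range(d)]

def derive_key(coeffs, p, x, y, z):
    # Every node holding the same polynomial evaluates it at the same
    # agreed point (x, y, z) and therefore derives the same key.
    total = 0
    for i, plane in enumerate(coeffs):
        for j, row in enumerate(plane):
            for k, a in enumerate(row):
                total = (total + a * pow(x, i, p) * pow(y, j, p)
                         * pow(z, k, p)) % p
    return total
```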
2009, 31(5): 1049-1053.
doi: 10.3724/SP.J.1146.2008.00443
Abstract:
Bit commitment is a fundamental primitive in secure multi-party computation and plays an important role in constructing more complicated multi-party protocols. A new model of bit commitment, named three-party bit commitment, is proposed in this paper, in which two provers jointly commit a bit to a verifier. A three-party bit commitment protocol based on elliptic curve cryptography is also given. The scheme uses purely classical means, without restrictive assumptions on the computing power of any participant. Moreover, the scheme is proven to be unconditionally secure and immune to channel eavesdropping. The protocol can also be easily modified to realize a bit-string commitment scheme.
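The paper's elliptic-curve construction is not reproduced here, but the commit/reveal shape of any bit commitment can be illustrated with a Pedersen-style commitment in a prime field (the modulus and bases below are toy assumptions, not the paper's parameters, and are not suitable for real use):

```python
import secrets

P = 2**255 - 19    # a prime modulus (toy stand-in for the EC group)
G, H = 2, 3        # assumed independent bases (illustrative only)

def commit(bit, r=None):
    # c = G^bit * H^r mod P hides `bit` until (bit, r) is revealed.
    r = secrets.randbelow(P - 1) if r is None else r
    return (pow(G, bit, P) * pow(H, r, P)) % P, r

def verify(c, bit, r):
    # Reopen the commitment and check it matches the claimed bit.
    return c == (pow(G, bit, P) * pow(H, r, P)) % P
```

In the three-party model of the paper, the committed value would be produced jointly by the two provers rather than by one party as above.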
2009, 31(5): 1054-1058.
doi: 10.3724/SP.J.1146.2008.00596
Abstract:
Two new steganalysis methods for the Least Significant Bit (LSB) embedding technique are proposed based on the Laplacian statistics of image pixels. A statistic is defined from the relation between the current pixel and the average of its four-neighbourhood pixels; method 1 then detects the existence of hidden messages, and method 2 accurately estimates the amount of hidden messages. Both methods are based on analyzing the effects of message embedding, double embedding and LSB-plane flipping. They have clear physical significance and can be implemented conveniently. Experimental results show that the estimation precision of method 2 is better than that of the RS method when the embedding ratio is at least 20%.
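The statistic itself is simple to sketch: compare each interior pixel with the average of its four neighbours. The detection and estimation rules built on it are not reproduced; only the statistic and the LSB-plane flip it analyzes are shown:

```python
def laplacian_stat(img):
    # Mean absolute difference between each interior pixel and the
    # average of its four-neighbourhood; LSB embedding perturbs this
    # statistic, which is what the two methods measure.
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            avg = (img[i-1][j] + img[i+1][j]
                   + img[i][j-1] + img[i][j+1]) / 4.0
            total += abs(img[i][j] - avg)
            count += 1
    return total / count

def flip_lsb_plane(img):
    # LSB-plane flipping, one of the operations whose effect on the
    # statistic the analysis exploits.
    return [[p ^ 1 for p in row] for row in img]
```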
2009, 31(5): 1059-1062.
doi: 10.3724/SP.J.1146.2008.00491
Abstract:
For application-oriented NoC design, this paper proposes an analytical model of communication performance and designs a buffer-optimization strategy and allocation algorithm. Hardware simulation results show that the model can analyze the average delay of the NoC and the blocking probability of each router port, and that the algorithm reduces the average delay of the NoC using the same amount of resources, thereby improving NoC performance.
2009, 31(5): 1063-1066.
doi: 10.3724/SP.J.1146.2008.00365
Abstract:
Fairness is an essential property of e-payment protocols. An ideal functionality for fair e-payment is defined in the universal composability model. In a hybrid model aided by an ideal convertible-signature functionality, an ideal registration functionality and an ideal secure-session functionality, a fair electronic payment protocol is constructed that realizes this ideal functionality. The new protocol has a simpler structure and lower communication overhead, and remains secure even when running in an arbitrary, unknown multi-party environment.
2009, 31(5): 1067-1071.
doi: 10.3724/SP.J.1146.2008.00215
Abstract:
Digital communication signals of different modulation schemes, when nonlinearities are applied to the complex envelope, manifest themselves in spectral-line features. In this paper, a complete theoretical analysis of the square and quartic spectral-line features is carried out. The existence, position and amplitude of the spectral lines are derived, and they can serve as a robust feature for potential applications. Computer simulation results indicate that the spectral line can be extracted even in severely noisy and multipath fading environments.
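The square-law line can be demonstrated numerically: squaring a BPSK-modulated complex envelope removes the ±1 data and leaves a pure tone at twice the carrier offset. The sketch below uses a naive DFT for brevity, and the signal parameters are illustrative:

```python
import cmath
import random

def squared_spectrum_peak(n, f0, fs):
    # BPSK baseband with carrier offset f0: s[k] = a[k] e^{j 2π f0 k / fs},
    # a[k] = ±1.  Squaring gives s^2[k] = e^{j 2π (2 f0) k / fs}, so the
    # spectrum of s^2 carries a line at 2*f0 regardless of the data.
    rng = random.Random(0)
    s2 = [(rng.choice((-1, 1))
           * cmath.exp(2j * cmath.pi * f0 * k / fs)) ** 2
          for k in range(n)]
    mags = [abs(sum(x * cmath.exp(-2j * cmath.pi * b * k / n)
                    for k, x in enumerate(s2))) for b in range(n)]
    peak_bin = max(range(n), key=mags.__getitem__)
    return peak_bin * fs / n   # frequency of the strongest line
```

A quartic nonlinearity plays the analogous role for QPSK, producing a line at four times the carrier offset.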
2009, 31(5): 1072-1076.
doi: 10.3724/SP.J.1146.2008.00240
Abstract:
To reduce the computational complexity of intra-frame prediction mode selection in H.264, an improved intra-frame prediction mode selection algorithm is proposed. First, a new operator is introduced to describe the gray-level information of an image, serving as a criterion of image complexity. Two QP (Quantization Parameter)-based adaptive thresholds are then proposed, dividing macroblocks into three kinds: smooth, variable and fuzzy. The Intra_16x16 mode is used when the macroblock is smooth, the Intra_4x4 mode when it is variable, and the original algorithm when it is fuzzy. Experimental results show that the improved algorithm reduces intra-frame mode selection operations by almost 30%, while image quality remains good with a negligible rate increase.
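The three-way decision can be sketched as follows; the complexity operator's value and the two QP-adaptive threshold formulas are placeholders, since the actual forms are defined in the paper:

```python
def select_intra_mode(complexity, qp):
    # Assumed QP-adaptive thresholds: smooth macroblocks go straight
    # to Intra_16x16, highly variable ones to Intra_4x4, and only the
    # fuzzy middle band falls back to the full original search.
    t_low, t_high = 0.5 * qp, 2.0 * qp   # illustrative formulas
    if complexity < t_low:
        return "Intra_16x16"
    if complexity > t_high:
        return "Intra_4x4"
    return "original full search"
```

Since only the fuzzy band pays for the full search, the average number of candidate modes evaluated per macroblock drops.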
2009, 31(5): 1077-1081.
doi: 10.3724/SP.J.1146.2008.00337
Abstract:
In this paper, a relay selection scheme based on statistical channel information is proposed for non-regenerative cooperative networks. First, a parameter named the equivalent channel gain is defined from the statistical channel information under an equal power allocation constraint; it describes the composite channel character of the two phases of the cooperative process. An optimal relay selection scheme is then proposed based on the descending order of equivalent channel gain. The scheme implies that different relay nodes should be selected to minimize the outage probability in different SNR ranges, and analysis shows that the diversity order of the relay selection scheme is N+1, where N is the number of relay nodes in the system. Numerical results show that the outage probability of the proposed scheme is lower than that of the other schemes. Moreover, a suboptimal relay selection scheme with power allocation is considered; its outage performance is nearly the same as that of the optimal exhaustive-search scheme while its complexity is reduced.
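The selection rule can be sketched with an assumed form of the equivalent channel gain; the harmonic-style combination below is a common amplify-and-forward approximation, not necessarily the paper's exact definition:

```python
def equivalent_gain(g_sr, g_rd):
    # Composite gain of the source-relay and relay-destination phases
    # under equal power allocation (assumed AF-style approximation).
    return g_sr * g_rd / (g_sr + g_rd + 1.0)

def select_relays(sr_gains, rd_gains):
    # Rank relays in descending order of equivalent channel gain;
    # the first entry is the relay that minimizes outage probability.
    return sorted(range(len(sr_gains)),
                  key=lambda i: equivalent_gain(sr_gains[i], rd_gains[i]),
                  reverse=True)
```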
2009, 31(5): 1082-1085.
doi: 10.3724/SP.J.1146.2008.00268
Abstract:
This paper proposes a novel adaptive tracking metric for precoding matrices in finite-feedback precoding MIMO systems over correlated wireless channels. From a statistical point of view, this metric optimally describes the relationship between channel correlation and the changes of the precoding matrices. Therefore, while keeping the amount of feedback data low, the metric further improves precoding system performance. The derivation of the optimal adaptive tracking metric is given. System simulations address the frequency- and time-domain correlations of wireless channels, combined with spatial diversity and multiplexing system structures, and the results verify the theoretical analysis. In addition, the low complexity of this metric makes it valuable in practice.
2009, 31(5): 1086-1089.
doi: 10.3724/SP.J.1146.2008.00300
Abstract:
A spectrum sharing algorithm for downlink cognitive radio systems is proposed to maximize system capacity under total transmission power and interference constraints. Interference to the licensed users is analyzed, and a method to set the interference constraint is given accordingly. Based on this constraint, the maximum system capacity is achieved in two steps: first, the optimal subcarrier allocation is derived by the maximum channel signal-to-interference-plus-noise-ratio rule; then, transmission power is optimally allocated by the proposed double water-filling method. Simulation results show that this algorithm provides significant capacity gain compared with conventional spectrum sharing algorithms.
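The power-allocation step can be illustrated with standard single-constraint water-filling; the paper's double water-filling additionally handles the interference constraint, which is not reproduced here:

```python
def water_filling(gains, total_power):
    # Allocate p_i = max(0, mu - 1/g_i), with the water level mu set
    # so the active subcarriers exactly exhaust the power budget.
    order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    n = len(order)
    mu = 0.0
    while n > 0:
        mu = (total_power + sum(1.0 / gains[i] for i in order[:n])) / n
        if mu - 1.0 / gains[order[n - 1]] >= 0.0:
            break        # weakest active subcarrier is still above water
        n -= 1           # drop the weakest subcarrier and recompute mu
    powers = [0.0] * len(gains)
    for i in order[:n]:
        powers[i] = mu - 1.0 / gains[i]
    return powers
```

Subcarriers whose inverse gain lies above the water level receive no power, which is why very weak channels are excluded automatically.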
2009, 31(5): 1090-1094.
doi: 10.3724/SP.J.1146.2008.00203
Abstract:
In this paper, precise topography-dependent motion compensation for repeat-pass InSAR is studied. The effects of the residual motion errors caused by the flat-surface assumption and the center-beam approximation are analyzed, showing the necessity of precise topography-dependent motion compensation for repeat-pass InSAR systems. The precision requirement for external DEM data is analyzed to show the feasibility of deriving precise DEM data by motion compensation based on rough DEM data. An improved approach is presented to address the deficiencies of existing topography-dependent motion compensation approaches: it adjusts its parameters according to the trajectory deviations and topography variations, and therefore performs precise motion compensation efficiently. Finally, processing results of X-band airborne repeat-pass interferometric SAR data confirm its validity and its superiority in precision and efficiency over the original algorithms.
2009, 31(5): 1095-1098.
doi: 10.3724/SP.J.1146.2008.00384
Abstract:
A Kernel Uncorrelated Discriminant Subspace (KUDS) method based on Generalized Singular Value Decomposition (GSVD) is proposed for radar target recognition. The new method combines the advantages of GSVD and the kernel trick, which not only effectively overcomes the limitation of traditional linear methods in handling the singularity problem but also further improves class separability. In addition, a conclusion is derived from Fisher's criterion that no useful discriminative information exists in the null space of the range-profile population scatter matrix; this can be used to reduce the dimensionality of the original scatter matrices and the computational complexity of the subsequent solution of the kernel optimal discriminant vectors. Experimental results on measured data from three airplanes confirm the effectiveness of the proposed method.
2009, 31(5): 1099-1102.
doi: 10.3724/SP.J.1146.2008.00192
Abstract:
Interpolation for Range Cell Migration Correction (RCMC) is an important step in the Range-Doppler (RD) algorithm, and the Slant-Range-to-Ground-Range (SRGR) conversion also requires interpolation; both interpolations introduce errors into SAR imagery. The principles of RCMC and SRGR are analyzed in this paper, and an efficient combined RCMC-SRGR algorithm is developed that needs only a single interpolation for both steps. Compared with conventional implementations, it has less interpolation error, improves the precision of image processing and greatly reduces the computational load. Simulation results on real-world airborne SAR raw data show that the proposed algorithm outperforms conventional implementations. With better precision and lower computational cost, the combined algorithm facilitates real-time implementation of high-resolution SAR imaging.
2009, 31(5): 1103-1107.
doi: 10.3724/SP.J.1146.2008.00178
Abstract:
This paper proposes a decentralized Image Feature-based Space-Time Processing (IFSTP) algorithm for Airborne Orthogonal Netted Radar (AON-IFSTP) for the detection of ground moving targets, and evaluates its detection performance. First, the iso-range clutter locus of Airborne Orthogonal Netted Radar (AONR) in the angle-Doppler domain is discussed and a closed-form locus is derived for a special case. The distinct image features of targets and interference signals are then revealed, and the applicability of IFSTP to AONR is discussed. Third, the implementation of AON-IFSTP is summarized. Finally, computer simulations are conducted. The results show that AON-IFSTP avoids the difficulty of clutter covariance estimation and is suitable for highly inhomogeneous environments. It is also found that spatial diversity can combat the detection performance degradation induced by fluctuations of the target radar cross-section and small target radial velocities.
2009, 31(5): 1108-1112.
doi: 10.3724/SP.J.1146.2008.01064
Abstract:
Monostatic High Frequency (HF) surface wave radar is vulnerable to threats such as electronic jamming and stealth targets, and establishing a monostatic-bistatic composite HF radar network is the most feasible way to address this problem. In this paper, based on the monostatic-bistatic composite HF radar network, the positioning principle and detection accuracy are derived and simulated using curved-surface position analysis; the position precision curve is provided and a high-accuracy subset distribution map is presented, providing a theoretical basis for detection, tracking and data fusion in HF radar netting.
2009, 31(5): 1113-1116.
doi: 10.3724/SP.J.1146.2008.00296
Abstract:
In stepped frequency radar, the velocity of targets must be estimated and compensated to eliminate the range migration and distortion caused by target motion. A conjugation method is presented for velocity measurement in stepped frequency radar, and a prototype of a stepped frequency system with dual carrier frequencies is proposed. With the proposed processing approach, the system offers both the high range resolution of stepped frequency radar and highly precise velocity measurement. The estimated velocity can be used to compensate range migration and distortion in radar imaging. Therefore, the dual-carrier-frequency stepped-frequency radar can achieve highly precise velocity measurement and high resolution simultaneously.
2009, 31(5): 1117-1121.
doi: 10.3724/SP.J.1146.2008.00264
Abstract:
The location of a noise-frequency-modulation jammer with TDOA (Time Difference Of Arrival) in a passive system is studied. A new cross-correlation method using delta-modulation coding combined with decimation and interpolation is presented in this paper, and the location accuracy of this method is discussed. Simulations show that the performance of this method is the same as that of cross-correlation using signal sample data. The most
2009, 31(5): 1122-1126.
doi: 10.3724/SP.J.1146.2008.00253
Abstract:
An existing approach for beam synchronization, based on antenna steering on both sides, cannot be applied to Spaceborne/Airborne hybrid Bistatic SAR (SA-BSAR) systems using sources of opportunity, since the transmitter beam cannot be steered. An approach via wide-beam receiving is proposed, inspired by the fact that the receiving distance in SA-BSAR is much shorter than in spaceborne SAR systems. An approach for compensating the estimation error of the satellite overpass time and the error in the aircraft navigation system is also proposed; the compensation is implemented through processing of the direct-path signal. Simulation results show that the approach achieves a useful scene extension (more than 1 km) with an azimuth resolution slightly superior to that obtained in spaceborne SAR systems.
2009, 31(5): 1127-1131.
doi: 10.3724/SP.J.1146.2008.00373
Abstract:
In this paper, the space-frequency instantaneous polarization characteristics of specific radar targets are analyzed, and their application to the retrieval of target geometrical structure is proposed. The concept and derivation of space-frequency instantaneous polarization characteristics are discussed, and the characteristics of canonical structures are compared. As an illustration of the potential application, a novel scheme for retrieving target geometrical structure based on space-frequency instantaneous polarization features is proposed. Compared with traditional methods, the novel scheme obtains more accurate and robust results. The validity of the scheme is demonstrated by experimental results.
2009, 31(5): 1132-1135.
doi: 10.3724/SP.J.1146.2008.00367
Abstract:
In this paper, a novel method, the equi-amplitude tracing method, applicable to the detection of vital signs is presented, based on an analysis of the principle of vital-sign detection by means of base-band UWB radar. Experiments with a self-developed UWB radar test set-up demonstrate that the detection scheme and the signal processing algorithm are feasible. The respiratory signal amplitude, its frequency spectrum characteristics, and the distance between the subject and the antenna can be obtained simultaneously, with distinct advantages over the traditional equi-distance detection method.
2009, 31(5): 1136-1139.
doi: 10.3724/SP.J.1146.2008.00834
Abstract:
ESA radar can allocate spatial power and time on demand by flexibly controlling the direction and dwell time of the beam, which enables it to implement multiple-target Tracking And Searching (TAS). Two critical techniques for implementing TAS, task scheduling and tracking filtering, are discussed in this paper. The simulation results show that ESA radar can provide a high TAS capability using this algorithm.
2009, 31(5): 1140-1143.
doi: 10.3724/SP.J.1146.2008.00446
Abstract:
Layer picking is the basis for correct geological interpretation in GPR surveying. A layer-picking method based on a hidden Markov model and the Bresenham algorithm is presented in this paper. After tracking the Time Of Delay (TOD) of GPR echoes, preliminary edge detection of the layer is accomplished. Based on this, edge linking is performed and layer picking is finally completed. Results on real data show that the proposed method substantially improves tracking precision compared with layer picking based on the hidden Markov model alone.
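The Bresenham step above links detected edge points with an integer raster line. A minimal sketch of the standard algorithm (the paper's exact variant is not specified):

```python
def bresenham(x0, y0, x1, y1):
    """Standard Bresenham line rasterization: return the integer pixels
    connecting (x0, y0) to (x1, y1), as used for linking layer edges."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:   # step in x
            err -= dy
            x0 += sx
        if e2 < dx:    # step in y
            err += dx
            y0 += sy
    return points
```

Linking each pair of adjacent HMM-detected edge points with such a line produces a connected layer boundary.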
2009, 31(5): 1144-1147.
doi: 10.3724/SP.J.1146.2008.00406
Abstract:
The correlation between hyperspectral image bands differs from band to band. Exploiting this characteristic, this paper presents a lossless compression algorithm for hyperspectral images based on searching for the optimal pair of prediction bands. Through a search model built on a binary tree, the two prediction bands most correlated with each base band are found, and the base band is then predicted from them. Experimental results show that the algorithm achieves excellent compression performance compared with other recently proposed compression algorithms.
2009, 31(5): 1148-1152.
doi: 10.3724/SP.J.1146.2008.00221
Abstract:
SAR target detection and recognition is sensitive to target azimuth. To address this problem, a kernel correlation filter that is strongly robust to target azimuth distortion is proposed, based on correlation theory and kernel feature analysis. The novel filter exploits eigenvectors to reduce dependence on the training set and extends the linear combination of eigenvectors nonlinearly to improve classification. Moreover, to keep the computation tractable in high-dimensional space, a kernel function is employed. Comparative tests on the MSTAR database demonstrate that the kernel correlation filter achieves a high detection probability with a low false-alarm probability, and performs target detection and recognition accurately without templates or target pose estimation.
2009, 31(5): 1153-1156.
doi: 10.3724/SP.J.1146.2008.00445
Abstract:
For present analog Loran-C receivers, cycle identification is very complicated and time-consuming. Moreover, with the algorithms presented in the related literature, the estimation error is intolerable when the arrival-time difference between the sky-wave and ground-wave is less than 50. To estimate the arrival times of the sky-wave and ground-wave exactly, a new detection algorithm based on IFFT (Inverse Fast Fourier Transform) spectral division is presented. After its feasibility is proved, the accuracy of the estimated arrival times of the sky-wave and ground-wave is simulated. The results show that with the new algorithm, the arrival-time accuracy for the sky-wave and ground-wave reaches 99.6% and no less than 98.6% respectively, and the signal-to-noise ratio (SNR) requirement can be improved by 6 dB relative to the USCG standard.
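The spectral-division idea can be sketched as follows: dividing the received spectrum by the spectrum of a reference pulse and inverse-transforming yields an impulse-like response whose peaks mark the arrival samples of the ground-wave and sky-wave. The regularization term `eps` below is an assumption to avoid division by near-zero bins; the paper's exact formulation may differ.

```python
import numpy as np

def arrival_times_by_spectral_division(received, reference):
    """Estimate arrival times by spectral division: h = IFFT(R / S).
    Peaks in |h| indicate the sample delays of the incoming waves."""
    n = len(received)
    R = np.fft.fft(received, n)
    S = np.fft.fft(reference, n)
    eps = 1e-6 * np.max(np.abs(S))  # assumed regularization, not from the paper
    h = np.fft.ifft(R / (S + eps))
    return np.abs(h)
```

For a received signal that is a delayed copy of the reference pulse, the largest peak of the returned response sits at the delay in samples.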
2009, 31(5): 1157-1160.
doi: 10.3724/SP.J.1146.2008.00350
Abstract:
To improve the precision of RBF regression, this article proposes a novel RBF regression modeling method using fuzzy partition and supervised clustering. The proposed method first splits the training data into several subsets using supervised clustering. Then local regression models are independently built with an RBF network for each subset. Finally, the output of the network is formed as a weighted combination of the local models. Experiments show that the proposed method achieves a more accurate interpretation of the local behavior of the target model.
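The weighted combination of local RBF models can be sketched as follows. The Gaussian membership weight and the single shared width are assumptions for illustration, not the paper's exact fuzzy-partition formulation.

```python
import numpy as np

def rbf_activations(x, centers, width):
    # Gaussian RBF activations of input x against each center
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

def combined_prediction(x, local_models, width):
    """Weighted combination of local RBF models: each local model
    (centers, coeffs, cluster_center) contributes in proportion to an
    assumed Gaussian membership weight of x toward its cluster."""
    weights, outputs = [], []
    for centers, coeffs, cluster_center in local_models:
        phi = rbf_activations(x, centers, width)
        outputs.append(phi @ coeffs)  # local model output
        weights.append(np.exp(-np.sum((x - cluster_center) ** 2) / (2 * width ** 2)))
    weights = np.array(weights)
    return np.dot(weights / weights.sum(), np.array(outputs))
```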
2009, 31(5): 1161-1165.
doi: 10.3724/SP.J.1146.2008.01317
Abstract:
A new pitch detection algorithm for noisy speech at low SNR is proposed in this paper, based on the Reverse CAMDF Autocorrelation Function (RCAF) and a tentative smoothness-search measure. The algorithm can estimate noise during speech presence, employing expanded spectral subtraction based on a noise-compensation structure. The RCAF algorithm improves the robustness and precision of pitch detection. Extensive experiments show that the RCAF method obtains higher efficiency and better detection accuracy at an SNR of -10 dB, a performance that the traditional methods AMDF, CAMDF and AWAC cannot achieve under the same SNR.
2009, 31(5): 1166-1169.
doi: 10.3724/SP.J.1146.2008.00467
Abstract:
Since noise spectrum estimation based on minimum statistics introduces significant tracking latency when the noise spectrum rises, an improved algorithm based on weighted minimum statistics is presented. After analyzing the influence of the weight on minimum-statistics noise spectrum estimation, three kinds of simple curves are used to compute the weight, and experiments show that the weight computed from the cosine curve performs best. Simulation results show that the improved algorithm tracks changes in the noise spectrum quickly in most cases, improving the accuracy of the noise spectrum estimate and the quality of speech in non-stationary noise environments.
2009, 31(5): 1170-1174.
doi: 10.3724/SP.J.1146.2008.00232
Abstract:
An automatic optic nerve head localization method for fundus images based on the cross-network is studied in this paper. To describe the spatial properties of the retinal vessels, a new concept, the cross-network, and measures on the network are proposed. Based on a model of the fundus organ structure, automatic localization of the optic nerve head is realized using the cross-network measure parameter cross density. Experimental results on the STARE and DRIVE fundus image databases and on clinical images of varying quality verify the effectiveness of the algorithm, which can satisfy the requirements of clinical ophthalmological diagnosis.
2009, 31(5): 1175-1179.
doi: 10.3724/SP.J.1146.2008.00146
Abstract:
Blind CFA interpolation detection, which identifies the demosaicing method used in a digital camera by analyzing its output images, provides an efficient tool for digital image forensics. This paper proposes an approach to blind CFA interpolation detection based on interpolation coefficient estimation. By solving the covariance matrix equation, a vector of interpolation coefficients is obtained, which is then fed to an SVM classifier. The experimental results show high accuracy in blind CFA interpolation detection. Compared with existing methods, the proposed method is more robust against additive white Gaussian noise and lossy JPEG compression.
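The coefficient-estimation step can be illustrated with ordinary least squares: model each pixel as a linear combination of its neighbors at fixed offsets and solve for the coefficient vector. The paper solves a covariance matrix equation; this sketch takes the equivalent least-squares route, and the neighbor offsets are hypothetical.

```python
import numpy as np

def estimate_interp_coeffs(img, offsets):
    """Least-squares estimate of linear interpolation coefficients:
    each interior pixel is modeled as a weighted sum of its neighbors
    at the given (row, col) offsets; the weights are the feature vector
    later fed to a classifier."""
    rows, targets = [], []
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            rows.append([img[i + di, j + dj] for di, dj in offsets])
            targets.append(img[i, j])
    A = np.array(rows, dtype=float)
    b = np.array(targets, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

On an image that truly is a linear interpolation of its neighbors, the estimate recovers the interpolation weights exactly.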
2009, 31(5): 1180-1184.
doi: 10.3724/SP.J.1146.2008.00382
Abstract:
Steganography is the technology of hiding a secret message in plain sight. The goal of steganalysis is to detect the presence of embedded data and eventually extract the secret message. Current blind steganalytic methods, which rely on two-class or multi-class classifiers, offer strong detection capabilities against known embedding algorithms, but they are unable to detect previously unknown forms of steganography. In this paper, a new JPEG blind steganalytic technique for detecting both known and unknown steganography is proposed. Based on co-occurrence features and a multiple-hypersphere One-Class SVM (OC-SVM) classifier, the proposed method can effectively model the boundary of the statistical distribution of innocent JPEG images. The bagging ensemble learning algorithm is also used to achieve higher detection performance. Experimental results show the superiority of the method over other analogous steganalytic techniques.
2009, 31(5): 1185-1188.
doi: 10.3724/SP.J.1146.2008.00510
Abstract:
Due to the dynamic nature of contexts in pervasive computing, a context reasoner has to support real-time scheduling of reasoning jobs. Since reasoning results remain fresh for a period of time, the concept of reasoning-result reuse efficiency and a method to compute it are proposed. A Fresh-aware Real-time Scheduling Algorithm (FRSA) is then proposed to improve system throughput when the reasoner is overloaded; it schedules reasoning jobs according to their result reuse efficiencies and deadlines. Simulation demonstrates that when the reasoner is heavily overloaded, the throughput of FRSA is 10% to 30% better than that of the classic scheduling algorithms SJF, EDF, LSF and FCFS.
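A toy version of deadline- and reuse-efficiency-aware job selection follows. The abstract does not give FRSA's priority rule, so the policy here (drop expired jobs, prefer higher reuse efficiency, break ties by earlier deadline) is an assumption for illustration only.

```python
def schedule(jobs, now):
    """Pick the next reasoning job under an assumed FRSA-like policy:
    jobs past their deadline are dropped; among feasible jobs, the one
    with the highest reuse efficiency wins, earlier deadline as tie-break.
    Each job is a dict with 'deadline' and 'reuse_efficiency' keys."""
    feasible = [j for j in jobs if j["deadline"] > now]
    return max(feasible,
               key=lambda j: (j["reuse_efficiency"], -j["deadline"]),
               default=None)
```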
2009, 31(5): 1189-1192.
doi: 10.3724/SP.J.1146.2008.00477
Abstract:
The curse of dimensionality is a central difficulty in many fields, such as machine learning, pattern recognition and data mining. Dimensionality reduction of characteristic data, in which high-dimensional data are mapped into a low-dimensional space, is a current research hotspot in data-driven computation. In this paper, a special nonlinear dimensionality reduction method called the autoencoder is introduced, which uses the Continuous Restricted Boltzmann Machine (CRBM) and converts high-dimensional data to low-dimensional codes by training a neural network with multiple hidden layers. In particular, the autoencoder provides a bi-directional mapping between the high-dimensional data space and the low-dimensional manifold space, and is therefore able to overcome the inherent deficiency of most nonlinear dimensionality reduction methods, which lack an inverse mapping. Experiments on synthetic datasets and real image data show that the autoencoder network can not only find the embedded manifold of high-dimensional datasets but also accurately reconstruct the original high-dimensional data from the low-dimensional structure.
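The bi-directional mapping can be sketched with a minimal encoder/decoder pair. The random weights below merely stand in for the CRBM pre-training and fine-tuning described in the paper; only the encode/decode structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyAutoencoder:
    """Minimal encoder/decoder pair illustrating the autoencoder's
    bi-directional mapping between data space and code space.
    Weights here are random placeholders (no CRBM training)."""
    def __init__(self, n_in, n_code):
        self.W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
        self.W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

    def encode(self, x):
        return sigmoid(x @ self.W_enc)    # high-dim -> low-dim code

    def decode(self, code):
        return sigmoid(code @ self.W_dec)  # low-dim code -> reconstruction
```

The forward mapping (`encode`) and the inverse mapping (`decode`) exist by construction, which is exactly the property most other nonlinear dimensionality reduction methods lack.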
2009, 31(5): 1193-1196.
doi: 10.3724/SP.J.1146.2008.00297
Abstract:
A key issue in Content Based Image Retrieval (CBIR) is the representation of image visual content. However, traditional image features such as color, shape and texture cannot represent the visual content completely. To improve retrieval accuracy, an image retrieval method based on the Multi-scale Phase Feature (MPF) is proposed, motivated by human vision. First, scale-space theory is adopted to decompose the image into a Multi-scale Description (MD). The global statistical MPF is then acquired by histogram projection from the multi-scale phase information, which is extracted by complex steerable filtering of the MD. Finally, experiments on the general-purpose COREL 5,000 database demonstrate that the proposed MPF yields an accuracy improvement of no less than 5% over classic color features, which it also complements effectively.
2009, 31(5): 1197-1200.
doi: 10.3724/SP.J.1146.2008.00469
Abstract:
In this paper, a new neighborhood-adaptive image denoising method using dual-tree complex wavelet transforms is proposed. It is an improvement of the existing denoising method NeighShrink. The optimal thresholds and neighboring window sizes are determined for every subband in the wavelet domain using Stein's unbiased risk estimate, and NeighShrink is also extended from orthogonal wavelet transforms to dual-tree complex wavelet transforms. Experimental results show that the proposed method performs better than some existing methods.
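The NeighShrink rule referenced above scales each wavelet coefficient by a factor computed from the energy of its neighborhood. A plain-Python sketch of the rule (in the paper, `lam` and `win` are chosen per subband via Stein's unbiased risk estimate; fixed values are used here):

```python
import numpy as np

def neighshrink(coeffs, lam, win=3):
    """NeighShrink: scale each coefficient by (1 - lam^2 / S^2)_+ where
    S^2 is the sum of squared coefficients in a win x win neighborhood.
    Small-energy neighborhoods (likely noise) are shrunk toward zero."""
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="edge")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            s2 = np.sum(padded[i:i + win, j:j + win] ** 2)
            factor = max(0.0, 1.0 - lam ** 2 / s2) if s2 > 0 else 0.0
            out[i, j] = coeffs[i, j] * factor
    return out
```

Applied subband-by-subband to the (real and imaginary) dual-tree coefficients, this is the shrinkage step the method optimizes.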
2009, 31(5): 1201-1204.
doi: 10.3724/SP.J.1146.2008.00318
Abstract:
In this paper, a time delay estimation algorithm based on analysis of the cross-correlation sequence of the model error and the input signal is proposed, and a method for dynamically building behavioral models of power amplifiers based on the new delay estimation algorithm is discussed. Using this method, a non-uniform-delay Memory Polynomial (MP) model is built. Simulation results show that the new algorithm can effectively find the main delays in the output signal and reduce the delay length of the traditional MP model from 6 to 2 at the cost of a 3 dB increase in the model's NMSE, which means the algorithm achieves a good balance between model complexity and precision.
2009, 31(5): 1205-1209.
doi: 10.3724/SP.J.1146.2008.00166
Abstract:
An application-specific bus scheduling scheme is proposed in this paper. Two-fold optimization is considered in this scheme, based on the communication events collected through system modeling and simulation. The first objective, which has higher priority, is meeting the real-time constraints of tasks, while the other is making use of bus idle time to transfer as much data as possible. A configurable optimization parameter is also proposed for the tradeoff between the total bus time consumed and the extra on-chip buffer requirements. The scheme was implemented in a dual-core SoC (System on Chip) for an H.264/AVC decoder and compared with the RR (Round Robin), FP (Fixed Priority) and SBA (Slack Based Arbitration) schemes. The results showed that the proposed scheme used on average 16.6%, 13.2% and 9.7% less bus time when the optimization parameter was set to 0.5. The number of tasks missing their real-time constraints was 59.4% less than with the SBA scheme, which was the closest to ours. The relationship between the parameter and the extra on-chip buffer cost showed that in the worst case (parameter set to 0), the extra buffer was only 435 bytes.
2009, 31(5): 1210-1213.
doi: 10.3724/SP.J.1146.2008.00335
Abstract:
Fast implementation of the CDF 9/7 wavelet is severely restricted by its complicated coefficients. This paper constructs a new parameterized, biorthogonal, lifting-based wavelet model. Using energy concentration and an iterative searching algorithm, a new wavelet basis with rational coefficients is proposed that is suited to shift operations while matching the compression performance of CDF 9/7. In the hardware design, each multiplication can be implemented by one addition and one shift, reducing the computation to 25% of the original, and the precision is not affected by the word length. A four-level pipelined architecture for the one-dimensional transform is demonstrated on an FPGA. Compared with related designs, it halves the resource requirement and markedly improves the system's working frequency.
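The one-addition-one-shift multiplication can be illustrated with a single lifting predict step. The coefficient 3/2 below is only a stand-in for CDF 9/7's irrational alpha (about -1.586); the paper searches for its own rational substitutes, so this exact value is an illustrative assumption.

```python
def lift_predict(even, odd):
    """One predict lifting step with coefficient -3/2:
    odd[i] -= 1.5 * (even[i] + even[i+1]).
    The multiply-by-3 is one add plus one shift (t + (t << 1)),
    and the divide-by-2 is an arithmetic right shift, so no
    hardware multiplier is needed."""
    out = []
    n = len(even)
    for i, d in enumerate(odd):
        t = even[i] + even[min(i + 1, n - 1)]  # symmetric edge extension
        out.append(d - ((t + (t << 1)) >> 1))  # d - 3*t/2, shifts only
    return out
```

Because every operation is an integer add or shift, the result is exact for a given word length, which is why the abstract can claim precision independent of word length.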
2009, 31(5): 1214-1217.
doi: 10.3724/SP.J.1146.2008.00248
Abstract:
Based on single-mode and multi-mode space-charge-wave theory, analytical expressions for the electronic conductance in a double-gap coupled MBK cavity are derived. The analysis shows that the single-mode theory is sufficiently accurate for a common coupled double-gap cavity. In addition, the results of the analytical theory agree well with particle-in-cell (PIC) simulations. Moreover, the effects of the modulation coefficient and the axial magnetic field on the electronic conductance are investigated by PIC simulation. The results show that the electronic conductance calculated by small-signal theory is accurate when the modulation coefficient is less than 0.1 and the magnetic field exceeds 1.5 times the Brillouin field.
2009, 31(5): 1221-1224.
doi: 10.3724/SP.J.1146.2008.00229
Abstract:
Distributed wireless communication systems can suppress interference and enlarge capacity. This paper proposes a novel orderly-distributed hexangular cell system that introduces the gain of distributed systems into the existing cellular system. It uses 3 adjacent BSs of the existing system to constitute a new cell, based on the typical directional antenna configuration. The co-channel interference and outage capacity are analyzed, and a parameter design criterion is then proposed. Simulation results show that the new system offers a large capacity gain over the same system designed based on signal-to-noise ratio and over a generalized distributed antenna system.
2009, 31(5): 1225-1228.
doi: 10.3724/SP.J.1146.2008.00403
Abstract:
For channel-coded Multiple-Input Multiple-Output (MIMO) systems, a novel soft-output MMSE V-BLAST detector is derived that accounts for both channel estimation error and decision error propagation. Compared with the conventional detection algorithm, simulation results show that the proposed scheme can drastically lower the error floor and obtain a significant performance gain at the cost of a negligible increase in complexity. Furthermore, since the proposed scheme is not sensitive to errors in the estimated variance of the channel estimation error, it is desirable for practical applications.
2009, 31(5): 1229-1232.
doi: 10.3724/SP.J.1146.2008.00036
Abstract:
Practical maximum-SJNR-precoding-based adaptive resource allocation schemes are proposed for multi-user MIMO-OFDM systems. According to the SJNR value, Incremental (IA) and Decremental (DA) algorithms are presented for user selection on each subcarrier, so that multi-user diversity is exploited by allocating the subcarriers to the right users while maximum capacity is achieved. Besides, an allocation algorithm for users' different QoS requirements (QDA) is given, based on the decremental algorithm. Analysis and simulation results show that both IA and DA achieve performance similar to the optimal scheme with low complexity, while QDA can satisfy users' different requirements as well as increase the total throughput.
2009, 31(5): 1233-1236.
doi: 10.3724/SP.J.1146.2008.00108
Abstract:
In this paper, based on an analysis of existing encapsulation schemes, Generic Stream Encapsulation (GSE) is chosen to transmit IP-based services in the Terrestrial Digital Multimedia Broadcasting (T-DMB) system. As GSE is a newly emerging technology, no corresponding Forward Error Correction (FEC) function has been defined for it. This paper proposes a GSE-FEC method to provide additional error protection. To further improve the performance of GSE-FEC, an Improved GSE (IGSE) scheme is devised to provide GSE-FEC frame reconstruction information and erasure information, and an Improved GSE Erasure (IGE) decoding scheme based on IGSE is presented for decoding in the GSE-FEC system. Simulation results demonstrate that the IGE scheme shows better error-correction capability than schemes based on Non-Erasure (NE) decoding and GSE Erasure (GE) decoding. Moreover, compared with the GE scheme, the proposed scheme preserves correctly received bytes as far as possible, avoiding wasted information.
2009, 31(5): 1237-1240.
doi: 10.3724/SP.J.1146.2008.00989
Abstract:
A concatenated hybrid decoding algorithm for convolutional codes is presented. The algorithm is implemented as two-stage decoding, where the first stage uses the Belief-Propagation (BP) algorithm and the second stage uses a Modified Viterbi Decoding (MVD) algorithm. First, the received sequence is pre-decoded by BP, and its outputs are divided into two groups: reliable Log-Likelihood Ratios (LLRs) and unreliable LLRs. The hard-decision symbols corresponding to the reliable LLRs and the received symbols corresponding to the unreliable LLRs form a hybrid sequence, which is further corrected by MVD. Simulation shows that, compared with the conventional Viterbi decoding algorithm, the proposed algorithm suffers only slight performance degradation with much lower average complexity at moderate-to-high signal-to-noise ratios.
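The hybrid-sequence construction between the two stages can be sketched directly from the description above. The reliability threshold and BPSK-style +1/-1 mapping are our illustrative assumptions; the paper's own reliability criterion may differ.

```python
import numpy as np

def hybrid_sequence(llrs, received, threshold):
    """Form the input to the second-stage decoder: positions whose
    BP output |LLR| >= threshold are replaced by hard decisions
    (+1/-1 from the LLR sign); the rest keep the raw received
    symbols. Returns the hybrid sequence and the reliability mask."""
    llrs = np.asarray(llrs, dtype=float)
    received = np.asarray(received, dtype=float)
    reliable = np.abs(llrs) >= threshold
    hard = np.where(llrs >= 0, 1.0, -1.0)
    return np.where(reliable, hard, received), reliable
```

The complexity saving follows because the MVD stage only has to resolve the unreliable positions; the reliable ones are already pinned to hard decisions.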
2009, 31(5): 1241-1244.
doi: 10.3724/SP.J.1146.2008.00003
Abstract:
Most available Identity-based Authenticated Key agreement (ID-AK) protocols require expensive bilinear pairing operations. This paper proposes a pairing-free ID-AK protocol over an additive elliptic curve group. The new protocol eliminates pairing operations and reduces the overall computation time by at least 33.3 percent compared with previous ID-AK protocols. It also satisfies master-key forward secrecy, perfect forward secrecy and key-compromise impersonation resilience. The security of the proposed protocol reduces to the standard Computational Diffie-Hellman assumption in the random oracle model.
2009, 31(5): 1245-1248.
doi: 10.3724/SP.J.1146.2008.00407
Abstract:
Based on the social-psychology idea behind the Particle Swarm Optimization (PSO) algorithm and the features of the adaptive FIR filter, proper expressions for the inertial, cognitive and social parts are designed and applied to the optimization of the adaptive FIR filter in a combined adaptive filter. A combined adaptive filtering algorithm based on the idea of PSO is presented, and the complexity of the new algorithm is analyzed. Theoretical analysis and simulation results for adaptive system identification under different conditions show that the new algorithm balances steady-state misadjustment and tracking ability well, and that its convergence performance is better than that of some other recent LMS algorithms.
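The three PSO parts the abstract refers to can be shown in one textbook update step. This is the generic PSO rule, not the paper's FIR-specific redesign of the three terms; the constants are common defaults chosen here for illustration.

```python
import numpy as np

def pso_update(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO step on a weight vector: new velocity = inertial part
    (w * vel) + cognitive pull toward the particle's own best
    (c1 * r1 * (pbest - pos)) + social pull toward the swarm best
    (c2 * r2 * (gbest - pos)); position then moves by the velocity."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

In the combined-filter setting described above, `pos` would play the role of the adaptive FIR weight vector and the fitness driving `pbest`/`gbest` would be the filtering error, which is where the paper's specialized expressions come in.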
2009, 31(5): 1249-1252.
doi: 10.3724/SP.J.1146.2008.00073
Abstract:
Compensation for space-variant motion errors is one of the main issues in ultra-high-resolution airborne SAR imaging. This paper first discusses the characteristics of space-variant motion errors in airborne spotlight SAR and deduces the quantitative relationship between space-variant motion errors and SAR imaging parameters such as scan angle, size of the illuminated scene and grazing angle. It then presents a comprehensive processing scheme for motion compensation in high-resolution airborne spotlight SAR, based on sub-patch processing and a new method of range resampling using chirp disturbance. This scheme makes up for the deficiencies of traditional processing and deals well with severe space-variant motion errors in ultra-high-resolution imaging. Finally, computer simulations prove the validity of the scheme.
2009, 31(5): 1253-1255.
doi: 10.3724/SP.J.1146.2008.00368
Abstract:
A Tapered Slot Antenna (TSA) is investigated for Ultra-Wide Band (UWB) applications. Simulation and measurement results show that the bandwidth of the antenna is 12-20 GHz, the half-power beamwidths exceed 40 degrees, and distortions of the radiation patterns are quite small within the band. The antenna's gain increases with frequency. This TSA is a candidate for UWB systems.
2009, 31(5): 1256-1259.
doi: 10.3724/SP.J.1146.2008.00247
Abstract:
To overcome the shortcomings of current antennas for high-power microwave weapons, the idea of using a magnetized plasma channel as an antenna for radiating electromagnetic pulses is proposed. The normal modes of the Magnetized Plasma Channel Antenna (MPCA) in lossy gas are analyzed. The concrete realization method of the MPCA is briefly described, and its geometric model is created based on the operating principle of the antenna. The wave equations for the longitudinal electromagnetic fields, and the relations between the transverse and longitudinal electromagnetic fields of magnetized plasma in generalized cylindrical coordinates, are given. The strict characteristic equation of the MPCA is deduced using the boundary conditions of the electromagnetic fields. The discussion focuses on the variation of the propagation constants with the plasma channel parameters (plasma frequency and channel radius). The analysis shows that the influence of plasma frequency on the attenuation constant of the MPCA increases, and an extremum point appears.
2009, 31(5): 1260-1263.
doi: 10.3724/SP.J.1146.2008.00223
Abstract:
This paper demonstrates the design of a bandgap voltage reference with high precision and low temperature coefficient. A dual-differential-input operational amplifier is used for high-precision temperature-coefficient compensation. The chip is simulated using Cadence's Spectre software and implemented in a TSMC 0.35 μm mixed-signal process. Simulation results indicate that the bandgap voltage reference has a temperature coefficient of 2.2 ppm/℃ over -40℃ to 125℃.
2009, 31(5): 1264-1267.
doi: 10.3724/SP.J.1146.2008.00255
Abstract:
This paper models and analyzes differential inductors and series-connected inductors in SMIC's 0.18 μm CMOS process, and then proposes a design rule for inductors in RF CMOS differential circuits. It also presents a new model of series-connected inductors, in which mutual inductance, substrate capacitive loss and capacitive effects between windings are taken into consideration. Finally, a group of series-connected inductors with different spacings is designed and fabricated, and their measured results verify the model.
2009, 31(5): 1268-1270.
doi: 10.3724/SP.J.1146.2008.00038
Abstract:
A high-frequency method for solving the scattering of electrically large conductive targets in half space is presented in this paper. Considering the environmental effect of the half space, the half-space physical optics integral equation is deduced by introducing the half-space Green's function into the Physical Optics (PO) method. Combined with the GRaphical-Electromagnetic COmputing (GRECO) method, the shadow regions are eliminated quickly and the geometry information is obtained by reading the color and depth of each pixel. The Radar Cross Section (RCS) of conductive targets in half space can thus be calculated accurately. Numerical results show that this method is efficient and accurate.
2009, 31(5): 1218-1220.
doi: 10.3724/SP.J.1146.2008.00179
Abstract:
Analyses of two nominative proxy signature schemes, proposed by Seo et al. (2003) and Huang et al. (2004), are given in this paper. The results show that neither scheme has the property of strong unforgeability. A forgery attack on each of the two schemes is given. Using the forgery attack, a dishonest original signer can forge a proxy signing key on behalf of the designated proxy signer by assigning certain parameters and can produce valid nominative proxy signatures, which harms the interests of the proxy signers.