2015 Vol. 37, No. 8
2015, 37(8): 1779-1785.
doi: 10.11999/JEIT150053
Abstract:
To improve the radar emitter recognition rate when the characteristic parameters of radar emitters overlap with each other and exhibit multiple modes, a DSm (Dezert-Smarandache) evidence modeling and radar emitter fusion recognition method based on the cloud model is proposed. First, the overlapping, multi-mode radar emitter characteristic parameters are modeled in the DSm framework using the cloud model, and the degree of membership of an unknown radar emitter signal to each prior radar type is obtained for every characteristic parameter. Second, the basic belief assignments in the DSm framework are obtained from the relationship between membership degrees and basic belief assignments. Third, the basic belief assignments of the same characteristic parameter from multi-source unknown emitter signals are fused by DSmT+PCR5, and the fusion results of all characteristic parameters are then fused to obtain the final recognition result. If only single-source unknown signal characteristic parameters are available, the basic belief assignments of each characteristic parameter are fused directly by DSmT+PCR5 to obtain the final recognition result. Finally, simulation experiments under multiple conditions confirm the superiority of the proposed method.
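As a concrete illustration of the DSmT+PCR5 combination step described above, the following sketch fuses two basic belief assignments defined over singleton hypotheses only; the three radar types and the mass values are invented for illustration and are not data from the paper.

```python
def pcr5_fuse(m1, m2):
    """Fuse two basic belief assignments defined on singleton hypotheses
    with the PCR5 rule: each conflicting product m1(X)*m2(Y), X != Y, is
    redistributed back to X and Y proportionally to the masses involved."""
    hyps = sorted(set(m1) | set(m2))
    # Non-conflicting part: products on identical singletons
    fused = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    for x in hyps:
        for y in hyps:
            if x == y:
                continue
            conflict = m1.get(x, 0.0) * m2.get(y, 0.0)
            if conflict > 0.0:
                d = m1[x] + m2[y]
                fused[x] += m1[x] ** 2 * m2[y] / d
                fused[y] += m2[y] ** 2 * m1[x] / d
    return fused

# Two sensors' belief assignments over three hypothetical radar types
m1 = {"type1": 0.6, "type2": 0.3, "type3": 0.1}
m2 = {"type1": 0.5, "type2": 0.4, "type3": 0.1}
fused = pcr5_fuse(m1, m2)
```

Because each conflicting product is split proportionally between the two hypotheses involved, the fused masses still sum to one, and the hypothesis favored by both sources keeps the largest mass.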
2015, 37(8): 1786-1792.
doi: 10.11999/JEIT141505
Abstract:
In heterogeneous clutter environments, Space-Time Adaptive Processing (STAP) suffers notable performance degradation owing to the lack of sufficient Independent and Identically Distributed (IID) training samples. To solve this problem, a STAP approach based on dynamic environment sensing is proposed. With an orthogonal transmitted waveform, the clutter information is acquired. The clutter information and platform parameters are then combined with the system parameters to predict the clutter covariance matrix at a future time. Finally, the space-time processor is built from the combination of the predicted clutter covariance matrix and the sample covariance matrix. Simulation results demonstrate that the new approach still achieves good clutter suppression performance even with inaccurate environmental knowledge.
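The final combination step can be sketched as follows, assuming a simple convex blend of the predicted and sample covariance matrices and a distortionless (MVDR-style) space-time weight; the dimensions, blending factor, and covariance models are invented for illustration and are not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 20                 # space-time degrees of freedom, training snapshots
# Hypothetical "predicted" clutter-plus-noise covariance (stand-in for the
# matrix obtained from environment sensing and platform parameters)
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R_pred = G @ G.conj().T / N + np.eye(N)
# Sample covariance from K snapshots drawn from the same clutter statistics
L = np.linalg.cholesky(R_pred)
X = L @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R_smp = X @ X.conj().T / K
# Combine predicted and sample covariances, then form distortionless weights
alpha = 0.5
R = alpha * R_pred + (1 - alpha) * R_smp
s = np.exp(1j * np.pi * 0.3 * np.arange(N))   # target space-time steering vector
w = np.linalg.solve(R, s)
w = w / np.vdot(s, w)                         # enforce the constraint w^H s = 1
```

The normalization keeps the target response distortionless while the inverse of the blended covariance suppresses the clutter directions.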
2015, 37(8): 1793-1800.
doi: 10.11999/JEIT141300
Abstract:
Conventional matched-filtering-based algorithms cannot adequately handle the anisotropic backscattering behavior of targets in Wide-Angle SAR (WASAR) imaging. Sparse signal processing offers a new approach: the anisotropy problem is modeled as a group of under-determined linear equations. However, the number of unknowns in these equations grows linearly with the number of observation angles; as the observation angle range increases, the problem becomes increasingly difficult, or even impossible, for conventional sparse signal processing algorithms to solve. This paper presents a Group-sparse Complex Approximated Message Passing (GCAMP) algorithm for WASAR imaging. First, a group-sparse WASAR imaging model is established according to the structured property of backscattering coefficients across different observation angles. Second, the GCAMP algorithm is derived from the imaging model using message passing theory. Simulation results demonstrate the effectiveness of the proposed algorithm.
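Group-sparse recovery algorithms such as GCAMP rest on a joint shrinkage of each pixel's coefficients across observation angles. A minimal sketch of the group soft-thresholding operator (not the paper's message-passing derivation) follows; the group layout and threshold are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Shrink each group toward zero by lam in its l2 norm; groups whose
    norm falls below lam are zeroed exactly, enforcing joint (group)
    sparsity across the members of each group."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

# One pixel's coefficients across 4 observation angles (a real scatterer),
# followed by a noise-only group; all numbers are illustrative
x = np.array([3.0, 2.5, 2.8, 3.1, 0.1, -0.2, 0.05, 0.15])
groups = [slice(0, 4), slice(4, 8)]
shrunk = group_soft_threshold(x, groups, lam=1.0)
```

The noise-only group is eliminated as a whole, while the scatterer's coefficients survive jointly; this is the structural prior that couples the different observation angles.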
2015, 37(8): 1801-1807.
doi: 10.11999/JEIT141563
Abstract:
By adjusting the transmitting time and initial phase of distributed radar antennas, the spatial distribution of the transmitted signal energy can be controlled, and the signal energy can thus be enhanced in the spatial region of interest. The fundamentals of distributed coherent transmission are analyzed, the conditions for forming an interference peak are presented, and a spatial interference energy distribution function is defined to represent the gain of the transmitted energy relative to the mean energy; its maximum equals the number of transmitting antennas. The characteristics of this function are analyzed in both the radar near field and far field, showing that it exhibits a stripe shape in the far field with collocated antennas, and an ellipse or single-peak shape in the near field with widely separated antennas. To keep a real target within a single interference peak, distributed coherent transmission works better at lower frequencies and with smaller targets in the target tracking mode.
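The claim that the spatial interference energy distribution function attains a maximum equal to the number of transmitting antennas can be checked numerically: with N unit-amplitude signals, the in-phase energy is N squared, while the mean energy over random arrival phases is N, so the gain is N. The Monte Carlo setup below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                    # number of transmitting antennas
# Energy at a point where all N unit-amplitude signals arrive in phase:
coherent = np.abs(np.exp(1j * np.zeros(N)).sum()) ** 2          # equals N**2
# Mean energy over uniformly random arrival phases (Monte Carlo):
phases = rng.uniform(0.0, 2.0 * np.pi, size=(200000, N))
mean_energy = np.mean(np.abs(np.exp(1j * phases).sum(axis=1)) ** 2)  # about N
gain = coherent / mean_energy            # about N: the function's maximum
```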
2015, 37(8): 1808-1813.
doi: 10.11999/JEIT141633
Abstract:
Fast Factorized Back-Projection (FFBP) was originally developed for Ultra-WideBand (UWB) Synthetic Aperture Radar (SAR) and has shown great success in spotlight SAR signal processing. However, its implementation is not straightforward for stripmap SAR owing to the limitations on integration aperture and angular upsampling. To investigate the applicability of FFBP to stripmap SAR, this paper describes an overlapped-image implementation based on the integration aperture and the angular wavenumber bandwidth. The approach retains the high efficiency of the original FFBP. Finally, simulated squinted SAR data verify the effectiveness of the proposed method.
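FFBP factorizes the direct back-projection integral. For a single point target, that baseline can be sketched as follows; geometry, bandwidth, and grids are invented for illustration, and this is the unfactorized algorithm, not FFBP itself.

```python
import numpy as np

c, fc, B = 3e8, 1e9, 150e6               # speed of light, carrier, bandwidth
npulse, nr = 64, 400
ant = np.stack([np.linspace(-50.0, 50.0, npulse),
                np.full(npulse, -500.0)], axis=1)    # straight flight path
tgt = np.array([5.0, 10.0])                          # point target position
rgrid = np.linspace(450.0, 600.0, nr)                # range axis (compressed data)
R = np.linalg.norm(ant - tgt, axis=1)
# Range-compressed echoes: sinc envelope at the target range plus carrier phase
data = (np.sinc(2 * B / c * (rgrid[None, :] - R[:, None]))
        * np.exp(-4j * np.pi * fc / c * R[:, None]))
# Back-project pulse by pulse onto a small image grid around the target
xs, ys = np.linspace(0.0, 10.0, 41), np.linspace(5.0, 15.0, 41)
gx, gy = np.meshgrid(xs, ys, indexing="ij")
pix = np.stack([gx.ravel(), gy.ravel()], axis=1)
img = np.zeros(pix.shape[0], dtype=complex)
for p in range(npulse):
    r = np.linalg.norm(pix - ant[p], axis=1)
    samp = np.interp(r, rgrid, data[p].real) + 1j * np.interp(r, rgrid, data[p].imag)
    img += samp * np.exp(4j * np.pi * fc / c * r)    # phase-corrected coherent sum
peak_pos = pix[int(np.argmax(np.abs(img)))]
```

The coherent sum peaks at the true target position; FFBP obtains essentially the same image by recursively merging coarse-angle subimages instead of summing all pulses per pixel.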
2015, 37(8): 1814-1820.
doi: 10.11999/JEIT141516
Abstract:
Ordinary full-aperture SAR imaging algorithms cannot focus highly squinted diving SAR subaperture data, because the vertical velocity introduces azimuth variance. Based on the equivalent squint model and the characteristics of subaperture imaging, this paper develops a Frequency Phase Filtering Algorithm (FPFA) for focusing highly squinted SAR subaperture data. The key idea is a filtering phase introduced in the azimuth frequency domain to eliminate the azimuth dependence. Because the equivalent squint model causes geometric deformation, a modified inverse-projection method matched to the FPFA is also proposed to obtain the final image without deformation. Simulation results and raw data processing validate the effectiveness of the proposed method.
2015, 37(8): 1821-1827.
doi: 10.11999/JEIT141468
Abstract:
Target decomposition is an important tool for target classification, detection, and recognition with Polarimetric SAR (PolSAR). However, traditional methods that extract the volume scattering component first seriously overestimate the volume scattering energy or underestimate the dihedral scattering energy. In this paper, by introducing a polarimetric similarity measure, a data-driven model matching for the basic scattering mechanisms is proposed. On this basis, the extraction priority of the scattering mechanisms' energy is determined by the similarity measure. Under the non-negative energy constraint, the residual matrices of all orders are re-extracted to obtain the final energy contributions of the dihedral, volume, and surface scattering mechanisms. Results on real data, compared against optical imagery, show that the proposal extracts the basic scattering characteristics of target regions more accurately than traditional methods.
2015, 37(8): 1828-1835.
doi: 10.11999/JEIT141295
Abstract:
To solve the two-dimensional angle estimation problem for MIMO radar with an L-shaped array, two novel reduced-dimensional Direction Of Arrival (DOA) estimation methods based on the ESPRIT algorithm are proposed. First, through reduced-dimensional matrix design and a reduced-dimensional transformation, the high-dimensional received data are transformed into a lower-dimensional signal space. The signal space is then obtained via eigenvalue decomposition and the propagator method respectively, and the two-dimensional spatial angles are jointly estimated with automatic pairing using the ESPRIT algorithm. The two proposed methods remove the redundancy of the high-dimensional received data to the greatest degree without sacrificing array aperture, and have lower computational complexity. Simulation results verify the correctness of the theoretical analysis and the effectiveness of the proposed algorithms.
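The shift-invariance step that ESPRIT relies on can be illustrated with a minimal one-dimensional sketch using a plain uniform linear array (not the paper's reduced-dimensional MIMO formulation); the array size, angles, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M, snapshots, d = 8, 200, 0.5            # elements, snapshots, spacing (wavelengths)
true_deg = np.array([-20.0, 30.0])
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.01 * (rng.standard_normal((M, snapshots))
                    + 1j * rng.standard_normal((M, snapshots)))
R = X @ X.conj().T / snapshots
_, V = np.linalg.eigh(R)
Es = V[:, -2:]                           # signal subspace (2 largest eigenvalues)
# Shift invariance: the first M-1 rows of Es map onto the last M-1 rows
Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
phases = np.angle(np.linalg.eigvals(Psi))
est_deg = np.sort(np.rad2deg(np.arcsin(phases / (2 * np.pi * d))))
```

The eigenvalues of the rotation matrix carry the inter-element phase progression, so the angles are read off without any spectral search.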
2015, 37(8): 1836-1842.
doi: 10.11999/JEIT140950
Abstract:
High-resolution Circular SAR (CSAR) imaging demands a highly accurate and stable Inertial Navigation System (INS), so motion compensation is one of its key technologies. In this paper, the phase error caused by trajectory measurement error is first analyzed. Then, a CSAR trajectory reconstruction algorithm based on extracting the phase gradients of calibrators is proposed, targeting the spatially variant property of the phase errors. The azimuth differential phase of each calibrator is extracted from the SAR echo data, and the high-precision CSAR trajectory is reconstructed via trilateration combining the phase gradients of multiple calibrators. A high-quality CSAR image is then formed with this trajectory. The proposed algorithm solves the image defocusing caused by trajectory measurement error while keeping the entire scene efficiently focused. Simulation results verify the correctness and effectiveness of the proposed algorithm.
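The trilateration step, recovering a platform position from ranges to known calibrators, can be sketched in two dimensions by linearizing the range equations; the calibrator positions and the true position below are invented for illustration.

```python
import numpy as np

# Known calibrator positions and the ranges measured to the platform
cals = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0]])
true_pos = np.array([40.0, 25.0])
ranges = np.linalg.norm(cals - true_pos, axis=1)
# Subtracting the first range equation from the others cancels the quadratic
# terms and leaves a linear system in the unknown position (x, y)
p1, r1 = cals[0], ranges[0]
A = 2.0 * (cals[1:] - p1)
b = (r1 ** 2 - ranges[1:] ** 2
     + np.sum(cals[1:] ** 2, axis=1) - np.sum(p1 ** 2))
pos = np.linalg.solve(A, b)
```

With noisy ranges and more than three calibrators, the same linear system would be solved in the least-squares sense instead.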
2015, 37(8): 1843-1848.
doi: 10.11999/JEIT141485
Abstract:
In Over-The-Horizon Radar (OTHR), maneuvering target detection algorithms based on time-frequency analysis have the advantages of a low signal-to-noise ratio requirement and high parameter estimation precision. However, the computational load of traditional time-frequency analysis methods is large, making engineering application difficult. To solve this problem, this paper proposes a new time-frequency analysis method that constructs the joint time-frequency-rate domain and integrates along the time axis to achieve maneuvering target detection and parameter estimation. Owing to its special integration path, the proposed algorithm avoids the Hough transform, reducing the computational load to a practical level. Simulation results show that the algorithm achieves better detection at low SNR, with lower computational complexity and higher estimation accuracy. In addition, because the frequency rate is accumulated over different times, the method suppresses the cross terms caused by multiple maneuvering targets and reduces the false alarm rate.
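Integrating along the time axis in a time-frequency-rate domain is closely related to a dechirp-and-integrate search over candidate frequency rates, which can be sketched as follows; the signal parameters and search grid are illustrative, and this is not the paper's exact transform.

```python
import numpy as np

fs, T = 256.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)
f0, k_true = 20.0, 12.0                  # start frequency (Hz), chirp rate (Hz/s)
sig = np.exp(2j * np.pi * (f0 * t + 0.5 * k_true * t ** 2))
# For each candidate rate, remove the quadratic phase and integrate coherently;
# the matching rate collapses the echo to a single spectral line
rates = np.arange(0.0, 25.0, 0.5)
energy = [np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * k * t ** 2))).max()
          for k in rates]
k_est = rates[int(np.argmax(energy))]
```

Because each candidate rate is tested by a single coherent integration rather than a two-dimensional Hough accumulation, the search cost stays modest.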
2015, 37(8): 1849-1854.
doi: 10.11999/JEIT141466
Abstract:
When planning and managing the limited resources of a radar, fluctuations of a target's Radar Cross Section (RCS) significantly affect the resource allocation. To address this problem, this paper puts forward a method for predicting the target RCS. The method first obtains measurements of the target's RCS and then predicts its value using probability density transfer. Calculations on measured radar data from three types of aircraft show that the method yields more accurate predictions. Finally, an optimized power distribution equation is built, and simulation results demonstrate that accurate RCS prediction improves the measurement accuracy after power allocation.
2015, 37(8): 1855-1861.
doi: 10.11999/JEIT141472
Abstract:
Classical multi-sensor anti-bias association algorithms, such as those based on image matching and reference topology features, require complex calculation procedures and cannot be used in real time. Based on the distribution of tracks on the sea surface and Cell-Averaging Constant False Alarm Rate (CA-CFAR) detection theory, a real-time anti-bias association algorithm for Automatic Identification System (AIS) and radar track data, named the confidential-association algorithm, is proposed to realize real-time radar systematic error registration and multi-sensor information fusion. Monte Carlo simulation results show that the accuracy of the confidential-association algorithm remains high in the sea-surface environment; it has a simpler calculation procedure and a substantially shorter run time than current anti-bias association algorithms. An automatic radar registration technique based on the confidential-association algorithm reduces the average error of the measured data by nearly 90%.
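The CA-CFAR detection rule underlying the proposed association test can be sketched as follows, assuming square-law-detected exponential noise; the window sizes, false alarm rate, and injected target are illustrative assumptions.

```python
import numpy as np

def ca_cfar(x, n_train=8, n_guard=2, pfa=1e-3):
    """Cell-Averaging CFAR: estimate the noise level from n_train cells
    on each side of the cell under test (skipping n_guard guard cells)
    and compare against a threshold scaled for the desired false alarm
    probability under exponential (square-law-detected) noise."""
    N = 2 * n_train                          # total training cells
    alpha = N * (pfa ** (-1.0 / N) - 1.0)    # CA-CFAR threshold factor
    det = np.zeros(x.size, dtype=bool)
    for i in range(n_train + n_guard, x.size - n_train - n_guard):
        lead = x[i - n_guard - n_train: i - n_guard]
        lag = x[i + n_guard + 1: i + n_guard + 1 + n_train]
        det[i] = x[i] > alpha * (lead.sum() + lag.sum()) / N
    return det

rng = np.random.default_rng(3)
power = rng.exponential(1.0, 200)        # noise-only power samples
power[100] += 50.0                       # inject one strong target
hits = ca_cfar(power)
```

Because the threshold scales with the local noise estimate, the false alarm rate stays constant even when the background level varies along the window.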
2015, 37(8): 1862-1867.
doi: 10.11999/JEIT141615
Abstract:
A moving ship detection method is presented for detecting moving ocean objects with a remote sensing satellite in geostationary orbit. First, a morphological filter with multi-structural, multiscale elements is used to suppress the background of the oceanic remote sensing images. Then, image segmentation is performed with an adaptive threshold algorithm, and the connected domains of the pre-detected targets are obtained via self-organized clustering. Finally, real targets are selected from the many candidates by a multi-object variable-region decision based on moving-target features. Experimental results and analysis show that the proposed method efficiently detects moving warship targets and their trajectories, with high detection probability and robustness. The method provides technical support for on-board image processing of remote sensing satellites in geostationary orbit.
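The adaptive threshold step is not specified in the abstract; one common choice is Otsu's method, sketched here on synthetic bimodal intensities. The sea/ship intensity model is an invented illustration, not the paper's data.

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the two resulting classes."""
    hist, edges = np.histogram(values, bins=256)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                     # class-0 probability
    m = np.cumsum(p * centers)            # class-0 cumulative mean mass
    mT = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[int(np.argmax(sigma_b))]

rng = np.random.default_rng(5)
sea = rng.normal(0.2, 0.05, 5000)        # background (sea clutter) intensities
ship = rng.normal(0.8, 0.05, 500)        # bright target intensities
thr = otsu_threshold(np.concatenate([sea, ship]))
```

The threshold lands in the valley between the two intensity modes, separating the bright candidates from the background after morphological suppression.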
2015, 37(8): 1868-1873.
doi: 10.11999/JEIT141238
Abstract:
To meet the requirements for measurement and detection of complex missile targets, a complex missile model with empennages is established, and its scattering characteristics under extremely short pulse illumination are studied. Transient scattering echoes of the missile model are calculated with the Finite-Difference Time-Domain (FDTD) algorithm. The characteristics of the scattering echoes are analyzed at different incidence angles in the far field and at different rotation angles of the missile in the near field. These analyses reveal the causes of the missile's scattering centers and the characteristics of the scattering waveforms, providing a theoretical reference for radar applications.
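The FDTD computation of transient echoes rests on the Yee leapfrog updates, illustrated here by a minimal one-dimensional free-space sketch in normalized units on an invented grid; the paper's three-dimensional missile simulation is far richer.

```python
import numpy as np

# 1-D free-space FDTD (normalized units, Courant number 1, so a
# disturbance advances exactly one cell per time step)
nz, nt, src = 400, 100, 50
ez = np.zeros(nz)   # electric field on integer cells
hy = np.zeros(nz)   # magnetic field, staggered half a cell to the right
for n in range(nt):
    hy[:-1] += ez[1:] - ez[:-1]                    # H-field leapfrog update
    ez[1:] += hy[1:] - hy[:-1]                     # E-field leapfrog update
    ez[src] += np.exp(-((n - 30.0) / 8.0) ** 2)    # soft Gaussian source
# The right-going pulse peaks near cell src + (nt - 30)
peak = int(np.argmax(np.abs(ez[src + 1:]))) + src + 1
```

The same staggered-update pattern, extended to three dimensions with the full set of six field components, produces the transient missile echoes analyzed in the paper.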
2015, 37(8): 1874-1878.
doi: 10.11999/JEIT141542
Abstract:
The Multiple Measurement Vectors (MMV) problem addresses the recovery of unknown input vectors that share the same sparse support. Compressed Sensing (CS) can estimate the sparse support even in coherent cases, where traditional array processing approaches such as MUltiple SIgnal Classification (MUSIC) often fail. However, CS guarantees accurate recovery only in a probabilistic manner, and it often shows inferior performance in cases where the traditional methods succeed. Recently, Kim et al. proposed a compressive MUSIC (CS-MUSIC) algorithm that combines the advantages of CS and traditional MUSIC-like methods. The Difference Map (DM), an iterative projection algorithm, was first used to solve the phase retrieval problem in crystallography; recent results show that it performs excellently on a wide variety of non-convex problems, including compressed sensing. In this paper, a DM-based CS-MUSIC algorithm is proposed. Experiments show that the proposed algorithm is very effective for the MMV problem and dramatically improves the success rate of CS-MUSIC.
2015, 37(8): 1879-1885.
doi: 10.11999/JEIT141538
Abstract:
A novel algorithm is proposed for Two-Dimensional (2D) Direction Of Arrival (DOA) estimation with an L-shaped array. By introducing an auxiliary electrical angle, the 2D-DOA estimation problem is solved by two-step 1D-DOA estimation. First, the auxiliary electrical angle is estimated by a propagator-based RAnk Reduction Estimator (RARE). Then a cost function of one incidence angle is obtained, and that angle is estimated from the K zeros of the polynomial associated with the cost function. Finally, the other incidence angle is obtained by simple algebraic operations on the estimated auxiliary electrical angle and incidence angle. A computational burden analysis shows that the proposed algorithm has roughly the same computational burden as the JEADE algorithm, while both are heavier than the CODE and root-MUSIC algorithms. Furthermore, the Root-Mean-Square Error (RMSE) expressions of the incidence angle estimates are derived to validate the performance of the proposed algorithm.
2015, 37(8): 1886-1891.
doi: 10.11999/JEIT141208
Abstract:
In view of the poor performance of traditional Direction of Arrival (DOA) estimation methods at low Signal-to-Noise Ratios (SNRs), an improved MUltiple SIgnal Classification (MUSIC) algorithm for active detection systems, based on the decomposition of a cross-correlation covariance matrix (I-MUSIC), is proposed. Exploiting the transmission feature of active sonar, the cross-correlation sequence between the transmitted signal and the array output is formulated, and the spatial covariance matrix is constructed from this sequence. Matrix decomposition is then applied to the new spatial covariance matrix to estimate the DOA. It is proved that cross-correlation suppresses noise while preserving the phase information between array elements, which facilitates subspace separation at low SNRs. Furthermore, another method based on a correlation Time threshold (T-MUSIC) is proposed to further improve DOA performance. Simulation results indicate that I-MUSIC and T-MUSIC obtain performance gains of 3 dB and 6 dB, with estimation errors of 77% and 53% of the original method, respectively. Owing to data selection via the time threshold, T-MUSIC is not appreciably affected by noise and thus outperforms I-MUSIC by 8 dB at low SNRs. I-MUSIC and T-MUSIC can significantly improve DOA performance at low SNRs when applied to active multi-target detection systems.
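As background for the subspace step, the classic MUSIC estimator on which I-MUSIC builds can be sketched as follows. The cross-correlation preprocessing of the paper is not reproduced here, only the covariance eigendecomposition and spectrum search; the array geometry and scenario parameters are illustrative assumptions.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Classic MUSIC on a uniform linear array: eigendecompose the sample
    spatial covariance and scan for nulls of the noise subspace."""
    M, N = X.shape
    R = X @ X.conj().T / N                    # sample spatial covariance
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :M - n_sources]                 # noise-subspace basis
    m = np.arange(M)
    angles = np.linspace(-90, 90, 361)        # 0.5-degree scan grid
    p = [1.0 / np.linalg.norm(
             En.conj().T @ np.exp(-2j*np.pi*d*m*np.sin(np.radians(t))))**2
         for t in angles]
    return angles[int(np.argmax(p))]

# toy scenario: one source at 20 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(0)
M, N, true_theta = 8, 200, 20.0
a = np.exp(-2j*np.pi*0.5*np.arange(M)*np.sin(np.radians(true_theta)))
s = rng.standard_normal(N) + 1j*rng.standard_normal(N)
noise = 0.05*(rng.standard_normal((M, N)) + 1j*rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
est = music_doa(X, 1)
```

I-MUSIC replaces the sample covariance R with one built from the cross-correlation of the array output against the known transmitted waveform, which is what suppresses noise at low SNR.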
2015, 37(8): 1892-1899.
doi: 10.11999/JEIT141420
Abstract:
Image interpolation is a basic issue in digital image processing, used for image magnification, restoration, etc. Traditional interpolation methods tend to produce staircase artifacts along edge structures or blur the interpolated results. An image interpolation method with corner preservation based on a Partial Differential Equation (PDE) is proposed, which applies different interpolation treatments to different characteristics of the image. The proposed scheme not only keeps the edge structures clear but also keeps the corners sharp, so the overall visual quality and the Peak Signal-to-Noise Ratio (PSNR) of the interpolated image are effectively improved. In addition, this paper puts forward methods for selecting the parameters by analyzing the equation, which improves the adaptability of the proposed method.
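The abstract does not state the concrete PDE, so as a hedged illustration here is one explicit step of the classic Perona-Malik diffusion, the standard edge-preserving PDE that corner-preserving schemes refine; all parameter values are illustrative.

```python
import numpy as np

def perona_malik_step(u, dt=0.15, kappa=0.1):
    """One explicit Perona-Malik step: diffuse strongly in flat regions,
    weakly across large gradients, so edges stay sharp."""
    dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0   # neighbor differences,
    ds = np.roll(u,  1, axis=0) - u; ds[0, :] = 0    # replicated borders
    de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
    dw = np.roll(u,  1, axis=1) - u; dw[:, 0] = 0
    g = lambda diff: np.exp(-(diff / kappa) ** 2)    # edge-stopping function
    return u + dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)

flat = perona_malik_step(np.ones((4, 4)))   # a constant image is a fixed point
```

In interpolation, a few such steps are applied to an upsampled image; the paper's contribution is a diffusion term that additionally detects and preserves corners, which plain Perona-Malik rounds off.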
2015, 37(8): 1900-1905.
doi: 10.11999/JEIT141515
Abstract:
For linear discrete-time multisensor systems with uncertain model parameters and noise variances, a Covariance Intersection (CI) fusion robust steady-state Kalman filter based on the minimax robust estimation principle is presented. First, by introducing fictitious noise, the model parameter uncertainty is compensated, so the multisensor system with both model parameter and noise variance uncertainties is converted into one with only uncertain noise variances. Second, using the Lyapunov equation, the robustness of the local robust Kalman filters is proved, so the robustness of the CI fused Kalman filter is guaranteed, and it is proved that the robust accuracy of the CI fuser is higher than that of each local filter. Finally, a simulation example shows how to search the robust region of the uncertain parameters and demonstrates the good performance of the proposed robust Kalman filter.
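The CI fusion rule itself is standard and easy to state: the fused information matrix is a convex combination of the local ones, with the weight chosen to minimize, for example, the trace of the fused covariance. A minimal sketch, using a grid search over the weight rather than the paper's steady-state machinery:

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2, n_grid=100):
    """Covariance Intersection of two estimates (x1, P1) and (x2, P2):
    P^-1 = w*P1^-1 + (1-w)*P2^-1, with w minimizing trace(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = (np.inf, None, None)
    for w in np.linspace(0.01, 0.99, n_grid):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)
        if np.trace(P) < best[0]:
            best = (np.trace(P), w, P)
    _, w, P = best
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)   # fused estimate
    return x, P

# two consistent estimates with complementary accuracy per component
x, P = ci_fuse(np.zeros(2), np.diag([1.0, 4.0]),
               np.zeros(2), np.diag([4.0, 1.0]))
```

CI's appeal in this setting is that it stays consistent without knowing the cross-covariance between local estimates, which is exactly what uncertain-variance systems cannot supply.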
2015, 37(8): 1906-1912.
doi: 10.11999/JEIT141613
Abstract:
Researchers have done a great number of studies on object recognition and on video coding and transmission, respectively. However, there are still no public reports on how video encoding parameters influence object recognition. To address this issue, the Deformable Part Model (DPM), a typical object recognition algorithm, and the most commonly used video coding standard, H.264/AVC, are chosen as the test objects. To study how the code rate and the resolution affect video object recognition performance, coding and detection experiments are designed, and a function describing the recognition performance change caused by the code rate and the resolution is fitted. The results show that a compromise can be achieved between channel bandwidth and video object recognition performance by selecting appropriate code rate and resolution parameters for the encoder, which provides a basis for the encoding optimization objective function of different video applications.
2015, 37(8): 1913-1919.
doi: 10.11999/JEIT141194
Abstract:
In order to address the non-rigid deformations (e.g., misalignment, pose, and expression) of facial images, this paper proposes a novel sparse representation face recognition algorithm using Dense Scale Invariant Feature Transform (SIFT) Feature Alignment (DSFA). The method consists of two steps: first, DSFA is employed as a generic transformation to roughly align training and testing samples; then, input facial images are identified with the proposed sparse representation model. A novel coarse-to-fine scheme is designed to accelerate facial image alignment. The experimental results demonstrate the superiority of the proposed method over other methods on the ORL, AR, and LFW datasets: it improves recognition accuracy by 4.3% and runs nearly 6 times faster than previous sparse approximation methods on the three datasets.
2015, 37(8): 1920-1925.
doi: 10.11999/JEIT141532
Abstract:
The Viterbi decoding algorithm is widely used in wireless digital communication systems, generally taking bit Log-Likelihood Ratios (LLRs) as its input. For an M-ary Frequency Shift Keying (M-FSK) signal, a Viterbi decoding algorithm that directly adopts the M-dimensional energy outputs of the signal demodulator as the decoder branch metrics is proposed. This paper analyzes the theoretical performance of the proposed algorithm in AWGN and Rayleigh fading channels, and closed-form upper bounds on the Bit Error Rate (BER) are derived. The validity of the theoretical derivation is demonstrated by simulations. Compared with the existing Viterbi algorithm, the proposed scheme avoids computing the bit LLRs and the corresponding branch metrics, reduces the complexity of the algorithm, decreases the information loss, and improves the BER performance relative to Viterbi decoding based on soft demodulation of the M-FSK signal. Thus, the proposed scheme is a Viterbi decoding algorithm for M-FSK signals that is better suited to practical engineering.
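To make the branch-metric idea concrete, here is a soft-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 convolutional code (not the paper's M-FSK setup): the branch metric is the correlation between the received soft values and the candidate code bits, used directly without first converting to bit LLRs.

```python
G = [(1, 1, 1), (1, 0, 1)]   # rate-1/2, K=3 generators (7, 5 in octal)

def conv_encode(bits):
    """Feed bits through the shift register, emit two code bits each."""
    state = (0, 0)
    out = []
    for b in bits:
        reg = (b,) + state
        out += [sum(r*g for r, g in zip(reg, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi_soft(soft):
    """Soft-decision Viterbi: branch metric is the (negated) correlation
    of received soft values with the +/-1-mapped candidate code bits."""
    INF = float('inf')
    pm = [0.0, INF, INF, INF]            # path metrics, start in state 0
    paths = [[], [], [], []]
    for i in range(0, len(soft), 2):
        r = soft[i:i+2]
        new_pm, new_paths = [INF]*4, [None]*4
        for s in range(4):
            if pm[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for b in (0, 1):
                reg = (b, s1, s0)
                c = [sum(x*g for x, g in zip(reg, gen)) % 2 for gen in G]
                m = pm[s] - sum(rv*(1 - 2*cv) for rv, cv in zip(r, c))
                ns = (b << 1) | s1       # next state after shifting b in
                if m < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
soft = [1 - 2*c for c in conv_encode(bits)]   # noiseless +/-1 channel
decoded = viterbi_soft(soft)
```

The paper's point is analogous: for M-FSK, the M demodulator energy values already carry the likelihood information, so they can serve as branch metrics directly, skipping the lossy energy-to-LLR conversion.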
2015, 37(8): 1926-1930.
doi: 10.11999/JEIT141556
Abstract:
An algorithm to recover a Turbo-code interleaver at high Bit Error Rate (BER) is proposed and applied to the rate-1/3 parallel concatenated Turbo code. The recognition of channel coding plays an important part in the field of non-cooperative signal processing, and recovering a Turbo-code interleaver is one of its difficulties. There are already effective algorithms for the noiseless condition, but in actual communication systems Turbo codes are often used at high noise levels, where the BER is high and the word length is long, and these algorithms become ineffective. Using the characteristics of the parity-check vector, each position of the interleaver can be separated and solved independently. Thus, the recovery of every position relies only on several correlated positions, which avoids the error accumulation effect. The algorithm solves the problem when the BER is high and the code length is long, and it also has low complexity. Simulations show that for a Turbo code with interleaver length 10000 and BER 10%, the algorithm runs successfully.
2015, 37(8): 1931-1936.
doi: 10.11999/JEIT141530
Abstract:
A novel scheme named EWF-RLT codes, which provides Unequal Error Protection (UEP) for Luby Transform (LT) codes over the Additive White Gaussian Noise (AWGN) channel by applying a windowing technique before regularizing the variable-node distribution, is proposed in this paper. First, the idea of windowing the data sets according to their protection requirements is applied, allowing coded symbols to make edge connections with the more important parts of the information bit stream with high probability. Then, the variable-node degree distribution is exploited to improve the error floor and to ensure that the more important class of the information bit stream has a higher minimum variable-node degree, by modifying the traditional method of choosing neighbor nodes randomly during encoding. Compared with the conventional UEP scheme, it is confirmed both theoretically and experimentally that the proposed approach provides significant performance improvement for the most important bit class and improves network transmission performance. Furthermore, the proposed scheme introduces additional parameters into the UEP LT code design, making it more general and flexible in realizing UEP.
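The windowing idea can be sketched generically; the toy degree distribution and window parameters below are assumptions, not the paper's EWF-RLT design. Coded symbols draw their neighbors from the important prefix with higher probability, and a standard peeling decoder recovers symbols.

```python
import random

def encode_symbol(data, rng, window, p_window=0.7):
    """Pick neighbors from the important prefix with prob. p_window,
    else from the whole message; XOR them into one coded symbol."""
    pool = range(window) if rng.random() < p_window else range(len(data))
    d = rng.choice([1, 2, 3, 4])                  # toy degree distribution
    idx = rng.sample(list(pool), min(d, len(pool)))
    val = 0
    for i in idx:
        val ^= data[i]
    return set(idx), val

def peel_decode(k, coded):
    """Classic LT peeling: repeatedly resolve coded symbols that have
    exactly one unknown neighbor."""
    out = [None] * k
    progress = True
    while progress:
        progress = False
        for s, v in coded:
            live = [i for i in s if out[i] is None]
            if len(live) == 1:
                for j in s:                       # strip known neighbors
                    if j != live[0]:
                        v ^= out[j]
                out[live[0]] = v
                progress = True
    return out

rng = random.Random(1)
data = [17, 42, 7, 99, 3, 250, 11, 64]
coded = [encode_symbol(data, rng, window=4) for _ in range(40)]
coded.append(({0}, data[0]))        # guarantee one degree-1 symbol exists
recovered = peel_decode(len(data), coded)
```

Because the window biases edges toward the prefix, the first `window` symbols accumulate more connections and tend to be recovered earlier and more reliably, which is the UEP effect.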
2015, 37(8): 1937-1943.
doi: 10.11999/JEIT141609
Abstract:
In the complex environment of deep-space communication, adaptive capability affects whether a Low Density Parity Check (LDPC) code decoder can maintain long-term stable operation. This paper proposes a design method for a dynamically adaptive LDPC code decoder. Through the IP-based design of each function module, the dynamic adaptive design method can be mapped to each function module of a DVB-S2 LDPC code decoder. Verification results based on a Stratix IV FPGA show that the dynamically adaptive LDPC decoder can decode not only under different code lengths and code rates, but also at different decoding performance levels. Meanwhile, the single-channel decoder achieves an information throughput of 40.9-71.7 Mbps.
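As a software reference for the decoding function being mapped to hardware, a minimal hard-decision bit-flipping decoder is sketched below with a toy parity-check matrix. The real decoder targets the much larger DVB-S2 LDPC codes and soft decoding; this only illustrates the syndrome-driven iteration principle.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],    # toy 3x7 parity-check matrix
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(H, r, max_iter=10):
    """Gallager-style bit flipping: flip the bit involved in the most
    unsatisfied parity checks until the syndrome vanishes."""
    r = r.copy()
    for _ in range(max_iter):
        syn = H @ r % 2
        if not syn.any():
            break                 # all checks satisfied
        counts = syn @ H          # per-bit count of unsatisfied checks
        r[int(np.argmax(counts))] ^= 1
    return r

received = np.zeros(7, dtype=int)
received[4] = 1                   # all-zero codeword with one bit error
corrected = bit_flip_decode(H, received)
```

Each function in this loop (syndrome computation, counting, flipping) corresponds naturally to a hardware module, which is the kind of partitioning the IP-based design exploits.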
2015, 37(8): 1944-1949.
doi: 10.11999/JEIT141454
Abstract:
The Redundant Residue Number System (RRNS) is widely used in communication systems such as WLAN (Wireless LAN) and CDMA (Code Division Multiple Access) due to its strong ability to enhance the robustness of information in parallel processing environments. Error detection and correction in RRNS is an important guarantee of information reliability in communication systems. The overflow detection theorem, the uniqueness theorem, and the searching theorem are proposed and proved in this paper based on the properties of residue classes in finite rings. With these theorems, a single-error-correction algorithm using modular operations with reduced complexity O(k, r) is proposed, together with a uniqueness test algorithm. Furthermore, for general types of errors, a searching multiple-error-correction algorithm is proposed. According to the analysis, the computational complexity of the searching multiple-error-correction algorithm is reduced from polynomial order to logarithmic order, and the method reaches the extreme correction capability efficiently with only comparison operations instead of complex modular arithmetic.
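The paper's O(k, r) algorithm is not reproduced here; as a baseline, single-error correction in an RRNS can be sketched by exclusion: an error pushes the full CRT reconstruction out of the legitimate range (the overflow check), and dropping the faulty residue brings it back. The moduli below are illustrative.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction modulo prod(moduli)."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

def rrns_correct(residues, moduli, n_info):
    """Single-error correction by exclusion: the legitimate range is
    [0, prod of the n_info information moduli)."""
    legit = prod(moduli[:n_info])
    x = crt(residues, moduli)
    if x < legit:
        return x                             # overflow check: no error
    for i in range(len(moduli)):             # try dropping each residue
        y = crt(residues[:i] + residues[i+1:], moduli[:i] + moduli[i+1:])
        if y < legit:
            return y
    return None                              # uncorrectable

# info moduli 3, 5, 7; redundant 11, 13; encode 42, corrupt residue mod 5
clean = [42 % m for m in (3, 5, 7, 11, 13)]
bad = clean.copy(); bad[1] = 4
fixed = rrns_correct(bad, [3, 5, 7, 11, 13], 3)
```

This exclusion search is polynomial in the number of residues; the paper's contribution is cutting such searches to logarithmic order with comparison operations only.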
2015, 37(8): 1950-1956.
doi: 10.11999/JEIT141507
Abstract:
To combat the effect of InterSymbol Interference (ISI) when transmitting data over wireless fading channels, the problem of receiving single-carrier signals with multiple antennas is studied and an iterative frequency-domain combining equalization algorithm is proposed. The algorithm derives the theoretical frequency-domain transfer function of the combining equalizer with a priori information. An efficient implementation is proposed which employs the Fast Fourier Transform (FFT) to compute the combining equalizer coefficients and the equalization filtering. Numerical results show that the proposed algorithm reduces complexity enormously with nearly no performance loss compared with the time-domain algorithm. Compared with Single Carrier Frequency Domain Equalization (SC-FDE), the Cyclic Prefix (CP) overhead is avoided, and the computationally efficient frequency-domain algorithm can be applied to existing single-carrier communication systems.
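The core frequency-domain operation can be sketched for a single antenna (the paper combines several antenna branches and iterates with a priori information, both omitted here): equalization reduces to a per-bin MMSE division in the FFT domain.

```python
import numpy as np

def mmse_fde(r, h, noise_var):
    """One-shot MMSE frequency-domain equalizer: noise-regularized
    inversion of the channel, computed entirely with FFTs."""
    N = len(r)
    H = np.fft.fft(h, N)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # MMSE coefficients
    return np.fft.ifft(W * np.fft.fft(r))

# BPSK block through a circular 2-tap channel, then equalize
rng = np.random.default_rng(0)
s = 1 - 2 * rng.integers(0, 2, 64)                  # +/-1 symbols
h = np.array([1.0, 0.5])
r = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, 64)).real
eq = mmse_fde(r, h, noise_var=0.01)
```

Note the toy test assumes a circular channel, which is exactly what a cyclic prefix provides in SC-FDE; the paper's algorithm removes that CP requirement, which this sketch does not capture.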
2015, 37(8): 1957-1963.
doi: 10.11999/JEIT141585
Abstract:
Most interference alignment algorithms assume that the senders know perfect Channel State Information (CSI), but in practical communication systems the CSI is often imperfect due to channel estimation error, delayed feedback, and so on. Therefore, a robust interference alignment algorithm based on QR decomposition is presented. First, QR decomposition is used to preprocess the jointly received signal with CSI error, eliminating half of the interference terms. Then the interference power from each sender to the other receivers is minimized to design the pre-coding matrices, and the Minimum Mean Square Error (MMSE) criterion is utilized to design the interference suppression matrices. Finally, under both perfect-CSI and erroneous-CSI conditions, simulation results verify that the proposed algorithm effectively improves the performance of the system.
2015, 37(8): 1964-1970.
doi: 10.11999/JEIT141442
Abstract:
In order to reduce transponder and spectrum resources in flexible-grid optical networks, a lightpath circle mechanism is studied for many-to-many multicast requests, and an optical grooming method based on distance adaptivity and effective-sharing path awareness is proposed. By designing a distance-adaptive traffic pre-processing strategy, a lightpath circle is constructed according to the distribution characteristics of the member nodes and the distance-adaptive criterion. In the process of routing and spectrum allocation, by constructing a decision matrix oriented to optical grooming and a priority scheduling vector, a multicast request is groomed onto the established traffic with the most effectively shared links. Moreover, appropriate spectrum resources are allocated to the groomed requests to increase the grooming success rate and to save transponder and spectrum resources. Simulation results show that the proposed method can significantly reduce the number of transponders and sub-carriers consumed by traffic.
2015, 37(8): 1971-1977.
doi: 10.11999/JEIT141604
Abstract:
To analyze the immunity of the ZUC stream cipher to correlation power analysis attacks, relevant research is conducted. To improve the pertinence of the attack, a rapid assessment method for attack schemes is presented, and accordingly a correlation power analysis scheme for ZUC is proposed. Finally, the attack scheme is validated on a simulation platform built in an ASIC development environment. Experimental results show that the scheme can successfully recover a 48-bit key, confirming that ZUC cannot resist correlation power analysis and that the proposed assessment method is effective. Compared with Tang Ming's experiment, which conducted a differential power analysis of ZUC with random initial vectors and observed a distinct differential power peak with 5000 initial vectors, the proposed attack scheme uses only 256 initial vectors and obtains better results.
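The general mechanism of a correlation power analysis, independent of ZUC's internals, can be shown on a toy S-box: correlate the Hamming weight of a key-dependent intermediate with the power traces and pick the key guess with the highest correlation. Everything below (the 4-bit PRESENT S-box, the idealized leakage model) is a generic illustration, not the paper's attack.

```python
import numpy as np

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD,
        3, 0xE, 0xF, 8, 4, 7, 1, 2]          # PRESENT 4-bit S-box

def hamming_weight(x):
    return bin(x).count("1")

def cpa_recover_key(plaintexts, traces):
    """For each key guess, correlate predicted Hamming weights of the
    S-box output with the measured traces; best correlation wins."""
    best_key, best_corr = None, -1.0
    for k in range(16):
        hyp = np.array([hamming_weight(SBOX[p ^ k]) for p in plaintexts],
                       dtype=float)
        corr = abs(np.corrcoef(hyp, traces)[0, 1])
        if corr > best_corr:
            best_key, best_corr = k, corr
    return best_key

true_key = 0xA
plaintexts = list(range(16))
# idealized leakage: power proportional to Hamming weight of S-box output
traces = np.array([hamming_weight(SBOX[p ^ true_key]) for p in plaintexts],
                  dtype=float)
recovered = cpa_recover_key(plaintexts, traces)
```

A real attack replaces the idealized traces with measured power samples and attacks the key in small chunks, which is why 256 initial vectors can suffice for a 48-bit key recovered piecewise.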
2015, 37(8): 1978-1983.
doi: 10.11999/JEIT141385
Abstract:
By designing a block construction for each share, which is divided into several blocks according to the number of qualified sets, the secret sharing and recovery algorithms of XOR-based region-incrementing visual cryptography are designed with the encoding matrices of (n, n) XOR-based single-secret-sharing visual cryptography. Compared with existing schemes, the proposed scheme achieves perfect recovery of the decoded regions in the secret image, and the sizes of the shares are also decreased effectively.
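The (n, n) XOR single-secret-sharing primitive underlying the construction is simple to state: n-1 shares are uniformly random and the last is the XOR of the secret with all of them, so XOR-ing all n shares recovers the secret exactly. This perfect-recovery property of XOR (unlike the OR-based stacking of classic visual cryptography) is what the region-incrementing scheme inherits.

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int):
    """(n, n) XOR sharing: any n-1 shares are uniformly random;
    all n shares XOR back to the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recover(shares):
    return reduce(xor_bytes, shares)

row = bytes([0b10110010, 0b01011100])   # one row of a binary secret image
parts = share(row, 4)
restored = recover(parts)
```

The paper's scheme applies such encoding matrices per block of each share, with blocks assigned according to the qualified sets of the access structure.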
2015, 37(8): 1984-1988.
doi: 10.11999/JEIT141506
Abstract:
Using the hardness assumption of Ring-Decision Learning With Errors (Ring-DLWE) on lattices, a new Authenticated Key Exchange (AKE) scheme is proposed based on Peikert's reconciliation technique. In the standard model, the proposed scheme is provably secure in the CK model and additionally achieves weak Perfect Forward Secrecy (wPFS). Compared with current Key Exchange (KE) schemes based on LWE, the proposed scheme not only protects the shared session key with a balanced key derivation function but also resists quantum attacks owing to the hardness assumption on lattice problems.
2015, 37(8): 1989-1993.
doi: 10.11999/JEIT141601
Abstract:
Since the F5 algorithm was proposed, a number of signature-based Gröbner basis algorithms have appeared. They use different selection strategies to build the basis gradually and different criteria to discard as many redundant polynomials as possible. The strategies and criteria should satisfy some general rules for correct termination. Based on these rules, a framework that includes many algorithms as instances is proposed. Using the property of the rewrite basis, a simple proof of the correct termination of the framework is obtained. To simplify the proof for the F5 algorithm, the reduction process is simplified. In particular, for the homogeneous F5 algorithm, its complicated selection strategy is proved equivalent to selecting polynomials with respect to a module order. In this way, the F5 algorithm can be seen as an instance of the framework and admits a rather short proof.
2015, 37(8): 1994-1999.
doi: 10.11999/JEIT141635
Abstract:
The security of the certificateless signature scheme proposed by He et al. (2014) is analyzed, as is the security of the certificateless aggregate signature scheme proposed by Ming et al. (2014). It is pointed out that the Key Generation Center (KGC) can mount a passive attack on He's scheme, and a passive attack and an active attack, respectively, on Ming's schemes. The concrete forgery attacks performed by the KGC are presented, and the likely causes are analyzed. Finally, two improved versions of Ming's schemes are proposed. The improved schemes not only overcome the security problems of the original schemes but also have the advantage that the length of the aggregate signature is fixed.
2015, 37(8): 2000-2006.
doi: 10.11999/JEIT141284
Abstract:
Secure communication is a critical technique for Wireless Sensor Networks (WSNs) to guarantee the security of routing information during transmission, yet most WSN routing protocols have security problems. From the perspective of balanced energy consumption, a balanced-energy Secret Communication Protocol (SCP) is proposed and its secret communication method is introduced. In addition, the security scheme of the protocol is analyzed. Finally, the performance of the protocol is evaluated by simulation. The results show that the protocol performs well in terms of both energy and security.
2015, 37(8): 2007-2013.
doi: 10.11999/JEIT141286
Abstract:
Cloud computing data centers generally consist of a large number of servers connected via a high-speed network. One promising approach to saving energy is to keep just enough servers active in proportion to the system load while switching the remaining servers to idle mode whenever possible; this introduces an operating cost and a switching cost, respectively. The problem of right-sizing the set of active servers to minimize energy consumption (the total operating and switching cost) in data centers is discussed. Firstly, the NP-hard model is established, and the characteristics of the optimal solution when the switching cost is omitted are analyzed. Then, by carefully revising the solution procedure, the recursion is eliminated and an optimal static algorithm with polynomial complexity is obtained. Finally, an online strategy is developed using the worst-case predicted load as a constraint. Simulation results show that the proposed offline and online algorithms adapt to dramatic trends in the external load and continually adjust the proportion of active servers, guaranteeing minimum power consumption with a smooth computing process.
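The offline right-sizing problem described above can be sketched as a small dynamic program: choose the number of active servers per slot so that the load is covered, minimizing operating cost plus switching cost. The cost model below (a unit operating cost per active server, a penalty beta per server powered on) is an illustrative assumption, not the paper's model or its polynomial algorithm.

```python
# Offline right-sizing by dynamic programming over the server count.
def right_size(load, x_max, beta, op_cost=1.0):
    """Minimize sum_t (op_cost * x_t + beta * max(0, x_t - x_{t-1}))
    subject to x_t >= load[t], with 0 <= x_t <= x_max."""
    INF = float("inf")
    # prev[x] = minimum cost so far with x servers active in the last slot
    prev = [INF] * (x_max + 1)
    prev[0] = 0.0                          # all servers idle before time starts
    for demand in load:
        cur = [INF] * (x_max + 1)
        for x in range(demand, x_max + 1):  # x must cover the current load
            for xp in range(x_max + 1):
                if prev[xp] == INF:
                    continue
                c = prev[xp] + op_cost * x + beta * max(0, x - xp)
                if c < cur[x]:
                    cur[x] = c
        prev = cur
    return min(prev)
```

For load [1, 3, 2] with beta = 2 this finds the plan x = (1, 3, 2) with cost 12: operating cost 6 plus switching cost 2·3 for the three servers powered on in total. The O(T·x_max²) program is only a reference point for the offline optimum.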
2015, 37(8): 2014-2020.
doi: 10.11999/JEIT141574
Abstract:
Because the protocol headers of wireless network data are prone to errors, this paper proposes a bit-flip subset-restricted header recovery algorithm after studying the existing one based on the Cyclic Redundancy Check (CRC). A constrained subset centered on the received vector is constructed to narrow the search space by exploiting the confidence information of each bit, overcoming the high complexity of the former header recovery algorithm. Then, a theoretical analysis and experimental verification of the value range of the test-vector length are carried out, combining the wireless signal and wireless channel models. The simulation results show that this method maintains good performance at a low computational cost by adjusting the test-vector length for wireless signals with different Signal-to-Noise Ratios (SNRs).
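A minimal sketch of the subset-restricted bit-flip idea described above: keep only the d least reliable bits of the received header as flip candidates, and test candidates (fewest flips first) against the CRC. The CRC-8 generator polynomial and all parameters below are illustrative assumptions, not the paper's configuration.

```python
# CRC-gated header recovery restricted to the least-reliable bits.
from itertools import combinations

def crc_remainder(bits, poly_bits):
    """Remainder of bits (MSB first) times x^n, modulo the generator."""
    n = len(poly_bits) - 1
    reg = list(bits) + [0] * n
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly_bits):
                reg[i + j] ^= p
    return reg[-n:]                             # all zeros for a valid codeword

def recover_header(received, reliability, poly_bits, d=6):
    """Flip subsets of the d least-reliable bits until the CRC passes."""
    suspects = sorted(range(len(received)), key=lambda i: reliability[i])[:d]
    for k in range(d + 1):                      # try 0 flips, then 1, ...
        for subset in combinations(suspects, k):
            cand = list(received)
            for i in subset:
                cand[i] ^= 1
            if not any(crc_remainder(cand, poly_bits)):
                return cand                     # CRC check passed
    return None                                 # recovery failed
```

Restricting flips to d suspect bits shrinks the search from 2^L candidates to the sum of C(d, k); note that a CRC pass is a necessary but not sufficient condition for a correct header.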
2015, 37(8): 2021-2027.
doi: 10.11999/JEIT141527
Abstract:
For a substrate network with heterogeneous nodes, the energy-optimized virtual network embedding problem is not simply one of minimizing the number of working nodes and links. Load-based energy consumption models of the substrate nodes and links are built, the virtual network embedding problem is formulated as a mathematical model aimed at reducing energy consumption, and an energy-aware heuristic embedding algorithm is proposed. Following the principles of energy optimization and coordination with link mapping, each virtual node is mapped onto the substrate node with the highest comprehensive resource capacity in the node-mapping phase, while the link-mapping phase is based on an energy-aware k-shortest-path algorithm. Simulation results show that the proposed algorithm reduces energy consumption significantly, and the greater the heterogeneity of the substrate nodes, the more obvious the reduction.
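The node-mapping rule described above can be sketched as a greedy pass: each virtual node is placed on a free substrate node with the highest "comprehensive resource capacity", taken here as CPU capacity times total adjacent link bandwidth. The metric and the data layout are illustrative assumptions; the energy-aware k-shortest-path link-mapping phase is not sketched.

```python
# Greedy node mapping by comprehensive resource capacity.
def comprehensive_capacity(node, cpu, links):
    # CPU capacity weighted by the bandwidth of all incident links
    return cpu[node] * sum(bw for (u, v), bw in links.items()
                           if node in (u, v))

def map_nodes(v_cpu, s_cpu, s_links):
    """Map virtual nodes (largest CPU demand first) onto distinct
    substrate nodes, preferring the highest comprehensive capacity."""
    mapping, used = {}, set()
    for vn in sorted(v_cpu, key=v_cpu.get, reverse=True):
        candidates = [sn for sn in s_cpu
                      if sn not in used and s_cpu[sn] >= v_cpu[vn]]
        if not candidates:
            return None                 # embedding fails
        mapping[vn] = max(candidates,
                          key=lambda sn: comprehensive_capacity(sn, s_cpu,
                                                                s_links))
        used.add(mapping[vn])
    return mapping
```

Mapping the most demanding virtual nodes first onto the best-connected substrate nodes leaves short, high-bandwidth paths available for the subsequent link-mapping phase.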
2015, 37(8): 2028-2032.
doi: 10.11999/JEIT141654
Abstract:
Compressive sensing has found preliminary application in the field of Tracking, Telemetry, and Command (TTC) and communication, where it can effectively reduce the sampling and data rates, but there is a conflict between the real-time requirement and the computationally expensive recovery algorithms. In this paper, exploiting the sparsity of Direct Sequence (DS) TTC and communication signals, a compressive-domain Pseudo-Noise (PN) code tracking loop based on a random-demodulation compressive sampler is proposed. The loop extracts the code phase directly from the compressive signal samples, with no need to recover the original signal. Firstly, the loop model and its discrimination characteristics are analyzed. Secondly, the tracking accuracy is analyzed through a study of the cross noise. Theoretical analysis and simulation results show that the proposed loop can track the PN code phase in the compressive domain. The loop may have important application value in DS Spread Spectrum (SS) and DS/Frequency Hopping (FH) Hybrid SS (HSS) signal processing based on compressive sensing.