2016 Vol. 38, No. 10
2016, 38(10): 2415-2422.
doi: 10.11999/JEIT151453
Abstract:
MIMO radar is an emerging radar system with significant potential; it can provide high-resolution, real-time imaging solutions. Because of the sparsity of the observation zone, MIMO radar imaging can be formulated as a sparse signal recovery problem based on Compressed Sensing (CS). In CS-based MIMO radar imaging, existing greedy algorithms such as the Orthogonal Matching Pursuit (OMP) algorithm and the Subspace Pursuit (SP) algorithm suffer from artifacts and low resolution, respectively. To overcome these drawbacks, a Hybrid Matching Pursuit (HMP) algorithm is proposed to combine the strengths of OMP and SP. By exploiting the orthogonality among the selected basis signals and a backtracking strategy for basis-signal reevaluation, the HMP algorithm can reconstruct high-resolution radar images without artifacts. Simulation results demonstrate the effectiveness and superiority of the proposed algorithm.
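The backtracking step described above can be illustrated with a minimal subspace-pursuit-style loop in NumPy. This is only a sketch of the expand, re-fit, and prune idea shared by SP-type methods, not the authors' HMP; the function name and parameters are illustrative assumptions.

import numpy as np

def sp_style_pursuit(A, y, K, n_iter=20):
    # A: (M, N) sensing matrix, y: (M,) measurements, K: sparsity level.
    support = np.array([], dtype=int)
    residual = y.copy()
    for _ in range(n_iter):
        # Expansion: add the K atoms most correlated with the current residual.
        corr = np.abs(A.T @ residual)
        candidates = np.union1d(support, np.argsort(corr)[-K:]).astype(int)
        x_cand, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        # Backtracking (reevaluation): keep only the K strongest atoms.
        keep = candidates[np.argsort(np.abs(x_cand))[-K:]]
        x_keep, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        new_residual = y - A[:, keep] @ x_keep
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break
        support, residual = keep, new_residual
    x = np.zeros(A.shape[1])
    if support.size:
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = coef
    return x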
2016, 38(10): 2423-2429.
doi: 10.11999/JEIT151391
Abstract:
Recently, Compressed Sensing (CS) theory has become a research hotspot in SAR imaging. The Multiple Measurement Vectors (MMV) model of CS theory can effectively represent jointly sparse signals and achieves better performance than the Single Measurement Vector (SMV) model. However, because the SAR range profiles at different pulses have different sparse structures, the MMV model cannot be applied directly to synthetic aperture radar imaging. In this paper, a modified MMV model is proposed for SAR imaging, with the Range Migration (RM) effect embedded into the model. Correspondingly, a modified Orthogonal Matching Pursuit (OMP) algorithm is developed to obtain the high-resolution range profile. Experiments on simulated and measured data demonstrate the validity of the proposed model and algorithm.
2016, 38(10): 2430-2436.
doi: 10.11999/JEIT151163
Abstract:
For multisite radar systems, to address the data transmission rate problem, two kinds of Double Threshold Constant False Alarm Rate (DT-CFAR) detectors, the DT Generalized Likelihood Ratio Test (DT-GLRT) detector and the DT Adaptive Matched Filter (DT-AMF) detector, are proposed based on the GLRT and AMF algorithms. First, the local test statistics that exceed the first threshold are transferred to the fusion center. Then, the global test statistic is formed from the local test statistics, and the final decision is made by comparing it with the second threshold at the fusion center. Closed-form expressions for the probabilities of false alarm and detection of the DT-AMF detector are also given when the Signal to Clutter plus Noise Ratios (SCNRs) are identical across the spatial diversity channels. Simulation results illustrate that the DT-CFAR detectors maintain good performance with a low communication rate.
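The two-stage decision rule can be sketched in a few lines; the summation at the fusion center is an assumed fusion rule for illustration, since the abstract only states that the global statistic is formed from the transmitted local statistics.

import numpy as np

def double_threshold_fusion(local_stats, t1, t2):
    # local_stats: one test statistic per radar site; t1: local (censoring)
    # threshold; t2: global threshold applied at the fusion center.
    transmitted = local_stats[local_stats > t1]   # only these are sent, limiting the data rate
    global_stat = transmitted.sum()               # assumed fusion of the surviving statistics
    return int(global_stat > t2)                  # 1 = detection, 0 = no detection

# Example: 6 spatial channels, two containing target energy
print(double_threshold_fusion(np.array([0.3, 0.5, 4.2, 0.7, 3.8, 0.4]), 1.0, 6.0))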
2016, 38(10): 2437-2444.
doi: 10.11999/JEIT151469
Abstract:
In this paper, a monostatic MIMO radar with a cross array of electromagnetic vector antennas is considered, and a novel algorithm for fast, high-accuracy Two-Dimensional (2D) Direction Of Arrival (DOA) and polarization estimation is proposed. First, given the virtual steering vector of the monostatic MIMO radar, a reduced-dimensional matrix is employed and the high-dimensional received data are transformed into a lower-dimensional signal space via the reduced-dimensional transformation; the Propagator Method (PM) is then used to estimate the corresponding signal subspace by linear operations. Second, the rotational invariance relationship with a long baseline and the polarization vector cross product between the normalized electric and magnetic vectors are used to obtain high-accuracy, unambiguous 2D DOA estimates. The polarization rotational invariance relationship, which is independent of the array geometry, is used to estimate the auxiliary polarization angle and the polarization phase difference. The proposed system extends the array aperture without increasing the number of sensors or the hardware cost, combines the waveform diversity offered by MIMO radar with the polarization diversity offered by vector sensors, and achieves better estimation performance. Meanwhile, through the reduced-dimensional and linear operations, the proposed algorithm obtains a signal-to-noise ratio gain and jointly estimates the high-accuracy 2D DOA and the 2D polarization parameters with automatic pairing, while effectively reducing the dimension of the received data and the computational complexity of parameter estimation. Finally, simulation results verify the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
2016, 38(10): 2445-2452.
doi: 10.11999/JEIT151425
Abstract:
To address the performance loss in MIMO radar waveform design when the target signal is uncertain, a novel joint optimization of transmit waveforms and receive filters is proposed for the case where the target lies in a flat ellipsoidal uncertainty set. First, the constraint on the target impulse response is extended to a flat ellipsoidal uncertainty set, the Lagrange multiplier method is used to solve the optimization problem, and a closed-form solution is obtained under this constraint. Second, to improve the output SINR of the waveform design when the uncertainty set is large, Iterative Robust Minimum Variance Beamforming (IRMVB) is used to obtain a more precise target impulse response. Third, the relationship between the flat ellipsoidal uncertainty set and the spherical uncertainty set is analyzed, and a solution in the form of diagonal loading is derived. Finally, simulation results show that the proposed algorithm has excellent performance and is robust to the uncertainty of the target impulse response.
Resource Allocation Approach in Distributed MIMO Radar with Multiple Targets for Velocity Estimation
2016, 38(10): 2453-2460.
doi: 10.11999/JEIT151452
Abstract:
To improve the velocity estimation accuracy for multiple targets in distributed MIMO radar, this paper analyzes the influence of transmitted power and signal effective time width on the estimation accuracy, and a joint resource allocation algorithm is proposed. First, a criterion minimizing the maximum Cramer-Rao Lower Bound (CRLB) on the mean square error of multi-target velocity estimation is derived, and the corresponding optimization model over transmitted power and signal effective time width is solved by the Sequential Parametric Convex Approximation (SPCA) algorithm. Finally, simulations demonstrate that the velocity estimation accuracy is improved by the proposed algorithm. The results also reveal that the signal effective time width has a greater impact on the velocity estimation accuracy than the transmitted power.
2016, 38(10): 2461-2467.
doi: 10.11999/JEIT151457
Abstract:
A classification algorithm based on the fast density search clustering method is proposed for polarimetric High Resolution Range Profiles (HRRP) of man-made targets. Polarization and frequency features are used to discriminate scattering centers and obtain the feature vectors for target classification. The fast density search clustering method is then applied to classify the man-made targets. Experiments show that the feature vectors describe the structural properties of the targets well and are easily classified. The fast density search clustering method is simple and efficient to operate and can be applied to man-made target classification with excellent performance.
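The fast density search clustering step can be pictured with the standard density-peaks rule: each feature vector gets a local density and a distance to the nearest denser point, and points scoring high on both are taken as cluster centers. A minimal sketch follows; the cutoff distance dc and the number of centers are assumed inputs, not values from the paper.

import numpy as np

def density_peak_centers(X, dc, n_centers):
    # X: (n, d) feature vectors of scattering centers.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    rho = (d < dc).sum(axis=1) - 1                               # local density (cutoff kernel)
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]                       # points with higher density
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return np.argsort(rho * delta)[-n_centers:]                  # indices of cluster centers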
2016, 38(10): 2468-2474.
doi: 10.11999/JEIT160023
Abstract:
Traditional tropospheric delay models and the ray-tracing method have limitations such as inefficiency, high cost, and dependence on radiosonde data and surface parameters. An improved method based on ray tracing is proposed. In this method, the meteorological parameter formulas of the mid-latitude model and the meteorological parameter models of the UNB3m model are combined to modify the computation of the refractive index, which removes the dependence on radiosonde data. Meteorological data from 10 Asian stations in 2012 are analyzed with the ray-tracing technique and with traditional models such as the Hopfield and Saastamoinen models. Slant delays for 15 directions, from the zenith down to low elevation angles, are computed. The results are compared with ray-traced tropospheric slant delays from nearby radiosonde measurements, which demonstrates that the accuracy of the improved ray-tracing method is superior to that of the traditional models, and the proposed method provides a new real-time way to estimate the tropospheric slant delay when meteorological data are unavailable.
2016, 38(10): 2475-2481.
doi: 10.11999/JEIT151462
Abstract:
Ground moving target detection is a major application of multichannel Synthetic Aperture Radar (SAR) systems. In recent years, methods based on Robust Principal Component Analysis (RPCA) have attracted much attention for their good performance in separating differences within a set of correlated data. However, such methods may be disturbed by strong clutter points because of non-ideal factors. Therefore, an algorithm combining RPCA with a shape constraint is proposed in this paper for moving target detection. By estimating the shape information of the moving target from the system parameters, the moving target can be effectively detected and the disturbing points removed at the same time. Experimental data demonstrate the good performance of the proposed method in detecting moving targets against a strong clutter background.
2016, 38(10): 2482-2487.
doi: 10.11999/JEIT151475
Abstract:
To excise transient interference in skywave over-the-horizon radar, conventional time-domain methods proceed in three steps: interference localization, interference blanking, and data restoration. The excision performance depends strongly on the interference localization performance. In practical systems, a constant-threshold detection method is adopted, but it cannot provide reliable localization. Other existing methods require large computational costs and are sensitive to their parameters. To solve these problems, an iteratively censored average detector is proposed. The proposed detector removes interference samples from the iterative estimation procedure and adopts a forward-backward localizing method to ensure reliable localization. Experimental data collected from a trial skywave over-the-horizon radar verify the effectiveness of the proposed method.
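The iterative censoring idea can be sketched as follows; the scale factor and the stopping rule are illustrative assumptions, and the forward-backward localization step of the paper is not reproduced here.

import numpy as np

def censored_average_threshold(x, scale=3.0, n_iter=10):
    # x: magnitudes of slow-time samples in one range-Doppler cell.
    mask = np.ones(len(x), dtype=bool)            # samples currently treated as clean
    thresh = scale * x.mean()
    for _ in range(n_iter):
        if not mask.any():
            break
        thresh = scale * x[mask].mean()           # average over samples not yet censored
        new_mask = x <= thresh                    # censor samples flagged as interference
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return thresh, ~mask                          # threshold and interference locations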
2016, 38(10): 2488-2494.
doi: 10.11999/JEIT151310
Abstract:
The RDNLMS algorithm is used to cancel the Doppler-spreading direct signal and strong echoes in airborne passive radar. The transfer function of the RDNLMS filter is derived, and based on it a non-uniform Doppler extraction method is developed to decrease the computational load while minimizing the performance loss. With this method, the Doppler frequencies of strong echoes are reliably extracted, so strong echoes are cancelled efficiently. In addition, the spacing of the extracted Doppler bins is kept small enough to provide proper suppression of weak echoes. Simulations show that, for a fixed RDNLMS filter order, non-uniform Doppler extraction performs about 2.4 dB better than uniform extraction.
2016, 38(10): 2495-2501.
doi: 10.11999/JEIT151354
Abstract:
The Synthetic Aperture Radar ALtimeter (SARAL) is a new-generation radar altimeter with the best height precision currently available. By using the synthetic aperture technique, the height precision of SARAL is doubled. Based on a study of the height precision of the Conventional Radar Altimeter (CRA) and SARAL, a novel comparison method is developed to process airborne flight experiment data. The precision comparison shows that the height precision of SARAL is twice that of the CRA.
2016, 38(10): 2502-2508.
doi: 10.11999/JEIT160095
Abstract:
At low frequencies, the assumption of independent scattering among the scatterers in a vegetation medium is no longer valid; coherent effects and near-field interactions must be considered. In this paper, a high-order coherent scattering model for vegetation with fractal structure is presented. Fractal theory is employed to generate a realistic 3-D spatial structure of the vegetation. The near-field interaction between scatterers is formulated using an efficient algorithm based on the reciprocity theorem. For the coherent effect, every scatterer with a deterministic location is taken into account. The main scattering mechanisms are defined in the manner of a layered vegetation model, allowing a better understanding of microwave interaction with the trunk-crown structure. Good agreement is obtained between the theoretical predictions and multifrequency, multipolarization measurements of boreal forest. Using extensive ground truth data, a theoretical analysis of the contributions of the scattering mechanisms for various frequencies, incidence angles, and vegetation structures is carried out. It is found that under specified conditions the vegetation scattering model can be simplified according to the dominant scattering mechanism, which can be applied to the inversion problem.
2016, 38(10): 2509-2514.
doi: 10.11999/JEIT160208
Abstract:
Based on the properties that the scene radiance has high contrast and the atmospheric veil is locally smooth, a novel single hazy image restoration method based on nonlocal total variation regularization is proposed in this paper. To obtain the atmospheric veil of a hazy image, a constrained nonlocal total variation regularization is first applied. Then, the accurate atmospheric veil is estimated using a nonlocal Rudin-Osher-Fatemi model, which is solved by a modified split Bregman method. Experimental results demonstrate that the proposed approach can recover the scene radiance from a single hazy image effectively, especially in regions with multiple textures.
2016, 38(10): 2515-2522.
doi: 10.11999/JEIT151343
Abstract:
To address the problem that the base classifiers of a ternary Error Correcting Output Codes (ECOC) matrix contain no prior information about the classes ignored in the binary splits, a new recoding ECOC based on the Receiver Operating Characteristic (ROC) curve is presented. To recode the ternary matrix, the two thresholds of the reject region are obtained from the ROC curve to build the optimal classifiers. The optimal classifiers are then used to classify the ignored classes according to the bipartition in the training phase. In this way, the classical two-symbol output is extended to three symbols to recode the zeros. Finally, the Hamming decoding strategy is adopted for the decision in decoding. This method avoids retraining and can be applied to any ternary matrix. Experiments on synthetic and UCI datasets validate the improved efficiency and notable performance gain of the proposed approach without increasing the training complexity.
2016, 38(10): 2523-2530.
doi: 10.11999/JEIT151426
Abstract:
An efficient Coding Unit (CU) decision algorithm is proposed for depth intra coding, in which the depth level of a CU is predicted from Corner Points (CP) and the co-located texture CU. More specifically, the CPs are first obtained by a corner detector in conjunction with the quantization parameter and are used to pre-allocate the depth level. The pre-allocated depth level is then refined by considering the block partition of the co-located texture. Finally, different depth search ranges are selected based on the final pre-allocated depth levels. Simulation results show that the proposed algorithm provides about 63% time saving while maintaining coding performance compared with the original 3D-HEVC method. It also achieves about 13% time saving, with about a 3% BD-rate reduction, over the CU decision method that considers only the texture information.
2016, 38(10): 2531-2537.
doi: 10.11999/JEIT151433
Abstract:
Generalized principal component analysis plays an important role in many fields of modern signal processing. However, few algorithms can extract the generalized principal component adaptively. In this paper, a generalized principal component extraction algorithm with fast convergence speed is proposed. The corresponding Deterministic Discrete Time (DDT) system of the proposed algorithm is analyzed, and conditions on the learning rate and initial weight vector are obtained. Finally, computer simulations and practical application results show that, compared with some existing algorithms, the proposed algorithm has faster convergence speed and higher estimation accuracy.
2016, 38(10): 2538-2545.
doi: 10.11999/JEIT151422
Abstract:
The Forward-Backward Pursuit (FBP) algorithm, a novel two-stage greedy approach, has received wide attention owing to its high reconstruction accuracy and the fact that it requires no prior information on the sparsity. However, FBP requires a longer run time to achieve higher precision. To alleviate this drawback, this paper proposes the Acceleration Forward-Backward Pursuit (AFBP) algorithm based on Compressed Sensing (CS). To reduce the number of iterations, the algorithm exploits the information available in the support estimate to add the deleted atoms again. The run time of AFBP is sharply shorter than that of FBP, while its precision is not lower than that of FBP. The efficacy of the proposed scheme is demonstrated by simulations using random sparse signals with different nonzero coefficient distributions and a sparse image.
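For reference, the baseline FBP loop that AFBP accelerates looks roughly like this (standard FBP with forward size alpha and backward size beta; this sketch does not include the AFBP re-adding of deleted atoms):

import numpy as np

def forward_backward_pursuit(A, y, alpha=5, beta=4, tol=1e-6, max_iter=100):
    support = np.array([], dtype=int)
    residual = y.copy()
    x_s = np.array([])
    for _ in range(max_iter):
        corr = np.abs(A.T @ residual)
        corr[support] = 0                                  # consider only new atoms
        forward = np.argsort(corr)[-alpha:]                # forward step: add alpha atoms
        cand = np.union1d(support, forward).astype(int)
        x_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(x_c))[beta:]]     # backward step: drop beta weakest
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < tol * np.linalg.norm(y):
            break
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

Because the support grows by only alpha - beta atoms per iteration, many iterations and least-squares solves are needed for less sparse signals, which is the run-time cost that AFBP targets.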
2016, 38(10): 2546-2552.
doi: 10.11999/JEIT151445
Abstract:
To address the low recognition accuracy of Continuous Phase Modulation (CPM), which is nonlinear and has memory, a new maximum likelihood modulation recognition approach using a memory factor is proposed in this paper. The approach defines a mapping symbol with the time-homogeneous Markov property and generates the memory factor by calculating the posterior probability of the mapping symbol. Then, combined with CPM decomposition and the EM algorithm, a likelihood function that is separable in time and allows channel parameter estimation is derived for CPM signals. The proposed approach requires few symbols, works over a wide range of SNR, recognizes a large variety of CPM signals, achieves high recognition accuracy, and is strongly robust to phase error. Simulation results show that the recognition rate for 8 kinds of CPM signals exceeds 95% when the symbol number is 200, the SNR is 0 dB, and the phase error is arbitrary.
2016, 38(10): 2553-2559.
doi: 10.11999/JEIT151429
Abstract:
To transfer information effectively, reliably, and securely in resource-constrained networks such as deep space and mobile communications, a joint source-channel security arithmetic coding method controlled by chaotic keys is proposed. During encoding, the first chaotic map allocates the probabilities of multiple forbidden symbols in the arithmetic code, combining the error detection of channel coding with the disorder of key streams; meanwhile, the second chaotic map controls the source symbols in the arithmetic code, combining source coding with information security. Simulation results show that the proposed method not only achieves a 0.4 dB signal-to-noise ratio gain over existing similar arithmetic codes at the same error rate, but also provides high reliability and security.
2016, 38(10): 2560-2567.
doi: 10.11999/JEIT151438
Abstract:
A fast null-tracking pattern synthesis algorithm based on jammer subspace orthogonal projection is proposed, which can suppress dynamic active jamming for LEO spaceborne array antennas. The algorithm corrects the null positions of the radiation pattern synchronously through dynamic jammer subspace updating and iterative orthogonal projection, while the Iterative Fourier Transform (IFT) technique is adopted to accelerate the correction. The proposed algorithm maintains the mainlobe region and controls the dynamic range ratio of the excitations robustly and precisely, while minimizing the pattern sidelobes adaptively, so it is suitable for online real-time computation in spaceborne array antennas. Simulation results verify the rapidity, effectiveness, and robustness of the proposed algorithm.
2016, 38(10): 2568-2574.
doi: 10.11999/JEIT151470
Abstract:
In wireless communication networks, the channel state information is complicated. A joint Hierarchical Modulation and Physical-layer Network Coding (HM-PNC) scheme is proposed for the asymmetric Two-Way Relay Channel (TWRC). In this scheme, the two source nodes and the relay node adopt hierarchical modulation (2/4-PSK). At the relay node, a special demodulation/modulation and PNC mapping rule is designed. Under Additive White Gaussian Noise (AWGN), expressions for the relay Bit Error Ratio (BER) and the end-to-end BER are derived. Simulation results show that the HM-PNC scheme not only improves the data rate on the better link, but also ensures the transmission reliability of the poorer channel. Compared with the traditional QPSK-PNC scheme, the HM-PNC scheme performs better in mobile scenarios.
2016, 38(10): 2575-2581.
doi: 10.11999/JEIT160053
Abstract:
To improve the secrecy performance of relay networks in the presence of an eavesdropper, the Artificial Noise Precoding (ANP) and Eigen-Beamforming (EB) secure transmission schemes are applied at a multiple-antenna amplify-and-forward relay, and new tight closed-form expressions of the Ergodic Achievable Secrecy Rate (EASR) for the two schemes are derived. The lower bound of the EASR for ANP is derived for a large antenna array at the relay, and its asymptotic performance is investigated in the high-SNR and low-SNR regimes to reveal valuable insights. Analysis and simulation results show that, in the moderate-to-high SNR regime, ANP achieves a remarkable performance gain over EB, while in the low SNR regime, EB outperforms ANP. Moreover, in the high SNR regime, it is optimal to allocate about half of the total power to the artificial noise for ANP.
2016, 38(10): 2582-2589.
doi: 10.11999/JEIT151478
Abstract:
In Video on Demand (VoD) applications, it is desirable that the encrypted multimedia data remain partially perceptible after encryption in order to stimulate purchases of the high-quality versions of the multimedia products. This perceptual encryption requires specific algorithms for encrypting the video data. In this paper, a Context-based Adaptive Binary Arithmetic Coding (CABAC) video perceptual encryption scheme with controllable video quality is designed. The important syntax elements and sensitive coded elements are encrypted by XOR operations with stream ciphers generated by a 2D hyper-chaotic system. The scheme encrypts the signs of the Motion Vector Differences (MVD), the signs of the non-zero coefficients, and significant_coeff_flag. Theoretical analysis and experimental results show that the proposed scheme has no impact on the bit rate. While the encoding time increases, the video quality can be controlled by changing the quality factor, which meets the requirement of video perceptual encryption.
2016, 38(10): 2590-2597.
doi: 10.11999/JEIT151400
Abstract:
With the development of cognitive radio technology, the requirements on spectrum sensing performance are becoming increasingly stringent, especially in low Signal-to-Noise Ratio (SNR) environments. A Dynamic Double-threshold Energy sensing method based on a Markov Model (DDEMM) is proposed in this paper. Following the double-threshold energy sensing approach, a modified Markov model that accounts for the time-varying nature of channel occupancy is presented to resolve the ambiguous channel state. Furthermore, to overcome the effect of noise uncertainty, a dynamic double-threshold spectrum sensing method is proposed that adjusts its thresholds according to the achievable maximal detection probability. Extensive simulation results demonstrate that the proposed DDEMM achieves better detection performance than conventional double-threshold energy sensing schemes, especially in the very low SNR region.
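A minimal sketch of the double-threshold decision is given below; the fixed prior probability stands in for the Markov-model prediction used in DDEMM, and the threshold values are assumed inputs rather than the dynamically adjusted ones of the paper.

import numpy as np

def double_threshold_energy_sense(samples, lam_low, lam_high, p_busy_prior=0.5):
    energy = np.mean(np.abs(samples) ** 2)   # energy test statistic
    if energy >= lam_high:
        return 1                             # confidently occupied
    if energy <= lam_low:
        return 0                             # confidently idle
    return int(p_busy_prior >= 0.5)          # ambiguous region: fall back on occupancy prior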
2016, 38(10): 2598-2604.
doi: 10.11999/JEIT151459
Abstract:
In Orthogonal Frequency Division Multiple Access (OFDMA) based cellular networks, the statistical characteristics of the Inter-Cell Interference (ICI) are closely related to network performance. There is no closed-form expression for the Cumulative Distribution Function (CDF) of the ICI. A Gaussian Mixture Model (GMM), whose parameters can be computed explicitly, is proposed to approximate the distribution of the downlink ICI. Using the GMM, the CDF of the ICI is then approximated as a weighted sum of error functions. Simulation verifies the accuracy of the GMM and shows that the GMM-based CDF approximates the CDF of the ICI well.
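The weighted-sum-of-error-functions form is easy to write down: for mixture weights w_k, means mu_k and standard deviations sigma_k, F(x) = sum_k w_k * 0.5 * (1 + erf((x - mu_k) / (sigma_k * sqrt(2)))). A small sketch with illustrative parameter values, not fitted ICI statistics:

from math import erf, sqrt

def gmm_cdf(x, weights, means, stds):
    # CDF of a Gaussian mixture expressed through error functions.
    return sum(w * 0.5 * (1.0 + erf((x - m) / (s * sqrt(2.0))))
               for w, m, s in zip(weights, means, stds))

# Example: two-component mixture with illustrative parameters
print(gmm_cdf(1.0, weights=[0.7, 0.3], means=[0.0, 2.0], stds=[1.0, 0.5]))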
2016, 38(10): 2605-2611.
doi: 10.11999/JEIT151460
Abstract:
Rational spectrum resource allocation is one of the goals of Cognitive Radio (CR) technology. With the rapid increase in the number of Secondary Users (SUs), precise real-time management becomes more and more difficult. To solve this problem, a hierarchical Cognitive Radio Network (CRN) architecture is proposed, in which several administration entities provide spectrum services for users of various tiers. A corresponding resource allocation algorithm based on stable matching in this architecture is also given. The algorithm guarantees the restriction on the SUs' transmission power imposed by the Primary Users (PUs) and considers the utility functions of both types of users. Simulation results demonstrate that the proposed method achieves nearly the same performance as the optimal solution with lower computational complexity and system delay.
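The stable-matching primitive such an allocation builds on can be illustrated with classical Gale-Shapley deferred acceptance; in the paper the preference lists come from the users' utility functions, whereas here they are assumed inputs, and this sketch is not the paper's exact algorithm.

def deferred_acceptance(su_prefs, ch_prefs):
    # su_prefs: {SU: [channels in decreasing preference]}
    # ch_prefs: {channel: [SUs in decreasing preference]}
    # Assumes complete preference lists and at least as many channels as SUs.
    free = list(su_prefs)                        # unmatched secondary users
    next_pick = {su: 0 for su in su_prefs}       # next channel each SU will propose to
    rank = {c: {su: i for i, su in enumerate(p)} for c, p in ch_prefs.items()}
    match = {}                                   # channel -> SU
    while free:
        su = free.pop(0)
        ch = su_prefs[su][next_pick[su]]
        next_pick[su] += 1
        if ch not in match:
            match[ch] = su                       # free channel accepts the proposal
        elif rank[ch][su] < rank[ch][match[ch]]:
            free.append(match[ch])               # channel prefers the new proposer
            match[ch] = su
        else:
            free.append(su)                      # rejected: SU proposes to its next choice
    return match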
2016, 38(10): 2612-2618.
doi: 10.11999/JEIT160032
Abstract:
The physical layer secret key capacity is affected by factors such as additive noise, the time difference of channel sampling, the terminal moving speed, the sampling period, and the number of samples. Their effects on the secret key capacity are analyzed quantitatively for a single-input single-output wireless channel in a uniform scattering environment. Specifically, a closed-form expression for the secret key capacity is derived to determine the constraints on the optimal sampling period. Analysis and simulation results reveal that the results also apply to non-uniform scattering environments. Furthermore, the feasibility of using physical layer secret key extraction techniques in mobile communication systems is verified.
2016, 38(10): 2619-2626.
doi: 10.11999/JEIT151443
Abstract:
The security of the hierarchical identity-based authenticated key agreement scheme proposed by CAO et al. (2014) is cryptanalyzed. First, it is pointed out that the scheme is not completely secure against the basic impersonation attack. Then, the process and the reasons for the attack are described. Finally, an improved scheme that fixes the security flaws is proposed based on the hierarchical identity-based encryption scheme of BONEH et al. (2005). A security proof of the proposal is presented in the BJM model. The computational efficiency of the proposed scheme is nearly equivalent to that of CAO et al.'s scheme.
2016, 38(10): 2627-2632.
doi: 10.11999/JEIT151476
Abstract:
To solve the problem that the Sybil attack undermines the uniqueness of node identity in ZigBee networks, an adaptive link fingerprint authentication scheme against Sybil attacks is proposed. First, a link fingerprint based on the characteristics of the wireless link is designed. Based on this fingerprint, two algorithms are presented: an estimation algorithm for the coherence time reflecting channel quality, and a dynamic application algorithm for the Guaranteed Time Slot (GTS) adapting to changes in the number of child nodes. The authentication procedure against the Sybil attack is also presented. Security analysis and experimental results show that the node authentication rate of the proposed scheme can exceed 97% within the security boundary of the communication environment. Owing to the use of the link fingerprint, the scheme has low resource requirements.
2016, 38(10): 2633-2639.
doi: 10.11999/JEIT160015
Abstract:
To overcome the shortcomings in security and privacy of existing handover authentication protocols for vehicular networks, an improved scheme based on the Lightweight Identity Authentication Protocol (LIAP) is proposed in this paper. First, the terminal's pseudo-identity is concatenated with a random number, and the quadratic residue operation is used to encrypt the concatenated information and generate a dynamic identity, which protects the user's location privacy. Meanwhile, the new road side unit regenerates a new session secret sequence and computes the challenge sequence with the terminal user's pseudo-identity by XOR encryption, which protects against parallel session attacks during the handover process. Theoretical analysis and experiments show that the proposed protocol not only meets the security requirements of terminal anonymity and resistance to various attacks, but also achieves a faster switching speed. Therefore, the improved protocol shows obvious advantages over most existing schemes.
2016, 38(10): 2640-2646.
doi: 10.11999/JEIT151344
Abstract:
Implementing file fault tolerance is key to preventing data loss in the cloud. However, cloud storage service providers may not deliver the committed level of fault tolerance, so users may suffer data loss and economic loss. Existing algorithms for testing data fault tolerance in the cloud have disadvantages such as vulnerability to pre-fetch spoofing attacks, low efficiency, and poor practicality, and they cannot detect the misbehavior of cloud storage providers with a guaranteed probability. To deal with these problems, a remote testing algorithm for data fault tolerance in the cloud, named DRST (Difference of Random and Sequential access Time), is designed by exploiting the difference between sequential and random access. The core idea is that reading the blocks of a file stored in order on one disk takes much less time than reading the blocks of a file scattered randomly across different disks. A rigorous theoretical proof and an in-depth performance analysis of the proposed scheme are carried out. The results show that the proposed scheme can accurately detect whether the cloud storage provider supplies clients with the committed level of fault tolerance; moreover, it is much more efficient than existing schemes.
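A minimal sketch of the core timing comparison, assuming the verifier can issue block reads and time them; a local file stands in for the remote block-read primitive, and the block size and decision threshold are assumptions rather than the paper's parameters.

import random, time

BLOCK = 4096

def timed_reads(path, offsets):
    """Time reading one block at each byte offset (illustrative local stand-in
    for the remote block-read primitive a real verifier would use)."""
    start = time.perf_counter()
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            fh.read(BLOCK)
    return time.perf_counter() - start

def drst_check(path, num_blocks, ratio_threshold=2.0):
    """Core DRST comparison: blocks laid out in order on one disk are read much
    faster than blocks scattered over different disks, so the verifier compares
    a sequential read pattern against a randomly ordered one."""
    sequential = [i * BLOCK for i in range(num_blocks)]
    shuffled = sequential[:]
    random.shuffle(shuffled)
    t_seq = timed_reads(path, sequential)
    t_rand = timed_reads(path, shuffled)
    # Under the abstract's premise, a large random/sequential ratio is consistent
    # with blocks spread over different disks; a ratio near 1 suggests a single
    # ordered copy rather than the committed distribution.
    return t_rand / t_seq >= ratio_threshold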
2016, 38(10): 2647-2653.
doi: 10.11999/JEIT151448
Abstract:
To improve the recovery quality of the Region Incrementing Visual Cryptography Scheme (RIVCS), an XOR-based single-secret-sharing Visual Cryptography Scheme (XVCS), in which each participant holds multiple shares, is designed by adding identities to the shares and combining them with random numbers. On this basis, the secret sharing and recovery algorithms for XOR-based RIVCS (XRIVCS) are designed. The regions to be decrypted are shared with XVCS, while the regions not to be decrypted are filled with random numbers to keep the secret. Experimental results show that the proposed scheme achieves perfect recovery of the decrypted regions and effectively reduces storage and transmission cost.
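The XOR-based sharing itself can be illustrated with a minimal (n, n) sketch for one secret region: n-1 shares are uniformly random and the last share is the secret XOR-ed with all of them, so combining all n shares recovers the region perfectly, while regions not to be decrypted would simply be filled with random values. Share identities, region handling, and the full XRIVCS construction are not modeled here.

import numpy as np

def xor_share_region(secret_region, n):
    """(n, n) XOR sharing of a binary region: any subset of fewer than n shares
    leaves the region hidden; XOR of all n shares recovers it exactly."""
    rng = np.random.default_rng()
    shares = [rng.integers(0, 2, size=secret_region.shape, dtype=np.uint8)
              for _ in range(n - 1)]
    last = secret_region.copy()
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

def xor_recover(shares):
    """Perfect recovery: XOR all shares together."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

secret = np.random.default_rng(0).integers(0, 2, size=(4, 4), dtype=np.uint8)
shares = xor_share_region(secret, n=3)
assert np.array_equal(xor_recover(shares), secret)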
2016, 38(10): 2654-2659.
doi: 10.11999/JEIT151485
Abstract:
Network virtualization is widely deployed in network experiment platforms and data center networks. As a key piece of networking equipment in virtualized environments, a virtual router can host many virtual router instances to run different virtual networks. The key problem for a virtual router is how to schedule packets to the different virtual instances according to the bandwidth requirements of the virtual networks. In this article, the scheduling problem is modeled and a dynamic weighted scheduling algorithm is proposed. Experimental results show that the proposed algorithm outperforms the miDRR algorithm in terms of efficiency and fairness.
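The paper's dynamic weighted scheduler is not specified in the abstract, so the sketch below only illustrates the general idea with a weighted deficit-round-robin loop: each virtual router instance is served in proportion to a quantum derived from its virtual network's bandwidth requirement, and the quanta can be recomputed when requirements change. The class and parameter names are assumptions.

from collections import deque

class WeightedDRRScheduler:
    """Deficit round robin over per-instance packet queues, with quanta
    proportional to each virtual network's bandwidth requirement."""
    def __init__(self, bandwidth_req, base_quantum=1500):
        self.queues = {vid: deque() for vid in bandwidth_req}
        self.deficit = {vid: 0 for vid in bandwidth_req}
        self.update_weights(bandwidth_req, base_quantum)

    def update_weights(self, bandwidth_req, base_quantum=1500):
        """Dynamic part: recompute quanta when bandwidth requirements change."""
        total = sum(bandwidth_req.values())
        self.quantum = {vid: max(1, int(base_quantum * bw / total))
                        for vid, bw in bandwidth_req.items()}

    def enqueue(self, vid, packet_len):
        self.queues[vid].append(packet_len)

    def schedule_round(self):
        """One DRR round; returns the (instance, packet_length) pairs served."""
        served = []
        for vid, q in self.queues.items():
            if not q:
                self.deficit[vid] = 0
                continue
            self.deficit[vid] += self.quantum[vid]
            while q and q[0] <= self.deficit[vid]:
                pkt = q.popleft()
                self.deficit[vid] -= pkt
                served.append((vid, pkt))
        return served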
Energy-Aware Virtual Network Embedding Using a Particle Swarm Optimization Algorithm Based on Adaptive Co-evolution
2016, 38(10): 2660-2666.
doi: 10.11999/JEIT151434
Abstract:
A novel adaptive co-evolutionary particle swarm optimization algorithm is presented for the energy-aware virtual network embedding problem. First, a polymerization degree is designed and used to adaptively select the search method, namely variation search, internal search, or external search. Second, the algorithm adaptively decides whether to terminate the search process of a particle swarm according to the evolution result. Extensive simulations in a common test environment compare energy consumption, and the results demonstrate the efficiency of the proposed algorithm.
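The abstract does not define the polymerization degree or the three search operators precisely, so the following is only a rough, hypothetical sketch: a standard particle swarm minimizes an assumed energy-cost function, the mean distance of the swarm to its global best stands in for the polymerization degree and switches the update between explorative and exploitative behaviour, and the swarm terminates early once it has collapsed onto the best solution.

import numpy as np

def adaptive_pso(energy_cost, dim, n_particles=20, max_iter=200, seed=0):
    """Toy adaptive PSO: the update style and early termination are driven by a
    swarm-aggregation measure standing in for the paper's polymerization degree."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([energy_cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(max_iter):
        aggregation = np.mean(np.linalg.norm(x - g, axis=1))  # "polymerization degree"
        if aggregation < 1e-3:          # swarm has converged: adaptive termination
            break
        w = 0.9 if aggregation > 0.1 else 0.4  # explore while spread, exploit when tight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        f = np.array([energy_cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, energy_cost(g)

# Stand-in energy function (proxy for the number of active substrate nodes)
best, cost = adaptive_pso(lambda p: np.sum(p > 0.5) + np.sum(p**2), dim=10)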
2016, 38(10): 2667-2673.
doi: 10.11999/JEIT151437
Abstract:
A novel non-iterative method, the unitary matrix pencil method, is presented in this paper for the pattern synthesis of sparse linear arrays. Through a unitary transformation of the centro-Hermitian matrix constructed from samples of the desired pattern, an equivalent real-valued matrix pencil is obtained, which relates the non-uniform element positions to new generalized eigenvalues. Then, a lower-order left singular vector matrix is obtained by discarding the non-principal singular values produced by the Singular Value Decomposition (SVD) of the real-valued matrix, from which the element positions and excitations are estimated efficiently. Compared with other algorithms, this method directly yields real-valued solutions for the sparse element positions. Furthermore, the SVD and Eigenvalue Decomposition (EVD) are computed in the real-valued domain, at lower computational cost. Simulation results validate the efficiency of the proposed synthesis method for designing arbitrary linear array patterns with fewer antenna elements.
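As a reference point, the sketch below implements the classical complex-valued matrix pencil method for recovering element positions and excitations from uniform samples of a desired pattern; the unitary transformation to a real-valued pencil and the SVD-based rank reduction that give the paper its efficiency advantage are not reproduced here, and the sampling step and wavenumber are assumed.

import numpy as np

def matrix_pencil_synthesis(f, du, k0, M):
    """Recover M element positions x_k and excitations a_k from samples
    f[m] = sum_k a_k * exp(1j*k0*x_k*m*du) of the desired pattern
    (classical complex matrix pencil; the paper uses a real-valued unitary variant)."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    L = N // 2                                            # pencil parameter
    Y = np.array([f[i:i + L + 1] for i in range(N - L)])  # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Nonzero eigenvalues of pinv(Y1) @ Y2 are the poles z_k = exp(1j*k0*x_k*du)
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = z[np.argsort(-np.abs(z))][:M]                     # keep the M dominant poles
    x = np.angle(z) / (k0 * du)                           # element positions
    A = np.exp(1j * k0 * np.outer(np.arange(N) * du, x))
    a, *_ = np.linalg.lstsq(A, f, rcond=None)             # excitations by least squares
    return x, a

# Check: a 3-element sparse array reproduces its own noiseless pattern samples
k0, du = 2 * np.pi, 0.05
x_true, a_true = np.array([-1.3, 0.0, 2.1]), np.array([1.0, 0.7, 0.5])
m = np.arange(40)
f = (a_true * np.exp(1j * k0 * np.outer(m * du, x_true))).sum(axis=1)
x_est, a_est = matrix_pencil_synthesis(f, du, k0, M=3)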
2016, 38(10): 2674-2680.
doi: 10.11999/JEIT160003
Abstract:
The X-band pulsed space Traveling Wave Tube (TWT) is mainly used in radar systems such as lightweight SAR, which require high power, high efficiency, and high reliability. The output structure is an important part of the TWT, and its reliability affects not only the output power but also the stability and reliability of the whole tube. In this paper, the reliability of the output structure of an X-band pulsed space TWT is studied. Thermal and structural reliability is analyzed by means of multi-physics coupling, including the electrical and magnetic fields, and the shock resistance of the output structure is improved according to the analysis results. Furthermore, over more than 1000 hours of aging and space environmental testing, the output structure shows high reliability, meeting the space-environment test and usage requirements.
2016, 38(10): 2681-2688.
doi: 10.11999/JEIT160178
Abstract:
The memristor is a resistive device with memory, and current research hotspots and difficulties lie in new memristor models and their applications. A novel magnetically controlled memristor model based on the hyperbolic sine function is designed, and its voltage-current trajectory is shown to be consistent with that of a typical memristor. A new memristive chaotic system built on the proposed memristor model is then presented, and its phase trajectories, bifurcation diagram, and Lyapunov exponent spectrum are obtained through numerical simulation. In addition, the new system is simulated with the Multisim circuit simulation software, and both the numerical and circuit simulation results validate the proposed equivalent circuit realization. Finally, the chaotic sequences generated by the new system are used to scramble pixel positions to protect image information. Analysis of adjacent-pixel correlation, histogram statistics, information entropy, attack resistance, and key sensitivity of the encrypted image indicates that the new memristive chaotic system has clear advantages over existing chaotic systems for image encryption, with high security performance.
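The pixel-position scrambling step can be illustrated with a short sketch. Since the paper's memristive chaotic system cannot be reproduced from the abstract, a logistic map stands in as the chaotic sequence generator, and its initial condition and parameter play the role of the key; everything else (function names, transient length) is an assumption.

import numpy as np

def chaotic_permutation(length, x0=0.4, mu=3.99, skip=200):
    """Derive a permutation from a chaotic sequence (logistic map as a
    stand-in for the paper's memristive chaotic system)."""
    x, seq = x0, []
    for i in range(skip + length):
        x = mu * x * (1.0 - x)
        if i >= skip:
            seq.append(x)
    return np.argsort(seq)              # sorting order of the chaotic values

def scramble(image, x0=0.4, mu=3.99):
    flat = image.flatten()
    perm = chaotic_permutation(flat.size, x0, mu)
    return flat[perm].reshape(image.shape), perm

def unscramble(cipher, perm):
    flat = np.empty_like(cipher.flatten())
    flat[perm] = cipher.flatten()       # invert the permutation
    return flat.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc, perm = scramble(img)
assert np.array_equal(unscramble(enc, perm), img)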
2016, 38(10): 2689-2694.
doi: 10.11999/JEIT151416
Abstract:
There is increasing interest in hardware support for decimal arithmetic due to the demand for high-accuracy computation in commercial computing, financial analysis, and other applications; new specifications for decimal floating-point arithmetic were added to the revised IEEE 754-2008 standard. In this paper, the algorithm and architecture of decimal addition are studied comprehensively. A decimal adder is designed using a parallel-prefix/carry-select architecture, in which the parallel-prefix unit is used to optimize the decimal carry-select adder. The decimal adder is implemented in Verilog HDL and simulated with ModelSim. Synthesis results obtained with Design Compiler under the Nangate Open Cell 45 nm library are also given and analyzed. The results show that the delay of the proposed circuit is improved by up to 12.3%.
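A behavioural software model of the digit-level decimal addition being accelerated (the classic add-6 correction per BCD digit) may help clarify what the hardware computes; the parallel-prefix carry network and carry-select structure that provide the reported speed-up are not modelled by this sketch.

def bcd_add(a_digits, b_digits):
    """Add two BCD operands given as lists of decimal digits, least-significant
    digit first. Each digit position applies the classic +6 correction when
    the binary sum of the digits exceeds 9."""
    assert len(a_digits) == len(b_digits)
    carry, result = 0, []
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        if s > 9:
            s = (s + 6) & 0xF   # +6 correction; keep the low nibble
            carry = 1
        else:
            carry = 0
        result.append(s)
    return result, carry

# 758 + 496 = 1254, digits given least-significant first
digits, cout = bcd_add([8, 5, 7], [6, 9, 4])
print(digits, cout)   # [4, 5, 2] with carry-out 1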
2016, 38(10): 2695-2700.
doi: 10.11999/JEIT151350
Abstract:
Zhang et al. (2015) proposed two certificateless aggregate signature schemes and claimed that both are provably secure in the random oracle model. This paper analyzes the security of the two schemes and shows that the first scheme resists attacks by Type 1 and Type 2 adversaries, whereas the second scheme does not. Concrete forgery attacks are presented, and the validity of the signatures forged by the attackers is proved. The causes of the forgery attacks on the second scheme are analyzed, and a modified scheme is proposed.