2020 Vol. 42, No. 11
2020, 42(11): 2573-2578.
doi: 10.11999/JEIT190228
Abstract:
Anti-interference technology in wireless communication is of great significance to the stability and security of communication. As an important part of anti-interference technology, interference recognition is a research hotspot. An interference recognition method based on singular value decomposition and a neural network is proposed. The method uses only the singular values of the signal matrix as features; compared with traditional methods, it avoids the computational cost of extracting multiple spectral features. Simulation results show that the recognition accuracy of the proposed method is 10%~25% higher than that of the traditional method at a jamming-to-signal ratio of 0 dB.
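For illustration, a minimal Python sketch of the singular-value feature extraction described above; the matrix shape and snapshot length are assumptions, not the paper's exact configuration:

```python
import numpy as np

def singular_value_features(samples, rows=32, cols=32):
    # Reshape one received snapshot into a matrix and keep only its
    # singular values as the feature vector; no spectral features needed.
    x = np.asarray(samples)[:rows * cols].reshape(rows, cols)
    return np.linalg.svd(x, compute_uv=False)

# Hypothetical usage: one 1024-sample snapshot of a jammed signal.
snapshot = np.random.randn(1024) + 1j * np.random.randn(1024)
features = singular_value_features(snapshot)  # 32 real singular values
```

The resulting feature vector would then be fed to the neural network classifier in place of hand-crafted spectral features.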
2020, 42(11): 2579-2586.
doi: 10.11999/JEIT190455
Abstract:
To address the low accuracy and large aperture loss of existing DOA estimation algorithms for coherent signals with L-shaped arrays, a method named L-shaped array Principal-singular-vector Utilization for Modal Analysis (L-PUMA) and its modified version, L-shaped array Modified PUMA (L-MPUMA), are proposed. The L-PUMA algorithm first denoises the cross-covariance matrix, then obtains the two-dimensional principal singular vectors by singular value decomposition, and then estimates the coefficients of the linear prediction polynomial by weighted least squares; the roots of the linear prediction equation give the DOA estimates of the signals. Finally, a new pairing algorithm is proposed to pair elevation with azimuth. The L-MPUMA algorithm uses the inverse conjugate transform to obtain an augmented principal singular vector, which further improves data utilization and overcomes the severe performance degradation of L-PUMA when the signals are completely coherent. Simulation experiments verify the effectiveness of the proposed algorithms.
2020, 42(11): 2587-2591.
doi: 10.11999/JEIT190521
Abstract:
To deal with wideband Direction Of Arrival (DOA) estimation in the presence of impulsive noise and co-channel interference, a novel method is proposed based on Cyclic CorrEntropy (CCE) and sparse reconstruction. Firstly, the received signal model of wideband sources is analyzed and a virtual array output is constructed, which resists impulsive noise and co-channel interference thanks to the characteristics of CCE. Then, to extract the DOAs of the wideband signals, the virtual array output is represented in a sparse structure and Normalized Iterative Hard Thresholding (NIHT) is used to solve the sparse reconstruction problem. Comprehensive simulation results demonstrate that the proposed method effectively suppresses impulsive noise and co-channel interference, and improves both accuracy and efficiency over existing methods.
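NIHT itself is a standard sparse-recovery routine; a generic sketch follows, assuming a real-valued dictionary A built over a DOA grid (the paper's CCE-based virtual array construction is not reproduced here):

```python
import numpy as np

def niht(A, y, s, iters=100):
    # Normalized Iterative Hard Thresholding for y ≈ A·x with s-sparse x.
    m, n = A.shape
    x = np.zeros(n)
    support = np.argsort(np.abs(A.T @ y))[-s:]            # initial support
    for _ in range(iters):
        g = A.T @ (y - A @ x)                             # gradient of LS cost
        g_s = np.zeros(n)
        g_s[support] = g[support]                         # restrict to support
        mu = (g_s @ g_s) / max((A @ g_s) @ (A @ g_s), 1e-12)  # normalized step
        x = x + mu * g
        support = np.argsort(np.abs(x))[-s:]              # keep s largest entries
        mask = np.zeros(n, dtype=bool)
        mask[support] = True
        x[~mask] = 0.0                                    # hard thresholding
    return x
```

Nonzero entries of the recovered x would indicate the grid angles at which sources are present.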
2020, 42(11): 2592-2599.
doi: 10.11999/JEIT190562
Abstract:
Owing to the large coverage of space-based radar, a great deal of discrete strong side-lobe clutter, which shares similar Doppler features with real moving targets, is received by the radar system and hence causes false alarms. To address this problem, a discrete side-lobe clutter recognition method using space-time steering vectors is proposed for space-based radar systems. In this method, the "suspected targets", including both real moving targets and discrete side-lobe clutter, are detected after clutter suppression by Space-Time Adaptive Processing (STAP). The range-Doppler cells in and around which the suspected targets are located are selected. Afterwards, their space-time steering vectors are obtained from the coupling relationship between the Doppler frequencies and spatial angles of clutter. Lastly, these range-Doppler cells are processed again by adaptive filters derived from the new space-time steering vectors. The signal-to-clutter-plus-noise ratio of a real moving target is thereby reduced significantly, while that of discrete side-lobe clutter changes little, so the discrete side-lobe clutter can be identified. Theoretical analyses and multi-channel airborne radar experiments demonstrate the effectiveness and stability of this method.
2020, 42(11): 2600-2606.
doi: 10.11999/JEIT200325
Abstract:
To solve the problem of passive wireless monitoring and positioning in complex electromagnetic environments, a generalized auto-correntropy for suppressing impulsive noise in array output signals is proposed and its properties are derived. To estimate both the central Direction Of Arrival (DOA) and the angular spread of coherently distributed sources under impulsive noise, a novel DOA estimation method based on the generalized auto-correntropy is proposed, and its boundedness is proved. To improve the robustness of the proposed algorithm, a new adaptive kernel function, which depends only on the array output signals, is also derived. Simulation results show that the proposed algorithm achieves joint estimation for coherently distributed sources under impulsive noise, with higher estimation accuracy and robustness than existing algorithms.
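As a rough illustration of the correntropy idea, a sample estimator of auto-correntropy at a given lag with a generalized Gaussian kernel might look as follows; the kernel form and its fixed bandwidth are assumptions (the paper derives an adaptive kernel):

```python
import numpy as np

def generalized_auto_correntropy(x, tau, alpha=2.0, sigma=1.0):
    # Sample estimate of auto-correntropy at lag tau with a generalized
    # Gaussian kernel exp(-|e/sigma|^alpha); the bounded kernel shrinks
    # the contribution of impulsive outliers.
    x = np.asarray(x)
    e = np.abs(x[tau:] - x[:-tau])
    return np.mean(np.exp(-(e / sigma) ** alpha))
```

Because the kernel is bounded, large impulsive deviations contribute almost nothing to the estimate, which is the property exploited for noise suppression.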
2020, 42(11): 2607-2614.
doi: 10.11999/JEIT190534
Abstract:
Based on the target detection equation of the passive interferometric microwave system, the effects of complex weather, such as clouds, fog, rain, and sea winds, on the system's ability to detect sea-surface targets are discussed. Quantitative simulations are performed to assess the effects of these factors, and experiments demonstrate that the passive interferometric microwave system can penetrate clouds. Both theoretical and simulation results show that complex weather such as clouds, fog, and rain has a negative impact on passive interferometric microwave systems in sea-surface target detection. However, at low frequencies the impact of clouds is small enough to be neglected. Rainfall, on the other hand, seriously degrades the system's target detection capability. Sea winds have a positive impact on metallic target detection, but a negative impact on stealthy target detection, reducing the system's detection capability in that case.
Research on Mesh Generation Optimization of Finite Element Model of Human Body Based on Energy Error
2020, 42(11): 2615-2620.
doi: 10.11999/JEIT190765
Abstract:
Meshing is the most important part of the finite element modeling and analysis process and carries the largest workload, directly affecting the accuracy and run time of the finite element analysis. Based on research into adaptive meshing and finite element discretization errors, three-dimensional human body models of different complexity are established in the environment of high-voltage power transmission fields. By comparing electric field simulation results between adaptive meshing and manual meshing of the human body model, the trend of the energy error is analyzed, guiding the establishment of the human body model and the choice of the optimal mesh size. The results provide a useful reference for optimizing other finite element meshing schemes.
2020, 42(11): 2621-2628.
doi: 10.11999/JEIT190564
Abstract:
A hybrid reflector antenna is presented that simultaneously generates a contoured beam over the service area, an un-scanned pencil beam, and a scanned pencil beam from two shaped reflectors and three feeds. The shaped main reflector is shared by the three beams, so the antenna is equivalent to two single-reflector, single-feed antennas and a pair of dual-offset Gregorian shaped-reflector antennas, generating the contoured, un-scanned, and scanned pencil beams, respectively. The proposed method is successfully applied to a 1.2 m hybrid reflector antenna, and simulations and measurements of each beam have been performed. The Edge-of-Coverage (EoC) directivity over the service area is 27.5 dBi for the contoured beam at the Ku-band Tx and Rx frequencies, and the un-scanned pencil beam has an aperture efficiency higher than 70% at the C-band Tx and Rx frequencies. Meanwhile, the scanned pencil beam inside and outside the service area is realized by lateral defocusing of the sub-reflector and the corresponding feed at the Ka-band Tx and Rx frequencies. Simulation results show that the hybrid reflector antenna can carry out C/Ku/Ka-band communication tasks simultaneously.
2020, 42(11): 2629-2635.
doi: 10.11999/JEIT190645
Abstract:
To meet the needs of measuring and detecting a combinatorial target placed on a rough surface, the Dobson semi-empirical model and the complex permittivity formula are used to represent the real and imaginary parts of the soil dielectric constant, and the soil surface is simulated with an exponential-correlation model and the Monte Carlo method. A Finite Difference Time Domain (FDTD) strategy for calculating the composite scattering from a rough surface with a target, together with the modeling method, is presented, and its validity is evaluated against the method of moments. The composite scattering of the soil surface and a combinatorial target placed on it is then studied with this method, and the angular distribution of the composite scattering coefficient is obtained. The results show that the composite scattering coefficient oscillates with the scattering angle, with scattering enhancement in the specular direction; the larger the root mean square height of the soil surface, the larger the composite scattering coefficient; the larger the correlation length, the smaller the composite scattering coefficient; and the larger the soil moisture content, the smaller the composite scattering coefficient. The influence of the target's size and dielectric constant and of the incident angle on the composite scattering coefficient is complex. These results can be used to solve the composite electromagnetic scattering from rough land or sea surfaces with an arbitrary target placed on them. Compared with other numerical methods, FDTD not only achieves higher accuracy but also reduces the computation time and memory footprint.
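One ingredient of the simulation above, Monte Carlo generation of an exponentially correlated soil surface, can be sketched as follows; grid size and roughness parameters are illustrative, and the final rescaling sidesteps FFT normalization constants:

```python
import numpy as np

def exponential_rough_surface(n=2048, dx=0.01, rms=0.005, corr_len=0.1, rng=None):
    # Spectral filtering method: shape white noise by the square root of
    # the Lorentzian power spectrum that corresponds to an exponential
    # correlation function, then rescale to the target RMS height.
    rng = np.random.default_rng() if rng is None else rng
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    S = corr_len / (1.0 + (k * corr_len) ** 2)        # Lorentzian spectrum shape
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    h = np.fft.ifft(noise * np.sqrt(S)).real
    h *= rms / h.std()                                # enforce target RMS height
    return h      # surface heights on an n-point grid with spacing dx
```

Each call produces one random surface realization; composite-scattering statistics follow from averaging the FDTD results over many such realizations.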
2020, 42(11): 2636-2642.
doi: 10.11999/JEIT190172
Abstract:
Wireless ultraviolet communication is an effective means of communication under strong electromagnetic interference, meeting the need for reliable and covert communication between vehicles when a fleet transports strategic materiel or missile vehicles drive under concealment in a complex battlefield environment. Each vehicle acts as a relay for the others while driving, establishing a stable and reliable non-line-of-sight communication link through a multi-hop model. Based on the ultraviolet single-scattering model, the optimal multi-hop relay problem is therefore studied, and the relationship between the transmitting and receiving elevation angles and the spectral efficiency is analyzed theoretically. Following the principle of maximizing spectral efficiency, an approximate expression for the optimum number of hops is obtained. The simulation results give the optimum number of hops for different distance ranges and elevation angles. Compared with the optimum-energy calculation method, the proposed method has better transmission capability at low transmit power and meets power-saving requirements. In long-distance ultraviolet communication, system performance does not keep increasing with the number of cooperative relays; the system obtains higher transmission capacity by selecting a suitable number of relays, a small transmitting elevation angle, and a large receiving elevation angle.
2020, 42(11): 2643-2648.
doi: 10.11999/JEIT190658
Abstract:
In both military and civilian fields, the Morse telegraph has long been an important means of short-wave communication, but current automatic decoding algorithms still suffer from low accuracy and an inability to cope with low signal-to-noise ratios and unstable signals. A deep learning method is introduced to construct an automatic Morse code recognition system. The neural network model consists of a convolutional neural network, a bidirectional long short-term memory network, and a connectionist temporal classification layer; the structure is simple and supports end-to-end training. Experiments show that the decoding system achieves good recognition results under different signal-to-noise ratios, code rates, frequency drifts, and code length deviations caused by different keying styles, and its performance is better than that of traditional recognition algorithms.
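A minimal PyTorch sketch of such a CNN + bidirectional LSTM + CTC architecture; all layer sizes are assumptions, and the paper's exact topology may differ:

```python
import torch
import torch.nn as nn

class MorseNet(nn.Module):
    # Convolutional front end for local features, bidirectional LSTM for
    # temporal context, linear head producing per-frame class log-probs
    # for CTC training (one extra class for the CTC blank).
    def __init__(self, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, 128, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, n_classes + 1)

    def forward(self, x):                 # x: (batch, 1, time)
        f = self.conv(x)                  # (batch, 64, time / 4)
        f = f.transpose(1, 2)             # (batch, time / 4, 64)
        out, _ = self.lstm(f)
        return self.fc(out).log_softmax(-1)

# Training: nn.CTCLoss expects (time, batch, classes) log-probs,
# e.g. loss = nn.CTCLoss(blank=n_classes)(logp.transpose(0, 1),
#                                         targets, input_lens, target_lens)
```

CTC removes the need for frame-level alignment between the audio and the transcribed characters, which is what makes the end-to-end training mentioned above possible.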
2020, 42(11): 2649-2655.
doi: 10.11999/JEIT190471
Abstract:
As a key technology of fifth-generation communication systems, massive Multi-Input Multi-Output (MIMO) can effectively improve spectrum utilization. The base station side uses the Message Passing Detection (MPD) algorithm to achieve good detection performance. However, the computational complexity of MPD grows with the modulation order and the number of user antennas; the Probability Approximation MPD (PA-MPD) algorithm reduces this complexity. To reduce the complexity further, this paper introduces an early termination strategy into the PA-MPD algorithm and proposes an Improved PA-MPD (IPA-MPD) algorithm. Firstly, the convergence rate of each user's symbol probabilities during the iterations is determined; then the convergence probability is used to judge whether a user's symbol probabilities have fully converged; finally, iteration is terminated for users whose symbol probabilities have converged. Simulation results show that the computational complexity of IPA-MPD drops to 52%~77% of that of PA-MPD under different single-antenna user configurations without any loss in detection performance.
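In essence, the early-termination test flags users whose symbol probabilities stop changing between iterations; a hedged sketch, where the threshold and convergence criterion are assumptions:

```python
import numpy as np

def converged_users(prob_prev, prob_curr, tol=1e-3):
    # prob_*: (n_users, constellation_size) per-symbol probability vectors
    # from two consecutive MPD iterations. A user is flagged once its
    # probabilities stop changing; flagged users skip later message updates.
    delta = np.abs(prob_curr - prob_prev).max(axis=1)
    return delta < tol          # boolean mask over users
```

Inside the IPA-MPD loop, message updates would simply be skipped for flagged users, which is where the complexity saving comes from.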
2020, 42(11): 2656-2663.
doi: 10.11999/JEIT190492
Abstract:
This paper proposes a power allocation scheme for Energy Efficiency (EE) maximization in downlink Non-Orthogonal Multiple Access (NOMA)-based Heterogeneous Networks (HetNets), considering both out-of-cell and in-cell interference. The scheme contains two main parts. One is power allocation among users on the same sub-channel, solved by Difference of Convex (DC) programming. The other is power allocation among sub-channels, solved by combining the ConCave-Convex Procedure (CCCP) with the Lagrangian multiplier method. Simulation results show fast convergence and demonstrate that the EE obtained by the proposed NOMA-based algorithms is at least 44% higher than that of the conventional orthogonal multiple access scheme.
2020, 42(11): 2664-2670.
doi: 10.11999/JEIT190483
Abstract:
In vehicular cloud computing environments, computation offloading faces problems such as high network delay and heavy load on the remote cloud. Vehicular edge computing exploits the proximity of edge servers to vehicular terminals to provide cloud computing services and address these problems. However, the communication environment changes dynamically as vehicles move, so task completion time increases. This paper therefore proposes a Mobility Prediction-based computation Offloading Handoff Strategy (MPOHS), which minimizes the average completion time of offloaded tasks by migrating tasks among edge servers according to predicted vehicle movement. Experimental results show that, compared with existing research, the proposed strategy reduces the average task completion time as well as the number of handoffs and the handoff time overhead, effectively mitigating the impact of vehicle mobility on computation offloading performance.
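A toy version of the migration decision such a strategy might make, under an assumed two-rate service model; this is a plausible reading of MPOHS, not its exact rule:

```python
def time_if_stay(work, dwell, rate_in, rate_out):
    # Work is served at rate_in while the vehicle remains in the current
    # server's coverage (predicted dwell time), then at a slower relayed
    # rate rate_out after the vehicle moves on.
    if work <= dwell * rate_in:
        return work / rate_in
    return dwell + (work - dwell * rate_in) / rate_out

def should_migrate(work, dwell, rate_in, rate_out, migrate_cost):
    # Migrate when paying a one-off migration cost and running at the
    # full rate on the predicted next server beats staying and relaying.
    return migrate_cost + work / rate_in < time_if_stay(work, dwell, rate_in, rate_out)
```

The mobility predictor supplies the dwell-time estimate; the better that prediction, the fewer unnecessary handoffs the rule triggers.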
2020, 42(11): 2671-2679.
doi: 10.11999/JEIT190542
Abstract:
To solve the high system delay caused by unreasonable resource allocation under random and unpredictable service requests in 5G network slicing, this paper proposes a Service Function Chain (SFC) deployment scheme based on a Transfer Actor-Critic (A-C) Algorithm (TACA). Firstly, an end-to-end delay minimization model is built around Virtual Network Function (VNF) placement and the joint allocation of computing resources, link resources, and fronthaul bandwidth resources, and the model is transformed into a discrete-time Markov Decision Process (MDP). Next, A-C learning is applied to the MDP to dynamically adjust the SFC deployment scheme through interaction with the environment, so as to optimize the end-to-end delay. Furthermore, to accelerate convergence on similar target tasks (e.g., tasks with generally higher service request arrival rates), the transfer A-C algorithm reuses the SFC deployment knowledge learned on source tasks to quickly find a deployment strategy for target tasks. Simulation results show that the proposed algorithm can reduce and stabilize the SFC packet queue length, optimize the end-to-end delay, and improve resource utilization.
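The actor-critic core of such a scheme can be sketched in tabular form; the state/action encoding, learning rates, and transfer step below are simplified assumptions, not the paper's exact design:

```python
import numpy as np

class TabularActorCritic:
    # One-step actor-critic with a softmax policy over discrete SFC
    # deployment actions; reward would be, e.g., negative observed delay.
    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.01, gamma=0.9):
        self.V = np.zeros(n_states)                    # critic: state values
        self.theta = np.zeros((n_states, n_actions))   # actor: preferences
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def policy(self, s):
        p = np.exp(self.theta[s] - self.theta[s].max())
        return p / p.sum()

    def step(self, s, a, r, s_next):
        td_error = r + self.gamma * self.V[s_next] - self.V[s]
        self.V[s] += self.alpha * td_error             # critic update
        grad = -self.policy(s)                         # grad of log softmax:
        grad[a] += 1.0                                 #   e_a - pi(.|s)
        self.theta[s] += self.beta * td_error * grad   # actor update

# Transfer (sketch): initialize V and theta from an agent trained on a
# source task instead of zeros, then fine-tune on the target task.
```

The transfer step is what accelerates convergence when the target task resembles the source task, as the abstract describes.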
2020, 42(11): 2680-2688.
doi: 10.11999/JEIT190515
Abstract:
For the 5G New Radio in Unlicensed spectrum (NR-U) scenario, a novel random access mechanism is proposed. It first adds a channel idle timer to the Random Access Response Window (RARW) and the contention resolution window to reduce the access delay caused by contention-based access, and it employs the Request To Send/Clear To Send (RTS/CTS) mechanism to address the hidden node issue. The mechanism alleviates the latency of the legacy mechanism, which considers neither the intrinsic attributes of the unlicensed band nor the hidden node problem. Specifically, the legacy random access mechanism as applied to NR-U is analyzed; then the network entity interaction sequence defined by the new mechanism is elaborated in detail; finally, performance is evaluated by both mathematical modeling and simulation, and the analysis demonstrates that the new scheme outperforms the benchmark in terms of average random access delay.
2020, 42(11): 2689-2697.
doi: 10.11999/JEIT190683
Abstract:
Signal-In-Space (SIS) quality directly affects the user performance of a Global Navigation Satellite System (GNSS). Unlike BDS-2, BDS-3 satellites broadcast not only legacy signals but also new signals such as B1C and B2a. The multi-frequency, multi-signal, multi-component signal structure of BDS-3 is more complex than that of BDS-2, which poses a great challenge to signal quality control of BDS-3 satellites. By the end of 2018, 18 BDS-3 satellites had been successfully launched and the BDS-3 preliminary system was completed to provide global services, so evaluating the signal quality of BDS-3 is necessary. Traditional assessment methods focus on qualitative evaluation of single items and lack systematic, quantitative results for the complex BDS-3 signal structure. Based on the Interface Control Document (ICD) of BDS, this paper studies the influence of different parameter configurations on the evaluation results in terms of power, frequency, time, and correlation characteristics as well as signal consistency, and forms a set of quantitative evaluation methods for the new modulations and multi-frequency, multi-component signals. Using a signal quality assessment system with a 40-meter aperture antenna, the 18 MEO satellites of the BDS-3 preliminary system were monitored, and their signal quality was comprehensively and quantitatively evaluated for the first time. The results show that the signal quality of the BDS-3 satellites is good and consistent across the 18 MEO satellites, meeting the requirements of the ICD and of GNSS users. The evaluation methods can also be used to quantitatively evaluate the signal quality of other satellites.
2020, 42(11): 2698-2705.
doi: 10.11999/JEIT190510
Abstract:
Inner product encryption is a kind of functional encryption that supports the inner product form. The public parameters of existing inner product encryption schemes are large. To solve this problem, an inner product encryption scheme with fewer public parameters and adaptive security is proposed, based on the prime-order bilinear entropy expansion lemma and Dual Pairing Vector Spaces (DPVS). In the private key generation algorithm, the components of the user's attribute are combined with the master private key to generate a vector that can be combined with the key components of the entropy expansion lemma; in the encryption algorithm, each component of the inner product vector is combined with the ciphertext components of the entropy expansion lemma. Finally, the adaptive security of the scheme is proved under the prime-order bilinear entropy expansion lemma and the $\mathrm{MDDH}_{k,k+1}^n$ assumption. The proposed scheme has only 10 group elements as public parameters, the fewest among existing inner product encryption schemes.
2020, 42(11): 2706-2712.
doi: 10.11999/JEIT190655
Abstract:
Convolutional Neural Networks (CNN) are widely used in intrusion detection. It is generally believed that a deeper network structure yields more accurate feature extraction and higher detection accuracy, but depth also brings problems such as gradient dispersion, insufficient generalization ability, and low accuracy. In view of these problems, the Densely Connected Convolutional Network (DCCNet) is applied to intrusion detection, and a hybrid loss function is used to improve detection accuracy. Experiments are performed on the KDD 99 data set, and the results are compared with the commonly used LeNet and VggNet network structures. The analysis shows that detection accuracy is improved and the vanishing gradient problem during training is alleviated.
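The dense connectivity pattern underlying DCCNet can be sketched as follows in PyTorch, assuming the KDD 99 feature vectors are reshaped into 2-D maps (growth rate and depth are illustrative):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # DenseNet-style block: each layer receives the concatenation of the
    # input and all earlier layers' feature maps, which shortens gradient
    # paths and mitigates vanishing gradients.
    def __init__(self, in_ch, growth=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)   # in_ch + n_layers * growth channels
```

The concatenated shortcut connections are what let gradients reach early layers directly during training.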
2020, 42(11): 2713-2719.
doi: 10.11999/JEIT190752
Abstract:
Public Key Encryption with Equality Test (PKEET) is an important method for testing the equality of ciphertexts generated under different public keys for the same plaintext in cloud environments. In other words, it can test whether two ciphertexts encrypt the same plaintext without decrypting them, but it does not provide search functionality. Existing PKEET schemes use the message directly to generate a trapdoor as the proof for the equality test, which leads to low test accuracy and search efficiency. To solve these problems, a certificateless public key encryption with equality test scheme supporting keyword search (CertificateLess Equality test EncrypTion with keyword Search, CLEETS) is proposed. The scheme first determines through keyword search whether a ciphertext contains the information the user needs, and then performs the equality test according to the search result, avoiding invalid tests. It is then proved that the scheme satisfies indistinguishability under adaptively chosen keywords in the random oracle model. Finally, function and efficiency comparisons are performed. The results indicate that the CLEETS scheme is computationally more expensive, but it provides keyword search within encryption with equality test, which compensates for the inefficiency.
2020, 42(11): 2720-2726.
doi: 10.11999/JEIT190673
Abstract:
Automatic Dependent Surveillance-Broadcast (ADS-B), as a new surveillance technology, is being vigorously promoted by the International Civil Aviation Organization (ICAO). However, overlapping among multiple signals is inevitable because of the randomness of ADS-B signal transmission. An improved Projection Algorithm for a Single Antenna (PASA) is proposed, which separates overlapping signals received over a single channel. Firstly, a new matrix reconstruction method for the single-channel data is proposed to relax the requirements on the relative time delay and frequency difference between two ADS-B signals; then the overlapping signals are separated with the projection algorithm. The effectiveness of the algorithm is verified by simulation experiments.
2020, 42(11): 2727-2734.
doi: 10.11999/JEIT190767
Abstract:
As a new generation of Air Traffic Management (ATM) communication protocol, Automatic Dependent Surveillance-Broadcast (ADS-B) is the key technology of future ATM surveillance systems. At present, the security of ADS-B is challenged because it broadcasts data in plaintext. Since ADS-B is susceptible to spoofing, the difference between ADS-B position data and synchronized Secondary Surveillance Radar (SSR) data is taken as the sample data. Training the samples with Multi-Kernel Support Vector Data Description (MKSVDD) yields a hypersphere classifier that can detect anomalous data in ADS-B test samples. In addition, Particle Swarm Optimization (PSO) is used to optimize the penalty factors, multi-kernel coefficients, and kernel parameters of the GaussLapl and GaussTanh MKSVDD, improving anomaly detection performance. Experimental results show that PSO-MKSVDD can detect anomalies caused by random position deviation, fixed position deviation, Denial of Service (DoS) attacks, and replay attacks. Moreover, compared with other machine learning and deep learning methods, the model adapts better and achieves higher recall and detection rates, proving that it can be used to detect anomalous ADS-B data.
2020, 42(11): 2735-2741.
doi: 10.11999/JEIT190426
Abstract:
As a generalized linear model, Sparse Multinomial Logistic Regression (SMLR) is widely used in various multi-class task scenarios. SMLR introduces a Laplace prior into Multinomial Logistic Regression (MLR) to make its solution sparse, allowing the classifier to embed feature selection in the classification process. To handle non-linear data, Kernel Sparse Multinomial Logistic Regression (KSMLR) is obtained via the kernel trick: kernel functions map non-linear feature data into high-dimensional, even infinite-dimensional, feature spaces where the features can be fully expressed and effectively classified. In addition, a multi-kernel learning algorithm based on centered alignment maps the data through different kernel functions in different dimensions, and the centered-alignment similarity is used to flexibly select the multi-kernel weight coefficients, giving the classifier better generalization ability. Experimental results show that the sparse multinomial logistic regression algorithm based on centered-alignment multi-kernel learning outperforms conventional classification algorithms in classification accuracy.
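The centered-alignment similarity between two kernel matrices has a compact closed form; a sketch:

```python
import numpy as np

def centered_alignment(K1, K2):
    # Centered kernel alignment: center both Gram matrices, then take the
    # normalized Frobenius inner product. Values near 1 mean the kernels
    # induce similar geometry on the sample.
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    num = np.sum(K1c * K2c)                      # Frobenius inner product
    return num / (np.linalg.norm(K1c) * np.linalg.norm(K2c))
```

Multi-kernel weights can then be chosen, for example, in proportion to each kernel's alignment with an ideal label kernel (such as the outer product of a label-indicator matrix), though the paper's exact weighting rule may differ.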
2020, 42(11): 2742-2748.
doi: 10.11999/JEIT190473
Abstract:
The Adaboost algorithm provides noteworthy benefits over traditional machine learning algorithms for numerous applications, including face recognition, text recognition, and pedestrian detection. However, its training process is time-consuming, which affects overall performance. A fast Adaboost training algorithm based on adaptive weight trimming (Adaptable Weight Trimming Adaboost, AWTAdaboost) is proposed in this work to address this issue. First, the algorithm computes the current sample weight distribution in each iteration. Then, it combines the maximum current sample weight with the data size to calculate an adaptive coefficient; samples whose weights fall below the adaptive coefficient are discarded, which speeds up training. Experimental results validate that the algorithm significantly accelerates training while maintaining detection performance, and that it detects better than other fast training algorithms at comparable training times.
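A hedged sketch of the trimming step; the exact form of the adaptive coefficient is not spelled out in the abstract, so the threshold below is an assumption:

```python
import numpy as np

def trim_samples(weights, c=0.01):
    # One AdaBoost round: drop samples whose weight falls below a
    # threshold tied to the current maximum weight and the data size,
    # so the weak learner trains only on the informative samples.
    threshold = c * weights.max() / len(weights)
    return weights >= threshold        # boolean mask of retained samples
```

Because sample weights concentrate on hard examples as boosting proceeds, most of the training set can be skipped in later rounds with little effect on the final ensemble.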
2020, 42(11): 2749-2755.
doi: 10.11999/JEIT190516
Abstract:
The existing Augmented State-Interacting Multiple Model (AS-IMM) algorithm relies on prior information about the covariance matrix of the measurement noise; when that prior information is unavailable or inaccurate, the tracking performance of AS-IMM degrades. To overcome this problem, a novel adaptive Variational Bayesian Augmented State-Interacting Multiple Model (VB-AS-IMM) algorithm is proposed. Firstly, the variational Bayesian probabilistic model of the augmented state and the measurement noise covariance matrix for jump Markovian systems is presented. Secondly, the probabilistic model is proven to be non-conjugate. Finally, by introducing a novel post-processing method, a suboptimal solution for the joint posterior distribution is proposed. The proposed algorithm estimates the unknown measurement noise covariance matrix online and is therefore more robust and adaptable. Simulation results verify the good performance of the proposed algorithm.
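The paper's model is non-conjugate and requires its dedicated post-processing step; as a rough, hedged illustration of estimating measurement-noise statistics online by variational Bayes, the sketch below performs a conjugate VB measurement update for a Kalman filter with a scalar measurement and an inverse-Gamma noise-variance posterior (a standard VB-adaptive filtering construction, not the VB-AS-IMM algorithm itself).

    import numpy as np

    def vb_measurement_update(x_pred, P_pred, z, H, alpha_pred, beta_pred,
                              n_iter=5):
        # x_pred: (d,) state mean, P_pred: (d, d) covariance, z: scalar
        # measurement, H: (1, d) measurement matrix; the unknown noise
        # variance r has an Inverse-Gamma(alpha, beta) posterior.
        alpha = alpha_pred + 0.5
        beta = beta_pred
        x, P = x_pred.copy(), P_pred.copy()
        for _ in range(n_iter):             # alternate q(x) and q(r) updates
            r = beta / alpha                # current noise-variance estimate
            S = float(H @ P_pred @ H.T) + r
            K = (P_pred @ H.T) / S          # (d, 1) Kalman gain
            x = x_pred + (K * (z - float(H @ x_pred))).ravel()
            P = P_pred - K @ (H @ P_pred)
            resid = z - float(H @ x)
            beta = beta_pred + 0.5 * (resid ** 2 + float(H @ P @ H.T))
        return x, P, alpha, beta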
2020, 42(11): 2756-2764.
doi: 10.11999/JEIT190617
Abstract:
Extreme Learning Machine (ELM) has unique advantages such as fast learning speed, simplicity of implementation, and excellent generalization performance. However, the classification performance of a single ELM is unstable. Ensemble learning can effectively improve the classification ability of single ELMs, but memory and computational overheads grow rapidly as the data size and the number of ELMs increase. To address this issue, a Selective Ensemble approach of ELMs based on the Double-Fault measure (DFSEE) is proposed and evaluated both theoretically and experimentally. Firstly, multiple training subsets are extracted from a training dataset by bootstrap sampling, and an initial pool of base ELMs is constructed by independently training an ELM on each subset. Secondly, the ELMs in the pool are sorted in ascending order of their double-fault measures. Finally, starting from one ELM, the ensemble is grown by adding base ELMs in that order, and the ensemble with the best classification ability is retained; the theoretical basis of DFSEE is also analyzed. Experimental results on 10 benchmark classification tasks show that DFSEE achieves better results with fewer ELMs than competing approaches, confirming its validity and significance.
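A minimal sketch of the double-fault-based selection, assuming integer class labels and a held-out validation set: members are ranked by average pairwise double-fault measure, and the best-performing prefix of the ranking is kept (the ranking and voting details are illustrative).

    import numpy as np
    from itertools import combinations

    def double_fault(pi, pj, y):
        # Fraction of validation samples misclassified by BOTH members
        return float(np.mean((pi != y) & (pj != y)))

    def select_ensemble(preds, y):
        # preds: (M, n) integer predictions of M base learners (e.g. ELMs)
        M = len(preds)
        df = np.zeros((M, M))
        for i, j in combinations(range(M), 2):
            df[i, j] = df[j, i] = double_fault(preds[i], preds[j], y)
        order = np.argsort(df.sum(axis=1) / (M - 1))  # ascending avg double-fault
        best_acc, best_k = -1.0, 1
        for k in range(1, M + 1):                     # grow the ensemble
            sub = preds[order[:k]]
            vote = np.apply_along_axis(
                lambda c: np.bincount(c).argmax(), 0, sub)  # majority vote
            acc = float(np.mean(vote == y))
            if acc > best_acc:
                best_acc, best_k = acc, k
        return order[:best_k], best_acc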
2020, 42(11): 2765-2772.
doi: 10.11999/JEIT190496
Abstract:
Making full use of image structure information is a difficult problem in dictionary learning: traditional nonparametric Bayesian algorithms lack the ability to exploit it and suffer from inefficiency. To this end, a dictionary learning algorithm called Structure Similarity Clustering-Beta Process Factor Analysis (SSC-BPFA) is proposed, which learns the probabilistic model efficiently via variational Bayesian inference and ensures the convergence and self-adaptability of the algorithm. Image denoising and inpainting experiments show that the algorithm has significant advantages in representation accuracy, structural similarity index, and running efficiency compared with existing nonparametric Bayesian dictionary learning algorithms.
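As a hedged stand-in for the structure-similarity clustering stage (the BPFA dictionary learning itself is omitted), the sketch below groups contrast-normalized image patches by cosine similarity with a small spherical k-means loop; patch size, stride, and cluster count are illustrative assumptions.

    import numpy as np

    def extract_patches(img, p=8, stride=4):
        # Overlapping p x p patches as flattened rows
        H, W = img.shape
        return np.array([img[i:i + p, j:j + p].ravel()
                         for i in range(0, H - p + 1, stride)
                         for j in range(0, W - p + 1, stride)])

    def structure_cluster(patches, n_clusters=16, iters=10, seed=0):
        # Spherical k-means on contrast-normalized patches
        Z = patches - patches.mean(axis=1, keepdims=True)   # remove DC
        Z = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-8)
        rng = np.random.default_rng(seed)
        C = Z[rng.choice(len(Z), n_clusters, replace=False)]
        for _ in range(iters):
            labels = np.argmax(Z @ C.T, axis=1)             # cosine similarity
            for k in range(n_clusters):
                if np.any(labels == k):
                    c = Z[labels == k].mean(axis=0)
                    C[k] = c / (np.linalg.norm(c) + 1e-8)
        return labels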
2020, 42(11): 2773-2780.
doi: 10.11999/JEIT190243
Abstract:
To solve the problem that traditional Compressed Sensing (CS) algorithms based on the Total Variation (TV) model cannot effectively restore image details and texture, which leads to over-smoothed reconstructions, an image CS reconstruction algorithm based on a Structural Group TV (SGTV) model is proposed. The algorithm exploits the non-local self-similarity and structural sparsity of images, converting the CS recovery problem into a total variation minimization over structural groups built from non-local self-similar image blocks. The optimization model is formulated with the structural-group total variation as the regularization constraint and solved by the split Bregman iterative algorithm, which separates it into several sub-problems that are solved respectively. The algorithm makes full use of image self-similarity and structural sparsity to protect image details and texture. Experimental results demonstrate that it achieves significant performance improvements over state-of-the-art TV-based algorithms in both PSNR and visual quality.
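To make the solver concrete, here is a minimal split Bregman iteration for plain anisotropic TV denoising with periodic boundaries; the paper applies the same splitting machinery to its structural-group TV sub-problems, so this is a simplified single-image sketch, not the proposed algorithm.

    import numpy as np

    def grad(u):
        # Forward differences with periodic boundaries
        return np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u

    def div(px, py):
        # Backward-difference divergence (negative adjoint of grad)
        return (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))

    def shrink(x, t):
        # Soft thresholding
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def tv_split_bregman(f, mu=20.0, lam=1.0, n_iter=50):
        # Solve min_u |grad u|_1 + (mu/2)||u - f||^2 by split Bregman
        H, W = f.shape
        # Fourier symbol of the negative periodic Laplacian
        wx = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(W) / W)
        wy = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(H) / H)
        L = wy[:, None] + wx[None, :]
        u = f.copy()
        dx, dy = np.zeros_like(f), np.zeros_like(f)
        bx, by = np.zeros_like(f), np.zeros_like(f)
        for _ in range(n_iter):
            # u-subproblem solved exactly in the Fourier domain
            rhs = mu * f + lam * div(dx - bx, dy - by)
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * L)))
            ux, uy = grad(u)
            dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
            bx += ux - dx                   # Bregman updates
            by += uy - dy
        return u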
2020, 42(11): 2781-2787.
doi: 10.11999/JEIT190330
Abstract:
To address the high complexity of High Efficiency Video Coding (HEVC) intra-prediction coding, an HEVC intra-prediction optimization algorithm based on the Region Of Interest (ROI) is proposed. Firstly, the algorithm partitions the current frame into the ROI and the Non-Region Of Interest (NROI) according to image saliency. Then, a fast Coding Unit (CU) partitioning algorithm based on spatial correlation determines the final partition depth of the current coding unit in the ROI, skipping unnecessary CU partitioning steps. Finally, a fast Prediction Unit (PU) mode selection algorithm computes the energy and direction of the current PU and determines its prediction mode accordingly, reducing the rate-distortion cost computations and thereby lowering coding complexity and saving coding time. Experimental results show that the proposed algorithm reduces coding time by 47.37% on average with a Peak Signal-to-Noise Ratio (PSNR) loss of only 0.0390 dB.
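A hedged sketch of the two fast decisions, with depth rules and thresholds that are illustrative assumptions rather than the paper's exact criteria: the CU depth search range is bounded by spatially neighboring depths (searched fully only inside the ROI), and a gradient-based energy/direction estimate narrows the intra-mode candidates.

    import numpy as np

    def cu_depth_range(left_depth, above_depth, in_roi):
        # Bound the CU depth search by neighbouring CTU depths; widen the
        # range only inside the ROI. Rules here are illustrative.
        if left_depth is None or above_depth is None:
            return 0, 3                       # frame border: full search
        lo = max(0, min(left_depth, above_depth) - 1)
        hi = min(3, max(left_depth, above_depth) + (1 if in_roi else 0))
        return lo, hi

    def pu_energy_direction(block):
        # Gradient-based energy and dominant direction of a PU, used to
        # narrow the intra-mode candidate list (mode mapping not shown)
        gx = np.gradient(block.astype(float), axis=1)
        gy = np.gradient(block.astype(float), axis=0)
        energy = np.hypot(gx, gy).sum()
        angle = np.degrees(np.arctan2(gy.sum(), gx.sum())) % 180.0
        return energy, angle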
2020, 42(11): 2788-2795.
doi: 10.11999/JEIT190452
Abstract:
Focusing on the issue that background motion significantly reduces the detection accuracy of moving objects, a moving object detection method based on low-rank and sparse decomposition is developed. Firstly, because the nuclear norm over-penalizes large singular values, the optimal solution of the resulting minimization problem cannot be attained and detection performance degrades; the gamma norm ($\gamma$-norm) is therefore introduced to obtain an almost unbiased approximation of the rank function. Next, the $L_{1/2}$ norm is used to extract the sparse foreground object, enhancing robustness to noise, and a spatial continuity constraint is proposed to suppress dynamic background pixels, so that the moving object detection model is built on the sparse and spatially discontinuous nature of false-alarm pixels. The Augmented Lagrange Multiplier (ALM) method, an extension of the Alternating Direction Minimizing (ADM) strategy, is then employed to solve the resulting constrained minimization problem. Compared with several state-of-the-art algorithms, the experimental results show that the proposed method significantly improves the accuracy of moving object detection against dynamic backgrounds.
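For orientation, the sketch below implements the convex baseline of this model class: inexact ALM for D = L + S with the nuclear norm and the L1 norm. The paper replaces these surrogates with the $\gamma$-norm and the $L_{1/2}$ norm and adds a spatial-continuity term, so this is a simplified reference implementation, not the proposed method.

    import numpy as np

    def rpca_ialm(D, lam=None, n_iter=100, tol=1e-7):
        # Decompose D into low-rank L (background) + sparse S (foreground)
        m, n = D.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        norm_D = np.linalg.norm(D)
        spec = np.linalg.norm(D, 2)                 # spectral norm
        Y = D / max(spec, np.abs(D).max() / lam)    # dual variable init
        mu, rho = 1.25 / spec, 1.5
        L, S = np.zeros_like(D), np.zeros_like(D)
        for _ in range(n_iter):
            # L-update: singular value thresholding
            U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # S-update: elementwise soft thresholding
            T = D - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
            Y += mu * (D - L - S)                   # dual ascent
            mu *= rho
            if np.linalg.norm(D - L - S) / norm_D < tol:
                break
        return L, S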
2020, 42(11): 2796-2804.
doi: 10.11999/JEIT190403
Abstract:
Because existing video dehazing algorithms lack analysis of structural correlation and inter-frame consistency constraints, the dehazing results of consecutive frames are prone to sudden changes in color and brightness, and the edges of foreground targets tend to degrade. Focusing on these problems, a novel video dehazing algorithm based on the haze-line prior with spatiotemporal correlation constraints is proposed, which improves the accuracy and robustness of video dehazing by incorporating the structural relevance and temporal consistency of each frame. Firstly, the dark channel and haze-line priors are used to estimate the atmospheric light vector and initial transmission map of each frame. A weighted least squares edge-preserving smoothing filter then smooths the initial transmission map and eliminates the influence of singularities and noise on the estimates. Furthermore, camera parameters are calculated to describe the temporal variation of the transmission map between consecutive frames, and the independently estimated transmission map of each frame is corrected under a temporal correlation constraint. Finally, the dehazed video is recovered according to the physical model. Qualitative and quantitative comparisons show that the proposed algorithm makes inter-frame transitions smoother, restores the color of each frame more accurately, and preserves more detail at edges.
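A minimal single-frame sketch of the transmission/radiance recovery step, using only the dark channel prior and the atmospheric scattering model I = J*t + A*(1 - t); the haze-line prior, the weighted-least-squares smoothing of t, and the temporal correction across frames described above are omitted, and the patch size and bounds are illustrative.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Channel-wise minimum followed by a local minimum filter
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_atmosphere(img, dark, top=0.001):
        # Average the brightest dark-channel pixels as the airlight A
        n = max(1, int(dark.size * top))
        idx = np.argpartition(dark.ravel(), -n)[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)

    def dehaze_frame(img, omega=0.95, t0=0.1):
        # img: float RGB frame in [0, 1]
        A = estimate_atmosphere(img, dark_channel(img))
        t = 1.0 - omega * dark_channel(img / A)    # initial transmission
        t = np.clip(t, t0, 1.0)
        J = (img - A) / t[..., None] + A           # invert I = J*t + A*(1-t)
        return np.clip(J, 0.0, 1.0), t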
2020, 42(11): 2805-2812.
doi: 10.11999/JEIT190604
Abstract:
Mask R-CNN is a relatively mature method for image instance segmentation. To address its limited segmentation-boundary accuracy and its poor robustness to blurred images, an improved Mask R-CNN instance segmentation method is proposed. First, on the mask branch, a Convolutional Conditional Random Field (ConvCRF) is used to further segment the candidate regions, and an FCN-ConvCRF branch replaces the original branch. Then, new anchor sizes and a new IoU matching standard are proposed so that the RPN candidate boxes cover all instance areas. Finally, training data transformed by a transformation network is added during training. Compared with the original algorithm, the overall mAP is improved by 3%, and the accuracy and robustness of the segmentation boundaries are improved to some extent.
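As a hedged illustration of the anchor/IoU ingredient (the ConvCRF branch is not reproduced here), the sketch below computes pairwise IoU and labels anchors with adjustable positive/negative thresholds; the threshold values stand in for the paper's modified IoU standard and are assumptions.

    import numpy as np

    def iou(boxes_a, boxes_b):
        # Pairwise IoU between (N, 4) and (M, 4) boxes as (x1, y1, x2, y2)
        tl = np.maximum(boxes_a[:, None, :2], boxes_b[None, :, :2])
        br = np.minimum(boxes_a[:, None, 2:], boxes_b[None, :, 2:])
        wh = np.clip(br - tl, 0, None)
        inter = wh[..., 0] * wh[..., 1]
        area_a = np.prod(boxes_a[:, 2:] - boxes_a[:, :2], axis=1)
        area_b = np.prod(boxes_b[:, 2:] - boxes_b[:, :2], axis=1)
        return inter / (area_a[:, None] + area_b[None, :] - inter)

    def assign_anchors(anchors, gt, pos_thr=0.6, neg_thr=0.3):
        # RPN-style anchor labelling; pos_thr/neg_thr are illustrative
        overlaps = iou(anchors, gt)
        best = overlaps.max(axis=1)
        labels = np.full(len(anchors), -1)        # -1 = ignore
        labels[best >= pos_thr] = 1
        labels[best < neg_thr] = 0
        labels[overlaps.argmax(axis=0)] = 1       # best anchor per GT is positive
        return labels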
2020, 42(11): 2813-2818.
doi: 10.11999/JEIT200123
Abstract:
Canonical Correlation Analysis (CCA) is a classic multi-modal feature learning method that learns low-dimensional features with maximum correlation from different modalities. However, CCA has difficulty discovering the nonlinear manifold structures hidden in the sample spaces. This paper proposes a multi-modal feature learning method based on geodesic manifolds, namely Geodesic Locality Canonical Correlation Analysis (GeoLCCA). Geodesic distances are used to construct the geodesic scatters of the low-dimensional correlation features, and nonlinear correlation features with better discriminative power are learned by maximizing the between-modal correlation while minimizing the within-modal geodesic scatters. The paper analyzes the proposed method theoretically and verifies its effectiveness on real-world image datasets.
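A minimal sketch of the geodesic ingredient, assuming a k-nearest-neighbor graph with Euclidean edge weights in the style of Isomap: graph shortest paths approximate geodesic distances, from which the geodesic scatters described above can be assembled (k is an illustrative choice; a disconnected graph yields infinite entries that need handling).

    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from sklearn.neighbors import kneighbors_graph

    def geodesic_distances(X, k=10):
        # Euclidean-weighted k-NN graph, then Dijkstra shortest paths
        G = kneighbors_graph(X, k, mode='distance')
        return shortest_path(G, method='D', directed=False)

    # Usage: D = geodesic_distances(np.random.randn(200, 10), k=8)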
2020, 42(11): 2819-2826.
doi: 10.11999/JEIT190482
Abstract:
The advantages of digital control in power electronics have led to the increasing use of Digital Pulse Width Modulation (DPWM). However, insufficient DPWM resolution is one of the main factors constraining digital control in switch-mode power supplies. To meet the application requirements of high-resolution DPWM, this paper proposes a high-resolution DPWM circuit based on a high-speed carry-chain structure. The circuit comprises counters, comparators, fixed-phase-shift PLL units, and high-speed carry chains, which together effectively increase the resolution. The circuit is implemented on Altera's low-cost Cyclone IV Field-Programmable Gate Array (FPGA) devices. Experimental results show that the structure achieves a resolution of 56 ps with a 70 MHz input reference clock, along with a wide switching-frequency adjustment range and good linearity.
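The reported figures imply how finely the carry chain must subdivide one reference-clock period; the arithmetic below works this out (the tap count is inferred from the stated 56 ps and 70 MHz, not taken from the paper).

    import math

    f_clk = 70e6                     # input reference clock (Hz)
    t_clk = 1.0 / f_clk              # coarse counter step, about 14.29 ns
    t_fine = 56e-12                  # reported carry-chain resolution (s)
    taps = t_clk / t_fine            # delay taps needed per clock period
    print(f"{t_clk * 1e9:.2f} ns / 56 ps = {taps:.0f} taps "
          f"(~{math.log2(taps):.1f} extra bits of duty-cycle resolution)")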